Generative AI has taken the contact center by storm. Offering intuitive, intelligent support for everything from outreach automation to self-service and employee assistance, Gen AI tools are fast becoming a must-have in the modern CX landscape. Unfortunately, generative AI also brings challenges that organizations must overcome.
A reliance on high volumes of data, combined with unpredictable models and ever-evolving capabilities, makes preserving compliance, security, and privacy standards complex.
Regulations are needed to establish a framework and rules of engagement that are understood by all parties. However, in the absence of clear regulation, we are already seeing legal challenges from end users, with courts now tasked with ruling on whether current practices breach existing laws.
The outcomes of these legal actions will no doubt have a short-term impact, but with widespread regulation pending, how can contact centers get ahead of the incoming rules?
Regulating Generative AI: The Rise of New Rules
While generative AI might seem to be everywhere these days, it’s still a relatively new and complex concept that industry leaders are struggling to deploy and govern. However, this doesn’t mean that regulatory guidelines aren’t beginning to emerge. As Gen AI and LLMs extend further into business landscapes, governments and institutions are taking action to protect users and customers.
President Biden signed the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence in 2023, outlining standards for ensuring AI is transparent, safe, secure, and trustworthy. The European Parliament also approved the EU AI Act in 2024, described as the world’s first comprehensive regulation on artificial intelligence.
The EU and US aren’t the only regions investing in new regulatory requirements, though they do represent some of the biggest markets for many contact centers. Everywhere you look, government groups are working together to craft a future where we can access generative AI without harming data privacy or compromising civil rights.
Martin Taylor, Co-Founder and Deputy CEO at Content Guru, says:
“Many of the use cases for Gen AI that we hear about are still theoretical and unproven, but in CX it’s real and deliverable. AI is playing a role at every stage: before, during and after customer interactions.
The ability to benefit from cutting-edge AI is exciting, but being on the front line of technology innovation means pre-empting regulations and adapting to them at speed.
Failure to prepare for changes based on regulations will hamper AI-driven projects across the business and has the potential for competitors to surge forwards in the market. As a result, the time to prepare is now.”
The Impact of Gen AI Regulation on Contact Centers
Ultimately, global AI regulation is inevitable. Standards are developing all the time across countless countries and territories, though it’s unlikely we’ll see universal agreement among governments and regulatory bodies any time soon.
What we can expect is that organizations, nations, and individual customers will look to the regulations created by the EU and US for inspiration. We saw a similar process take place when the EU introduced the General Data Protection Regulation (GDPR) in 2018.
For contact centers, this means implementing Gen AI will require careful consideration of both current guidelines and the near-future rules likely to follow, all aimed at safe, transparent, and ethical use.
For instance, some of the key concepts outlined by current EU and US regulations include:
Ensuring Transparency
Transparency is crucial to the ethical development of generative AI systems for contact centers. Customers need to be made aware when interactions are mediated or augmented by artificial intelligence, which means clearly informing them whenever they’re interacting with virtual agents and chatbots.
According to EU rules, companies will need to disclose which content is created by generative AI, publish summaries of the data used for training, and design models so they don’t generate unsafe or dangerous content. US guidelines also require companies to use tools that detect AI-generated content, deepfakes, and other outputs used for fraud.
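In practice, the disclosure requirement can be as simple as making sure every AI-mediated conversation opens with a clear notice. The sketch below is purely illustrative (the message text and function names are assumptions, not drawn from any regulation or product), but it shows the shape of the pattern:

```python
# Illustrative sketch only: prefix the opening message of an AI-mediated
# chat with a disclosure, so customers know they're talking to a bot.
AI_DISCLOSURE = "You are chatting with a virtual assistant."

def send_bot_message(text: str, is_first_message: bool) -> str:
    """Return the message to display, adding the AI disclosure up front."""
    if is_first_message:
        return f"{AI_DISCLOSURE}\n{text}"
    return text
```

A real deployment would also log that the disclosure was shown, since regulators may ask for evidence of compliance, not just intent.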
Data Security
It almost goes without saying that every new AI regulation will focus on data security. The EU and US mandates already restrict companies from using sensitive data, such as biometric scans, to train AI models. They also require companies to implement comprehensive strategies for handling personally identifiable information and to enable end-to-end encryption.
In the contact center, this means business leaders will need to implement strong governance that combines advanced cybersecurity strategies with tools that protect against data breaches. Customers will need assurance that their data is being handled with care and respect.
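One concrete governance step along these lines is masking personally identifiable information before a transcript ever reaches a Gen AI model. The regex patterns below are a minimal sketch, not production-grade detection (real systems typically rely on dedicated PII-detection services):

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> str:
    """Mask common PII in a transcript before sending it to a Gen AI model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `redact_pii("Call me on +1 555 010 9999 or email jo@example.com")` returns `"Call me on [PHONE] or email [EMAIL]"`, so the model never sees the raw contact details.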
Safe AI Usage
In the contact center, many of the concerns regulators have about how AI might be used may not come into play. For instance, the US requires companies to avoid using AI to engineer dangerous biological materials or create deepfakes. However, the US and EU also require companies to be cautious about the content they create with AI tools.
Organizations need to implement safeguards to detect and rectify cases where AI accidentally generates inaccurate, misleading, or potentially damaging information. This is crucial not only to staying compliant but also to preserving strong relationships with your customer base.
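One common safeguard pattern here is a human-in-the-loop gate: AI-drafted responses that fall below a confidence threshold are routed to an agent for review rather than sent automatically. The function and threshold below are assumptions for illustration, not a prescribed implementation:

```python
# Illustrative human-in-the-loop gate: low-confidence AI drafts go to an
# agent for review; high-confidence drafts can be sent automatically.
def route_response(answer: str, confidence: float, threshold: float = 0.8):
    """Return a (route, answer) pair deciding who handles the AI draft."""
    if confidence < threshold:
        return ("human_review", answer)
    return ("auto_send", answer)
```

Where the confidence score comes from (model logprobs, a separate classifier, or a groundedness check) is a design choice, but the routing principle stays the same.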
Mitigating Job Displacement
Generative AI doesn’t signal the end of the contact center agent. Even the regulations created by the EU and US require companies to ethically implement AI in a way that augments human employees, rather than replacing them entirely.
Contact center leaders will need to focus on training and upskilling their workforce to help them unlock the full benefits of AI, rather than automating every task. This will be particularly crucial if new regulations emerge that give customers the “right to speak to a human”.
“The EU and US have shown themselves keen to collaborate with one another and with industry to develop standards further. A formal global framework would be much harder to achieve, of course, with universal agreement on every detail unrealistic. However, there is enough information available for organizations to begin adapting their approach to fall in line with potential changes in regulation.
The CX landscape is ground zero for Gen AI in business because the new technologies can bolt straight onto modern cloud contact center environments, and as a result the sector will be heavily influenced by incoming regulation. Selecting an experienced AI partner is crucial to help you understand, navigate and overcome any adoption challenges.”
- Martin Taylor, Co-Founder and Deputy CEO at Content Guru.
Preparing for the Future of Gen AI Regulation
Artificial intelligence, particularly generative AI and large language models, is evolving too quickly for a classical top-down regulatory approach to be effective. However, rules and guidelines are emerging. When generative AI first arrived, only a handful of existing rules, such as GDPR, governed data protection in the contact center.
Going forward, we’ll see AI continue to evolve, and regulations will transform alongside it, driven by new discoveries, emerging customer concerns, and evolving risks. To ensure generative AI is used safely in the contact center, government regulators and the tech industry will need to work together to implement comprehensive frameworks.
In the meantime, contact center leaders will need to prioritize working with vendors who already understand the risks, emerging challenges, and potential regulatory requirements for generative AI. Companies like Content Guru, with a strong background in the AI landscape, can assist businesses in implementing their own comprehensive governance strategies.
With the right support, business leaders can stay ahead of AI trends, implement the latest technology, and ensure they’re future-proofing their approach to compliance.