Contact center agents have access to sensitive customer data. The risk of them sending this to large language models (LLMs) – like ChatGPT – to auto-generate customer responses is high.
Indeed, without the proper data security considerations, there is a chance that ChatGPT incorporates this data into its models and churns it out when responding to other users' prompts.
That scenario presents a real security risk: 3.1 percent of workers have pasted confidential data into ChatGPT, according to Cyberhaven research published in late February.
By now, that percentage has likely risen much higher. After all, the risk grows as more employees discover ChatGPT – and its rival LLMs.
Already, such cases are coming to the fore. Last week, three Samsung employees fed “sensitive database source code and recorded meetings” into ChatGPT.
In another case, a doctor entered their patient’s name and medical condition into ChatGPT before asking it to write a letter to an insurance company.
The risk of such actions is not merely theoretical. A 2021 study of GPT-2, published on Cornell University’s arXiv, warns of the dangers of “training data extraction attacks.”
Such attacks may allow someone to recover personally identifiable information (PII) and verbatim text sequences.
As a result, there is a threat to security and compliance processes – particularly in environments like the contact center.
After all, agents are increasingly embracing automation tools to simplify their jobs – including those not provisioned by the businesses they work for.
By 2026, Gartner estimates that 30 percent of agents will use such solutions.
Commenting on this finding, Emily Potosky, Director of Research at the Gartner Customer Service & Support practice, stated:
While self-automation has been happening for a while in the software space, this trend will become more present internally in customer service because reps now have improved access to automation tools.
LLMs are not the only risky self-automation tools that agents may use. Unauthorized third-party call recorders, for instance, also present a threat.
As such, contact centers should look beyond ChatGPT alone when creating a strategy to curb the use of unsanctioned agent productivity tools.
Mitigating the Risk of ChatGPT
Many organizations are already taking steps to mitigate the risk of ChatGPT. JPMorgan Chase was one of the first, restricting its workers’ use of LLMs, citing compliance concerns.
Meanwhile, other businesses – including Amazon, Microsoft, and Walmart – have instead issued warnings to employees. Indeed, the latter sent out a memo stating:
“Employees should not input any information about Walmart’s business — including business process, policy, or strategy — into these tools.”
Such moves are likely to become increasingly common as employers mull confidentiality policies and prohibitions on entering confidential company information into LLMs.
As they do so, many will likely explore contact center marketplaces, investigating agent automation tools.
After all, the best way to mitigate such a risk is to provide authorized alternatives.
As Potosky said:
Customer service and support organizations that not only allow but authorize self-automation will become more competitive than those that don’t, as reps will notice and correct inefficiencies that leaders are unaware of.
This point highlights the need for close communication with agents: understand the self-automation opportunities they spot, review them, and then pinpoint appropriate authorized solutions.
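One authorized alternative worth exploring is a redaction layer that sits between agents and any external LLM, stripping obvious customer identifiers before a prompt ever leaves the business. The sketch below is a minimal, hypothetical illustration of that idea – the pattern names, the `redact` function, and the regexes are all assumptions for demonstration, and a production deployment would rely on a dedicated PII-detection service rather than hand-rolled patterns (which, for example, cannot catch customer names):

```python
import re

# Hypothetical patterns for a few common PII types. A real contact center
# deployment would use a dedicated detection library or service instead.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with labeled placeholders before the text
    is forwarded to an external LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer wrote from jane.doe@example.com (callback: 555-867-5309) asking for a refund."
print(redact(prompt))
# The email address and phone number are replaced with [EMAIL] and [PHONE].
```

Placing a filter like this inside an approved tool lets agents keep the productivity benefit of LLM-drafted replies while keeping raw customer data out of third-party models.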
Indeed, the rise of ChatGPT in the workplace highlights the willingness of employees to augment their work processes.
Embracing this and working with agents to implement new productivity tools will enable contact centers to create more efficient and engaging ways of working.