A Gartner report has made a highly troubling prediction that could drastically alter the role of generative artificial intelligence in the customer service sector.
Listed as one of its Strategic Planning Assumptions, Gartner stated that “by 2027, a company’s generative AI chatbot will directly lead to the death of a customer from bad information it provides.”
In expounding on this prediction, Brad Fager – Senior Director and Chief of Research at Gartner – explains that the potential threat will come from “hallucinations”, a term used by OpenAI to describe content created by GenAI that is “nonsensical or untruthful in relation to certain sources.”
Ironically, the human-like realism that has been integral to GenAI’s success is the very quality that could enhance the believability of these hallucinations.
As customers grow more trusting of authentic-feeling chatbots, they will become over-reliant on the technology and less able to discern when its advice or suggestions are irrelevant or even dangerous.
The idea of customers being placed in life-threatening situations by technology will undoubtedly remind some readers of the 2014 Apple hoax, in which a fake ad claimed that Apple products could be charged in a microwave. Worryingly, however, the threat from GenAI is a much larger concern.
Whereas the fake Apple ad was perpetrated by hackers as a prank, Gartner has warned that future instances of placing customers in danger “could come from any generative AI tool.”
Rather than intentionally looking to trick people, the concern with GenAI is that it could provide seemingly good advice that is applied in an improper way – whether it be neglecting certain food allergies or dangerous product repair suggestions.
While one would assume that safeguarding against examples such as those listed above would be at the top of any company’s list before implementing a new GenAI tool, the proliferation of generative AI at such a remarkable speed has completely outpaced risk assessors.
Indeed, in Gartner’s 2Q23 Emerging Risks Report, a survey revealed that “Only 1% of risk executives said they were thoroughly prepared for the risks of AI adoption.”
In spite of these risks and concerns, it is clear that GenAI adoption won’t be slowing down any time soon – the horse has bolted, and there is no point in trying to close the stable door now.
So, what can companies do to ensure that they are protecting their customers against their GenAI offerings?
Safety First
While there have always been misgivings around AI, historically these have centered on fears of job loss and obsolescence – not on personal injury.
While it is still a long way removed from Will Smith battling an army of evil robots, any issue that could potentially harm a customer is a very serious matter that should be treated as such by any organization deploying GenAI chatbots.
Unfortunately, the news comes at a time in which GenAI’s impact on customer service and CX is skyrocketing, with Gartner estimating that the technology will be embedded within 80% of conversational AI systems by 2025 – up from 20% in 2023.
This surge in popularity has led to heightened expectations surrounding GenAI that are placing customer service professionals in a tough situation, with a recent survey revealing that 60% of customer service and support leaders admit to feeling pressured to adopt generative AI in their function.
With GenAI set to continue to dominate the customer service space, organizations must focus on how to maximize its benefits without sacrificing the safety of their customers.
In its report, Gartner lists the following recommendations for using generative AI safely and effectively:
- Limit customer-facing use cases where possible. Instead, focus GenAI investments on office and employee-facing roles.
- Invest in model training that prioritizes trust and safety.
- Do not underinvest in information security, proper governance and controls.
- Coordinate all steps of the way with legal, compliance and risk officers in your company.
- Resist the temptation to build GenAI systems in-house. Organizations should instead use third-party providers, which are more likely to have risk control capabilities.
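To make the trust-and-safety recommendation above more concrete, here is a minimal, purely illustrative sketch of one kind of control a deployment might layer on top of a chatbot: a blocklist screen that intercepts a generated reply before it reaches a customer and substitutes a safe fallback. All names and patterns here are hypothetical; a real guardrail would combine model-based safety classifiers, human review, and legal/compliance sign-off, as the recommendations describe.

```python
import re

# Hypothetical blocklist of reply patterns a customer-facing bot should
# never send verbatim (dangerous "repair" advice, food-safety/allergy
# claims, medical guidance). Illustrative only, not an exhaustive list.
BLOCKED_PATTERNS = [
    r"\bcharge\b.*\bmicrowave\b",   # e.g. "charge it in the microwave"
    r"\bmicrowave\b.*\bcharge\b",
    r"\bsafe to eat\b",             # food-safety claims the bot can't verify
    r"\bmedical advice\b",
]

FALLBACK = "I can't help with that safely. Let me connect you with a human agent."

def screen_reply(reply: str) -> str:
    """Return the reply unchanged if it passes the blocklist,
    otherwise return a safe fallback that escalates to a human."""
    lowered = reply.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return FALLBACK
    return reply
```

A benign reply such as `screen_reply("Your order ships Tuesday.")` passes through untouched, while something like "Just charge it in the microwave for 30 seconds" is replaced with the fallback. The point is architectural rather than the crude pattern matching itself: generated output should pass through an independent safety layer, with a human-escalation path, before a customer ever sees it.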
It is incontestable that GenAI will play a huge role in the future of customer service and CX. With so many offerings in the market and new features being announced every other week, it is easy to get swept up in the hype and introduce unvetted and potentially dangerous solutions.
Organizations must show restraint and make measured decisions – remembering that customer safety comes before any efficiency and cost-cutting benefits.
To borrow the maxim so often associated with the Hippocratic Oath: “First, do no harm.”