Last month, Canadian courts ordered Air Canada to pay up after the airline’s GenAI-powered chatbot misled a customer.
As reported by CX Today, Jake Moffatt, a bereaved grandchild, paid more than $1,600 for return flights to and from Toronto.
Yet, under the airline’s bereavement rates, he should only have paid around $760.
After realizing this, Moffatt sued, and Air Canada tried to argue that the bot was a separate legal entity for which it could not be held liable.
That defense failed, and Marc Benioff, Chair & CEO of Salesforce, suggests the ruling holds great significance. During an earnings call, he stated:
Just as they would for a human employee, they were being held liable for a digital employee.
Moreover, Benioff warned that these AI models are “very confident liars, producing misinformation and hallucinations,” and suggested: “There’s a danger for companies, for enterprises, for our customers, that these are not trusted solutions.
“These [public] models don’t know anything about the company’s customer relationships and, in some cases, are just making it up.
“Enterprises need to have the same capabilities that are captivating consumers, but they need to have it with trust, and they need to have it with security. And it’s not easy.”
Thankfully, Benioff believes there are three “essential components” that enterprises can build into their GenAI bot strategy to deliver trusted experiences.
These are a compelling user interface (UI), a world-class AI model, and a huge data set.
Yet, Benioff stresses that the data set must include metadata so the AI understands and delivers the critical insights and intelligence that customers need.
“That’s not just some amalgamated stolen public data set… that’s the deep integration of data and metadata,” he concluded. “Oh, and that’s what Salesforce does.”
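For illustration only, here is a minimal sketch (in Python, with hypothetical names such as PolicyRecord and retrieve) of the general idea behind Benioff’s point: pair company data with metadata – source, effective date, approval status – so a bot only answers from records it can actually verify, and hands off to a human otherwise. This is not Salesforce’s implementation, just one way such grounding could look.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative, hypothetical policy records -- in practice these would come
# from the company's own systems of record, not an amalgamated public data set.
@dataclass
class PolicyRecord:
    policy_id: str
    text: str
    source: str           # metadata: where the answer comes from
    effective_date: date  # metadata: is the policy currently in effect?
    approved: bool        # metadata: has a human signed off on this content?

KNOWLEDGE_BASE = [
    PolicyRecord(
        policy_id="bereavement-fares",
        text="Bereavement fares must be requested before travel; refunds cannot be claimed retroactively.",
        source="https://example.com/policies/bereavement",  # placeholder URL
        effective_date=date(2024, 1, 1),
        approved=True,
    ),
]

def retrieve(topic: str, as_of: date) -> Optional[PolicyRecord]:
    """Return an approved, in-effect policy record for the topic, if any."""
    for record in KNOWLEDGE_BASE:
        if record.policy_id == topic and record.approved and record.effective_date <= as_of:
            return record
    return None

def answer(topic: str, as_of: date) -> str:
    """Only answer when a grounded record exists; otherwise escalate to a human."""
    record = retrieve(topic, as_of)
    if record is None:
        return "I can't confirm that policy. Let me connect you with an agent."
    # A generative model would be constrained to this text and asked to cite the source.
    return f"{record.text} (Source: {record.source})"

if __name__ == "__main__":
    print(answer("bereavement-fares", date(2024, 3, 1)))
    print(answer("pet-travel", date(2024, 3, 1)))  # no approved record -> escalate
```

In a setup like this, the metadata does the trust work: an unapproved, outdated, or missing record means the bot declines and escalates rather than improvising a policy – the kind of guardrail that could have prevented the retroactive-refund promise at the center of the Air Canada case.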
Avoiding Chatbot Blunders: Advice from CX Analysts
During a recent Big News Update, several CX analysts had their say on the biggest lessons they believe businesses should take from Air Canada’s blunder – alongside DPD’s bot breakdown.
Rebecca Wetteman, CEO & Principal Analyst at Valoir, concluded: “It’s not just about hallucinations; it’s about exposing bad data.
“I don’t know if the chatbot made it up or if – in fact – there’s some tribal knowledge sitting somewhere that someone didn’t know about.
“But, I think we’re going to see a lot more of the downstream effects of this as people start to entrust more of those critical conversations to AI and chatbots.
“A lot more companies saying: ‘Oh, I want the technology vendor to take on some of that responsibility and some of that risk for me.’”
That sparks a fascinating conversation about adding indemnity clauses to vendor-customer contracts.
Yet, Michael Fauscette, Founder, CEO & Chief Analyst at Arion Research, argues: “Fundamentally, it’s your responsibility. I don’t care if it’s run by another entity, if it’s outsourced, or if it’s… bad data, bad experience, that’s as simple as it is, and it’s your responsibility.
“The court likely would have looked at a human agent in that being like, ‘You know what, people make mistakes’… There probably would have been some wiggle room.
“But, that’s the lesson; these courts are not likely going to give a machine much wiggle room.”
To hear more from Wetteman, Fauscette, and other excellent CX analysts, check out CX Today’s latest BIG CX News Update: The Latest on Oracle’s New Communications Platform, High-Profile Chatbot Failures, & HubSpot