Court Orders Air Canada to Pay Out for Chatbot’s Bad Advice

Air Canada argued that the chatbot is responsible for its own actions

Published: February 19, 2024

James Stephen

A small claims court has ruled that Air Canada must compensate a customer who was misled into paying full price for flight tickets by its contact center chatbot.

The customer in question, Jake Moffatt, was grieving the loss of a grandparent and paid more than $1,600 for a return flight to Toronto when, under the airline’s bereavement rates, he should have paid only around $760.

The chatbot told Moffatt that he could purchase the full-price tickets and then fill out a ticket refund application to claim back more than half of the cost. This advice was erroneous.

Nevertheless, Air Canada argued, among other defenses, that it should not be held responsible for the advice given by its chatbot.

Civil Resolution Tribunal (CRT) member Christopher Rivers wrote in the tribunal’s reasoning for the decision:

“In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission.

“While a chatbot has an interactive component, it is still just a part of Air Canada’s website.

“It should be obvious to Air Canada that it is responsible for all the information on its website.

“It makes no difference whether the information comes from a static page or a chatbot.”

When Mr. Moffatt raised the issue, an Air Canada customer service representative pointed him to a link the chatbot had shared, which provided accurate information about the airline’s bereavement policy. The representative also said the airline would update the chatbot to prevent it from dispensing the same misleading advice.

Mr. Moffatt was not satisfied with this, however, and elected to sue the airline.

Alongside its defense that it was not responsible for the chatbot’s words, the airline also argued that Mr. Moffatt could have found the correct information elsewhere on its website, and that the terms and conditions of its tariff removed its liability.

The tribunal was unpersuaded by these arguments and ordered Air Canada to pay Mr. Moffatt a total of $812.02, covering damages, pre-judgment interest, and CRT fees.

According to a survey of the Canadian Legal Information Institute’s database of Canadian legal decisions, Mr. Moffatt’s case appears to be the first to feature misleading advice from a chatbot.

The Bigger Picture

With new GenAI-augmented chatbots, brands can connect a virtual agent to internal knowledge sources so that it can answer customer questions without task-specific training.

This story underlines the necessity of vetting that source material and thoroughly testing the bot’s responses before deployment.
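To make the lesson concrete, here is a minimal, hypothetical sketch of the kind of guardrail this implies: the bot answers only from a store of vetted policy text and escalates to a human when nothing matches, rather than improvising. The names (VETTED_POLICIES, answer_from_policy), the keyword matching, and the policy wording are all illustrative assumptions, not Air Canada’s or any vendor’s actual implementation.

# Hypothetical sketch: ground chatbot answers in vetted policy text only.
# All names and policy text here are illustrative, not a real airline's system.

VETTED_POLICIES = {
    "bereavement": (
        "Bereavement fares must be requested before travel; "
        "refunds cannot be claimed retroactively after purchase."
    ),
    "baggage": "Each passenger may check one bag free on international flights.",
}

def answer_from_policy(question: str) -> str:
    """Answer only from vetted policy text; escalate when nothing matches."""
    q = question.lower()
    for topic, policy in VETTED_POLICIES.items():
        if topic in q:
            return policy  # the reply is taken verbatim from vetted material
    # No vetted source covers the question: hand off instead of improvising.
    return "I can't answer that reliably. Let me connect you to a human agent."

print(answer_from_policy("Can I claim a bereavement refund after my flight?"))
print(answer_from_policy("What is your change-fee policy?"))

A real GenAI deployment would retrieve passages by semantic similarity and have a language model phrase the reply, but the principle is the same: every answer should trace back to a vetted source, and the bot should hand off rather than guess.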

Yet it also raises the question: is GenAI without a human in the loop ready for such use cases?

After all, this story follows the incident in which DPD’s GenAI-powered bot swore at a customer and wrote a poem about “how terrible they are as a company”.

Rebecca Wettemann, CEO & Principal Analyst at Valoir, predicted that many AI projects would suffer “spectacular” failures in 2024 – and it appears she may have been onto something.

“Lack of mature technology, adequate policies and procedures, training, and safeguards are creating a perfect storm for AI accidents far more dramatic than just hallucinations,” she said in a CX predictions video.

“Expect public fails, lawsuits, and executive shake-ups of technology vendors and AI adopters when things go awry.”

While many companies are already relying on chatbots to meet their customer service demands, the technology evidently has much more maturing to do.

Indeed, Pierce Buckley, CEO & Co-Founder at babelforce, is concerned that a second wave of terrible bots is on its way this year, the first wave being the bots released in the early 2010s.

He believes that to improve the chatbot user experience, businesses must include the human perspective from the outset.
