Across every sector I cover, one theme consistently stands out. In healthcare, finance, and travel and hospitality, customers want speed, convenience, and personalisation, but they won’t trade away safety or honesty to get it. Trust has quietly become the real currency of customer experience.
The question is no longer “Is this fast?” but “Is this right?”
In practical terms, trust means that every digital interaction must prove to the customer that the brand is acting in their best interest. Customers now assume automation exists. What they’re checking for is accuracy, safety, transparency, and fairness. When those expectations are met, customers stay loyal. When they aren’t, the best chatbot in the world won’t save you.
Why Trust Has Moved From Nice-to-Have to Primary Differentiator
Two main forces are driving this shift, according to Mat Ladley, Principal Solution Consultant for AI & Automation at Five9, and Steve Morrell, Managing Director at ContactBabel.
First, AI has raised the stakes. Customers know brands are using automation, and they want reassurance it’s being used safely. Second, economic pressure means people scrutinise decisions more closely, from health appointments to financial approvals to travel refunds. Trust becomes the emotional anchor in moments of vulnerability.
“Speed still matters. Personalisation still matters. But without trust, none of it lands,” Ladley explains.
Research from ContactBabel supports this. In a survey of 1,000 UK customers, the biggest concern when dealing with AI in customer service wasn’t speed or cost. It was being misunderstood or receiving inaccurate answers. Even more telling, customers feared that AI was being implemented to stop them from speaking to a person when they needed to.
According to Morrell:
“The default mode for customers using technology is: you’re not doing it for our benefit, you’re doing it for your own. Prove them wrong.”
Healthcare: Where Trust Is Built on Vulnerable Interactions
Healthcare is built on vulnerable interactions. The trust moments include first-contact triage, appointment management, sensitive conversations about diagnoses or test results, and escalation to human support when needed.
AI can help with symptom-based digital triage (with clear guardrails), automated follow-ups for medication or referrals, AI-driven insights that surface bottlenecks, and real-time agent guidance that improves empathy and compliance.
But AI can erode trust if it operates without clear guardrails, escalation paths, and monitoring. Ladley says:
“The goal isn’t to replace clinicians. It’s to create clarity, reduce anxiety, and ensure no patient falls through the cracks.”
ContactBabel’s research reveals that 76% of inbound calls to UK healthcare contact centres require caller identity verification, taking an average of 59 seconds per call. This represents almost 13% of a typical call’s length, adding nothing to the customer experience while impacting agent engagement.
Interestingly, healthcare presents scenarios where customers actively prefer AI over human interaction. For example, ordering diagnostic tests for sensitive or embarrassing health issues. “Customers actually don’t want to talk to a person. They want to get the job done, but do it through automation,” Morrell explains.
However, for test results or complex medical discussions, the last thing customers want is automation. They need personal, empathetic support. The key is matching the right channel to the right moment.
Financial Services: Governance, Explainability, and Human Oversight
In financial services, the stakes are obviously high. How do you prevent AI from making or amplifying harmful decisions in lending, collections, or fraud scenarios?
Three things matter: governance, explainability, and human oversight.
With Five9’s Trusted Agentic AI approach, the focus is on four controls:
Guardrails that prevent AI from taking actions beyond its scope, such as denying credit.
Auditability, where every AI action is logged and explainable.
Process checks, so high-risk decisions automatically escalate to a human.
Real-time supervision to detect inconsistency or bias early.
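As a rough illustration of this guardrail pattern, the sketch below shows one way to combine an action whitelist, an audit log, and automatic escalation of high-risk decisions. All names here are hypothetical and illustrative; this is not part of any Five9 product or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative scopes: what the AI may do alone, and what must escalate.
ALLOWED_ACTIONS = {"collect_data", "summarise_account", "suggest_next_step"}
HIGH_RISK_ACTIONS = {"adjust_credit_limit", "deny_credit", "flag_fraud"}

@dataclass
class GuardedAgent:
    audit_log: list = field(default_factory=list)

    def request_action(self, action: str, context: dict) -> str:
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "context": context,
        }
        if action in HIGH_RISK_ACTIONS:
            entry["outcome"] = "escalated_to_human"  # process check: human decides
        elif action in ALLOWED_ACTIONS:
            entry["outcome"] = "executed"
        else:
            entry["outcome"] = "blocked"  # out of scope: refuse by default
        self.audit_log.append(entry)      # every action is logged and explainable
        return entry["outcome"]

agent = GuardedAgent()
print(agent.request_action("summarise_account", {"id": "A1"}))  # executed
print(agent.request_action("deny_credit", {"id": "A1"}))        # escalated_to_human
```

The design choice worth noting is the default: anything not explicitly in scope is blocked rather than attempted, and the full audit trail exists regardless of outcome.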
“AI should assist with analysis, data collection, and pattern recognition, not replace human judgment for high-stakes outcomes,” Ladley emphasises.
The challenge in financial services is particularly acute. ContactBabel data shows that 84% of inbound calls to UK financial services contact centres require caller identity verification, taking an average of 49 seconds per call. That’s over 9% of a typical call’s length, costing the industry hundreds of millions of pounds annually.
But the bigger issue is trust erosion. Financial services has one of the highest levels of telephony usage, with the channel accounting for 78% of inbound customer interactions. This is partly due to security needs, but also because customers have a strong preference for this channel in times of high emotion, urgency, and complexity.
Ladley warns about the importance of observability in AI systems:
“Without observability, it’s the same with anything. If you don’t have any of those metrics, then you’re blind to what’s going on.”
Morrell adds that having real-time monitoring, auditability, and human oversight isn’t just good practice. It’s essential for preventing AI from going rogue and for building the kind of trust that keeps customers loyal.
Travel and Hospitality: Turning Crisis Into Loyalty
The most fragile moments in travel and hospitality are flight delays and cancellations, refund eligibility and timelines, lost luggage, compensation rules, and last-minute changes to bookings.
Trust is damaged when customers get conflicting information, can’t reach an agent during disruption, find refunds or vouchers opaque, or encounter digital tools that break under pressure.
Technology helps when it gives consistent, proactive updates, automates rebooking with complete transparency, speeds up refunds with clear rules, escalates instantly when customers show frustration, and supports agents with real-time policy guidance.
“Holidays are emotional. Honest, timely communication can turn a crisis into loyalty,” Ladley observes.
Five9’s Travel & Hospitality campaign is built around three principles: transparency in disruption, smart self-service with human backup, and AI Insights for recovery. Early outcomes from customer pilots show 20-30% faster recovery during disruption peaks, higher digital containment with better satisfaction scores, reduced refund friction due to clearer rules, and agents reporting higher confidence during peak-stress moments.
ContactBabel research reveals that transport and travel contact centres have seen a 70% increase in call lengths since 2014, with average speed to answer reaching 263 seconds and call abandonment rates hitting 10.4% in 2024. These operational pressures make trust even more critical.
Customers increasingly prefer telephony for high-emotion, high-urgency interactions. In 2024, 40% of customers preferred the phone channel for high-complexity issues, 36% for high-urgency situations, and 31% for high-emotion scenarios. This has increased significantly since 2018 – especially for urgent or complex issues – suggesting that the live voice channel provides a level of reassurance that other channels struggle to match.
The Biggest Mistakes Organisations Make
What are the top mistakes organisations make when introducing AI into sensitive journeys without thinking deeply enough about trust?
“Just because you can automate something doesn’t mean you should,” Morrell cautions.
“Don’t ignore the human element. The emotion that somebody will feel when they’re making that particular interaction.”
The top three mistakes are:
Over-automation. If customers feel trapped in a bot, trust evaporates.
No AI governance. Without guardrails and monitoring, AI can drift or hallucinate.
Ignoring emotion. Not all journeys are equal. A medical result or fraud alert cannot be treated like a product return.
“Trust collapses when organisations push AI before they’ve fixed the basics: clarity, consistency, and escalation paths,” Ladley warns.
Ladley emphasises that it’s a marathon, not a sprint. “Don’t bite off more than you can chew. Think about the highest impact that you can have and really focus on that and do that correctly in a good way. Because if not, and if you’re rushing into things, if you don’t have the guardrails in place, that’s where you can have these negative experiences and ultimately you’re damaging the trust in your brand.”
Where Should Trust Sit on the 2026 Roadmap?
As CX leaders plan for 2026, where should trust and AI governance sit on their roadmap? Is it a separate initiative, or something that needs to be baked into every CX project by default?
It needs to be everywhere.
“If you treat trust as a separate project, it becomes a compliance exercise,” Ladley explains. “If you design it into every workflow, routing, automation, analytics, self-service, it becomes a competitive advantage.”
By 2026, the organisations that win will be the ones where customers can say: “I understand why this is happening, I feel safe, and I can reach a human if I need to.”
That’s trust. And it’s the currency of modern CX.
Morrell adds a crucial point about the sophistication gap: “Businesses and C-level need to understand that the stuff that’s going to be implemented in the future is not just a jazzier version of what you’ve got now. If your rules-based chatbot doesn’t know something, it just metaphorically shrugs its shoulders. But generative and agentic AI is trying to help you, gathering things. It’s far more powerful in terms of potential for good, but also potential for harm. This is a different game we’re playing now.”
The level of sophistication is a hundred times greater than what most companies are using today, which means the stakes are higher. Morrell warns:
“You can’t put the genie back in the bottle.”
Across healthcare, finance, and travel and hospitality, the common thread isn’t just technology, it’s trust. The brands that win won’t be those with the flashiest AI, but those that use it to show customers: your data is safe, your experience is fair, and when things go wrong, we’ll be honest and fix it fast.