Who Is Accountable When Public Sector AI Goes Wrong?

As AI becomes embedded in the services millions of people depend on, the question of who bears responsibility when it fails is more urgent than most organisations acknowledge.

Published: May 7, 2026

Thomas Walker

When a local authority deploys AI to manage transport networks or answer resident queries, responsibility for that system’s outputs doesn’t sit neatly with a single party. Foundation model providers, platform vendors, systems integrators, and public authority deployers each occupy a link in the chain, and can all point to each other when things go wrong.

Kayla Vondy, AI Project Manager at Liverpool City Region Combined Authority, works across AI governance, transport technology, and cybersecurity for a region of 1.6 million people. We sat down with Vondy at UCX 2026 to discuss how these legal complexities intersect with organisational responsibility, and where AI can be genuinely inclusive.

What Does AI Accountability Really Mean in the Public Sector?

The EU AI Act, whose obligations for deployers of high-risk systems apply from 2 August 2026, introduces formal role definitions – provider, deployer, distributor – and, crucially, a "role mobility" mechanism through which organisations can inherit provider-level compliance obligations if they rebrand, substantially modify, or shift the intended purpose of a system. For public authorities integrating third-party AI into frontline services, this is a serious compliance risk.
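To make the role-mobility idea concrete, here is a deliberately simplified sketch in Python. The names are ours, and the Act's actual tests (set out in Article 25) carry definitions and carve-outs this omits – but the core logic is that any one of the three triggers is enough to shift provider-level duties onto the deployer.

```python
from dataclasses import dataclass

@dataclass
class DeployedSystem:
    """A third-party AI system a public authority has put into service.
    Illustrative only: field names are ours, not the Act's wording."""
    rebranded: bool                  # authority's name or trademark placed on the system
    substantially_modified: bool     # material change to an already high-risk system
    repurposed_into_high_risk: bool  # new intended purpose makes the system high-risk

def inherits_provider_obligations(system: DeployedSystem) -> bool:
    # Any single trigger suffices: the deployer is then treated as the
    # provider and takes on provider-level compliance duties.
    return (
        system.rebranded
        or system.substantially_modified
        or system.repurposed_into_high_risk
    )

# Example: an authority white-labels a vendor chatbot for resident queries.
council_chatbot = DeployedSystem(
    rebranded=True,
    substantially_modified=False,
    repurposed_into_high_risk=False,
)
assert inherits_provider_obligations(council_chatbot)  # rebranding alone triggers it
```

The practical point of the sketch: a procurement decision as routine as putting the council's branding on a vendor's chatbot can, on its own, move the authority into the provider role.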

But for Vondy, the more pressing question isn’t legal – it’s human. “If I were a resident and I didn’t know anything about AI and someone said they were using AI on public data, my initial thought would be: What do they know about me as a person?” she says. “So, it’s really important that we’re communicating – yes, your data is going to be in this public record, but it’s because you’re a rider of the bus, not because you’re Steve from Main Street.”

Most people don’t read algorithm transparency notices (CX Today employees excluded…). What they experience is whether an organisation communicates clearly, proactively, and with respect. Liverpool City Region publishes algorithm information where possible and has embedded community outreach directly into its AI governance model. “It’s about making sure that there’s that outreach and that communication, that openness with them, and creating a safe space for them to ask questions – because it is their data”.

Where Does AI Still Fall Short on Accessibility?

Applied well, AI can expand public service reach in ways previously impossible. The combined authority recently launched Jimmy, a digital avatar at Southport station who communicates in 99 languages. For a diverse region home to significant multilingual communities, this represents a meaningful step toward genuinely accessible transport services.

“With AI, we have a real opportunity to include people. […] He speaks 99 languages, meaning we can support people who are looking to manoeuvre around the city in whatever language they feel comfortable with.”

However, Vondy raises the question of whether those same capabilities can adequately serve the entire community.

“The flip side is thinking about people who use BSL. That is a language in and of itself. AI is not quite at the capability of providing adequate BSL support, in my opinion.”

British Sign Language received official legal recognition in the UK under the BSL Act 2022, and approximately 87,000 Deaf people in Britain use it as their primary language. Avatar-based signing systems and video relay services are edging toward viability, but nothing deployment-ready at public-sector scale yet exists.

An organisation can be transparent, proactive and genuinely well-intentioned, yet still systematically underserve a protected community.

“We have to make sure that we’re going into the communities and we’re constantly understanding the people that we’re trying to serve. That’s the only way we’re even going to make a dent in the inclusion”.

Making AI Work for Residents

Accountability and inclusion in public-sector AI aren't box-ticking exercises. They are an ongoing practice of transparency, community engagement, and honest discussion with the people a service still cannot reach.

The organisations getting this right aren't waiting for regulation to set the standard. They're setting it themselves. Because progress is great… unless we forget who it's for.

FAQs

Who is legally accountable when public sector AI goes wrong?

The EU AI Act is formalising role-based accountability across AI supply chains, but public authorities deploying AI remain answerable to the communities they serve, regardless of where a system originated.

Can AI currently support British Sign Language (BSL) users?

Not adequately – while AI has advanced rapidly in multilingual text and voice, BSL’s spatial and visual grammar presents challenges that current technology cannot yet replicate at the quality or scale required for public sector deployment.

How can public sector organisations build public trust around AI use?

Trust is earned through consistent communication with communities about how AI is being used, and by creating genuine channels for people to ask questions and raise concerns.

When does the EU AI Act apply to public-sector deployments?

Deployer obligations for high-risk AI systems, including many used in public services, take effect from 2 August 2026.
