California AI Order Raises CX Standards but Leaves Gaps in Governance and Oversight

California introduces stricter AI safeguards, aiming to improve trust, accountability, and CX while raising concerns around oversight gaps


Published: April 13, 2026

Francesca Roche

The governor of California has issued an executive order to strengthen guardrails for all contracts connected to generative AI tools in the state.

The aim is to reduce risks such as bias, privacy violations, and misuse while ensuring these technologies are deployed safely and accountably. 

For Californian enterprises, this policy may affect how companies design, deploy, and manage AI tools that interact with customers. 

However, while the policy strengthens procurement standards for AI systems, it leaves gaps in ongoing oversight, real-time monitoring, and how AI-generated information is managed once systems are deployed in public-facing services.

Chris Hood, Head of Digital Business Strategy and Platforms at Google Cloud, says generative AI is raising expectations for fast, personalized government services, but also increasing the risk that confident errors will quickly damage trust. 

“Generative AI is raising the floor on responsiveness. Citizens now expect government to communicate with the clarity, speed, and personalization they get from commercial services,” he said. 

“But that creates a dangerous gap: the moment a chatbot gives a confident wrong answer about benefits eligibility or permit requirements, trust collapses in a way it never would from a long hold time. 

“Newsom’s directive is a step in the right direction, but contract language only gets you so far.”

Newsom’s Approach to AI Risk and Accountability

The new executive order focuses on how California’s government buys and uses AI systems, leveraging state contracts to set localized standards. 

At the end of March, Gavin Newsom, Governor of California, spoke out on concerns around risk, trust, and accountability in AI across the country. 

“California’s always been the birthplace of innovation. But we also understand the flip side: in the wrong hands, innovation can be misused in ways that put people at risk,” he explained. 

“California leads in AI, and we’re going to use every tool we have to ensure companies protect people’s rights, not exploit them or put them in harm’s way.  

“While others in Washington are designing policy and creating contracts in the shadow of misuse, we’re focused on doing this the right way.” 

As a result, companies that want contracts with California will need to show that they use AI responsibly and safely, meet strict privacy and security requirements, and demonstrate safeguards against misuse, setting a higher bar for vendors. 

This means developing new certification requirements for AI vendors to assess risks such as harmful or illegal content, bias, discrimination, and civil rights impacts, which will determine which vendors the government can work with. 

By expanding the responsible use of AI inside government, this order aims to encourage ethical and effective use of AI in public services and improve how the government operates while managing risks. 

Increasing Compliance and Regulation

Newsom’s policy can help shape CX in California by changing how AI is used in public-facing services. 

Stricter requirements will mean AI systems have to be tested more thoroughly and monitored more carefully for errors and bias, so customers are likely to experience more reliable interactions, fewer incorrect answers, and more consistent service quality. 

These tighter privacy rules will also require customer data to be managed more carefully in line with state expectations, reducing the risk of data misuse and increasing trust in digital services. 

Enterprises are also likely to be more cautious about deploying innovations, accepting delayed rollouts of new AI features and higher compliance costs in exchange for more stable and dependable services. 

Expectations for accountability could also push companies using AI toward more human involvement at critical moments, such as clearer escalation paths to human staff and review of high-stakes decisions. This would ensure better handling of complex and sensitive cases and reduce reliance on fully automated decisions. 

For CX, the policy means government services are likely to become more trustworthy, transparent, and reliable, while also being more regulated and slightly slower to evolve. 

Beyond AI Procurement Rules

However, despite being a step in the right direction for California, the executive order has notable gaps: it focuses on how AI systems are selected and approved rather than how they are managed once in use. 

“The real test is runtime governance, monitoring AI behavior continuously after deployment, not just at procurement,” Hood continued. 

This means going beyond setting rules when buying AI systems and instead focusing on how those systems behave after they are in use. 

Because AI behavior can drift after deployment, errors often only become visible once customers start using the systems, meaning continuous monitoring is needed to catch mistakes, maintain trust, and ensure systems remain accurate and accountable over time. 

Hood also argues that government AI requires a higher standard of trust because state residents have no alternative providers. 

“Trust in government AI cannot be earned the same way it is in the private sector,” he explained. 

“Citizens cannot opt out of the DMV or choose a competing tax authority. That asymmetry demands a higher standard. What I call meaningful friction: intentional checkpoints where AI defers to human oversight. 

“Not because the system cannot handle it, but because the citizen deserves a human accountable for the outcome.”

Furthermore, control over information has shifted beyond government, while the policy covers only the systems agencies directly buy and operate. 

“People no longer start with Google and land on an official website. They ask ChatGPT or Claude, get a summarized answer, and often act on it before ever reaching official government sources,” Hood said. 

By not addressing how third-party AI systems present government information, or verifying whether those summaries are accurate and up to date, the order leaves a policy gap: it only improves AI safety within government-run systems. 

Because the order does not account for this pre-engagement influence, many citizens may trust AI answers and act on them before reaching official channels, even when the information is incorrect. 

“The challenge is that people trust AI answers with remarkable confidence, sometimes following directions without questioning the source,” explained Hood. 

“That puts enormous pressure on agencies to ensure the information circulating in AI systems about their services is correct.”
