The Transparency in Frontier Artificial Intelligence Act (SB 53) has taken effect as the first state-level frontier AI safety law in the US, regulating safety and transparency in AI model development.
The law, which Governor Newsom signed in September 2025, requires large AI model developers in California to publish their risk assessment and management strategies, as well as report ‘catastrophic risk’ incidents to state authorities.
The aim is to ensure that Californian developers adhere to safety and transparency standards while building public trust through stronger oversight of advanced AI systems.
California Governor Gavin Newsom announced a series of state laws that took effect on January 1st.
In a statement made last year, Governor Newsom explained how SB 53 will enable the state to meet the security and compliance expectations now placed on providers worldwide.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” he said.
“This legislation strikes that balance. AI is the new frontier in innovation, and California is not only here for it – but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves.”
Among these state laws, which aim to provide safety and digital rights for Californian residents and companies, is SB 53, which requires large AI developers to document risk-mitigation strategies and improve transparency between provider and customer.
The announcement followed the release of an AI guardrails safety report, which offered Governor Newsom strategic recommendations on regulating security risks, improving transparency, and shaping policy.
As home to 32 of the world’s top 50 AI companies, California is in a critical position to set sound regulations in the birthplace of AI.
In response, SB 53 was created in line with the report’s recommendations, strengthening California’s position as an AI leader.
What SB 53 Offers
The law establishes new requirements for frontier AI developers, raising expectations around governance, transparency, and incident readiness.
Transparency: AI developers must now publish a written framework to explain how they are applying national/international standards and industry practices during AI model development and management. This offers customers clarity about the developer’s current approach to risk management and governance.
Innovation: The Government Operations Agency has formed a state-led consortium, CalCompute, to create a framework for a public computing cluster. Through CalCompute, developers can collaborate on planning and guiding shared AI computing resources that support AI development under common safety and equity policies.
Safety: AI developers are now required to inform state authorities of potential safety risks associated with frontier models, including possible threats to public safety and large-scale harm. Both customers and developers can report suspected risk incidents to the Office of Emergency Services.
Accountability: AI developers may face civil penalties if they fail to meet these new governance standards, whether the failure is exposed by employees or by the public. The law also provides legal protections for those who disclose risks, ensuring that developers are held accountable while disclosers are shielded from retaliation.
Responsiveness: The California Department of Technology will review the law annually so the regulatory framework continues to evolve. Updates to SB 53 will be recommended to ensure that California stays in line with international standards and technological advances.
What Will This Act Mean For CX?
1. Trust Through Transparency & Explainability
Speaking with CX Today, Jigyasa Grover, a machine learning engineer who has worked at Uber, X, and Meta, frames SB 53 as a trust and experience driver rather than simply a compliance obligation.
SB 53 mandates transparency requirements that fundamentally change how customers interact with AI systems, offering clarity to reduce uncertainty.
From a CX perspective, users can now understand how AI makes decisions, building confidence and trust in the technology as governance frameworks become customer-facing assets.
“SB 53 doesn’t just mandate transparency; it encourages investment in explainability frameworks,” she said.
“From a CX standpoint, users may naturally ask, ‘Why did this AI make that recommendation?’ or ‘What safeguards exist if it fails?’, and being able to answer these questions clearly can be a game-changer for building trust and loyalty.”
Regulatory compliance is typically invisible to users; this act lets them access that information in plain terms and weigh the risks for themselves.
“Companies that can turn technical compliance into intuitive, user-facing signals, like dashboards, safety labels, or opt-in transparency notices, will not only stay ahead of regulators but also strengthen customer confidence.”
By connecting product, ML, legal, and CX teams, organizations can deliver transparency and safety consistently and reduce the chances of customers receiving mismatched information about the AI.
“SB 53 also underscores the value of cross-functional collaboration between product, CX, legal, and ML engineering.”
Grover further frames SB 53 as a leading indicator, raising baseline expectations for AI governance and how it should be communicated to users.
“Even if your AI doesn’t yet qualify as ‘frontier,’ SB 53 sets expectations for auditability, reporting, and operational safety that are likely to influence all AI systems. CX teams should proactively consider how to signal compliance and model reliability, rather than reacting after the fact, to preserve trust as regulations evolve.”
2. Governance Moves Into the Customer Journey
In conversation with CX Today, Nik Kale, Principal Engineer, CX Engineering, Cloud Security & AI Platforms at Cisco, highlights how AI compliance has shifted from a back-office internal function to an integral, visible part of the customer experience.
As AI adoption has grown in recent months, compliance is becoming a visible part of the customer experience through disclosures, explanations, and escalation paths.
SB 53 and similar regulatory frameworks incentivize companies to surface their governance practices, even where regional law does not yet require it.
“Laws like California’s SB 53 reflect something already happening in practice: AI governance is moving out of the back office and into the customer journey.”
This gives customers a plain-language understanding of how the AI behaves and how changes or failures are handled, rather than keeping that knowledge locked inside internal processes.
“As AI systems scale and regulation catches up, customers will encounter compliance directly through clearer disclosures, more explainable behavior, and defined escalation paths when automation hits its limits.”
With higher standards in place, organizations will now have to design AI experiences with explanations, disclosures, and safe escalation as baseline expectations.
“Transparency requirements don’t just mitigate risk; they reset expectations. Customers start expecting AI-driven interactions to be understandable and accountable by default.”
3. Privacy & Data Control as Brand Differentiators
In discussion with CX Today, Ron De Jesus, Field Chief Privacy Officer at Transcend, argues that the act falls short of the transparency needed to give customers clarity and confidence in how their data is handled.
Whilst SB 53 addresses catastrophic risks, CX teams face a different challenge: the erosion of trust at an individual level when customers don’t understand how AI systems are using their data.
He points out that even though the law answers the rising demand for AI safety, it does little to help customers understand how their personal data is handled.
“The Act is a welcome step, but for CX teams, it’s the tip of the iceberg. SB 53 addresses catastrophic risks from advanced AI models by requiring the biggest developers to publish safety frameworks and report critical incidents. But CX teams deal with a different kind of risk, namely the slow erosion of trust when customers interact with AI without understanding what’s happening to their personal information.”
Despite this, the act is still valuable to customer experience: many customers are unaware of how AI interacts with their personal data, and that lack of transparency can lead to frustration or disengagement with a brand.
“Most people can’t tell you whether they’ve consented to AI training on their data, how it’s personalizing their experience, or even if they have a way to opt out.”
For CX teams, monitoring how AI interacts with customer data requires tools and processes that offer real-time visibility, ensuring that customers can understand the AI’s role and adjust their preferences accordingly.
“Teams need infrastructure that gives them real-time insight into how AI systems are interacting with user data. CX leaders should ask: when a customer uses our AI chatbot or sees a personalized recommendation, do they understand what’s powering it? Can they easily adjust their preferences? Does our privacy experience feel like part of our brand, or like legal boilerplate?”
Furthermore, building trust and transparency into the AI experience early is essential for understandable, actionable consent flows, strengthening AI’s position in CX as a core feature for long-term engagement.
“The best path forward regardless is to embed trust and transparency into AI experiences now. Bring privacy teams into product planning early, review AI consent flows with the same rigor applied to any other touchpoint, and make sure customers actually know what they’re agreeing to. That’s how you build loyalty and growth in the AI era.”
4. Tackling Customer Information Overload
On the other hand, increased transparency requirements could overwhelm customers with information they may not understand or care about.
SB 53 increases transparency obligations for developers of advanced AI systems, meaning customers will encounter more development disclosures, risking information overload and a lower-quality experience.
The challenge is not transparency itself but how it is delivered: developers who present disclosures in a complex, technical way risk losing customer trust and engagement.
Erika Sylvester, General Counsel and Head of Compliance at Authenticx, spoke to CX Today about how the risk with SB 53 is not lack of information, but excess information that customers struggle to interpret.
“In 2026, regulatory evolution will shape patient relationships through increasing requirements for transparency and consent. Organizations may be required to spell out, in far more detail than today’s privacy laws demand, exactly how patient data is being used.”
She explains that more transparency does not equate to better experiences: whilst highly detailed AI reports can build some trust, they also raise the risk of cognitive overload, reducing customer interest and leaving room for misinterpretation, or for the disclosure being disregarded entirely.
“That level of clarity has the potential to build trust, but there’s also a real risk of oversaturating patients with information. Some may not care, and others may not be able to fully understand it, which could ultimately create more confusion and negatively impact patient engagement.”
Sylvester expects one of two outcomes in 2026: national-level AI regulation that reduces fragmentation and standardizes requirements across the country, or continued state-by-state regulation with widely varying rules. In both scenarios, she says, the priorities around how AI is used will remain the same.
“When it comes to AI regulation in the U.S. in 2026, we’ll either see a shift back toward federal-level regulation, to avoid the kind of over-regulation that becomes a blocker for small businesses, or continued adoption of state-by-state rules with widely varying requirements. Either way, the focus will remain on transparency, bias and maintaining humans-in-the-loop.”
5. Proactive Governance Reduces Customer Uncertainty
This take argues that SB 53 acts as a catalyst for customer questions rather than a direct CX improvement on its own.
Regulations like SB 53 are valuable for raising questions but often don’t immediately change business operations; proactive governance frameworks, by contrast, give customers tangible proof of responsible AI use.
Organizations that respond proactively, with structured and visible governance, reduce confusion and build confidence.
In practice, CX outcomes improve when customers see evidence of responsible AI management, rather than just references to regulation or compliance.
Patrick Sullivan, VP of Strategy and Innovation at A-LIGN, explained to CX Today how the act increases public visibility but does not immediately change anything inside the business, creating confusion and uncertainty for customers.
“Customers are becoming more aware and knowledgeable about AI. While customers expect AI to be integrated into the buying experiences, they are also starting to ask more questions around responsible AI usage. Regulatory initiatives like that of California’s Transparency in Frontier AI Act tend to raise a lot of these types of questions, causing a lot of confusion for both business leaders and customers.”
From a CX standpoint, Sullivan argues that questions surrounding regulations change the customer conversation before the product itself changes, signaling perceived risk even where technical compliance is already established.
“This places the burden on compliance teams who will have to answer an influx of questions on how AI is used in their product and what they are doing to safeguard data. From the customer perspective, these regulations create a lot of uncertainty in the business when it becomes clear that nothing has actually changed.”
Moreover, he points out that SB 53 alone does not reassure customers if companies operate primarily reactively; the law is better seen as a stepping stone for developers to introduce their own frameworks in line with its guidelines.
“By moving away from reactive, disjointed, ad-hoc fixes and embracing a comprehensive and transparent governance framework, businesses build confidence with their customer base. The key here is that it’s no longer enough to rely on outside regulations to prove this commitment to governance. Customers want proactivity and proof.”
By adopting such a framework alongside state regulations, developers can address customer doubts in the moment, improving how customers buy and renew, and strengthening their trust in a company.
“An AI management system treats uncertainty as something leaders can manage, not something to wait out… being able to show tangible proof that you’re committed to responsible AI adoption can make or break the CX.”
6. Consumer Rights & Personal Data Protection
This final view sees SB 53 not merely as a set of internal compliance rules for developers, but as part of a broader trend in how governments should treat individual rights when AI systems use personal data and identity.
The law reflects the need to protect individual consumers’ rights to control how AI uses their personal information and likeness, and to receive clear transparency about which systems are handling that data and how.
The view here is that whilst SB 53 introduces early safety and transparency rules for advanced AI, it does not go far enough to protect individual consumers’ data and privacy rights when AI accesses personal content.
Darin Myman, CEO at DatChat, spoke with CX Today about the current policy gaps within SB 53.
“While this new law is an important first step in putting in the necessary guardrails on an industry that is growing faster than the existing regulatory environment can support, much more needs to be done very quickly to protect consumers, and their right to privacy.”
He further expresses concern about AI models having access to personal data and potentially distributing it without explicit consent, highlighting that this consumer risk is not properly addressed in the act.
“Almost unchecked, these models are training our private information such as our photos and videos that have been shared on social media.”
Myman calls for consumer rights over personal likeness to be prioritized, with personal data kept out of AI systems unless consent is given upfront.
“Let’s make sure that our likeness can never be used in an AI generated commercial without our permission, and possibly some compensation.”