Virginia is poised to pass a state-wide law that will regulate the use of “high-risk” AI.
The High-Risk Artificial Intelligence Developer and Deployer Act will impose new compliance requirements on businesses using “high-risk” AI systems that impact Virginia consumers.
The legislation covers AI systems that independently make or heavily influence important consumer decisions, affecting areas like customer service automation, personalization, and recommendations.
Having been officially passed by the Virginia state senate, the act must now be signed into law by the Virginia Governor; once signed, the regulations will come into effect on July 1, 2026.
But what exactly is meant by “high-risk” AI, and how will this impact customer service and experience teams moving forward?
Understanding the Letter of the Law
In discussing the news, The Contact Center AI Association outlined the following five areas as instances of “high-risk” AI that could relate to CX:
- When it’s used to automate decisions on customer eligibility for products or services.
- When it’s used to generate personalized financial offers and recommendations.
- When it’s used to determine access to premium services or customer tiers.
- When it’s used to resolve disputes and process customer claims automatically.
- When it’s used to influence credit approvals and financing options.
Virginia’s law will focus on AI solutions that are “specifically intended to autonomously” make decisions.
While these changes may impact businesses currently deploying AI systems, the 16-month window before the Act takes effect will help CX teams get up to speed.
Readers will note that the full name of the law references developers and deployers; this is because the legislation includes specific guidelines for the two roles.
Businesses that develop or significantly modify AI-driven customer experience systems are classified as developers under Virginia’s law.
Those that fall under this categorization must take reasonable steps to prevent discrimination, disclose system purposes and limitations, provide documentation for bias monitoring, and update disclosures within 90 days of major changes.
Under the legislation, organizations using AI systems for customer interactions are classified as deployers.
Designated deployers must have a risk management policy for AI tools, conduct impact assessments before deployment, and inform customers when AI is involved in decision-making.
Adverse decisions must be explained with a chance for correction, and documentation must be kept for at least three years.
In addition to these two classifications, the law also includes specific regulations for the use of generative AI (GenAI).
The legislation mandates detectable markers or identification methods when GenAI is used to create synthetic content (audio, video, or images) in customer experience applications.
This applies to AI-generated product demos, virtual try-ons, AI-voiced customer service, and personalized marketing.
However, there are exceptions for creative works and artistic expressions, allowing their use in marketing and branded content.
The use of AI of any form in the following scenarios also includes varying degrees of exemption:
- Anti-fraud technologies (except those using facial recognition)
- Cybersecurity tools used in customer data protection
- HIPAA-covered entities in specific healthcare CX scenarios
- Financial institutions following equivalent federal standards
Organizations that are not exempt and are found to have broken the new law face fines of up to $1,000 per instance for non-willful violations, and up to $10,000 per instance for willful violations.
Each affected customer is considered a separate violation, meaning that the potential fines incurred could add up to a significant cost.
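To illustrate how quickly per-customer penalties compound, here is a minimal sketch of the exposure math. The figures come from the Act's stated maximums; the customer counts are hypothetical examples, not figures from the legislation.

```python
# Maximum fines per violation under the Act (USD).
NON_WILLFUL_MAX = 1_000
WILLFUL_MAX = 10_000

def max_exposure(affected_customers: int, willful: bool = False) -> int:
    """Each affected customer counts as a separate violation,
    so total exposure scales linearly with the number affected."""
    per_instance = WILLFUL_MAX if willful else NON_WILLFUL_MAX
    return affected_customers * per_instance

# A non-willful violation affecting 500 customers:
print(max_exposure(500))                 # up to $500,000
# The same incident, if deemed willful:
print(max_exposure(500, willful=True))   # up to $5,000,000
```

Even a modest-sized incident can carry six- or seven-figure maximum exposure once each affected customer is counted separately.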
An Emerging AI Trend
Virginia may be the latest state to introduce AI-specific regulations, but it is not the first.
Colorado became the first state in the US to officially enact a comprehensive consumer protection regulation aimed at ensuring that AI is used fairly and without discrimination, providing risk-based guidelines for AI deployment to protect consumers.
California, Illinois, Minnesota, and Utah are also in the process of introducing some form of AI regulation.
Outside of the US, the EU has already introduced an AI Act, and Gartner predicts that the bloc may go as far as incorporating “the right to talk with a human” into its consumer protection laws within the next three years.
It is clear that as AI matures, governance of the technology will become more complex and more closely enforced.
Customer service and experience professionals looking to maximize the potential of their AI offerings will need to stay abreast of all the latest laws and regulations or risk legal and financial troubles.