Why AI Developers Should Pay Attention to Utah’s Mitigation Model
Executive Summary
- Utah created a voluntary, structured AI pilot framework that balances innovation with accountability.
- Regulatory Mitigation Agreements (RMAs) pair controlled deployment with enforcement forbearance, giving companies regulatory clarity.
- Developers working in high-risk sectors should study this model before scaling AI across multiple states.
In 2025, the real challenge for AI companies is no longer building models. It is deploying them at scale without stepping into regulatory quicksand.
Right now, businesses are moving beyond pilots and deploying live models. AI is embedded in healthcare workflows, financial systems, education platforms, and customer-facing decision engines. At the same time, states are introducing hundreds of AI-related bills. Innovation is moving fast, and oversight is trying to catch up. Once again, Utah is ahead of other states in developing a practical solution.
Utah Didn’t Just Regulate AI
In May 2024, Utah became the first state to address AI in consumer interactions through the Utah AI Policy Act. The state did not default to prohibition. Instead, it built a framework centered on transparency requirements for high-risk companies, clearer liability definitions and defenses, and a structured innovation pathway through Regulatory Mitigation Agreements (RMAs).
That third pillar is what sets Utah apart.
Rather than positioning regulators as adversaries waiting to enforce penalties, the state created a collaborative pilot structure. Developers and companies can voluntarily enter into agreements with the Office of Artificial Intelligence Policy and relevant agencies to test high-risk AI applications in a controlled environment.
This is not a theoretical policy shift. Utah is actively running pilots in healthcare, one of the most regulated and sensitive sectors in the country.
What an RMA Actually Looks Like
An RMA is not a blanket safe harbor. It is a negotiated agreement with defined scope, oversight, reporting obligations, and operational guardrails.
Utah currently has three healthcare-focused RMAs covering teen mental health support through ElizaChat, AI-assisted dental diagnosis through Dentacor, and AI-enabled prescription refill workflows through Doctronic.
The Doctronic agreement is particularly instructive.
It runs twenty-four pages, with only six pages containing the core agreement terms and the remainder outlining detailed schedules for workflow design, risk monitoring, performance metrics, and medication scope.
The agreement provides enforcement relief during the pilot period, conditioned on the company's compliance with its terms. It defines the medication scope, limiting the system to 192 approved prescriptions. It requires identity verification and layered prescription validation. It mandates physician review thresholds, real-time AI monitoring, and monthly reporting to the state. It also requires performance benchmarking and patient satisfaction tracking.
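To make the idea of layered guardrails concrete, here is a minimal sketch of how checks like those the agreement describes (identity verification, an approved-medication scope, and a physician-review threshold) might be encoded. Every name, threshold, and formulary entry below is a hypothetical placeholder, not Doctronic's actual implementation.

```python
# Illustrative sketch of layered refill guardrails like those the RMA describes.
# All names, thresholds, and formulary entries are hypothetical placeholders.
from dataclasses import dataclass, field

# Stand-in for the 192-medication approved scope defined in the agreement.
APPROVED_FORMULARY = {"lisinopril", "metformin", "atorvastatin"}

@dataclass
class RefillRequest:
    patient_verified: bool        # did identity verification pass?
    medication: str
    ai_confidence: float          # model confidence in the refill recommendation
    audit_log: list = field(default_factory=list)  # supports monthly reporting

def evaluate_refill(req: RefillRequest, review_threshold: float = 0.95) -> str:
    """Return 'approve', 'physician_review', or 'reject'; log every decision."""
    if not req.patient_verified:
        req.audit_log.append("rejected: identity not verified")
        return "reject"
    if req.medication.lower() not in APPROVED_FORMULARY:
        req.audit_log.append("rejected: medication outside approved scope")
        return "reject"
    if req.ai_confidence < review_threshold:
        req.audit_log.append("escalated: below physician-review threshold")
        return "physician_review"
    req.audit_log.append("approved: all layered checks passed")
    return "approve"
```

Note the ordering: hard regulatory limits (identity, scope) reject outright, while uncertainty routes to a human physician rather than to automation.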
This is structured experimentation.
The state provides breathing room, and the company accepts documentation and accountability. That balance is what makes the model worth reviewing.
Why Utah’s AI Laws Matter
Recent federal guidance emphasizes minimally burdensome AI oversight, and there is active discussion about preempting overly aggressive state laws.
Utah’s framework may endure where others face challenge because participation is voluntary, oversight is targeted rather than sweeping, the primary mechanism is enforcement forbearance rather than new mandates, and agreements are narrow and specific to defined use cases.
This approach is not a broad prescriptive regulation. It is structured validation.
If courts begin drawing lines between acceptable and excessive AI governance, Utah's model will likely be viewed as a reasonable benchmark rather than as heavy-handed regulation.
The Real Problem: Multi-State AI Deployment
The issue for developers is not just Utah. It is the patchwork of all the other states.
Colorado, Texas, and other states are moving in different directions. Hundreds of AI-related bills are under consideration nationwide. A company deploying AI across multiple jurisdictions will face increasing compliance friction.
Utah’s pilot model offers something most frameworks do not: practical clarity.
A completed RMA generates documentation, performance data, workflow validation, and regulator familiarity. That creates credibility that companies can leverage in other jurisdictions.
Sophisticated developers will treat structured pilots as strategic assets rather than compliance burdens.
A Practical Framework for Evaluating AI Risk
An AI deployment is high risk when it influences decisions that affect people's health, finances, rights, or safety. That includes systems that shape medical care or access to credit or employment, process sensitive personal or biometric data, or produce legally consequential outcomes. In these environments, errors are not minor inconveniences; they can result in financial loss, physical harm, or regulatory exposure. Before deploying AI in a high-risk setting, leadership should walk through five questions.
They should first determine what type of AI system they are deploying, whether traditional machine learning, generative AI, or autonomous agentic systems, because capabilities directly influence risk exposure.
They should then assess whether the specific use case genuinely benefits from automation and gauge the organization's tolerance for error in that context.
They must review what data feeds the system and determine whether it relies on curated internal datasets or broader external inputs, recognizing that data governance often dictates safety outcomes.
They should carefully examine what actions result from system outputs, distinguishing between informational, advisory, and operational decisions, and confirming whether autonomous action is legally permissible.
Finally, they should define how accuracy is measured, including acceptable false-positive and false-negative rates, and determine whether minor inaccuracies undermine the system’s value proposition.
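The five questions above can be captured as a simple readiness checklist. This is an illustrative sketch only; the field names and the pass criterion are assumptions, not a prescribed assessment format.

```python
# Hypothetical readiness checklist mirroring the five evaluation questions.
# Field names and the pass criterion are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeploymentRiskReview:
    system_type: Optional[str] = None       # traditional ML, generative, or agentic
    error_tolerance: Optional[str] = None   # tolerance for mistakes in this use case
    data_sources: Optional[str] = None      # curated internal vs. external inputs
    action_level: Optional[str] = None      # informational, advisory, or operational
    accuracy_targets: Optional[str] = None  # acceptable false-positive/negative rates

    def unanswered(self) -> list:
        """List the questions leadership has not yet answered."""
        return [name for name, value in vars(self).items() if value is None]

    def ready_for_scaled_deployment(self) -> bool:
        # Per the framework: every question must have a clear answer.
        return not self.unanswered()
```

The point of the structure is less the code than the discipline: any `None` left in the review is a question leadership cannot yet answer.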
If leadership cannot clearly answer these questions, the company is not prepared for scaled deployment, and certainly not for regulator engagement.
When an RMA Makes Strategic Sense
RMAs are not appropriate for every AI application.
They are most compelling when the use case carries elevated risk, when regulatory validation improves market access, when collaboration with state authorities strengthens credibility, and when structured testing is desired before broader rollout.
There is a cost. Documentation requirements are substantial. Reporting obligations are ongoing. Oversight is continuous.
But so is regulatory uncertainty.
Businesses that treat compliance as an afterthought will struggle. Companies that incorporate structured oversight into their go-to-market strategy will move faster and with greater confidence.
Emerging Best Practices from Utah’s Experience
Utah’s pilots reinforce several governance principles.
Businesses must define the scope correctly, clearly outlining what the AI system can and cannot do. They must implement continuous monitoring rather than relying on one-time validation. They must maintain meaningful human oversight for consequential decisions. And they must make sure the documentation reflects operational reality rather than aspirational policy language.
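The continuous-monitoring principle above can be sketched in a few lines. The metric names, thresholds, and report shape below are assumptions for illustration, not Utah requirements.

```python
# Illustrative continuous-monitoring sketch for the governance principles above.
# Metric names and thresholds are assumptions, not regulatory requirements.
from collections import Counter

class GovernanceMonitor:
    def __init__(self, max_error_rate=0.02, min_human_review_rate=0.10):
        self.counts = Counter()
        self.max_error_rate = max_error_rate
        self.min_human_review_rate = min_human_review_rate

    def record(self, outcome: str) -> None:
        """Record each decision outcome: 'correct', 'error', or 'human_reviewed'."""
        self.counts[outcome] += 1

    def monthly_report(self) -> dict:
        """Aggregate metrics and flag alerts, as a monthly filing might."""
        total = sum(self.counts.values()) or 1  # avoid division by zero
        report = {
            "total_decisions": sum(self.counts.values()),
            "error_rate": self.counts["error"] / total,
            "human_review_rate": self.counts["human_reviewed"] / total,
            "alerts": [],
        }
        if report["error_rate"] > self.max_error_rate:
            report["alerts"].append("error rate above threshold")
        if report["human_review_rate"] < self.min_human_review_rate:
            report["alerts"].append("human oversight below expected rate")
        return report
```

The design choice worth noting: alerts fire from ongoing production data, not from a one-time pre-launch validation, which is exactly the shift the pilots reinforce.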
Most AI failures are not model failures. They are governance failures.
Utah’s framework forces governance maturity early in the process, which may be its most significant contribution.
State-Backed Clarity as Competitive Advantage
In regulated sectors, credibility creates adoption.
Participation in a structured pilot signals that a company is willing to operate within defined rules, which matters most in healthcare, finance, and education.
As AI becomes more widely used in operational systems, trust will separate long-term leaders from short-term opportunists. State-backed validation can help a company stay ahead of the competition.
Utah’s model demonstrates that innovation and oversight are not mutually exclusive. When designed properly, they reinforce one another.
How Tanner Helps Businesses Navigate AI Risk
Deploying AI in regulated environments requires governance discipline in addition to technical capability.
Tanner helps companies build that discipline before regulators demand it.
We conduct IT assessments of AI systems to evaluate whether controls operate effectively, whether documentation accurately reflects the system design, and whether governance structures can withstand regulatory review.
We perform AI assessments that analyze architecture, use case alignment, data governance, output controls, and accuracy thresholds, identifying gaps before they become regulatory or operational failures.
We also conduct IT risk assessments to establish performance benchmarks, monitoring frameworks, and reporting structures that work in practice rather than exist only in policy documents.
The outcome of these assessments is structural discipline. Companies leave with clearly defined accountability across teams, documentation that reflects how the system operates and can withstand external scrutiny, and monitoring frameworks that align with real-world risk rather than theoretical policy.
AI regulation will continue to evolve. Companies that build adaptable governance now will move faster and face fewer surprises.
If you are deploying AI in a high-risk environment or considering structured engagement with regulators, it is better to address risk now than react under pressure later. Contact us today to learn how we can help business leaders set up their AI infrastructure for the future.
Schedule a Call