NAIC regulators debate the scope of AI guardrails

State insurance regulators are moving into the implementation phase of their guidance on artificial intelligence, following widening adoption of a model bulletin developed by the National Association of Insurance Commissioners.
During a recent Big Data and Artificial Intelligence Working Group discussion, an NAIC representative said about half of U.S. states have adopted the bulletin in full or in part, with several others considering similar measures. The growing uptake reflects heightened concern among regulators about potential consumer harm tied to AI-driven insurance practices.
“The main goal … is really to start a discussion on how regulators might operationalize the bulletin now that it’s adopted,” said Dorothy L. Andrews, senior behavioral data scientist and actuary at the NAIC Research and Actuarial Department. “This will be a high-level discussion on the key aspects of an insurer’s operation that can be measured as being in compliance with the bulletin.”
First AI bulletin
In December 2023, the NAIC adopted the Model Bulletin on the Use of Algorithms, Predictive Models and Artificial Intelligence Systems by Insurers. The bulletin is guidance rather than a model law and carries no independent legal authority of its own.
It defines key AI terms while intentionally omitting formal definitions for bias or harm, focusing instead on regulating “adverse consumer outcomes” that violate established insurance standards. Regulators are concurrently developing a four-level risk taxonomy to classify AI systems from low to unacceptable risk for closer oversight.
A key part of the proposed approach involves standardized reporting tools, including “model cards,” which function similarly to nutrition labels by outlining how AI systems are built, what data they use and what risks they pose, Stevens explained. Such tools could help identify which models deserve the most oversight.
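The NAIC has not prescribed a format for model cards, but the idea is structured documentation that travels with a model. A minimal sketch of what such a disclosure might capture, with purely illustrative field names and values, could look like this in Python:

```python
# Hypothetical model card structure -- the fields are assumptions about what a
# "nutrition label" style disclosure might contain, not an NAIC-prescribed format.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str             # the insurance decision the model supports
    training_data: list[str]      # internal and third-party data sources
    known_limitations: list[str]  # populations or scenarios where the model is weak
    risk_tier: str                # e.g., "low", "limited", "high", "unacceptable"
    validation_methods: list[str] = field(default_factory=list)

# Example usage with made-up values.
card = ModelCard(
    name="auto_claims_severity_v3",
    intended_use="Estimate claim severity to route adjuster workload",
    training_data=["internal claims history 2018-2023", "third-party vehicle data"],
    known_limitations=["sparse data for uninsured-motorist claims"],
    risk_tier="limited",
    validation_methods=["annual backtest", "score distribution drift monitoring"],
)
print(card.name, card.risk_tier)
```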
“Model drift is a major concern, because if models are no longer a good fit for the problem they were designed to address, then consumers are at risk of harm,” Stevens said. “There are lots of methods for testing model drift and validating models. The companies should detail those methods.”
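The bulletin does not mandate any particular drift test, but one widely used check is the Population Stability Index, which compares a model's score distribution at validation time against what it sees in production. The sketch below is illustrative only; the bin count and the rule-of-thumb threshold are common conventions, not regulatory requirements:

```python
# Illustrative drift check: Population Stability Index (PSI) between a baseline
# score distribution and current production scores. Larger values mean more drift.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    # Bin edges come from the baseline (e.g., validation-time) scores.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # guard against out-of-range scores

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor the proportions to avoid division by zero and log(0).
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Example with synthetic scores.
rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, size=10_000)     # scores at model validation
current = rng.beta(2.5, 4.5, size=10_000)  # scores observed in production
print(f"PSI = {population_stability_index(baseline, current):.3f}")
# Rule of thumb: values above roughly 0.25 are often flagged as significant drift.
```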
Data quality emerged as a central concern. Presenters highlighted risks tied to both internal and external data sources, noting that insurance data can reflect inherent biases — such as limited representation of uninsured populations — while third-party data may be used beyond its original purpose.
“For example, it would be difficult to determine whether more speeding tickets were written in some communities versus others because of overpolicing,” Stevens said. “Only a socio-technical analysis would uncover that. You would not see that issue in a mathematical analysis of the data.”
Operational challenges are also coming into focus. Regulators and industry representatives said implementing the bulletin will require additional staffing, training and coordination with existing oversight efforts.
Consumer advocates raised concerns about how AI systems are integrated into insurer workflows, warning that overreliance on automation could lead to gaps in accountability if human expertise is lost. They also emphasized the need for clear escalation processes when AI systems fail or produce questionable outcomes.
‘When the model can’t do something’
Eric Ellsworth, director of health data strategy for Consumers’ Checkbook/Center for the Study of Services, called for a “well-defined exception handling” process.
“So, when the model can’t do something that it’s known in advance that it can or won’t do that well, and there’ll be a really clear workflow for the control to come back from the model to someone,” he explained. “Those kinds of issues are critical for making sure that consumers can get issues resolved, because otherwise you have nobody’s home problem.”
Industry representatives, meanwhile, stressed the importance of maintaining confidentiality protections when insurers share data with regulators, particularly when reviews occur outside formal examinations.
State regulators participating in pilot programs are incorporating AI oversight into existing financial analysis and market conduct processes, with assurances that information collected will remain confidential under established regulatory authority.
“We have a series of questions really designed to … elicit kind of the use and the scope of AI of the company, and then, based on initial answers, may go deeper to get the best understanding of how that company is using AI,” explained Michael Humphreys, Pennsylvania insurance commissioner.

