NAMIC: Insurance AI regulation efforts driven by ‘unfounded notions’

Efforts to regulate the use of big data and artificial intelligence by insurance companies are rife with myths that threaten to harm policyholders, a leading trade group claims.
“[U]nfounded notions about the effects of using big data and AI in insurance are running rampant, inappropriately serving as the basis for and inappropriately driving creation of new policy relative to risk-based pricing,” reads the conclusion of the issue paper from the National Association of Mutual Insurance Companies.
Established in 1895, NAMIC represents mutual property and casualty insurance companies.
AI and big data are proving to be a natural fit for the insurance world. After all, data sharpens underwriting decisions, produces more accurate risk assessments and helps better connect insurance producers with potential buyers.
Forty-seven percent of technology executives say AI will have a significant impact on the insurance industry in the next three years, according to a recent LIMRA survey, but 48% of insurers do not have an AI training program yet.
Adoption of AI by insurers is being met with concerns from regulators, advocates and policymakers over the potential for proxy discrimination, a form of algorithmic bias. While rulemaking has proceeded slowly, some states have adopted statutes regulating insurers' use of AI.
Notably, the Colorado AI regulation requires life insurers to report how they review AI models and how they use external consumer data and information sources. Those sources include nontraditional data such as social media posts, shopping habits, internet-of-things data, biometric data and occupation information that does not have a direct relationship to mortality, among others.
Life insurance companies are also required to develop a governance and risk management framework that includes 13 specific components.
Although Colorado acted first, several other states are considering AI legislation to restrict how insurers handle personal and public data.
Insurance is risk-based
The rush to regulate the use of AI must take into account that insurance works best when underwriters get an accurate risk pool, explained Lindsey Klarkowski, NAMIC’s policy vice president in data science, AI/[machine learning], and cybersecurity.
Many of the AI bills “are taking a one-size-fits-all approach to applying standards to the operation of AI without recognizing distinct differences in how different industries operate,” Klarkowski said. “Insurance is distinct in function. It’s distinct in price, and it’s distinct in its legal framework from many other consumer products, because, by its nature, it is a risk-based product where the rate must be matched to risk.”
Klarkowski authored the issue paper titled, “Big Data, Artificial Intelligence and Risk-Based Pricing: Dispelling Five Common Myths.”
It is in the consumer’s interest for insurers to parse the data and create the most precise risk profiles, Klarkowski said. Some advocates worry that continued refinement of data and technology could lead to “a risk pool of one,” she added, excluding high-risk insureds.
“The more accurate risk rating is, the more insurers can take on riskier policyholders,” Klarkowski explained. “For example, identifying insurable risks in traditionally uninsurable areas, and then as a foundational matter, that risk pool of one argument ignores statistical credibility and the benefits of risk spreading.”
Insurance AI lawsuits
While AI continues to play a bigger role in the insurance world, discrimination complaints are growing as well.
State Farm was sued in the Northern District of Illinois over claims that its AI discriminates against Black customers. The class-action suit alleges that State Farm’s algorithms are biased against African American names.
Plaintiffs cited a study of 800 homeowners that found discrepancies between Black and white homeowners in how their State Farm claims were handled. Black policyholders faced more delays, for example.
Klarkowski noted that existing laws prevent discrimination.
“Regardless of whether the work is AI-assisted or whether the work is done by a human, there are still laws in place today that rates cannot be unfairly discriminatory,” she said. “There are laws in place today that the data points that are used and the rates that result must be actuarially sound.”
© Entire contents copyright 2025 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.