Industry trades, consumer advocates find fault with NAIC draft model on AI
State insurance regulators received 202 pages of comments on a draft model bulletin on the use of artificial intelligence (AI) and related technology by insurers.
A National Association of Insurance Commissioners working group published its Model Bulletin on the Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers in July. Comments were accepted through Sept. 5.
Most of the 24 comment letters came from industry trade associations seeking tweaks to a document with which they are mostly satisfied. Consumer advocate Birny Birnbaum is not at all satisfied and filed comments totaling 58 pages.
“We believe the process-oriented guidance presented in the bulletin will do nothing to enhance regulators’ oversight of insurers’ use of AI Systems or the ability to identify and stop unfair discrimination resulting from these AI Systems,” wrote Birnbaum, executive director of the Center for Economic Justice.
The bulletin is not a model law or a regulation. It is intended to “guide insurers to employ AI consistent with existing market conduct, corporate governance, and unfair and deceptive trade practice laws,” the law firm Locke Lord explained.
Meanwhile, regulators recently released a new batch of AI survey data, with 70% of the reporting homeowners’ insurers indicating that they “currently use, plan to use, or plan to explore using” AI.
Insurers are intrigued by the many uses of AI in all aspects of operations. But regulators are struggling with how to oversee the sweeping technology.
A step back from principles
In August 2020, the NAIC adopted guiding principles on artificial intelligence after robust discussions. Regulators added language encouraging insurers to take proactive steps to avoid proxy discrimination against protected classes when using AI platforms.
The NAIC guiding principles are based on the Organization for Economic Co-operation and Development’s AI principles that have been adopted by 42 countries, including the United States.
But Birnbaum, along with some commenters at the NAIC summer meeting in August, says the draft model bulletin abandoned the effort to fight proxy discrimination inherent in many AI and big data uses.
“In place of guidance on how to achieve the principles and how to ensure compliance with existing laws, the draft tells insurers what they already know: AI applications must comply with the law and insurers should have oversight over their AI applications,” Birnbaum wrote. “The draft bulletin fails to provide essential definitions – it doesn’t even define proxy discrimination. It not only fails to address structural racism in insurance, it incorrectly tells insurers that testing for protected class bias may not be feasible.”
A principles-based approach to racism in insurance is neither necessary nor desirable, Birnbaum said. The model bulletin provides “no guidance regarding the actual outcomes a regulator expects and how the insurer should demonstrate those outcomes,” he added.
“AI System governance should start with the outcomes desired by regulators through the AI Principles, and testing to measure and assess the outcomes for fair and unfair discrimination should be the foundation of the governance,” Birnbaum explained. “You can’t evaluate something unless you measure it, and the draft bulletin offers no metrics or methods for measuring outcomes.”
Third-party AI use
Industry concerns start with how third-party providers are subject to oversight. The draft model requires that third parties meet the standards expected of insurers.
Allowing insurers to include a provision in their third-party contracts giving them the right to audit third parties could be helpful, the National Association of Mutual Insurance Companies said in a comment letter, although “many vendors may be resistant to such provisions.”
“Eventually all third parties will be using AI, so this effectively requires the industry to audit every third party even in cases when due diligence does not uncover an issue or there is low risk,” wrote Andrew Pauley, public policy counsel for NAMIC. “This would be extremely costly and resource intensive.”
‘Bias’ not found in the code
NAIC regulators periodically butt heads with the National Council of Insurance Legislators, whose members resent the intrusion on their lawmaking turf. The AI issue is no exception, as Will Melofchik, general counsel for NCOIL, made clear in a comment letter.
“Any bulletin issued by an insurance regulator must be carefully crafted to carry out its necessary role of applying existing, codified law to market conditions,” Melofchik wrote, “and to avoid any hint of establishing new legal regulatory standards.”
Specifically, NCOIL objected to language instructing insurers to practice “bias analysis and minimization,” and to avoid “unfair bias.” This establishes “bias” as a regulatory standard, he added, but the word is not found in the insurance codes.
The draft model defines “bias” as “the differential treatment that results in favored or unfavored treatment of a person, group, or attribute.” But the business of insurance involves discriminating between, and classifying, risks, Melofchik wrote.
If “bias,” as the draft model defines it, “is based on risk, then the state insurance statutes authorize (and require) this result – unless the classification method is itself a protected class,” he added.
InsuranceNewsNet Senior Editor John Hilton covered business and other beats in more than 20 years of daily journalism. John may be reached at john.hilton@innfeedback.com. Follow him on Twitter @INNJohnH.
© Entire contents copyright 2023 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.