NAIC urged to address issues related to use of AI in insurance
A host of consumer advocates and organizations have brought new concerns to the attention of the National Association of Insurance Commissioners’ Innovation, Cybersecurity, and Technology (H) Committee, urging it to address how insurers are using AI even as regulations remain under development.
At a recent NAIC meeting, advocates presented on three specific topics related to the use of AI in insurance:
1. Adequate testing for unfair discrimination where multiple data sets are being used
2. Proposed development of a new NAIC Model Privacy Act
3. Use of AI in health insurance specifically
Miguel Romero, NAIC director, P&C Regulatory Services, noted that the concerns raised by the various consumer groups during the meeting will help inform related work already underway at the NAIC.
Additionally, consumer groups said they will forward formal comments and suggestions to the H Committee for later consideration.
Testing multiple data sets
In urging the NAIC H Committee to consider the implications of multiple data sets being used in AI, Brendan Bridgeland, attorney and policy director, Center for Insurance Research, noted that such combinations of data sets have not been sufficiently tested to ensure unfair discrimination is avoided.
“What I wanted to talk about and highlight a little is how using multiple data sets and factors that may have been looked at individually but not been tested together, in particular the outcomes tested together, could result in unfair discrimination on a large-scale and systemic basis if we’re not careful,” he said.
He explained that risk classifications can overlap or “prove to be duplicative proxies of another risk factor that has already been incorporated elsewhere,” and that the more data elements are being used, the higher the likelihood that this will occur and lead to discrimination.
As an example, he said factors such as credit score, bankruptcy data, mortgage amount, lender and voter information “appear to measure the exact same factor.”
“That’s why we believe it’s very important that regulators develop a robust testing program capable of spot-checking outcomes between similar consumers,” Bridgeland noted.
New privacy model
Consumer advocates Brenda Cude, University of Georgia professor emeritus, and Harry Ting also urged the committee to consider a new Model Privacy Act that assumes an opt-in rather than opt-out approach to data and privacy.
Ting suggested this long-pending exercise should be a priority given the enormous growth in the collection of personal data for AI.
“I think the primary goal of regulation would be to bring the collection, processing and transfer of personal data under control, and that this should be achieved primarily by restricting what insurers can do when they collect data and strengthening what they’re required to do to protect consumers’ personal information,” Cude said.
While she said privacy rights are “fine” as part of regulation, she added that they should be a secondary goal, as privacy rights alone are inadequate and it is “impractical” to expect consumers to protect themselves.
“I would expect regulation that emphasizes data minimization, clear expectations about the policies and procedures required to dispose of personal information when it no longer serves a business purpose, timely and transparent consumer notices, prohibitions on insurers discriminating against consumers who opt out of disposing personal information and an opt-in rather than an opt-out approach,” Cude said.
Specific healthcare AI regulation
Consumer advocates Lucy Culp, executive director of state government affairs at the Leukemia and Lymphoma Society, and Adam Fox, deputy director, Colorado Consumer Health Initiative, reminded the H Committee that the use of AI in health insurance differs from other lines of insurance in significant ways, and that regulation should reflect this.
Fox noted that women and people of color tend to be underrepresented in data sets, meaning that algorithms and AI systems using those data sets without review, oversight and active adjustment run the risk of perpetuating bias and discrimination.
Additionally, Culp called attention to concerns related to coverage denials where prior authorization and other utilization management tools are being used.
“Reports have indicated that insurers are increasingly relying on AI systems to supplement or even supplant individualized decision-making, and the judgment of medical professionals and prior authorization or even levels of care assessment or other coverage determinations,” she said.
To this end, she suggested the H Committee and/or individual states consider rules similar to those issued by the Centers for Medicare & Medicaid Services and the Department of Health & Human Services earlier this year.
CMS prohibited Medicare Advantage plans from relying solely on AI to make coverage determinations or terminate a service, while HHS’ Office for Civil Rights required covered entities to make reasonable efforts to identify uses of AI tools that could cause discrimination and to mitigate those risks.
The Center for Insurance Research, Inc., out of Cambridge, Massachusetts, is a non-profit organization and consumer advocacy group founded in 2002.
The Leukemia and Lymphoma Society, founded in 1949, is a non-profit research organization dedicated to fighting cancer worldwide.
The Colorado Consumer Health Initiative is a non-profit health advocacy organization founded in 2000.
Rayne Morgan is a journalist, copywriter, and editor with over 10 years’ combined experience in digital content and print media.
© Entire contents copyright 2024 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.