4 regulatory trends for AI use in insurance for 2024
In a webinar addressing some of the legal challenges of AI use in insurance, law firm Locke Lord identified four key regulatory trends insurance firms and professionals can expect to see this year:
1. Stronger regulations
2. More scrutiny of AI use
3. More legal action
4. Beefing up of expertise
Paige Waters, Locke Lord Chicago partner, encouraged businesses to be proactive rather than reactive to these trends.
“We recommend that companies that are using AI not wait too long before you start putting your framework and governance structures in place,” she said.
Regulations to become more developed
With the popularity of generative AI tools such as GPT, many regulatory bodies have already started to implement new rules regarding their fair use. The White House has also issued an executive order related to AI use.
Locke Lord expects most states to follow the existing National Association of Insurance Commissioners (NAIC) model bulletin as a guideline.
“You’re likely to see, I would think many, if not most states, issuing some sort of bulletin or circular letter along the lines of the model NAIC bulletin,” Stephanie O’Neill Macro, of counsel at Locke Lord Chicago, said.
However, she said some states may also “just say nothing and simply rely on existing laws to speak for themselves and then apply those laws to a company’s use of artificial intelligence.”
More scrutiny of AI use
Regulators are also expected to more closely monitor AI use, including how well a company is documenting its processes and procedures with respect to AI.
“I think departments of insurance are also going to expand their market conduct exam in terms of looking for your governance and framework and how it applies to your use of artificial intelligence,” O’Neill Macro said.
For example, they could start asking questions about whether AI use is part of the existing governance and risk management strategy; whether there is a separate strategy for AI use; and how those interrelate.
“You’re also likely to see departments of insurance, as you do your rate and underwriting rule filings with the various states, they’re going to start to scrutinize more of those filings when they involve the use of artificial intelligence [or] third-party data,” O’Neill Macro said.
The increased scrutiny is also likely to include more Securities and Exchange Commission sweep exams examining how AI affects broker-dealers and investment advisors.
“The SEC has been conducting sweep exams that specifically address the use of AI by investment advisors, and they’re focusing these exams on marketing, models, third-party vendors and compliance training,” Waters added.
Legal fallout ahead
As more companies begin to use AI, Locke Lord expects there will be more class actions alleging unfair discrimination.
Industry experts have noted the potential for generative AI to have built-in biases, which then presents a challenge for insurers using it for risk assessment. Insurance firms could be held legally responsible for unintended bias in AI use.
Additionally, both Locke Lord panelists in the webinar noted that the Federal Trade Commission recently strengthened its authority to issue civil investigative demands with respect to AI.
The FTC has already issued a few enforcement orders related to AI, and while those have not been in the insurance industry, both Waters and O’Neill Macro encouraged insurance professionals to stay abreast of these developments.
Added AI expertise
Regulators are also expected to improve their own knowledge about AI so that they can better monitor its use, including hiring third-party specialists if needed.
“You’re going to see the regulators become increasingly more knowledgeable about this and where they need to build up their staff to be able to monitor the insurance industry’s use of AI,” O’Neill Macro said.
She noted that both the federal government and state insurance departments are hiring or training staff to adequately address AI issues.
“They’re bringing in experts, data scientists…getting up to speed and retooling to the extent they need to, to be able to regulate AI.”
Recommended actions for insurance firms
Waters advised insurance firms and professionals to get buy-in from the top down, involve the board of directors and establish interdisciplinary, cross-functional teams for their AI framework.
“You need somebody from compliance, legal, business, IT, so that you can cover all of the issues that are impacting all the various areas of the organization,” she said.
She said firms should also determine their risk tolerance for AI use and whether they will develop their own AI tools or use third-party AI.
At the same time, firms should maintain written policies and procedures as well as a record of all the AI tools being used.
“A lot of companies are taking existing infrastructures and just expanding them to address the AI issues,” Waters said. “As more regulation comes out, it may go in different areas. So, whatever you’re doing needs to be flexible.”
Locke Lord was formed in 2007 as a merger between Locke Liddell & Sapp and Lord Bissell & Brook LLP. Its key sectors include finance & financial services, insurance & reinsurance, private equity, energy & infrastructure and pharmaceuticals.
Rayne Morgan is a content marketing manager with PolicyAdvisor.com and a freelance journalist and copywriter.
© Entire contents copyright 2024 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.