‘Silent AI’: Consulting firm alerts insurers about risks
As use of artificial intelligence in the insurance industry expands, global consulting firm Alpha FMC has cautioned insurers to be aware of unforeseen risks that can lead to significant financial losses — a concept it refers to as “Silent AI.”
“The concept of ‘Silent AI’ represents a challenge for insurers, as they need to anticipate the emerging risks associated with the use of AI and adapt their products to meet the new needs posed by this technology,” Pauline Ratajczak, Alpha FMC consultant, told InsuranceNewsNet.
Insurers in the United States are facing increasing pressure to adopt AI, due to growing competition and client demand, while simultaneously lacking standardized regulation that would enable them to uniformly navigate AI usage. The National Association of Insurance Commissioners is working on a Model Bulletin to guide the use of AI in insurance. However, insurers have no empirical data or theoretical models that estimate the frequency of potential losses. Additionally, most of the related legal cases have revolved around copyright infringement rather than unforeseen risks.
Ratajczak acknowledged that it is complex for insurers to understand AI-associated risks in the current stage of usage. However, she said that by fully assessing and reassessing existing policies and products, insurers can better identify gaps where they may be open to AI risks. From there, they may develop products or introduce clauses specifically designed to protect against those risks.
The emergence of Silent AI
“Silent AI” refers to potential risks in traditional property and liability insurance policies that are associated with the use of artificial intelligence but are neither explicitly included nor excluded.
For example, it can apply to an insurance policy that did not originally take AI-related risks into account when it was issued. Those risks could be considered “silent” because they were not specifically addressed, and the policy could unintentionally cover them because of an overly broad definition and/or the absence of specific exclusions.
Silent AI also encompasses economic losses and damage caused by the underperformance of AI systems, which is a particular concern due to the lack of precise empirical data. For example, if the technology fails to accurately assess product quality, the result could be a product recall or even customer injury.
Ratajczak pointed out that “involuntary exposure” to such unforeseen risks can lead to significant financial losses for insurers if not mitigated. It can also deter businesses from adopting AI.
“The main risk is having to cover unforeseen claims and losses, without having assessed or priced these risks in advance,” she said.
However, Silent AI doesn’t only affect insurers. They may be on the “front lines” of the issue, but AI providers, businesses that use AI, regulators and individual customers all stand to be affected, Ratajczak noted.
“AI providers need to understand how their products and services are covered by insurance. Developers need to ensure that the risks associated with their technologies are well managed, both in terms of third-party liability and user safety,” she said.
Regulators must also be aware of Silent AI risks when developing guidelines to ensure AI risks are properly managed by insurers. According to Ratajczak, some international regulators have already taken a proactive approach to this.
“These initiatives do not directly oblige insurers to cover AI-related risks, but they do create a regulatory framework that will make the management of AI-related risks more explicit and mandatory.”
Reassessing risk
Before introducing products or clauses to address Silent AI, insurers should start by assessing their existing policies, Ratajczak suggested.
“Players need to assess the AI-related risks lurking in existing products and the risks that remain underinsured before they consider offering new insurance products,” she said.
As an example, insurers should determine whether physical damage caused by AI is likely to be covered by property or liability insurance, or whether specific risks associated with the use of generative AI would only be covered by specialized products, such as cyber insurance.
Once insurers have assessed AI-related risks in their insurance policies, they can then improve existing policies in specific ways, such as by adding riders to cover AI-related risks or clauses to exclude risks arising from opaque AI decisions. Ratajczak noted that some reinsurers have already begun doing this by developing specific contracts to cover AI risks.
“These offers are aimed at AI providers to cover damage linked to the underperformance of their systems. This means that if an AI solution fails to meet expectations, the policy can intervene to mitigate users’ financial losses,” she said.
Additionally, she said insurers may choose to work with AI experts to develop, implement and support solutions to protect against future risks.
Alpha FMC is a consulting firm specializing in insurance and asset & wealth management. Founded in 2003, its team of more than 1,000 consultants operates globally.
© Entire contents copyright 2024 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.