Underwriters must guard against bias and discrimination, speaker says
Data scientists know that artificial intelligence and predictive modeling lead to some bias in underwriting. What they don’t always know is what form that bias takes or the extent of its negative impact.
More studies are needed, said Dorothy L. Andrews, senior behavioral data scientist and actuary with the National Association of Insurance Commissioners. The more data, the better, she added, because insurance underwriting is changing fast.
“We’ve gone from rating people based on who they are using traditional information from insurance applications, and relying on human underwriters, proficient in Excel and Google, to rating people based on how they behave,” Andrews said. “To understand behavior we are now relying on non-traditional data sources from third-party vendors for predictive insights as to what kind of insurance customer a person is going to be.”
Andrews recently delivered an hour-long presentation on “Defining Data Biases and Unfairly Discriminatory Considerations” to the Casualty Actuarial and Statistical Task Force, an NAIC body.
Bias and discrimination in insurance underwriting are a major topic in several NAIC working groups. Much of the work focuses on the potential problems accompanying artificial intelligence and predictive modeling.
Insurers are already facing liability from the power of big data and predictive modeling. For example, a 2017 study by Consumer Reports and ProPublica found disparities in auto insurance prices between minority and white neighborhoods that could not be explained by risk alone.
“The results revealed that members of non-white communities paid disproportionately higher premiums than what is paid in white communities for the same level of risk,” Andrews said. “Now, I’m not here to defend what ProPublica did. I’m just trying to make you aware of the kinds of analysis that are being done. And being done by non-actuaries.”
In some cases, insurers charged as much as 30% more for drivers in minority neighborhoods. Insurers defended the higher rates by arguing that the risk of accidents is greater in those neighborhoods, even for motorists who have never had an accident.
Three kinds of bias
Andrews used the iceberg model to illustrate the three levels of bias found in society. Just one of the three, statistical bias, sits above “the water line,” she noted.
“Human biases are just below the waterline and are more difficult to resolve than statistical bias,” Andrews said. “My mother used to say, ‘You can’t change people. They have to change themselves.’ That is a challenge with human bias. People have to recognize their biases and actively work to change them.”
Systemic bias is the most difficult to change. As an example, Andrews cited history books, where not every group is equally and accurately represented.
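Statistical bias sits above the waterline precisely because it can be measured directly from data. As a rough illustration of the kind of check involved (not a method Andrews presented; the column names, figures and pandas-based approach are illustrative assumptions), an analyst could compare the premium charged per dollar of expected loss across neighborhood groups, much as the ProPublica analysis did:

```python
import pandas as pd

# Hypothetical dataset: quoted premiums joined with modeled expected
# losses for two neighborhood groups. All values are illustrative.
quotes = pd.DataFrame({
    "neighborhood_group": ["A", "A", "A", "B", "B", "B"],
    "annual_premium":     [1200, 1150, 1180, 1480, 1520, 1455],
    "expected_loss":      [800,  790,  795,  805,  810,  800],
})

# Premium charged per dollar of expected loss, averaged by group.
# If risk fully explained price, these loadings should be close.
loading = quotes["annual_premium"] / quotes["expected_loss"]
rate_per_risk = loading.groupby(quotes["neighborhood_group"]).mean()
print(rate_per_risk)

# A ratio far from 1.0 flags pricing differences that the risk
# variable alone does not explain.
print("disparity ratio:", round(rate_per_risk.max() / rate_per_risk.min(), 2))
```

A disparity ratio well above 1.0 would echo the ProPublica finding: groups paying different amounts for the same level of risk.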
In this environment, insurers are making dramatic changes in underwriting, Andrews noted.
Keep studying the impact
The NAIC is surveying each segment of insurance to quantify exactly who is using AI, who wants to use it and what tasks they want it to do.
An initial survey of auto insurers found that 88% currently use, plan to use or plan to explore using artificial intelligence or machine learning as part of their everyday operations. Seventy percent of home insurers and 58% of life insurers reported the same.
In December, the NAIC adopted the Model Bulletin on the Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers. The bulletin is not a model law or a regulation.
It is intended to “guide insurers to employ AI consistent with existing market conduct, corporate governance, and unfair and deceptive trade practice laws,” the law firm Locke Lord explained.
These activities are part of ongoing efforts to establish fairness standards and ground rules for new modes of algorithmic underwriting.
It is crucial to have actuarial expertise on data modeling teams, Andrews explained, to help prevent flaws in the methodologies.
“One question that has become really important in insurance is: does bias equal unfair discrimination?” Andrews asked. “I think that there are real issues with third-party data. Often this data is leveraged for its correlational power, but its relationship to risk often cannot be rationalized. It contains proxy variables for protected class attributes, which are unlawful to price on.
“We need to make sure there’s more representation in our training data and on our modeling teams, because that will give us the best algorithmic outcomes.”
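The proxy-variable concern Andrews raises is something modeling teams can screen for directly. A minimal sketch of such a screen, again with hypothetical feature names, synthetic data and an illustrative 0.3 correlation threshold (none of this is prescribed by Andrews or the NAIC): before a third-party feature enters a pricing model, measure how strongly it tracks a protected attribute that is held out for testing only.

```python
import numpy as np
import pandas as pd

# Hypothetical third-party features plus a protected attribute used
# only for testing, never for pricing. All names are illustrative.
rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)                      # protected class flag
zip_density = protected * 0.8 + rng.normal(0, 0.5, n)  # correlated feature
shopping_score = rng.normal(0, 1, n)                   # unrelated feature

df = pd.DataFrame({
    "protected": protected,
    "zip_density": zip_density,
    "shopping_score": shopping_score,
})

# Correlation of each candidate feature with the protected attribute;
# a strong correlation flags a likely proxy even though the protected
# attribute itself never enters the pricing model.
for col in ["zip_density", "shopping_score"]:
    r = df["protected"].corr(df[col])
    flag = "POSSIBLE PROXY" if abs(r) > 0.3 else "ok"
    print(f"{col}: r = {r:+.2f}  ({flag})")
```

A feature that predicts a protected attribute this strongly can reintroduce discrimination by the back door, which is exactly the pricing risk Andrews describes.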
InsuranceNewsNet Senior Editor John Hilton covered business and other beats in more than 20 years of daily journalism. John may be reached at john.hilton@innfeedback.com. Follow him on Twitter @INNJohnH.
© Entire contents copyright 2024 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.