Insurance regulators are asking if AI models are biased

Regulators are increasingly interested in whether the AI models insurers use are biased, producing unfairly discriminatory outcomes on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.

In 2021, Colorado Governor Jared Polis signed SB21-169, a law requiring insurers to test whether the use of big data in their processes is discriminatory. In May, the California Department of Insurance sent surveys to property and casualty insurers to learn how they are using tools such as big data, artificial intelligence and machine learning in their decision-making processes.

Recent research from Arizent suggests that minimizing risk is a more common priority among insurance executives this year. At the same time, pilots of and plans for advanced analytics on big data have increased. With many factors contributing to claim severity, insurers are likely looking to revamp analytics efforts that date back to the early days of big data.

Cathy O’Neil, an independent data scientist, said actuaries are trained to think about risk and are good at it. But the decisions they make, which become the data used to train AI models, may carry inherent bias.

“Model bias is almost always unintentional,” O’Neil said. She is the founder of O’Neil Risk Consulting and Algorithmic Auditing (ORCAA), an algorithmic auditing company working with insurance regulators, including the Colorado Division of Insurance, and attorneys general across the U.S. on how to understand and mitigate algorithmic risk.

“[Actuaries] can, at a very granular level, figure out somebody’s risk in terms of insurance, in terms of claims,” she said. “On the other hand, there’s also this thing where they’re not supposed to use race against someone. And they aren’t doing that intentionally. The question is if they’re doing it unintentionally. And it’s complicated because a lot of the data that they use is both useful for understanding risk and a proxy for race. So it’s really a question of the notion of fairness.”
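
O’Neil’s point that a rating variable can be “both useful for understanding risk and a proxy for race” is easy to demonstrate on synthetic data. The Python sketch below uses entirely hypothetical numbers and column names (it is not ORCAA’s methodology): a ZIP-level rating factor correlates with claim risk, which makes it legitimately predictive, and it also correlates with race, which makes it a proxy.

```python
# Hypothetical illustration of a proxy variable. All data is synthetic;
# "zip_factor" stands in for any geographic rating variable.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Simulated race labels (0/1 for simplicity) and a ZIP-level factor whose
# mean shifts with race -- the "dual role" O'Neil describes.
race = rng.integers(0, 2, n)
zip_factor = rng.normal(loc=race * 0.8, scale=1.0)
claim_risk = 0.3 * zip_factor + rng.normal(scale=1.0, size=n)

df = pd.DataFrame({"race": race, "zip_factor": zip_factor, "claim_risk": claim_risk})

# Both correlations are nonzero: the variable predicts risk AND encodes race.
print("corr(zip_factor, claim_risk):", round(df["zip_factor"].corr(df["claim_risk"]), 3))
print("corr(zip_factor, race):     ", round(df["zip_factor"].corr(df["race"]), 3))
```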

O’Neil wrote a book released in 2016 on the potential impact of algorithms, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.’ During her research she found that regulators didn’t really know what discrimination meant when it came to an algorithm, which is why she decided to start ORCAA.

ORCAA is partnering with Octagram Analytics, a data and analytics consulting firm, to offer property and casualty insurers an Insurance Fairness Explainability Review (INFER). INFER analyzes existing models and data governance structures and conducts bias testing.

“I used to work in house, at a carrier,” said Jessica Leong, CEO of Octagram Analytics and former president of the Casualty Actuarial Society. “And I know that when this came down on my lap — how are we going to actually comply with all this regulation? — it’s actually pretty hard for carriers to get their arms around because it’s changing so fast. That’s why Cathy and I are joining forces to help carriers gain confidence in this area.”

But Leong said that insurers are afraid to test models for bias. 

“As we’ve been talking to carriers, especially even the big ones, they are nervous about starting to test,” she said. “They are nervous about the way that the testing is done. If you actually infer these protected classes from, say, first name, last name and address, they are worried about having a data set in their company that actually says, this is my policyholder and this is an inferred race and gender. They feel that is a potential liability.”

ORCAA developed a platform a few years ago so insurers don’t have to infer race on their internal systems.

“What we’ve devised is a system with a double firewall where basically on the way up to the cloud, the first name, last name and address are stripped away and replaced with inferred gender and race,” O’Neil said. 

“It protects us from ever having to see first name, last name, and address and it protects the insurers from ever having to see race or gender,” she said. “It solves two problems at the same time, and it allows us to do the most basic analysis like inferred rates for gender, and an outcome such as approved or denied. With just that information, we can do analysis and see the rate of acceptance per race, per gender.”
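
The article doesn’t disclose ORCAA’s implementation, but the data flow O’Neil describes can be sketched in a few lines of Python. Everything here is hypothetical: infer_race_and_gender is a dummy stand-in for whatever name-and-geography inference the real platform uses, and the records are made up. The point is the firewall shape, PII in, only inferred demographics and the underwriting outcome out, and that the acceptance-rate analysis she mentions needs nothing more than the output.

```python
# Hypothetical sketch of the "double firewall" data flow described above.
import pandas as pd

def infer_race_and_gender(first: str, last: str, address: str) -> tuple:
    """Dummy placeholder. Real systems infer these probabilistically from
    name and geography; the article doesn't specify ORCAA's method."""
    return "race_A", "F"

def firewall(policyholders: pd.DataFrame) -> pd.DataFrame:
    """Strip first name, last name and address; keep only the inferred
    attributes and the underwriting outcome, so neither side sees both."""
    inferred = policyholders.apply(
        lambda r: infer_race_and_gender(r["first"], r["last"], r["address"]),
        axis=1, result_type="expand",
    )
    out = policyholders[["approved"]].copy()
    out[["inferred_race", "inferred_gender"]] = inferred.to_numpy()
    return out  # no PII columns survive

raw = pd.DataFrame({
    "first": ["Ana", "Bo"], "last": ["Silva", "Chen"],
    "address": ["1 Main St", "2 Oak Ave"], "approved": [True, False],
})
clean = firewall(raw)

# "With just that information": rate of acceptance per inferred group.
print(clean.groupby(["inferred_race", "inferred_gender"])["approved"].mean())
```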

Anthony Habayeb, CEO of Monitaur, an AI governance software company, said regulators are realizing that AI governance includes data governance. 

“What data did you use? Were you allowed to use it? How did you confirm the data was fair, that it was representative, that it was appropriate?” he said. “The good news about a lot of those questions is that they’re questions data scientists should be asking themselves when they’re building a model. There’s a lot of really great overlap between some of the regulatory developments and just fundamental good practices of building a robust machine learning application.”
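
One of those governance questions, whether the data was representative, maps directly onto a basic check a data scientist can run. Here is a minimal sketch with hypothetical group names and counts: compare each group’s share of the training data against its share of a reference population.

```python
# Hypothetical representativeness check; group names and counts are invented.
import pandas as pd

training_counts = pd.Series({"group_A": 7200, "group_B": 1800, "group_C": 1000})
population_share = pd.Series({"group_A": 0.60, "group_B": 0.25, "group_C": 0.15})

training_share = training_counts / training_counts.sum()
report = pd.DataFrame({
    "training_share": training_share.round(3),
    "population_share": population_share,
    "ratio": (training_share / population_share).round(2),  # 1.0 = representative
})
print(report)  # ratios far from 1.0 flag under- or over-represented groups
```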

Deploying generative AI is likely even riskier, because some of the models have been shown to have inherent bias.

“The thing about AI and generative AI capabilities is it’s building on itself,” said Michael Nadel, a senior director at Simon-Kucher. “And so it’s getting smarter and smarter at this exponential curve.

“When Sam Altman went in front of Congress and said that this technology needs regulation, it’s one of those times where I firmly believe that that is the case, because it’s going to progress so quickly that it could get out of our own hands,” Nadel said. “And I don’t know what that looks like and don’t really want to be an alarmist, but I think that as it relates to the use cases of generative AI, that regulation is absolutely necessary. To make sure that we advance it in the ways that we want to advance it and it doesn’t become this kind of tool for potential misuse.”

The National Association of Insurance Commissioners’ Innovation, Cybersecurity, and Technology (H) Committee has several groups working to identify and develop a framework for the use of artificial intelligence. 

The European Union is also working on AI legislation. Its legal initiatives include a European legal framework, a civil liability framework and a revision of sectoral safety legislation, according to the EU website.

Last year, the White House published the Blueprint for an AI Bill of Rights, which highlights five principles meant to protect consumers as AI systems are designed and used.

The Colorado Division of Insurance released a revised draft of its Algorithm and Predictive Model Governance Regulations in May. The Division is currently focused on life insurance underwriting and private passenger auto underwriting.