Why AI is keeping P&C regulators up at night

Increasingly concerned about biased uses of artificial intelligence, industry regulators are questioning insurers’ ability to explain why they’ve used certain data in their models or how the AI came to its conclusions, experts told the Insurance Bureau of Canada’s Regulatory Affairs Symposium last week.

P&C techies beware: although artificial intelligence (AI) can streamline administrative work, improve risk modelling and bolster cyber security, AI outputs are only as good as their data inputs, Annie Veillet, partner, cloud and data practice at PwC Canada, said during a panel on AI Technology and Governance.

When the data fed into an AI model is biased or inaccurate, the model can produce skewed results. AI can also fabricate answers from its data: a ‘hallucination’ occurs when the machine produces an imagined answer based on incomplete data or patterns it has wrongly perceived.

“It [can] inspire itself from its large set of data and answer the prompt that a human is asking,” said Veillet. 
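To make the bias point concrete, here is a minimal sketch with entirely invented data (the variables, numbers and 20% markup are hypothetical, not anything the panellists described): a pricing model fitted to historical premiums that carried an unjustified markup for one group will reproduce that markup as if it were risk signal.

```python
# Illustrative sketch only: invented data showing how biased inputs
# become biased outputs. A pricing model trained on premiums that
# carried an unjustified 20% markup for one group "learns" the markup.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 10_000

true_risk = rng.normal(1.0, 0.2, n)   # actual expected loss per policy
group = rng.integers(0, 2, n)         # arbitrary label, not a risk factor
# Biased history: group 1 was charged 20% more for the same risk
historical_premium = true_risk * np.where(group == 1, 1.2, 1.0)

model = LinearRegression().fit(
    np.column_stack([true_risk, group]), historical_premium
)
print(model.coef_)  # the group coefficient lands near 0.2: the model
                    # treats the old markup as genuine risk signal
```

The same failure mode scales to any model trained on historically skewed outcomes, however sophisticated the algorithm.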

Regulators are concerned AI could enable insurers to withdraw coverage for vulnerable classes.

For example, insurers might be able to use seemingly valid data inputs as a loophole to rate against classes of risk they’re not actually allowed to underwrite, said Avi Gesser, partner and co-chair of the data strategy and security group at Debevoise & Plimpton.

“To an extent, you can chunk off little groups…and say, ‘Okay, well, based on what radio station people listen to, what magazines they subscribe to, and what coffee they drink, I know what [a good risk pool] is.’ But why is any of that predictive? Or is that really just your means by which you can get to some insurance class that you’re not actually allowed to insure against?” 
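Gesser’s “little groups” concern is, in effect, proxy reconstruction. The sketch below, again with invented data and hypothetical variable names, shows how a few individually innocuous lifestyle features can jointly predict membership in a class an insurer isn’t allowed to rate on:

```python
# Illustrative sketch only: invented data showing how innocuous
# features can jointly reconstruct a protected class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

protected = rng.integers(0, 2, n)  # a class insurers may not rate on

# Lifestyle features, each only loosely correlated with the class
radio = (rng.random(n) < np.where(protected == 1, 0.7, 0.4)).astype(int)
magazine = (rng.random(n) < np.where(protected == 1, 0.65, 0.35)).astype(int)
coffee = (rng.random(n) < np.where(protected == 1, 0.6, 0.3)).astype(int)

X = np.column_stack([radio, magazine, coffee])
X_tr, X_te, y_tr, y_te = train_test_split(X, protected, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
print(f"proxy accuracy: {clf.score(X_te, y_te):.2f}")
# Well above the 50% coin-flip baseline: the lifestyle bundle acts as
# a back door into the class the insurer can't use directly.
```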

Regulators’ take 

To avoid trouble arising from AI bias, insurers must be able to explain why they’ve used certain data inputs and how their AI drew a conclusion, one regulator said during the IBC’s panel, What Keeps Regulators Up at Night. 

“Some of the questions we are looking at when it comes to AI are, ‘Are insurers having sufficient and high-quality data…to feed into an AI model?’” said Elizabeth Côté, acting managing director at the Office of the Superintendent of Financial Institutions’ Digital Innovation Impact Hub. “How do you explain the validity of those models? Are there biases? How do you explain how you have [come to] the decision? That’s something that OSFI is very concerned with,” she said.

“We want to work with you all to make sure that there is a good approach and there are good risk management practices that are being used.” 

Essentially, insurers must be discerning when identifying data variables truly predictive of insurance risk. 

“It’s certainly not to be [used as] a black box…that you’re not even aware of what’s in it and what will be the results,” said Hélène Samson, director of prudential policy and simulations at Autorité des marchés financiers (AMF). 
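One common way to open the box, offered here as an illustrative sketch rather than anything the AMF prescribes, is permutation importance: shuffle each input in turn and measure how much the model’s accuracy degrades, which flags variables carrying no real predictive weight (all data and names below are invented):

```python
# Illustrative sketch only: permutation importance as one answer to
# "why did you use this variable?" on a fitted model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 5_000
driving_record = rng.normal(0, 1, n)  # genuinely predictive of losses
coffee_brand = rng.integers(0, 3, n)  # noise dressed up as a rating variable
losses = 2.0 * driving_record + rng.normal(0, 1, n)

X = np.column_stack([driving_record, coffee_brand])
model = RandomForestRegressor(n_estimators=50, random_state=1).fit(X, losses)

result = permutation_importance(model, X, losses, n_repeats=10, random_state=1)
for name, imp in zip(["driving_record", "coffee_brand"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
# driving_record dominates; coffee_brand contributes roughly zero,
# flagging it as a variable the insurer should justify or drop.
```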

Gregory Smolynec, deputy commissioner at the Office of the Privacy Commissioner of Canada, compared the rise of AI to the invention of the printing press by Johannes Gutenberg in the 15th century.

The printing press contributed to a mass spread of knowledge, created uniformity in written language and fed into major historical events and academic revolutions. While Gutenberg likely knew his invention would change the world, Smolynec said, he couldn’t have known the full impact the printing press would have centuries later.

So insurers using AI for data collection must take steps to understand and safeguard when, where and how that data is being used.

“[With] emerging predictive technologies collecting lots of data, there may be benefits,” he said. “It may be happening in a very considered way. Basically, what we want to know is that these things are being rolled out in a privacy-protected way, with all the information principles taken into consideration.”

Over-regulation 

Blake Richards, a neuroscientist, AI researcher and associate professor at McGill University, said the best way to manage complexity is not to add more complexity. He urged regulators to be judicious about interfering with AI.

Too much regulatory intervention would essentially hand a “monopoly” to tech companies like OpenAI and Microsoft, he said. “We don’t want to kill this nascent economy with undue regulation.”

But AI is not a one-size-fits-all technology. Companies will employ it for different use cases, and with varying degrees of autonomy.

“The extent to which AI systems are autonomous is what determines how complicated and intense the regulation needs to be,” Richards said during his AI and Human Behaviour presentation.  

Thus, the industry must determine whether there is a good reason to make an AI system autonomous.  

Feature image by iStock.com/a-image