State regulators call for more AI oversight by insurers
The National Association of Insurance Commissioners (NAIC) has set a policy for insurance companies’ use of artificial intelligence, in response to the inroads that AI and generative AI are making in the industry.
The policy, called the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers and adopted December 5 at NAIC’s national meeting, addresses responsible governance principles and risk management policies and procedures for AI, in an effort to hold insurers accountable.
“There’s been significant growth in the use of artificial intelligence and machine learning. And obviously, now generative AI and much more,” said David Sherwood, managing director in the risk and financial advisory practice at Deloitte. “As a consequence of that, the regulators appreciate that perhaps there’s a new set of guardrails here that need to be looked at.”
NAIC began developing the AI policy bulletin through its Innovation, Cybersecurity, and Technology, or “H,” Committee earlier in 2023. The H Committee, chaired by Maryland Insurance Commissioner Kathleen A. Birrane, has representatives from 15 U.S. states. A draft of the policy was issued on June 29, and the public comment period ended November 6.
NAIC’s intent with the policy appears to be establishing an oversight process and governance framework for insurance companies, according to Sherwood, who joined Deloitte in 2013 and began his career with the U.K. regulator, the Financial Services Authority, in 1998.
“It’s making sure that governance framework is in place, and there is accountability within the insurer, but also making sure that the right team members are involved,” he said. “Making sure that the business is involved, the control functions such as compliance and risk management are involved, and all the key stakeholders that need to be considered are there within that accountability framework.”
Regarding risk management, insurers tend to keep an inventory of the models they use, Sherwood added. They have to determine whether those models fall into an AI category or are predictive models that drive claims outcomes or underwriting decisions. “Another piece is how you oversee those models,” he said of NAIC’s policy.
The policy also addresses how insurers handle new types of data, such as external consumer data including social media information, which is fed into predictive models, according to Sherwood. “An insurer literally has to have that risk management framework in place, inventory those predictive models and then the use of data within those,” he said.
Also under the policy, insurers using third-party systems or data retain primary oversight responsibility for what those systems produce, according to Sherwood.
The next step for state insurance regulators and the industry will be to put NAIC’s AI policy guidance into practice. Some states, like Colorado, are already putting their own regulations into place for the use of AI in insurance. “Will guidance be sufficient?” Sherwood asked. “We’ll wait and see how the balance works between guidance and actual regulation.”
Individual states will have to decide how they examine and supervise insurance companies’ use of AI under the policy guidance, Sherwood added. Insurers will have to figure out how to adopt the guidance or square it with their own systems and controls, he said.