How AI regulation in insurance is taking shape

Insurance regulators have been grappling for some time with how (and how much) to regulate artificial intelligence’s use in the business of insurance. This year is no different, but some of the big ideas regulators previously focused on are beginning to inform more practical and detailed efforts to provide concrete guidance or, in some cases, actual regulatory requirements. Two developments in 2023 especially will inform the approaches regulators take in the years ahead.

First, insurance regulators, via the National Association of Insurance Commissioners (NAIC), are drafting a model bulletin on the use of artificial intelligence, with the aim of guiding companies in establishing governance systems and setting regulatory expectations for those systems. As explained during the 2023 Spring National Meeting in Louisville, Kentucky, the NAIC’s Committee on Innovation, Cybersecurity and Technology is taking the lead on producing a commissioner-driven deliverable that likely will be exposed for public comment sometime this year. Much like the NAIC’s AI Principles adopted in 2021, the bulletin likely will provide high-level, principles-driven guidance, giving companies a sense of, at a minimum, the kinds of questions regulators will ask and the information they will seek about artificial intelligence and machine learning products.

Second, Colorado continues to move forward with its rulemaking under the Colorado Privacy Act, with the first round of draft rules for life insurers exposed for public comment in February. 

Although the draft rules focus on life insurers, the Colorado Department of Insurance has stated in public meetings that property and casualty insurers should expect the version of the rule applicable to them to be substantially similar. Colorado’s rules are more prescriptive than anything to come out of the NAIC thus far, detailing the information insurers will need to have available when using AI as well as how to report that information to the department.

The differences in these approaches preview the regulatory variation insurers will face across jurisdictions in the near future. Some will adopt the NAIC model bulletin while others will modify it. Still others may follow Colorado’s lead in seeking legislation or adopting rules specific to AI usage. At a minimum, carriers using AI as part of their insurance offerings in multiple jurisdictions, irrespective of line, likely will face a somewhat disjointed regulatory regime in the near term, even as regulators work to find consensus wherever possible.

So, what should savvy insurers do now? At a minimum, any insurer using or considering AI should be thinking about implementing a well-documented governance system for its AI and machine learning tools. In other words, how does the enterprise show its work? Whether a jurisdiction elects a more front-loaded approach to regulation (like Colorado, with significant reporting requirements) or a back-loaded one (guidance followed by market conduct reviews if necessary), much of the regulatory risk surrounding AI boils down to two questions: Do guardrails exist around a company’s AI and machine learning tools, and can the company defend those guardrails as appropriate and adequate through testing and documentation?

In addition, companies with robust governance integrated into their AI and machine learning portfolios are in a much stronger position to shape regulatory requirements as they come into sharper focus. As regulators and policymakers focus more on how, and to what extent, companies should be prepared to explain AI guardrails, proactive carriers not only will be better prepared when regulation comes, but will also be in a much stronger position to speak up and be taken seriously when regulatory proposals become unnecessarily burdensome. As counterintuitive as it may seem to some, continued, patient engagement with insurance regulators on this topic will make for a more navigable long-term regulatory framework.

Near-term regulatory uncertainty notwithstanding, establishing robust governance and testing regimes for AI and machine learning is a smart investment for insurers anticipating whatever regulatory requirements emerge. It will be much easier to tweak such systems as needed once they are established than to scramble to implement wholesale systems in response to new regulatory requirements. Keep in mind that governance is distinct from the AI and machine learning tools themselves. If AI and machine learning are the economic engines of the future for insurance carriers, effective governance is the oil that will keep the engine running smoothly and compliantly.