NY insurance regulator makes carriers accountable for third-party AI systems

New York State’s new regulatory guidance for insurers using AI makes carriers responsible for how the technology is used in underwriting and pricing, even when it comes from third-party vendors.

“If you’re using third-party systems, you cannot punt the accountability to the third party,” said Karthik Ramakrishnan, co-founder and CEO of Armilla, an AI model and verification technology company that serves the insurance, financial services, healthcare, retail and other industries. “The insurer is still accountable for the end outcomes and that’s what the circular really tries to emphasize.” 

New York’s guidance came from its Department of Financial Services, which regulates insurance, in the form of a circular related to Insurance Law Article 26, the state law that addresses unfair claim settlement practices, discrimination and other misconduct, including making false statements. The circular specifies that insurers must not violate those provisions through the misuse of AI or of consumer data and information systems.

What can insurers do to ensure they comply with the circular? First, Ramakrishnan recommends insurers set a governance policy for how they collect data and how they develop and train models. Second, insurers should examine how their production operations work and where they intend to use models. “Where are the areas where we are okay to use AI and where we won’t?” he said.
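
A use-area policy like that can be made concrete in code. The Python sketch below shows one way an insurer might encode which use cases are approved for AI; the use-case names and the structure are hypothetical illustrations, not anything prescribed by the circular.

    # Hypothetical governance policy encoded as data. The use-case names are
    # illustrative; a real policy would come from the insurer's governance
    # committee, not a hard-coded set.
    ALLOWED_AI_USES = {"claims_triage", "fraud_flagging", "marketing_segmentation"}
    PROHIBITED_AI_USES = {"final_adverse_underwriting_decision"}  # human review required

    def ai_use_permitted(use_case: str) -> bool:
        """Return True only if the governance policy approves AI for this use case."""
        return use_case in ALLOWED_AI_USES and use_case not in PROHIBITED_AI_USES

    assert ai_use_permitted("claims_triage")
    assert not ai_use_permitted("final_adverse_underwriting_decision")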

This, in turn, requires understanding what thresholds an insurer will set, and how it trains its data scientists and holds them accountable, according to Ramakrishnan. Lastly, insurers must monitor the governance processes they have put in place, he added.
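
Those thresholds can then act as a gate before a model reaches production. The following sketch assumes two made-up governance metrics, a minimum accuracy and a maximum approval-rate gap between groups; both the metric names and the numbers are assumptions for illustration.

    # Hypothetical pre-deployment gate: a model is promoted only if its
    # evaluation metrics clear the thresholds set in the governance policy.
    GOVERNANCE_THRESHOLDS = {
        "min_accuracy": 0.85,           # illustrative number
        "max_approval_rate_gap": 0.05,  # illustrative fairness threshold
    }

    def passes_governance_gate(metrics: dict) -> bool:
        """Block promotion to production if any governance threshold is breached."""
        return (
            metrics["accuracy"] >= GOVERNANCE_THRESHOLDS["min_accuracy"]
            and metrics["approval_rate_gap"] <= GOVERNANCE_THRESHOLDS["max_approval_rate_gap"]
        )

    print(passes_governance_gate({"accuracy": 0.91, "approval_rate_gap": 0.03}))  # True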

There are aspects of using AI where insurers should go beyond what is mentioned in the New York regulatory guidance, according to Ramakrishnan. AI models should be tested for bias, and for how changing variables in the models affects outcomes, he said.
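
Both tests can be run in a few lines of Python. The sketch below uses simulated decisions and a toy scoring function to show the mechanics: an approval-rate gap between two groups as a simple bias check, and a perturbation test that measures how much the score moves when one variable changes. Nothing here reflects a real underwriting model.

    import numpy as np

    rng = np.random.default_rng(0)
    groups = rng.integers(0, 2, size=1000)  # simulated protected attribute (0/1)
    approvals = rng.random(1000) < 0.6      # simulated model approval decisions

    # Simple bias check: gap in approval rates between the two groups.
    gap = abs(approvals[groups == 0].mean() - approvals[groups == 1].mean())
    print(f"approval rate gap between groups: {gap:.3f}")

    # Perturbation test: change one input variable and watch the score move.
    def model_score(income: float, age: float) -> float:
        """Toy logistic score standing in for a trained model."""
        return 1 / (1 + np.exp(-(0.00001 * income + 0.02 * age - 1)))

    base = model_score(income=50_000, age=40)
    bumped = model_score(income=55_000, age=40)  # +10% income
    print(f"score change from +10% income: {bumped - base:.4f}")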

Insurers also ought to look at how AI models perform. “Can we explain the model well, do we understand how it makes these decisions?” Ramakrishnan asked. “Which features are important in driving decisions and robustness? Does the model do well on unseen data?”
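
One standard way to answer the “which features drive decisions” question is permutation importance: shuffle a single feature and measure how much accuracy drops. The sketch below builds a self-contained toy model so it runs as-is; a real audit would apply the same loop to the insurer’s trained model and holdout data.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))                  # three synthetic features
    y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # feature 0 dominates

    def toy_model(X: np.ndarray) -> np.ndarray:
        """Stand-in for a trained classifier."""
        return (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

    baseline = (toy_model(X) == y).mean()
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])       # break feature j's link to y
        drop = baseline - (toy_model(Xp) == y).mean()
        print(f"feature {j}: accuracy drop {drop:.3f}")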

Depending on their accuracy levels, AI models can also be trained to handle data and situations they have not seen before, according to Ramakrishnan. The aim is to avoid “data drift” and “concept drift,” he explained. “This is a very specific concept to machine learning, where if it sees too much data that’s outside of its realm, then it may start making more and more erroneous decisions and outcomes,” he said. “You should know how your model is behaving in production.”
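
One widely used check for data drift is the population stability index (PSI), which compares the distribution of a feature in production against the distribution the model was trained on. The sketch below uses simulated data; the 0.2 alert level is a common rule of thumb rather than a regulatory requirement.

    import numpy as np

    def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """Population stability index between training and production samples."""
        edges = np.histogram_bin_edges(expected, bins=bins)
        actual = np.clip(actual, edges[0], edges[-1])  # keep all values in range
        e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        e_pct = np.clip(e_pct, 1e-6, None)             # avoid log(0)
        a_pct = np.clip(a_pct, 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    rng = np.random.default_rng(2)
    train = rng.normal(0.0, 1.0, 10_000)  # training-time distribution
    prod = rng.normal(0.5, 1.2, 10_000)   # shifted production data
    score = psi(train, prod)
    print(f"PSI = {score:.3f} -> {'investigate drift' if score > 0.2 else 'stable'}")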

New York is not the first U.S. state to consider regulating the use of AI in insurance, but it is among the first to issue policy or rules on the subject. Last year, Colorado’s insurance regulator began issuing guidance under a state law passed in 2021. In December, the National Association of Insurance Commissioners, the group of state insurance regulators, issued an AI oversight policy to guide its member regulators.