Insurance commissioners have posted AI guidelines. Do they go far enough?

The National Association of Insurance Commissioners has put out a bulletin on insurance companies’ use of artificial intelligence, and it’s getting mostly good reviews.

The document contains many common-sense recommendations. Insurance companies must abide by all existing laws when they use AI, including the Unfair Trade Practices Act, the Unfair Claims Settlement Practices Act and the Corporate Governance Annual Disclosure Act. They must put in place controls, testing, validation and auditing.

Legal experts say the document is helpful and a good first step. 

“It does some good, it has some good aspects,” said Peter Kochenburger, emeritus professor at UConn and visiting professor of law at Southern University Law Center. The bulletin provides a sense of what regulators are expecting, he said. He likes its emphasis on testing, for example.

“It is useful to set out a system of how companies must, at least as they respond to regulators, think about these issues,” Kochenburger said. 

Cameron Kerry, Ann R. and Andrew H. Tisch distinguished visiting fellow at The Brookings Institution, also said the bulletin is helpful.

“There are a lot of issues out there, this is an important step toward helping people to get their arms around AI,” he said. 

But some say it doesn’t go far enough. It doesn’t address the issue of disparate impact, for instance.

“One of the major issues is, how does the concept of disparate impact apply?” Kochenburger said. “And this document, I think, dodges that issue.” 

What’s in the bulletin

The bulletin acknowledges AI’s potential for good and bad in insurance.

“AI can facilitate the development of innovative products, improve consumer interface and service, simplify and automate processes, and promote efficiency and accuracy,” the document states. 

Some see this as a positive sign.

“They’re acknowledging that AI can be useful if implemented correctly,” said Greg Hoffnagle, a partner in Goodwin’s financial industry and insurance group. “That’s different from the tone five years ago. The paper acknowledges this is where the market’s going. This will have benefits, if used correctly, to consumers.”

“Generally speaking, [AI is] a phenomenal opportunity for the insurance industry,” Hoffnagle said. “The insurance industry has a ton of data and does not know how to harness it, harvest it, use it.”

Kerry noted that insurers could use AI to better assess risk. “You can allocate [risk] better and then avoid any issues with moral hazard that come up in insurance,” he said. 

The bulletin also points out the dark side of AI. 

“Using AI can bring unique risks, including the potential for inaccuracy, unfair bias resulting in unfair discrimination, and data vulnerability,” it states. 

The NAIC requires insurance companies developing AI models to comply with several relevant existing laws, such as the Unfair Trade Practices Act, which prohibits unfair or deceptive acts and practices; the Unfair Claims Settlement Practices Act, which sets standards for the investigation and disposition of insurance claims; and the Property and Casualty Model Rating Law, which requires that property and casualty insurance rates not be excessive, inadequate or unfairly discriminatory. 

The commissioners also require insurers to adopt governance frameworks and risk management protocols. 

How insurance companies and states will implement the guidelines has yet to be determined, Hoffnagle pointed out. States like California and New York tend to do their own thing. 

“Sometimes they use these [documents], sometimes they don’t, sometimes they’re consistent with these, sometimes they’re just totally inconsistent,” he said. 

Smaller states will likely find the NAIC guidelines helpful, he said.

Potential for harm

The use of AI in insurance brings “the possibility of enormous intrusion, in various ways, as well as mistakes,” Kerry said. 

While many people worry that AI will be misused, intentionally or unintentionally, Hoffnagle’s concern is that it speeds everything up.

“It’s the latest and greatest tool,” he said. “But I think there’s a human aspect to misuse: People trying to cut corners, people trying to purposefully take advantage of people and kind of blaming it on the technology and saying, don’t blame us. We weren’t trying to discriminate. It’s just that we put the data in and this is what the computer told us.”

The biggest danger, people interviewed for this article agree, is the potential for discrimination and disparate impact.

The NAIC bulletin warns insurers against “unfair discrimination.” It does not mention the term “disparate impact.” The NAIC could not accommodate a request for an interview by deadline.

“Unfair discrimination” has traditionally meant people in the same risk classification have to be treated alike, Kochenburger said. It doesn’t address policies and decisions that adversely affect groups of people.

In many industries, including banking, companies are required to consider not only whether their people and systems commit blatantly unfair discrimination, but whether policies, practices and rules that appear neutral result in a disproportionate impact on a protected group. 

For instance, if an insurance company charges more for, or refuses to insure, older homes, coverage could become more expensive or unavailable to Black people and other people of color, who are more likely to live in older houses.
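
As a rough illustration, fairness audits along these lines often compare outcome rates across groups, for example with the “four-fifths” adverse-impact ratio borrowed from employment law. The sketch below is hypothetical: the outcome records, group labels and 0.8 cutoff are illustrative assumptions, not anything prescribed by the NAIC bulletin.

```python
from collections import defaultdict

# Hypothetical underwriting outcomes: (group label, approved?) pairs.
# The records and group labels here are illustrative assumptions.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def approval_rates(records):
    """Compute the approval rate for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(outcomes)
best = max(rates.values())
for group, rate in rates.items():
    # The 0.8 cutoff mirrors the EEOC "four-fifths" rule from employment
    # law; insurance regulators have not adopted a specific numeric threshold.
    ratio = rate / best
    flag = "POTENTIAL DISPARATE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: approval {rate:.0%}, ratio vs. best group {ratio:.2f} -> {flag}")
```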

The NAIC issued a set of AI principles a few years ago that did allude to disparate impact, mentioning the need to avoid proxy discrimination, Kochenburger said.

“But after a few years, the NAIC generally has not advanced that argument,” he said. “We’re still waiting for them to flesh out the very broad and good frameworks set out by the principles. As insurers partner with third parties to obtain more data and model data more granularly and perhaps accurately, what does that mean for the concept of disparate impact?” 

For more than a decade, Hoffnagle has been discussing disparate impact with clients that use AI.

“I don’t think it’s too hard when using AI in a vanilla fashion to know that there are certain data points that are permissible and reasonable to use, and ones that are clearly protected classes, or ones that would just be a bad idea to use or may just be overtly discriminatory,” he said. “So disparate impact is harder to work around or know until you actually use the AI and the predictive modeling programs” and see the results over time. 

“Sometimes it may turn out that the only people who are getting claims denied or the only ones who are not getting policies issued for them are people of a certain income and a certain demographic in a certain city,” he said. “And that’s a problem.”

Data weightings in models also have to be considered, Hoffnagle pointed out. 

“If you use 10 data points to decide whether or not you’re going to underwrite insurance, but one data point is 98% of the model and the rest of them are less than 1% each, that’s also another big issue in this industry,” he said.
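
To make Hoffnagle’s point concrete, here is a minimal sketch of the kind of sanity check a carrier might run on a model’s feature weights. The feature names, weights and 50% dominance threshold are illustrative assumptions, not an industry standard.

```python
# Hypothetical importance scores for an underwriting model's inputs;
# the names and values here are illustrative assumptions.
feature_weights = {
    "credit_score": 0.98,
    "home_age": 0.005,
    "claims_history": 0.005,
    "roof_type": 0.004,
    "distance_to_hydrant": 0.006,
}

def dominance_report(weights, threshold=0.5):
    """Flag any feature whose share of total weight exceeds `threshold`."""
    total = sum(weights.values())
    shares = {name: w / total for name, w in weights.items()}
    dominant = {n: s for n, s in shares.items() if s > threshold}
    return shares, dominant

shares, dominant = dominance_report(feature_weights)
for name, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{name:>20}: {share:.1%}")
if dominant:
    print("Review needed: model is effectively driven by", ", ".join(dominant))
```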

Disparate impact is not unique to the insurance industry, but it is complicated, Hoffnagle said.

Insurance carriers are allowed to charge more for coverage in areas where there are more floods, he pointed out.

“Carriers are in the business of making money and they’re either going to charge you more to insure you or they’re going to decide not to insure you,” Hoffnagle said. “If you started using AI as an overlay, there may have already been an inherent disparate impact just because of the neighborhood a consumer or business was in. It might appear that they’re no longer writing as many African-American people. But they may not have been writing as many African-American people before. It’s a pretty complicated thing to get granular on, and particularly to regulate in an efficient way.” 

Kerry pointed out that age-based discrimination is permitted in auto insurance: young men under 20 are charged far more than older drivers and women. 

“Those are based on risks,” Kerry said. “Those are examples of where that better data is resulting in more fairness. If you are a safe driver under 25, male or female, maybe you could get better rates.” 

The training data and design of AI models can themselves introduce adverse discrimination, Kerry said.

“The issue of understanding how representative the training data is, is fundamental,” he said. “That’s where the bulletin is helpful, because it really outlined a series of management steps that the companies should take to assess the models that they’re using. There’s a lot more that needs to be developed.” For instance, test databases that would help identify proxies for protected-class information still need to be developed for the insurance market.
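
One plausible building block for such testing, sketched below under the assumption that an insurer holds, or can construct, a test dataset with protected-class labels, is a simple correlation screen that flags candidate proxy variables for closer review. The features, synthetic data and 0.5 cutoff are all hypothetical.

```python
import numpy as np

# Hypothetical test records: each row is one applicant. The protected
# attribute would come from a purpose-built test database, since insurers
# typically do not collect it directly. All values here are made up.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=500)               # 0/1 protected-class label
zip_density = protected * 2.0 + rng.normal(0, 1, 500)  # feature built to correlate
vehicle_age = rng.normal(8, 3, 500)                    # unrelated feature

features = {"zip_density": zip_density, "vehicle_age": vehicle_age}
for name, values in features.items():
    # |r| above an agreed cutoff (0.5 here, an arbitrary choice) marks the
    # feature as a possible proxy meriting deeper, model-specific testing.
    r = np.corrcoef(values, protected)[0, 1]
    flag = "possible proxy" if abs(r) > 0.5 else "ok"
    print(f"{name}: correlation with protected class = {r:+.2f} ({flag})")
```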

Other organizations are working on similar guidelines, Kerry pointed out. The National Institute of Standards and Technology is working on an AI risk management framework.