What should you do if your Gen AI is hallucinating?

Before jumping into implementation of generative AI, insurers may want to consider the issues the industry has already identified.

Professionals and experts are concerned about issues of bias, privacy, security and "hallucination," the term used for generative AI output that is inaccurate or incoherent because the underlying large language model (LLM) has not correctly processed the data and information it received.

Hallucinations can produce incorrect results that materially affect insurance business operations such as underwriting and claims decisions. They can also lead to security issues and leaks of sensitive information that damage customer service, explained Adrian McKnight, chief digital officer of WNS, a business process management company.

Adrian McKnight, chief digital officer, WNS.

“For insurers, having Gen AI as part of a closed data system is really important because you don’t want to expose any of that data to open source systems. You don’t want to end up in any breaches because of data being exposed,” he said. “This requires a degree of quality assurance and checking around the outcomes it’s delivering. The ability to ultimately adjust Gen AI models and the way they operate depends upon the outcomes that it’s delivering. That’s absolutely critical for insurance and really important for regulatory issues as well as customer data and confidentiality.”

The term hallucination itself is something of a misnomer for these types of Gen AI errors, according to Joseph Ours, director of AI strategy and modern software development at Centric Consulting. Understanding how Gen AI actually works is necessary to understand how it makes these errors, he explained.

“Hallucination implies that it’s thinking and reasoning, and it’s not. It’s actually an underlying physical model,” he said. “The issue comes from asking the wrong questions, and the size of the answer that you get. Asking the wrong question is like putting silly answers into Mad Libs [the word substitution game]. You’re going to get a silly result.”

Human intervention, of course, can help combat Gen AI hallucination, said Andy Logani, chief digital officer at EXL, an insurtech management consulting company. Having a human intermediary between Gen AI and a customer to check its output is one method, he said. Establishing guiding principles for the use of Gen AI is another.
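
That human-intermediary pattern can be sketched in a few lines. This is a hypothetical illustration, not a vendor API: the function names, the confidence score, and the 0.90 threshold are all assumptions made for the example.

```python
# Hypothetical sketch of a human-in-the-loop gate: a Gen AI draft only
# reaches the customer after clearing a confidence threshold or a
# reviewer's approval. Names and thresholds are illustrative.

def generate_draft(claim_text: str) -> tuple[str, float]:
    """Stand-in for a Gen AI call; returns a draft reply and a
    model-reported confidence score."""
    return f"Draft reply for: {claim_text}", 0.72

def route_draft(draft: str, confidence: float, threshold: float = 0.90) -> dict:
    """Anything below the threshold is queued for a human intermediary;
    only explicitly approved text is released to the customer."""
    if confidence < threshold:
        return {"status": "needs_human_review", "draft": draft}
    return {"status": "released", "draft": draft}

draft, score = generate_draft("water damage claim #1042")
print(route_draft(draft, score)["status"])  # prints "needs_human_review"
```

The design point is that release is the exception, not the default: output flows to the customer only when it has passed an explicit check.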

Andy Logani, chief digital officer, EXL.

"Even if broader laws are being created, you need to get some things in place," Logani said. "Companies are really worried at the moment about privacy compliance. We recommend using unbiased data collection methodologies as much as possible so that you don't have bias in the data. If possible, use synthetic data that you can generate, so that you're not using any public internet data, which can avoid any privacy breaches."
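
One way to read that advice in code: generate synthetic records with a seeded random generator instead of touching real customer data. The field names and value ranges below are assumptions made for illustration, not an insurer's actual schema.

```python
import random
import string

# Illustrative synthetic-data generator: no real policyholder data is
# used, so there is nothing sensitive to leak. Fields and ranges are
# invented for the example.
random.seed(42)  # seeded so the example is reproducible

def synthetic_policyholder() -> dict:
    """Build one fake policyholder record from random draws."""
    return {
        "policy_id": "".join(
            random.choices(string.ascii_uppercase + string.digits, k=8)
        ),
        "age": random.randint(18, 85),
        "annual_premium": round(random.uniform(400.0, 3200.0), 2),
        "prior_claims": random.randint(0, 4),
    }

for record in (synthetic_policyholder() for _ in range(3)):
    print(record)
```

In practice, synthetic data for model training would be shaped to match the statistical distributions of the real book of business, but the privacy property is the same: no record corresponds to a real person.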

Insurers shouldn’t totally depend on Gen AI just yet, according to Rima Safari, a partner in the insurance practice at PwC. “The big risk is if humans get so dependent on the AI model that without even looking they’re just approving everything, because they’ve seen that the first eight times it was correct,” she said. “You don’t want to have any dependency on the Gen AI models yet because they will try to make up an answer when an answer doesn’t exist.”

Rima Safari, partner, PwC.

That tendency can also amplify bias, “whether it’s related to premiums or other processes,” Safari said. “The complexity of the AI programs can lead to transparency issues, where the customer or the organization doesn’t know or even the regulators don’t know they got to this decision, especially with generative AI. There’s not a traceability that if you did this and you selected these three data types, here’s what the answer should be.” 

EXL's Logani recommends establishing a set of governing laws and guiding principles for AI. Aside from guarding against biased data, EXL recommends that clients anonymize data, Logani said. "Mask personally identifiable information, protected health information," he said, calling the practice "differential privacy," in which encryption or "noise" prevents unauthorized parties from seeing the actual data.
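
As a hedged sketch of those two practices, the snippet below masks an SSN-like pattern and adds Laplace noise to an aggregate value, the standard mechanism behind differential privacy. The regex, sensitivity and epsilon values are illustrative assumptions, not production settings.

```python
import math
import random
import re

def mask_pii(text: str) -> str:
    """Redact simple SSN-like patterns (illustrative; real PII masking
    covers many more identifier formats)."""
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", text)

def laplace_noise(value: float, sensitivity: float = 1.0,
                  epsilon: float = 0.5) -> float:
    """Add Laplace noise scaled to sensitivity/epsilon, so a shared
    aggregate statistic does not expose any single record."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

print(mask_pii("Claimant SSN 123-45-6789, policy A-17"))
# prints "Claimant SSN [REDACTED], policy A-17"
```

The noise is calibrated by epsilon: a smaller epsilon means more noise and stronger privacy, at the cost of less accurate aggregates.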

U.S. state and federal regulators, even without knowing how generative AI works, have set or will set rules, and have taken or will take action, concerning unfair trade practices and other problems it can cause, according to Rick Borden, a partner specializing in cybersecurity and privacy issues at the law firm Frankfurt Kurnit Klein & Selz.

“I don’t think fighting it is going to be very effective,” said Borden. “It’s basically having policies and procedures and having the right people at the table. And knowing what you have and what you’re doing with it, which is the black box problem, because you don’t know what it does, or why it does what it does. You’re going to have to document all of this stuff.”