Can Gen AI be used to fight back against cyber attacks?

While generative AI can boost cyber criminals’ ability to steal identification data to commit fraud, insurers can use the same Gen AI technology to fight back, industry professionals and experts say.

“Every insurance company out there has a treasure trove of customer data,” said Ben Dulieu, chief information security officer at Duck Creek Technologies. “There is an obligation for these insurance companies to keep that data safe. By implementing a key framework and understanding what the best practices are, that’s how you’re going to safeguard the data.”

In addition, some insurers that provide coverage for cyber attacks have their own incident response teams and crisis management resources, Dulieu noted.

AI advances have created new capabilities to detect cyber attacks, such as spotting anomalies in activity like unusual patterns of files being opened, according to Dulieu. While automating security processes can be beneficial, cyber attacks are becoming more sophisticated, he said, “because they’re generated so quickly and the coding can be changed at the drop of a dime, because of AI generated phishing emails, or whatever it may be.”
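
Dulieu's example of anomaly detection can be pictured with a small sketch. The snippet below trains an unsupervised model on hypothetical per-user activity counts and flags a sudden burst of file access for review; the feature names, sample data and thresholds are illustrative assumptions, not a description of any vendor's product.

```python
# Minimal sketch of activity anomaly detection, assuming per-user file-access
# counts are already being logged. Features and data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical hourly features per user:
# [files_opened, files_copied, failed_logins, off_hours_flag]
baseline_activity = np.array([
    [40, 2, 0, 0],
    [55, 3, 1, 0],
    [38, 1, 0, 0],
    [60, 4, 0, 1],
    [47, 2, 1, 0],
    [52, 3, 0, 0],
])

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(baseline_activity)

# A burst of file opens and copies outside business hours.
new_activity = np.array([[900, 250, 3, 1]])
score = detector.decision_function(new_activity)[0]  # lower = more anomalous
if detector.predict(new_activity)[0] == -1:
    print(f"anomalous activity (score {score:.2f}): raise an alert for review")
else:
    print(f"within learned baseline (score {score:.2f})")
```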

The use of Gen AI for cyber attacks hasn’t “exploded” yet, according to John Keddy, an insurance AI solutions executive at Lazarus AI, but, he said, “that doesn’t mean it’s not coming and companies should not be prepared.” Lazarus uses multi-modal AI tools and technologies to handle information formats including video, reports and handwritten notes.

Describing a typical scenario, Keddy said security analysts watch numerous monitors at once and must determine whether an alert is real and warrants notifying the target of the cyber attack. Insurance companies can use Gen AI to set up cyber defense patterns more effectively, he said. For instance, Gen AI itself may be better than people at determining that a phishing email was generated by AI, and is therefore indeed a phishing attempt, he explained.
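
One hedged way to picture the triage step Keddy describes is to have a Gen AI model score a suspected phishing email before an analyst is paged. The model name, prompt and JSON response contract below are assumptions for illustration, not a specific vendor's workflow.

```python
# Sketch: ask a Gen AI model to triage a suspected phishing email before an
# analyst is paged. Model name, prompt and JSON contract are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_email(subject: str, body: str) -> dict:
    prompt = (
        "You are a security triage assistant. Assess whether this email is a "
        "phishing attempt and whether it appears machine-generated. Reply in "
        'JSON: {"phishing_likelihood": 0-1, "ai_generated_likelihood": 0-1, '
        '"reason": "..."}\n\n'
        f"Subject: {subject}\n\nBody:\n{body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

verdict = triage_email(
    "Urgent: verify your policyholder account",
    "Dear customer, click here within 24 hours to avoid suspension...",
)
# Only page the on-call analyst when the model is reasonably confident.
if verdict["phishing_likelihood"] > 0.7:
    print("escalate to analyst:", verdict["reason"])
```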

Gen AI can likewise be used to prevent fraudulent takeovers of insurance policyholder accounts committed with the technology, said James Laird, chief operating officer at Intelligent Voice, a speech recognition technology company. Intelligent Voice confirms the identity of an account's listed owner, counteracting a cyber attacker's attempt to use stolen information to commit fraud through that account.

“We seamlessly reinforce existing processes, whether that’s knowledge based authentication, or whether that’s using biometric details to enhance call screening,” Laird said.
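
A simplified sketch of that layering, combining knowledge-based answers with a voice-biometric match score, might look like the following; the thresholds and decision rules are illustrative assumptions, not Intelligent Voice's actual logic.

```python
# Simplified sketch of layered caller authentication: knowledge-based answers
# plus a voice-biometric match score. Thresholds and rules are illustrative.
from dataclasses import dataclass

@dataclass
class AuthSignals:
    kba_correct: int          # knowledge-based questions answered correctly
    kba_total: int            # questions asked
    voice_match_score: float  # 0.0-1.0 similarity to the enrolled voiceprint

def authenticate(signals: AuthSignals) -> str:
    kba_ok = signals.kba_correct / signals.kba_total >= 0.8
    voice_ok = signals.voice_match_score >= 0.85
    if kba_ok and voice_ok:
        return "allow"
    if kba_ok or voice_ok:
        return "step-up"  # e.g. one-time passcode or callback to the number on file
    return "deny"         # likely account-takeover attempt; flag for review

# Correct answers alone are not enough when the voice does not match, which is
# exactly the situation when stolen data is being used by an attacker.
print(authenticate(AuthSignals(kba_correct=3, kba_total=3, voice_match_score=0.42)))
```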

Insurers can counteract attempts to collect on fraudulent claims, Laird added, by supplementing AI with a “human in the loop,” an idea frequently cited as a form of AI oversight. Hackers aim to attack “at volume with automated processes,” he said. “In the code of conduct for the use of AI in claims for insurers, the good insurers will always have a human in the loop.”
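
The "human in the loop" Laird describes can be pictured as a routing rule: automation handles the clear cases, and anything the model flags goes to a person. The thresholds and field names below are assumptions for illustration.

```python
# Minimal sketch of human-in-the-loop claim routing. Thresholds are illustrative.
AUTO_APPROVE_LIMIT = 5_000       # monetary ceiling for straight-through processing
FRAUD_REVIEW_THRESHOLD = 0.30    # model-score cutoff for human review

def route_claim(claim_amount: float, fraud_score: float) -> str:
    """Return where a claim goes next."""
    if fraud_score >= FRAUD_REVIEW_THRESHOLD:
        return "human fraud investigator"
    if claim_amount <= AUTO_APPROVE_LIMIT:
        return "straight-through processing"
    return "human claims adjuster"

for amount, score in [(1_200, 0.05), (1_200, 0.65), (48_000, 0.10)]:
    print(amount, score, "->", route_claim(amount, score))
```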

Since last April, ransomware attacks have kept increasing month after month, observed Jennifer Wilson, senior vice president and cyber practice leader at Newfront, a commercial insurance platform company. She attributed the rise to “threat actors” finding new ways to attack using AI, but pointed to AI-driven modeling as a means of defense.

“The good guys have found a way to use AI in many ways that help the insurance industry,” she said. “One is through modeling. You finally have enough claims and now the AI technology to better model and predict what types of claims and the frequency and severity of claims that insureds should expect based on their industry and their revenue size. We wouldn’t be this far without machine learning tools and techniques.”
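
The modeling Wilson describes is commonly framed as frequency times severity: predict how many claims an insured will have and how large each claim will be, then multiply. The sketch below uses that framing with made-up features and data; the column choices and model forms are assumptions, not her firm's approach.

```python
# Sketch of frequency/severity cyber-claims modeling: expected loss is the
# predicted claim count times the predicted claim size. Data are made up.
import numpy as np
from sklearn.linear_model import PoissonRegressor, GammaRegressor

# Hypothetical features per insured: [log_revenue, employees_thousands, prior_incidents]
X = np.array([
    [7.0, 0.2, 0],
    [8.5, 1.5, 1],
    [9.2, 4.0, 2],
    [10.1, 12.0, 1],
])
claim_counts = np.array([0, 1, 3, 2])                  # cyber claims per year
avg_claim_size = np.array([0.0, 90.0, 250.0, 400.0])   # $ thousands, 0 if no claims

frequency_model = PoissonRegressor().fit(X, claim_counts)

# Severity is modeled only on insureds that actually had claims.
has_claims = claim_counts > 0
severity_model = GammaRegressor().fit(X[has_claims], avg_claim_size[has_claims])

prospect = np.array([[9.0, 3.0, 1]])
expected_loss = frequency_model.predict(prospect) * severity_model.predict(prospect)
print(f"expected annual cyber loss: ${expected_loss[0]:.0f}k")
```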

Gen AI-powered cyber attacks are a “new and developing risk,” but a standard enterprise risk management (ERM) framework can still be applied to address them, said Lauren Finnis, head of commercial lines for the insurance consulting and technology division, North America, at Willis Towers Watson.

An ERM framework asks what an organization's vulnerabilities are, what can go wrong, what the consequences would be and how those could play out for the organization, Finnis said. An ERM leader can look at the elements of a company, including HR, operations, IT and procurement, and map the risks, she said.

“We talk about near misses and we do the impact analysis,” Finnis said. “Which one of these can we tolerate? Which ones can we not? Then you move on to mitigation. Once you have something mapped out, what can you mitigate? Obviously, insurance is there and that’s what we do. Insurers definitely understand that piece. Gen AI is new, but the principles still apply.”
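
A compact sketch of the mapping-and-tolerance exercise Finnis walks through might look like the following: catalogue Gen AI-related risks by business function, score likelihood and impact, and rank what needs mitigation or insurance transfer. The scoring scale and example entries are illustrative assumptions.

```python
# Compact sketch of an ERM-style risk register for Gen AI-driven cyber threats.
# The 1-5 scales, tolerance line and example entries are illustrative only.
from dataclasses import dataclass

@dataclass
class Risk:
    function: str      # HR, operations, IT, procurement, ...
    scenario: str
    likelihood: int    # 1 (rare) to 5 (frequent)
    impact: int        # 1 (minor) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("HR", "AI-generated spear-phishing of payroll staff", 4, 4),
    Risk("IT", "Deepfaked voice used to reset privileged credentials", 2, 5),
    Risk("Procurement", "Fraudulent vendor invoices written by Gen AI", 3, 3),
]

TOLERANCE = 9  # above this, mitigate or transfer the risk (e.g. via insurance)
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    action = "mitigate / transfer" if risk.score > TOLERANCE else "monitor"
    print(f"{risk.score:>2}  {risk.function:<12} {risk.scenario} -> {action}")
```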