How generative AI is enabling cyber criminals


Generative AI is revolutionizing how cyber criminals run their scams, leaving targets unable to detect what would once have been obvious phishing attempts, experts told Canadian Underwriter.

As these threats become increasingly sophisticated, insureds must exercise more caution than before to prevent security slip-ups.

AI can rapidly advance the P&C insurance industry by helping cyber insurers improve the security of clients' systems.

“You’re able to use AI to go through the system and find any types of flaws or cracks in your cyber security infrastructure,” Sinead Bovell, futurist and founder of WAYE, told attendees during her keynote at the RIMS Canada Conference in Ottawa. “That’s going to be a lot cheaper and easier for organizations to do.” 

On the other hand, generative AI is helping cyber criminals produce believable, personalized phishing messages.

Generative AI (for example, ChatGPT) uses machine learning to generate text, audio and images that can often appear quite sophisticated, or even human-made.

The advanced capabilities and easy accessibility of generative AI enable cyber criminals to craft material that's increasingly difficult to recognize as a scam.

“A bad actor could use AI to generate the best version of an email that’s likely going to make somebody click,” Bovell said. 

For example, cyber criminals can prompt ChatGPT to write a business message meant to elicit confidential information from an employee, said Brian Schnese, AVP, senior risk consultant, organizational resilience at HUB International, in an interview with CU.


“I went to ChatGPT and I asked it to please write me an email that I can send to my vendor asking to change my wire banking instructions,” he explained. “Instantly, I’ve got an amazingly worded email that delivers on that.”

Refining the message

If the first message ChatGPT generates doesn't cut it, criminals can go back and refine it further.

“Then I went back after I got my response, and I [asked] ChatGPT to please incorporate a sense of urgency, and also please stress the confidential nature of this request,” Schnese said.  

Traditionally, phishing emails have tended to contain unusual spelling or grammar errors, or blatantly obvious tonal indicators, pointing to a message crafted by a cyber threat actor.

With generative AI, the warning signs can be subtle. The AI might be well-versed in a variety of languages and use data and algorithms to imitate the way humans learn, gradually improving the more users engage with it.  

“When I started dealing with email compromise and vishing, which was telephone compromise, there were telltale signs that I was working [with a] criminal,” Dan Elliott, principal, cyber security risk consulting at Zurich Canada, told CU. “A lot of those telltale signs are gone. 

“[Generative AI] is really taking away a lot of those spelling and syntax errors that you used to tell people to look for.” 

Luckily, there are other signs employees can watch for to avoid getting phished.

One sign an email might be a scam is a suspicious sender address, or a domain name that doesn't match the organization the message claims to come from.
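The domain-mismatch check described above can be automated. Below is a minimal sketch (not from any specific security product; the function names and example domains are illustrative) of comparing a sender's domain against the domain the message claims to represent:

```python
# Minimal sketch: flag emails whose sender domain doesn't match the
# domain the message claims to be from. Names and domains here are
# illustrative examples, not a real vendor's API.

def sender_domain(address: str) -> str:
    """Return the lowercase domain portion of an email address."""
    return address.rsplit("@", 1)[-1].lower()

def is_suspicious_sender(from_address: str, expected_domain: str) -> bool:
    """True when the sender's domain differs from the expected domain."""
    return sender_domain(from_address) != expected_domain.lower()

# A lookalike domain is flagged; the legitimate one is not:
print(is_suspicious_sender("billing@acme-payments.com", "acme.com"))  # True
print(is_suspicious_sender("billing@acme.com", "acme.com"))           # False
```

Real mail filters do far more (SPF, DKIM and DMARC checks, lookalike-character detection), but even this simple comparison catches the mismatched-domain pattern Schnese describes.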


“You’re not going to assume that the content of that email is coming from who it says it’s coming from, as an example,” Schnese said. 

That's especially true if the email contains an unusual request, such as one involving a transfer of funds or your login credentials.

As Schnese mentioned, cyber scammers can add a sense of urgency to their AI-crafted phishing attempts. If an email emphasizes how urgent it is, the request should give employees pause.

 

Feature image by iStock.com/xijian