4 ways ChatGPT could worsen cyber risk

Language models like ChatGPT could be used by bad actors to rewrite software code and craft more convincing phishing emails, a cyber specialist warned recently.

The state-of-the-art language generation model can understand and generate human-like text from a given prompt, which can help businesses streamline processes. But with the evolution of new technology comes risk, said Zair Kamal, director of client development and a cyber specialist with HSB Canada.

A ChatGPT-type artificial intelligence model could be used by bad actors in any one of four ways, Kamal said in a Q&A article earlier this month: 

Compromising sensitive data — Language models process and store large amounts of data from the queries they receive. If employees upload sensitive data and confidential information into the model, that data could be hacked, leaked or accidentally exposed.
Rewriting code to develop malware — Language models may be able to deliberately alter software code. Code for an antivirus program, for example, could be changed so that it no longer recognizes a virus.
Preparing phishing emails — Language models may be able to take over the task of drafting a well-written phishing email.
More efficient information-gathering — Normally, a cybercriminal manually searches a target company’s website or social networks. A ChatGPT-like AI model could automate these searches, helping criminals get faster access to information.

Brian Schnese, assistant vice president and senior risk consultant of organizational resilience at HUB International, agreed that generative AI solutions like ChatGPT let cybercriminals craft material that’s increasingly difficult to discern as a scam.

“I went to ChatGPT and I asked it to please write me an email that I can send to my vendor asking to change my wire banking instructions,” he told Canadian Underwriter in a recent interview. “Instantly, I’ve got an amazingly worded email that delivers on that.” 

If the first message doesn’t work, criminals can go back and further refine the message.  

“Then I went back after I got my response, and I [asked] ChatGPT to please incorporate a sense of urgency, and also please stress the confidential nature of this request,” Schnese said.   
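
On the defence side, those same tells can be turned into signals. What follows is a minimal, hypothetical sketch of a keyword heuristic that flags inbound emails combining urgency, secrecy and payment-change language; the cue lists and threshold are illustrative assumptions, not a production filter, which would rely on trained classifiers and many more signals.

```python
# Hypothetical cue lists for illustration only; a real filter would also weigh
# sender reputation, SPF/DKIM results, known-vendor history and other signals.
URGENCY_CUES = ["urgent", "immediately", "as soon as possible", "right away"]
SECRECY_CUES = ["confidential", "do not share", "keep this between us"]
PAYMENT_CUES = ["wire", "banking instructions", "account number", "payment details"]


def cue_score(body: str) -> int:
    """Count how many suspicious cue categories appear in an email body."""
    text = body.lower()
    categories = (URGENCY_CUES, SECRECY_CUES, PAYMENT_CUES)
    return sum(any(cue in text for cue in cues) for cues in categories)


def flag_for_review(body: str, threshold: int = 2) -> bool:
    """Flag messages that combine urgency, secrecy and payment-change language."""
    return cue_score(body) >= threshold


sample = ("Please update our wire banking instructions immediately. "
          "This request is confidential.")
print(flag_for_review(sample))  # True: all three cue categories are present
```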

On the flip side, the Canadian P&C industry is using GenAI not just to simplify tasks, as ‘traditional’ AI does, but for a variety of applications ranging from marketing and fraud detection to legal documents. For example, insurers can use GenAI to understand a threat vector, and also for simple disclosure requirements and warranty statements, Greg Markell, Ridge Canada president and CEO, said during an industry event earlier this year.

To protect clients against increasingly sophisticated attacks, Kamal recommends a combination of different lines of defence rather than just one security measure. 

This includes identifying and classifying data into different sensitivity levels and clearly defining what type of data can be shared with ChatGPT, and what should remain confidential.  
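
One way to enforce such a policy, sketched below under the assumption of a simple regex-based scrub, is to redact recognizable sensitive values before a prompt ever leaves the organization. The patterns here (email addresses, payment cards, Canadian SINs) are illustrative placeholders, not a complete data-loss-prevention ruleset.

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# data-loss-prevention (DLP) tool with rules tuned to the organization's
# own data classification scheme.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # payment card numbers
    "SIN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Canadian SIN format
}


def redact(prompt: str) -> str:
    """Replace recognizable sensitive values before the prompt leaves the company."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt


print(redact("Email jane.doe@example.com about card 4111 1111 1111 1111."))
# -> Email [EMAIL REDACTED] about card [CARD REDACTED].
```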

Business leaders should also educate their teams on data security when using ChatGPT, ensure sensitive information is not shared, and restrict access so that only authorized personnel can use ChatGPT or related systems. Training users to recognize and report suspicious activity is also crucial.
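
To illustrate the ‘authorized personnel only’ point, here is a minimal, hypothetical gateway sketch: the AUTHORIZED_USERS allowlist and the forward_to_llm() placeholder stand in for whatever identity system and API client an organization actually uses.

```python
# Hypothetical internal gateway: only cleared users can reach the external
# LLM service, and blocked attempts can be logged for the security team.
AUTHORIZED_USERS = {"alice@corp.example", "bob@corp.example"}  # placeholder allowlist


def forward_to_llm(prompt: str) -> str:
    """Placeholder for the real call to an external LLM API."""
    return f"(model response to: {prompt})"


def handle_request(user: str, prompt: str) -> str:
    if user not in AUTHORIZED_USERS:
        # Unauthorized staff never reach the external service.
        raise PermissionError(f"{user} is not cleared to use the LLM gateway")
    return forward_to_llm(prompt)


print(handle_request("alice@corp.example", "Summarize our public press release."))
```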

Finally, develop a well-defined incident response plan in case of a data breach or misuse. This should include communication strategies, investigation procedures, and mitigation steps. 


Feature image by iStock.com/Supatman