AI accelerates the cybersecurity arms race
The proliferation of advanced artificial intelligence over the last few years may have brought the arms race between cybercriminals and those who work to stop them into an entirely new phase, as AI programs are now advanced enough to produce malicious new code on the fly.
This was demonstrated recently by researchers experimenting with the generative language AI ChatGPT who discovered that the program is capable of producing entirely new malware on request, provided the request is phrased in such a way that it can bypass content controls. Further, researchers were able to use the program to specifically create polymorphic malware, an advanced type of malicious program that can actually alter its own code to evade detection and resist removal.
The idea that AI can generate new malware on demand represents a major development in the cybersecurity world, due to the speed and adaptability it will give cybercriminals in deploying new software. However, Sreekar Krishna, the U.S. national leader for artificial intelligence at Big Four firm KPMG, said that while ChatGPT and the latest AIs of its kind are the most prominent right now, artificial intelligence has long been a tool used in both cyberattacks and cyber defense.
“The attack vectors have always leaned on some form of AI to [make better attacks] than what they were doing just a month before or even two weeks before. The threat vectors for AI have adapted way faster than the institutions trying to protect themselves against the threat. … It just took a lot of effort to bring it to bear the way ChatGPT has done it,” he said.
Highly specialized AIs are already in use at institutions today. Krishna noted that he'd previously worked at Microsoft, where the company has been using AI for at least a decade. What's different now is that a single model, ChatGPT, is capable of doing what previously took several models linked together: a human would feed an input into one AI system; its output would then become the input for another AI system, which in turn would feed its own output into yet another, until the humans had what they needed.
“If you look at the stack of some of the big tech firms like Amazon or Google or Netflix, they chain a bunch of AI models to do something together. One model outputs, which feeds to the next model. Typically AI technologies have been done by chaining models together. What generative AI is starting to show is you may be able to do some of these tasks using one model or maybe two or three, very few, that work in tandem to have better outputs,” he said. “So you could think about cybersecurity as a specific task we do and we can now use ChatGPT, generative AI models, to tune it, to do something interesting in the cyber arena.”
Even if OpenAI, the company behind ChatGPT, finds a way to block all attempts to make the program code malware, it is likely that other, similar programs will eventually be coded that may not have such controls. In this respect, the barrier to entry has effectively been lowered for cybercriminals, according to Mark Burnette, the advisory services practice leader and shareholder-in-charge of Top 100 Firm LBMC's information security practice. People with an interest in malicious activities will find it easier to enact them.
“The barrier of entry is indeed [already] very low and ChatGPT and tools like it really underscore the ease by which these types of capabilities are available to even people who are not sophisticated and may not have the level of technical acumen [this] would need,” he said.
At the same time, however, the barrier to entry has also been lowered for those with an interest in cybersecurity. David Cieslak, chief cloud officer and executive vice president with RKL eSolutions, noted that AI has actually been making things easier for cybersecurity professionals for years, even as far back as spam filters and virus scanners, which could be argued to be rudimentary forms of the technology. And so, while more advanced AI can theoretically enable criminal activity, it can also bolster cybersecurity defenses against it.
“Is AI being used for attacks? Yes. For defense? Yes. And the two of those continue to escalate. This is similar to the conversation I hear about advanced computing in general. Like what quantum computing will do, where [codes] that took years to crack can be cracked in an instant. But then again, quantum computing can also make us potentially unhackable as well. So both teams are playing with the same ammunition,” he said.
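The spam filters Cieslak cites as early, rudimentary AI were often simple statistical classifiers. A toy Naive Bayes filter (the corpus and messages below are invented for illustration) shows the idea: learn word frequencies from known spam and known legitimate mail, then score new messages against each:

```python
from collections import Counter
import math

# Toy training corpora -- invented examples, not real mail.
spam_docs = ["win free prize now", "free money win"]
ham_docs  = ["meeting notes attached", "lunch at noon"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts, total):
    # Laplace smoothing so unseen words don't zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in message.split())

def is_spam(message: str) -> bool:
    return (log_likelihood(message, spam_counts, spam_total) >
            log_likelihood(message, ham_counts, ham_total))

print(is_spam("free prize"))       # -> True
print(is_spam("meeting at noon"))  # -> False
```

Modern defensive tooling is far more sophisticated, but the escalation Cieslak describes is the same loop: attackers probe the classifier, defenders retrain it.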
Implications for accounting firms
Due to the sensitive data they have on their clients, accounting firms have been especially interested in cybersecurity and the implications that AI might hold for it. Cieslak noted, however, that while AI enables the creation of on-demand solutions, this is generally not what cybersecurity professionals do.
“They’re not trying to create one-off tools and preventative measures but looking at it systematically, and I think organizations are best served when … you have something where it’s not just an audience of one who is creating it, because you want something tried, true and tested and supported by organizations with the resources to specialize in that, so I don’t look at ChatGPT or the like being used to create boutique or one-off solutions,” he said.
LBMC’s Burnette made a similar point, noting that, for a CPA firm with cyber service offerings, the value their professionals bring to the client isn’t in executing tools or programming an AI but, rather, interpreting data for clients and putting them into the context of risk. Clients, he said, don’t need accounting firms to tell them where their vulnerabilities are and what they can do; they can buy programs for that. What clients need is help understanding the context of their vulnerabilities and how resources can be marshaled to manage those risks.
“That is what the true cyber professionals bring to the table that an AI still can’t replicate,” he said.
Burnette did not, however, entirely dismiss the notion that accounting firms could start offering custom solutions as a value-added service. He noted that it is certainly the hallmark of the cybersecurity professional to leverage technology to analyze and respond to cyber threats and so, in the future, he could see organizations building such programs for competitive advantage. However, he noted that the challenge for CPA firms in particular is that such systems represent significant investments, sometimes well beyond what they can afford.
“So it’s less likely you see firms focusing on that. More likely [we’d see] cyber boutique firms — not necessarily CPA firms — because they have access to the equity. [But] for CPA firms taking private equity investments, it’s certainly possible they could direct some of that towards sophisticated technology like that,” he said.
In terms of clients and what this all means for them, Cieslak said standard cybersecurity advice still applies; it's just even more urgent that people follow it.
“The recommendations aren’t changing, it’s just creating more urgency to make sure that [security] is in our mindset. So, multifactor authentication, FIDO [Fast Identity Online Authentication], making sure we look at those as baseline for connectivity and access. It’s not just a nice to have,” he said.