Cyber

US election is coming – it’s time to get cyber ready

“I don’t think governments have really woken up to the risk at all”

2024 may still be young, but it is already shaping up to be a monumental year on the world stage, filled with national elections. Across the world, citizens in over 80 countries will exercise their right to vote, including those in Mexico, South Africa, Ukraine, Indonesia, Taiwan, the UK, Pakistan, India and, of course, the US.

With geopolitical risks still on the rise, it is no secret that this year’s elections, especially in the United States, are set to invite a lot of scrutiny. While state-sponsored cyber intrusions typically target government entities and critical infrastructure, the potential for collateral attacks is a continuing concern for businesses too. In addition, the capacity of artificial intelligence (AI) to generate and disseminate misinformation at unprecedented scale and speed carries considerable consequences.

Jake Hernandez (pictured above, left), CEO of AnotherDay, a Gallagher company specializing in crisis and intelligence consultancy, described 2024 as “the largest” year in electoral history, and one that is extremely vulnerable to the threat of wildly powerful technologies.

“There are over two billion people expected to be going to the polls,” Hernandez said. “And the problem with that, especially now we’ve had this quantum leap in AI, is that technology to sow disinformation and distrust at nation-state scales is now available to pretty much anyone.”

Learning lessons from the 2016 election

Harking back to the troubles of the 2016 US election, Hernandez noted a shift in how “online trolling” operates. Whereas it was then centered on organizations such as the Internet Research Agency in St. Petersburg, there is no need for such centers in today’s climate, as AI has taken over the “trolling” role.


“So, the potential is absolutely there for it to be a lot worse if there are not very proactive measures to deal with it,” Hernandez explained. “I don’t think governments have really woken up to the risk at all.

“AI allows you to personalize messages and influence potential voters at scale, and that further erodes trust and has the potential to really undermine the functioning of democracy, which is really very dangerous.”

This year’s World Economic Forum Global Risks Report frames the issue this way: “The escalating worry over misinformation and disinformation largely stems from the risk of AI being utilized by malicious actors to inundate global information systems with fabricated narratives.” It is a sentiment shared by AnotherDay.

Reflecting on the effects of the 2016 election, AnotherDay head of intelligence Laura Hawkes (pictured above, right) said it was the first instance in which misinformation and disinformation were used effectively as part of a campaign.

“Now that it’s been tried and tested, and the tools have been sharpened for certain sorts of players, it’s likely we’ll see it again,” Hawkes said. “Regulation of tech firms is going to be essential.”

Spreading disinformation erodes trust

The proliferation of misinformation and disinformation poses significant risks to the business landscape, influencing a range of outcomes, from election results to public trust in institutions.

AnotherDay notes that the manipulation of information, particularly during electoral processes, can have a destabilizing effect on democratic norms, leading to increased polarization. This environment of mistrust extends beyond the public sector, impacting perceptions and governance within the private sector as well.


Moreover, the spread of false information can lead to varied regulatory responses. Populist administrations may favor deregulation, which, while potentially reducing bureaucratic barriers for businesses, can also introduce significant volatility into the market.

Such shifts in governance and regulatory approaches underscore the challenges businesses face in navigating an increasingly disinformation-saturated environment.

For businesses and the general public alike, this also means a lot more uncertainty, Hawkes explained.

“The advent of AI is going to impact at least some elections,” she said. “AI means that content can be made cheaper and produced on a mass scale. As a result, the public, and also companies, are going to lose trust in what’s being put out there.”

Prepping against cyber threats – especially AI-driven ones

AnotherDay explained that organizations aiming to fortify their cyber defenses must begin by pinpointing potential threats, understanding the attackers’ motivations, and determining the direction of the threat.

A crucial component of this strategy, the firm explained, involves recognizing the tactics employed by hackers, which informs the development of an effective defense strategy that includes both technological solutions and employee awareness.

Recent advancements in cybersecurity research and development have led to the emergence of new security automation platforms and technologies. These innovations can continuously monitor systems to identify vulnerabilities and alert the necessary parties to any suspicious activity detected. Services such as penetration testing are also evolving, increasingly using generative AI to improve the detection of anomalous behavior.

Despite the implementation of sophisticated data security policies and systems, the human element often remains a weak link in cybersecurity defenses. To address this, there is a growing emphasis on the importance of employee education and the promotion of cybersecurity awareness as critical measures against cyber threats.


Cybersecurity professionals are increasingly adopting approaches such as zero trust, network segmentation, and network virtualization to mitigate the risk of human error. The zero-trust model operates on the premise of “never trust, always verify,” requiring identity and device verification at every access point and thereby adding a further layer of security to protect organizational assets from cyber threats.
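
To make “never trust, always verify” concrete, the minimal Python sketch below shows a hypothetical access handler that re-checks both the user’s identity and the device’s compliance on every single request, rather than trusting anything by virtue of its network location. All names and data in the sketch are illustrative assumptions, not a reference to any specific product or API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AccessRequest:
    user_token: str   # short-lived token from an identity provider (illustrative)
    device_id: str    # fingerprint of the device making the request (illustrative)
    resource: str     # asset being requested

# Hypothetical stand-ins for an identity provider and a device-posture registry.
VALID_TOKENS = {"token-abc": "j.smith"}
COMPLIANT_DEVICES = {"laptop-042"}

def verify_identity(token: str) -> Optional[str]:
    # Return the username if the token is valid, otherwise None.
    return VALID_TOKENS.get(token)

def verify_device(device_id: str) -> bool:
    # Check the device is enrolled and meets the security baseline.
    return device_id in COMPLIANT_DEVICES

def handle_request(req: AccessRequest) -> str:
    # "Never trust, always verify": both checks run on every request,
    # regardless of network location or any earlier successful access.
    user = verify_identity(req.user_token)
    if user is None:
        return "DENY: identity could not be verified"
    if not verify_device(req.device_id):
        return "DENY: device not compliant"
    return f"ALLOW: {user} may access {req.resource}"

if __name__ == "__main__":
    print(handle_request(AccessRequest("token-abc", "laptop-042", "claims-db")))
    print(handle_request(AccessRequest("token-abc", "unknown-phone", "claims-db")))

In a real deployment, checks of this kind would sit in front of every internal service and would typically be backed by an identity provider and a device-management platform rather than hard-coded lists.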
