FMA highlights opportunities and risks of AI integration across financial services sector

Regulator to hold roundtable

By Roxanne Libatique

The Financial Markets Authority (FMA) – Te Mana Tātai Hokohoko – has released new research on the adoption of artificial intelligence (AI) across New Zealand’s financial services industry.

The study, part of the FMA’s occasional paper series, surveyed firms in insurance, asset management, banking, and financial advice to assess both the current use of AI and the industry’s plans for future implementation.

“We sought to understand both the benefits and the risks to inform more oversight,” said FMA chief economist Stuart Johnson.

According to Johnson, while AI is seen as a transformative tool in financial services, it also introduces new challenges, particularly in terms of governance.

“Our findings emphasise the need for a balanced approach to harness AI’s benefits while addressing governance and risk concerns,” he said.

Key areas of focus for AI integration

The report identified data quality, technology selection, and proper documentation as critical areas of attention and essential steps in managing AI-related risks.

These aspects are considered key to the ethical and secure use of AI in the financial services sector.

Importance of responsible innovation

Although the FMA takes a technology-neutral stance, Johnson emphasised the importance of responsible innovation.

“We believe that New Zealanders should have access to the same technological advancements as those in other countries,” he said, adding that AI integration must be done with a focus on managing risks appropriately.


FMA’s AI roundtable

To foster ongoing discussions, the FMA will host a roundtable on Oct. 1, 2024, with participants from the study to further examine the use of AI and generative AI (GenAI) in New Zealand’s financial services industry and discuss how firms are managing emerging risks.

Cybercrime risks with GenAI growing

As AI continues to be adopted in legitimate financial sectors, a recent report from cybersecurity firm Trend Micro highlighted the increasing use of GenAI in cybercrime.

Researchers David Sancho and Vincenzo Ciancaglini pointed to a rise in the availability of large language models (LLMs) designed for malicious purposes. These models are being promoted on encrypted messaging platforms such as Telegram, offering users unrestricted responses to harmful queries.

Unlike commercial AI systems like ChatGPT and Google’s Gemini, which are programmed to block unethical requests, these criminal LLMs are specifically designed to support illegal activities.

The report also highlighted a resurgence of earlier criminal AI models such as WormGPT and DarkBERT, which have returned to underground markets in updated versions. These LLMs, once believed to have been discontinued, are now being offered with new features, including voice-enabled functionalities.

In addition to the resurgence of older models, new LLMs like DarkGemini and TorGPT have emerged. Although their capabilities mirror those of other criminal AI tools, their ability to handle image processing adds another layer of potential misuse in cybercrime.

The researchers further noted an increase in deepfake technology being used in criminal activity, warning that as this technology becomes more accessible, it could be increasingly used to target individuals.

