7 ways businesses can manage AI risks

Authored by QBE Senior Risk Manager Jaini Gudhka

GenAI is fast becoming an essential tool in modern business operations. In the second of a two-part blog, we look at the practical measures businesses should implement to mitigate its emerging risks.

For businesses, Generative Artificial Intelligence (GenAI) is both exciting and daunting. The number of GenAI users is expected to reach 77.8 million within two years of ChatGPT's November 2022 release – more than double the rate of uptake for mobile phones at the height of their adoption.

Undoubtedly, GenAI can give a competitive edge, by speeding up operations for instance. But this fast-evolving technology poses new risks to data privacy, intellectual property and sound decision-making. While businesses are increasingly keen to engage with the opportunities that AI offers, many are still grappling to understand and manage the full risk landscape in which it operates.

2023 research from Riskonnect revealed that although 93% of surveyed companies were aware of AI-related dangers, only 17% had briefed their workforce or implemented any training – and only 9% said that their organization was prepared for these risks.

While uncertainty around AI might lead some to take a ‘wait and see’ approach, risk managers should examine in more detail how GenAI applications are being applied to internal and external business operations. Short- and long-term planning should combine a robust risk management framework with structured scenario testing to address potential dangers.

For entrepreneurs who want to experiment with GenAI safely, we have compiled a seven-point checklist to mitigate those risks:


Choose your tool

Many AI tools will capture the data input for their own machine-learning processes. Make sure the one you select meets client confidentiality and information security standards. The National Cyber Security Centre provides guidelines for secure AI system development.

Include data and cyber requirements in the Service Level Agreement, and consider contractual protections if your own service delivery model will rely on output from an AI tool.

Do your due diligence

When selecting third-party providers who will have access to your and your clients’ data, check their AI safeguards.

Detail your data

Keep a record of what data you have, its quality, value, and where it is stored.

– Is your data relevant and adequate for your needs? Is it reliable?
– Do you need additional sources and if so, which sources?
– Is the data held in silos?
– Has it been corrupted or infiltrated?

You should include a detailed data strategy in your AI risk management plan, and draw on a diversity of sources to mitigate the risk of bias.
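To make this concrete, a data inventory entry could be captured as a simple structured record. The Python sketch below is illustrative only – the field names and example values are hypothetical and should be adapted to your own data governance framework.

    # A minimal sketch of a data inventory record (all fields illustrative).
    from dataclasses import dataclass, field

    @dataclass
    class DataAsset:
        name: str         # what the data set is
        location: str     # where it is stored
        owner: str        # accountable team or individual
        quality: str      # how reliable and current it is
        sensitivity: str  # e.g. public, internal, client-confidential
        sources: list = field(default_factory=list)  # provenance, to spot silos and bias

    inventory = [
        DataAsset(
            name="client_contact_records",
            location="CRM (EU cloud tenant)",
            owner="Client Services",
            quality="verified quarterly",
            sensitivity="client-confidential",
            sources=["web forms", "account managers"],
        ),
    ]

    # Simple check against the questions above: flag single-source data sets.
    for asset in inventory:
        if len(asset.sources) < 2:
            print(f"{asset.name}: consider additional sources to reduce bias")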

Polish your policies

– Keep accountability and governance front and centre when updating policies and procedures
– Include AI in your risk register
– Update your acceptable use documentation to specify which AI tools can be used, on what devices, and for what purposes (a sketch of such an allowlist follows this list)
– Review supervision processes and ensure that AI-assisted outputs are checked by a person on a risk-assessed basis
– Test your data security regularly, using trusted independent agencies to assess vulnerabilities
– Include misuse of AI in your disciplinary processes
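By way of illustration, an acceptable use policy can be backed by a machine-readable allowlist that IT can enforce. The Python sketch below is hypothetical – the tool names, devices and purposes are placeholders, not a prescribed format.

    # Hypothetical acceptable-use allowlist: which AI tools may be used,
    # on which devices, and for which purposes. All entries are illustrative.
    APPROVED_AI_TOOLS = {
        "drafting-assistant": {
            "devices": {"managed laptop"},
            "purposes": {"internal drafts", "meeting summaries"},
        },
    }

    def is_permitted(tool: str, device: str, purpose: str) -> bool:
        """Check a proposed use of an AI tool against the allowlist."""
        entry = APPROVED_AI_TOOLS.get(tool)
        if entry is None:
            return False  # unapproved tools are blocked by default
        return device in entry["devices"] and purpose in entry["purposes"]

    # Example: an unapproved purpose is rejected.
    print(is_permitted("drafting-assistant", "managed laptop", "client advice"))  # False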


Beat breaches

– Use multi-factor authentication (MFA) and digital certificates to secure communications
– Set up internal-only channels for colleagues to share documents
– For important actions like high-value bank transfers, require the operation to be verified via a secure communication channel that is not initiated by the requester (see the sketch after this list).
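As a minimal sketch of that last idea, the code below verifies a transfer by contacting the approver over an independent channel, using contact details already held on file rather than any supplied in the request. The secure channel is simulated here with a console prompt; the names and structure are illustrative, not a prescribed implementation.

    # Out-of-band verification sketch for a high-value payment (illustrative).
    from dataclasses import dataclass

    @dataclass
    class TransferRequest:
        amount: float
        payee: str
        approver_on_file: str  # contact taken from internal records, not the request

    def verify_out_of_band(request: TransferRequest) -> bool:
        """Ask the approver to confirm via an independent, secure channel."""
        prompt = (
            f"[secure channel to {request.approver_on_file}] "
            f"Confirm transfer of {request.amount:,.2f} to {request.payee}? (y/n) "
        )
        return input(prompt).strip().lower() == "y"

    if __name__ == "__main__":
        req = TransferRequest(250_000.00, "Acme Supplies Ltd", "Finance Director")
        print("Approved" if verify_out_of_band(req) else "Blocked")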

Educate your employees

– Regular training is essential to help employees prevent breaches
– People must be familiar with the AI tools which the business has approved for use, and the associated workflows, processes and risk controls
– They should also be aware of the wider implications. Used inappropriately, AI can replicate errors and reinforce bias, so, on top of AI literacy and data management training, critical assessment skills are crucial for identifying errors, hallucinations or bias
– Employees should also be wary of innovations such as deepfakes, and of how criminals can use AI, as both have an impact on cyber security.

Take cover

Purchasing a cyber insurance policy not only helps businesses transfer emerging risks; it also gives them access to a range of associated services and expert advice, so they can better protect themselves and, in the event of an incident, recover more quickly.

By taking these reasonable steps, businesses should feel confident enough to experiment with generative AI, rolling out the innovations that are right for them and their clients.