Risks for businesses using AI systems

There’s a lot of pressure on businesses to enter the AI race, but experts recommend looking past the hype and analysing potential risks before relying too much on the technology.

It’s been an eventful year for artificial intelligence. Headlines ranged from the hype surrounding large language models, which power services such as ChatGPT, to the first-ever UN Security Council meeting on the dangers of AI. Keeping track of every new development is a difficult task for business executives trying to work out how best to utilise the technology.

The only certainty is that almost no business of any size can afford to look away from these developments. Companies must be ready to take advantage of the opportunities that arise while also keeping a close eye on the risks. The latter are highly unlikely to come in the form of the most common doomsday scenarios: marauding robots or mass unemployment. The current risks of AI use in business are more subtle – but they can still cause significant disruption.

Companies’ chief risk officers are worried

In a recent report, the World Economic Forum warned that risk management is not keeping pace with the rapid advances in AI technologies. In a survey of chief risk officers (CROs) from major corporations and international organisations, three-quarters of respondents said that using AI poses a reputational risk to their organisation, and nine out of ten said that more needed to be done to regulate the development and use of AI.

The chief risk officers were most concerned about the malicious use of AI technologies, which are easy to exploit for spreading misinformation, facilitating cyberattacks or accessing sensitive personal data. However, AI systems can cause trouble not only when weaponised by bad actors; they can also go wrong in unexpected ways when used as productivity tools. The main concern in this regard is their lack of reliability. Every now and then, AI tools such as chatbots and computer vision systems start “hallucinating” – making up information and presenting it as fact. For example, a US lawyer found himself in trouble when he cited fake cases generated by ChatGPT.
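
One practical safeguard against hallucinated citations is to treat every reference a model produces as unverified until it has been checked against a trusted index. Below is a minimal sketch of such a guardrail in Python; the bracketed citation format, the whitelist and the sample data are hypothetical illustrations for this article, not any vendor’s API.

```python
import re

def flag_unverified_citations(answer: str, verified_sources: set[str]) -> list[str]:
    """Return citations in a model answer that are absent from a trusted index.

    The bracketed "[...]" citation convention and the whitelist are
    placeholders; a real system would check against an authoritative
    database (e.g. a legal research service).
    """
    cited = re.findall(r"\[([^\]]+)\]", answer)
    return [c for c in cited if c not in verified_sources]

# Hypothetical example: one genuine citation, one hallucinated one.
answer = "The point is settled [Smith v. Jones, 2019] and [Doe v. Acme, 2021]."
verified = {"Smith v. Jones, 2019"}
print(flag_unverified_citations(answer, verified))  # -> ['Doe v. Acme, 2021']
```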

This lack of reliability is also a concern when AI systems help operate machinery in unfamiliar situations. It is the reason why truly self-driving cars are still not available, despite Elon Musk’s past claim that they would be capable of driving autonomously from Los Angeles to New York by 2017. Many accidents show the difficulties AI technologies still have with perception and decision-making – such as a vehicle in “Autopilot” mode failing to stop for a school bus, as required in the US, and hitting a student.

Such risks are exacerbated by the fact that current AI systems are highly opaque. They cannot explain their decisions or provide insight into the way they generate content. This lack of transparency makes them black boxes whose output is difficult to trust when much is at stake. In response, researchers have been working on new forms of “explainable AI”, but it will take time before that approach becomes as powerful as the currently prevalent technologies.
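
To give a sense of what explainability can look like in practice, the sketch below uses permutation importance – one common post-hoc technique that scores each input feature by how much shuffling it degrades a model’s accuracy. It relies on scikit-learn with synthetic data, and illustrates the general idea rather than any specific “explainable AI” product.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data: 6 features, only 3 of which carry signal.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the resulting drop in accuracy;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```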

Transparent AI would also help companies mitigate the fact that AI systems are prone to bias. The ideal of a neutral computer making fact-based decisions, free of a human’s flawed perceptions and prejudices, can hardly be realised when those computers are trained on materials created by humans. A classic example is Amazon’s attempt to automate parts of its hiring process by letting an AI select the best resumes. As it turned out, the recruiting engine “did not like women”, according to Reuters. It had learned to discriminate against female applicants by taking cues from previous hiring processes.
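
Bias of this kind can at least be measured. One long-standing screening heuristic from US employment practice is the “four-fifths rule”: if any group’s selection rate falls below 80 per cent of the highest group’s rate, the process is flagged for possible adverse impact. The sketch below applies that check; the groups and numbers are invented placeholders.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (selected / applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

# Invented screening results, for illustration only: (selected, applicants).
outcomes = {"group_a": (120, 400), "group_b": (45, 300)}
rates = selection_rates(outcomes)
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    status = "FLAG" if ratio < 0.8 else "ok"  # four-fifths (80%) guideline
    print(f"{group}: rate={rate:.2f}, ratio to best={ratio:.2f} [{status}]")
```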

Finding the balance between caution and opportunity

Companies delegating tasks to AI systems should be aware that the results may be flawed, turning the investment into a failure or even leading to further consequences such as lawsuits or reputational damage. On the other hand, staying away from AI entirely until it is a perfectly safe investment of time and money carries its own risk: by then, competitors may be ahead and out of sight. So, what can business leaders do?

McKinsey suggests making sure that the team in charge of AI consists not only of “techies” but also of business-minded legal and risk-management experts. “Risk analysis should be part of the initial AI model design,” the consulting firm emphasises.

“Second, because there is no cure-all for the broad spectrum of AI risks, organisations must apply an informed risk-prioritisation plan,” McKinsey recommends. According to the study’s authors, most threats fall into at least one of six categories: privacy, security, fairness, transparency and explainability, safety and performance, and third-party risks.
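
As a rough illustration of what such a prioritisation plan could look like, the sketch below ranks the six categories by a simple likelihood-times-impact score – a generic risk-register convention, not McKinsey’s methodology. The scores are invented placeholders; a real plan would derive them from structured assessments.

```python
# Each category gets a 1-5 likelihood and impact score (placeholders here);
# ranking by their product surfaces where mitigation effort should go first.
RISKS = {
    "privacy":                       {"likelihood": 4, "impact": 5},
    "security":                      {"likelihood": 4, "impact": 4},
    "fairness":                      {"likelihood": 3, "impact": 4},
    "transparency & explainability": {"likelihood": 3, "impact": 3},
    "safety & performance":          {"likelihood": 2, "impact": 5},
    "third-party":                   {"likelihood": 3, "impact": 4},
}

ranked = sorted(RISKS.items(),
                key=lambda item: item[1]["likelihood"] * item[1]["impact"],
                reverse=True)
for name, risk in ranked:
    print(f"{name:32s} score={risk['likelihood'] * risk['impact']:2d}")
```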

The International Institute for Management Development (IMD), based in Switzerland and Singapore, also says it’s crucial to have robust internal procedures in place to mitigate AI risks. “This includes developing guidelines for AI use, establishing controls for AI development, and implementing tools to monitor and manage AI systems,” write the authors Michael D. Watkins, Professor of Leadership and Organizational Change at IMD, and Ralf Weissbeck, former CIO of the Adecco Group.

As a starting point, they recommend developing an AI ethics policy outlining the business’s commitment to using AI responsibly. “Furthermore, businesses should equip employees with the tools and training to work safely with AI,” they suggest. “By investing in their employees, businesses can create a culture of AI safety and responsibility, ensuring everyone plays their part in mitigating AI risks.”

Selecting a reliable vendor is essential

Companies should also keep in mind that vendors of AI services can be a source of significant risks. “Therefore, a rigorous vendor selection and audit process is essential,” Watkins and Weissbeck emphasise. “Business leaders should establish criteria for selecting AI vendors, ensuring they have robust security measures, ethical guidelines, and a proven track record of regulatory compliance.”

According to IMD, the third main area of focus should be supporting governments and academia. “Business leaders must understand the importance of collaborating with governments and academic institutions to identify and tackle potential AI risks,” the authors write. Companies following this advice benefit from technical expertise and academic insights that help devise suitable mitigation strategies.

In addition, IMD suggests funding research at academic institutions to further the understanding of AI risks. Options include sponsoring research projects, offering internships, or providing access to data and resources. “By fostering a close relationship with academia, businesses can stay at the forefront of AI risk knowledge, ensuring they are prepared to address these risks as they arise,” the authors write.

Focusing on security is not glamorous

One major risk mentioned in the World Economic Forum’s report has nothing to do with technology or its implementation: the lack of incentives for management to focus on AI security. The report quotes a chief risk officer who says his organisation “is not prioritising this subject as a risk, and it is unlikely that management would support a pivot in this direction.”

According to the report, resources tend to be heavily weighted towards development rather than risk management and mitigation, especially in the technology sector. “However, in the face of the growing disruptive power of AI technologies, it will be increasingly important for organisational leaders to demonstrate that their use of AI clearly aligns with societal values and interests,” the report’s authors write, “and that they are committed to ensuring that AI risks do not cascade into the next global crisis.”

Authored by HDI Global