Navigating risks in AI governance – what have we learned so far?


Efforts are underway to fill the current regulatory void, but just how effective are they?

By Kenneth Araullo

As artificial intelligence (AI) continues to evolve and become increasingly integrated into various aspects of business and governance, the importance of robust AI governance for effective risk management has never been more pronounced. With AI’s rapid advancement come new and complex risks, from ethical dilemmas and privacy concerns to potential financial losses and reputational damage.

AI governance serves as a critical framework, ensuring that AI technologies are developed, deployed, and utilised in a manner that not only fosters innovation but also mitigates these emerging risks, thereby safeguarding organisations and society at large from potential adverse outcomes.

Sonal Madhok, an analyst within the CRB Graduate Development Program at WTW, describes a transformative era in which the swift integration of AI across sectors has pushed governance from mere planning to action. This surge in AI applications highlights a profound need for a governance framework characterised by transparency, fairness, and safety, even in the absence of any universally adopted guideline.

Establishing standards for proper risk management

In the face of a regulatory void, several entities have taken it upon themselves to establish their own standards aimed at tackling the core issues of model transparency, explainability, and fairness. Despite these efforts, the call for a more structured approach to govern AI development, mindful of the burgeoning regulatory landscape, remains loud and clear.

Madhok explained that the nascent stage of AI governance presents a fertile ground for establishing widely accepted best practices. The 2023 report by the World Privacy Forum (WPF) on “Assessing and Improving AI Governance Tools” seeks to mitigate this shortfall by spotlighting existing tools across six categories, ranging from practical guidance to technical frameworks and scoring outputs.

In its report, WPF defines AI governance tools as socio-technical instruments that operationalise trustworthy AI by mapping, measuring, or managing AI systems and their associated risks.

However, a survey by the AI Risk and Security (AIRS) group reveals a notable gap between the need for governance and its actual implementation: only 30% of enterprises have delineated roles and responsibilities for AI systems, and a scant 20% have a centrally managed department dedicated to AI governance. This discrepancy underscores the growing need for comprehensive governance tools to assure a future of trustworthy AI.

The anticipated doubling of global AI spending from $150 billion in 2023 to $300 billion by 2026 further underscores the urgency for robust governance mechanisms. Madhok said that this rapid expansion, coupled with regulatory scrutiny, propels industry leaders to pioneer their governance tools as both a commercial and operational imperative.

George Haitsch, WTW’s technology, media, and telecom industry leader, highlighted the TMT industry’s proactive stance in creating governance tools to navigate the evolving regulatory and operational landscape.

“The use of AI is moving at a rapid pace with regulators’ eyes keeping a close watch, and we’re seeing leaders in the TMT industry create their own governance tools as a commercial and operational imperative,” Haitsch said.

AI regulatory efforts across the globe

The patchwork of regulatory approaches across the globe reflects the diverse challenges and opportunities presented by AI-driven decisions. The United States, for example, saw a significant development in July 2023, when the Biden administration announced voluntary commitments from major tech firms to self-regulate their AI development, underscoring a collaborative approach to governance.

The White House also published a Blueprint for an AI Bill of Rights, offering a set of principles aimed at guiding government agencies and urging technology companies, researchers, and civil society to build protective measures.

The European Union has articulated a similar ethos with its set of ethical guidelines, embodying key requirements such as transparency and accountability. The EU’s AI Act introduces a risk-based regulatory framework, categorising AI tools according to the level of risk they pose and setting forth corresponding regulations.

Madhok noted that this nuanced approach delineates categories from unacceptable risk down through high, limited, and minimal risk, with stringent penalties for violations, underscoring the EU's commitment to safeguarding against potential AI pitfalls.
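To make the risk-based idea concrete, here is a minimal Python sketch of how an organisation might triage its own AI use cases against EU-style tiers. The tier names follow the Act's broad structure, but the example systems, obligation summaries, and lookup logic are illustrative assumptions, not the Act's actual legal tests.

```python
# Illustrative sketch only: a simplified model of EU AI Act-style
# risk tiers. The example systems and obligation summaries below
# are assumptions, not legal classifications.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: conformity assessment, logging, human oversight"
    LIMITED = "transparency duties, e.g. disclosing that a user is talking to a bot"
    MINIMAL = "largely unregulated"

# Hypothetical examples; real determinations depend on the Act's
# annexes and legal analysis, not a simple lookup table.
EXAMPLE_SYSTEMS = {
    "social_scoring_of_citizens": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def obligations_for(system_name: str) -> str:
    # Default conservatively to HIGH when a system has not been triaged.
    tier = EXAMPLE_SYSTEMS.get(system_name, RiskTier.HIGH)
    return f"{system_name}: {tier.name} -> {tier.value}"

if __name__ == "__main__":
    for name in EXAMPLE_SYSTEMS:
        print(obligations_for(name))
```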

Meanwhile, Canada’s contribution to the governance landscape comes in the form of the Algorithmic Impact Assessment (AIA), a mandatory tool introduced in 2020 to evaluate the impact of automated decision systems. This comprehensive assessment encompasses a myriad of risk and mitigation questions, offering a granular look at the implications of AI deployment.
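To give a rough sense of how a questionnaire of this kind can turn answers into a rating, the following Python sketch works in the spirit of the AIA: affirmative risk answers raise a raw score, mitigation answers offset it, and the result maps to one of four impact levels. The questions, weights, and thresholds here are assumptions for illustration, not the official AIA scoring.

```python
# Minimal questionnaire-scoring sketch in the spirit of Canada's AIA.
# All questions, weights, and level thresholds are illustrative
# assumptions, not the official instrument.

RISK_QUESTIONS = {          # question -> weight added if answered "yes"
    "affects_legal_rights": 4,
    "uses_personal_data": 3,
    "decision_is_fully_automated": 3,
}

MITIGATION_QUESTIONS = {    # question -> weight subtracted if answered "yes"
    "human_review_of_outcomes": 2,
    "documented_data_quality_checks": 1,
}

def impact_level(risk_answers: dict, mitigation_answers: dict) -> str:
    raw = sum(w for q, w in RISK_QUESTIONS.items() if risk_answers.get(q))
    offset = sum(w for q, w in MITIGATION_QUESTIONS.items() if mitigation_answers.get(q))
    score = max(raw - offset, 0)
    # Hypothetical banding into four levels, echoing the AIA's Levels I-IV.
    if score <= 2:
        return "Level I (little to no impact)"
    if score <= 5:
        return "Level II (moderate impact)"
    if score <= 8:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

# Example: a partially mitigated system handling personal data.
print(impact_level(
    {"affects_legal_rights": True, "uses_personal_data": True},
    {"human_review_of_outcomes": True},
))  # -> Level II (moderate impact): raw 7 - offset 2 = 5
```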

As for Asia, Singapore’s AI Verify initiative represents a collaborative venture with major corporations across diverse sectors, showcasing the potential of partnership in developing practical governance tools. This open-source framework illustrates Singapore’s commitment to fostering an environment of innovation and trust in AI applications.

In contrast, China’s approach to AI governance emphasises individual legislation over a broad regulatory plan. The development of an “Artificial Intelligence Law” alongside specific laws addressing algorithms, generative AI, and deepfakes reflects China’s tailored strategy to manage the multifaceted challenges posed by AI.

The varied regulatory frameworks and governance tools across these regions highlight a global endeavour to navigate the complexities of AI integration into society. As the international community grapples with these challenges, the collective aim remains to ensure that AI’s deployment is ethical, equitable, and ultimately, beneficial to humanity.

The road to achieving a universally cohesive AI governance structure is fraught with obstacles, but the ongoing efforts and dialogue among global stakeholders signal a promising journey towards a future where AI serves as a force for good, underpinned by the pillars of transparency, fairness, and safety.
