Navigating AI Risk Management: Insights from CIOs on Artificial Intelligence Strategy

This blog post was authored by Christine Livingston, Managing Director, Emerging Technology Solutions, and Richard Kessler, Director, AI Governance and Risk Management, on Protiviti's technology insights blog.

Businesses are excited about the transformative potential of artificial intelligence (AI) to innovate and enhance business models, customer insights, products and processes. Alongside this potential, there is a growing need to identify and mitigate the risks associated with AI technologies. CIOs, in particular, are exploring how to ensure the responsible design and use of AI and their roles in managing legal, regulatory, financial and reputational risks.

We recently met with several Global 100 CIOs to discuss this critical intersection of AI opportunity and risk management. Three common challenges emerged from our conversation:

- How to formulate an acceptable use policy for AI and establish an environment where employees can make the most of it without incurring undue risk.
- How to create a governance model for AI that keeps pace with its rapid evolution, fostering innovation now while anticipating future breakthroughs.
- How to optimise the interplay between AI enablement and AI governance. This is crucial in balancing AI's potential rewards with its inherent risks, a responsibility of which CIOs are keenly aware.

The conversation echoed the concerns we hear throughout the market. CIOs of large corporations are grappling with the same AI governance challenges, regardless of industry. Let's look at these challenges in greater depth and discuss some potential solutions.

AI and acceptable use

While most of the CIOs have drafted some form of initial acceptable use policy, many struggle with making those policies applicable and effective for AI more broadly.
Their approaches varied: some start from the perspective of risk mitigation and limiting AI usage; others have opened the gates fairly wide to employees' independent exploration of AI's possibilities.

Even as they make progress toward clarifying acceptable use, some CIOs are still exploring the extent to which they'll allow the use of publicly available AI tools and are concerned about their ability to block public AI tool use. To help counter that risk, and to provide a safe way to engage with AI, roughly half of the CIOs were focused on developing internal technologies that emulate the capabilities of publicly available tools.

Some CIOs are evaluating acceptable use from a use case perspective. For instance, they'd prohibit using AI in any employment decision. This use case-driven approach considers both current risk exposure and anticipated future regulation.

Recommendations: Enterprises will want to define acceptable use policies for AI if they don't have them already. CIOs may want to consider proprietary, secured AI solutions built on large AI foundation models to circumvent the use of publicly available AI tools.

Defining what constitutes acceptable use could be considered the first challenge of AI governance. The next challenge is to equip the organisation with new AI opportunities as they arise by applying governance that is agile and streamlined enough to keep pace with AI's rapid changes.

AI and agile governance

These CIOs are concerned with striking the right balance between controlling risk and enabling, supporting and managing innovation. They want to establish future-proof governance frameworks whereby an AI solution won't be obsolete before it's put in place. By the time an AI opportunity gets through a traditional governance cycle, the potential solution could be outmoded by a new set of AI features or a new player in the AI marketplace.
Identifying every risk and potential compliance concern can sometimes take weeks or even months per AI use case, depending on the organisation's approach to governance. A governance committee reviews an AI solution to be built with version N of a technology, but version N+1 may be running before all the approvals are in. The upgraded platform modifies the use case's risk profile. Meanwhile, regulators could change the compliance picture while decision-makers evaluate the use case.

The challenge is to establish governance that's as diligent as ever while also thinking of governance as an accelerator. One enterprise had rebranded its governance as enablement, an idea that appealed to many at the gathering.

Recommendations: CIOs will want to consider a governance framework that's use case specific, then develop a framework that is both flexible and high-level enough to account for all risks. Organisations can reduce risk by acknowledging that mitigating and managing risk is no longer just the job of those professionals who have traditionally handled it; it is now everyone's responsibility.

Defining acceptable use for AI and then establishing agile governance is essential to effective and responsible use of the technology. The next challenge is to balance innovation with risk by bringing those perspectives together in AI decision-making.

AI and striking a balance between innovation and control

What's appropriate for the functions within one organisation is anathema to the next. One wants to take full advantage of and stay current with each new AI development; another applies firm pressure to the brakes. What they have in common is a need to find a middle ground that enables both perspectives to move forward. This requires figuring out how to find the balance between risk and innovation that's appropriate for each organisation and function, and then bringing that balance into being.
More recently, boards generally have had a significant influence on the balance organisations find appropriate. Equilibrium is found in the interplay between governance and enablement. Organisations have forums for these inherently valuable and distinct functions: managing risk and pursuing innovation. The challenge is first to recruit the right representation of these two views, then to establish an operating model that yields effective collaboration between them to develop risk-commensurate AI.

One common error is the development of AI councils or working groups that are external to existing governance and innovation structures. These bolted-on groups might make great progress with an AI proof of concept, for example, only to get stuck in legal and risk reviews prior to launch. It's often more effective to:

- Retrofit existing forums to take AI into their purview.
- Familiarise innovation teams with top security, privacy and transparency principles so those concerns are considered from use case conception.
- Minimise duplication in intake forms and questionnaires to avoid frustration in both the innovation and risk management processes.
- Integrate risk control assessment approaches so that a use case can be reviewed just once from multiple perspectives (such as data quality, privacy and security). Checklists are a direct means to ensure this happens.

It is important to note that the steps outlined above must occur at the right points in the process. Most successful approaches ask no more than five questions in an initial AI innovation/build forum to determine an initial low/medium/high risk rating. From that point, appropriate levels of security, privacy and controls are established in line with the level of risk as the use case progresses.

Recommendations: CIOs will want to clearly understand their boards' viewpoints.
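The five-question intake triage described above can be sketched as a simple script. The questions and the low/medium/high thresholds here are illustrative assumptions for discussion, not the framework used by any of the CIOs or by Protiviti:

```python
# Illustrative sketch of a five-question AI intake triage.
# Questions and scoring thresholds are hypothetical examples.

INTAKE_QUESTIONS = [
    "Does the use case process personal or sensitive data?",
    "Does it influence decisions about individuals (e.g. hiring, credit)?",
    "Does it rely on an externally hosted or public AI model?",
    "Is its output customer-facing or externally published?",
    "Does it sit within a regulated business process?",
]

def triage(answers: list[bool]) -> str:
    """Map five yes/no answers to an initial low/medium/high risk rating."""
    if len(answers) != len(INTAKE_QUESTIONS):
        raise ValueError("One answer per intake question is required.")
    score = sum(answers)  # each 'yes' raises the risk profile
    if score >= 3:
        return "high"
    if score >= 1:
        return "medium"
    return "low"

# Example: personal data plus a public model, nothing else flagged.
print(triage([True, False, True, False, False]))  # medium
```

The point of keeping the questionnaire this short is that a full risk review happens later, gated by the initial rating, rather than at intake.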
Next, they can recruit individuals representing both risk and innovation perspectives to govern and enable AI jointly and effectively.

In summary

As AI opportunities proliferate and accelerate, CIOs struggle to ensure responsible use of the technology. From formulating acceptable use policies for AI and defining agile governance, to striking the enterprise-specific balance between AI risk and AI reward, not many CIOs feel as ready to govern AI as they'd like to be. Nevertheless, recommendations and best practices are emerging to ensure risk-responsible AI.

To learn more about our AI solutions, contact us.