AI governance is often viewed as a box-ticking compliance exercise: a set of rules and regulations that holds back innovation rather than driving it forward. But recent research from IBM suggests the opposite. Rather than slowing innovation, good governance can help enterprises develop AI faster, launch initiatives sooner and reduce costly rework.
For example, the companies polled by IBM attribute more than a quarter (27%) of their AI efficiency gains to strong governance. And enterprises that invest more in AI ethics report 34% higher operating profit from AI.
On the other hand, one in four unsuccessful AI projects was found to be related to weak governance, and more than half of organizations still don't have a clear approach to AI risk, ethics and governance.
These findings are based on a survey of 1,000 global senior business and technology leaders conducted by the IBM Institute for Business Value. They are presented in one of its latest reports, 'Go further, faster with AI. How governance increases velocity'.
AI governance is broadly focused on putting rules and processes in place to ensure an enterprise's AI systems deliver value while being managed safely, legally and in line with business and ethical expectations.
The report argues that by deploying governance as a strategic tool, "forward-thinking leaders are addressing risks before they take root - and also enhancing innovation". Central to this is what the authors call 'adaptive governance' - governance systems that move faster and continually evolve to keep up with rapidly changing AI capabilities and risks. This is particularly important in a world headed increasingly toward agentic AI.
Here are six important takeaways.
1. Business risks linked to AI have accelerated
The business and technology leaders questioned by IBM perceive that risks related to AI have increased significantly in the last two years.
80% think that the risk of AI hallucinations has increased, 73% think that the risks related to AI misuse have increased and 72% think that security risks have increased because of AI.
Other areas where risks are perceived to be greater include shadow AI (employees using AI tools that the business is unaware of), cited by 62%, AI bias (60%) and model drift, where model accuracy deteriorates over time (59%).
2. Static approaches to governance are no longer enough
AI systems today are characterized by rapid change. New models, new capabilities and new AI risks are emerging all the time, while data flows, business processes and decision-making are constantly evolving. With agentic AI, systems can make autonomous decisions, completing multiple actions in a workflow and adapting decisions in real time to achieve their goals. This means risks can escalate in hours rather than months. The increased unpredictability, and the pace at which things change, mean that traditional static governance approaches relying on fixed policies that are agreed and reviewed periodically are increasingly ineffective.
What's needed now is adaptive governance. This is an approach that requires AI systems to be continually monitored, with real-time intervention when things go wrong. Ideally, it should include rapid feedback loops for every incident or change so that AI governance can adapt and improve over time.
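To make this concrete, here is a minimal sketch in Python of what a continuous monitoring check with real-time intervention and a feedback loop might look like. The metric names, thresholds and helper hooks (such as the commented-out log_incident call) are illustrative assumptions, not part of the IBM report or any specific product.

```python
# A minimal sketch of an adaptive-governance monitoring check. The helpers
# and thresholds are hypothetical; a real deployment would wire these into
# the organization's own MLOps and incident-management stack.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Policy:
    """Governance thresholds that can themselves be revised over time."""
    min_accuracy: float = 0.90
    max_drift_score: float = 0.15


def evaluate(metrics: dict, policy: Policy) -> list[str]:
    """Return a list of policy violations for the latest monitoring window."""
    violations = []
    if metrics["accuracy"] < policy.min_accuracy:
        violations.append(f"accuracy {metrics['accuracy']:.2f} below {policy.min_accuracy}")
    if metrics["drift_score"] > policy.max_drift_score:
        violations.append(f"drift score {metrics['drift_score']:.2f} above {policy.max_drift_score}")
    return violations


def monitor_once(model_id: str, metrics: dict, policy: Policy) -> None:
    """One monitoring pass: intervene on violations and record the incident."""
    violations = evaluate(metrics, policy)
    if violations:
        # Real-time intervention: in practice this would pause the model or
        # route traffic away from it; here we simply raise an alert.
        print(f"[{datetime.now(timezone.utc).isoformat()}] alert for {model_id}: {violations}")
        # Feedback loop: each incident would be recorded so thresholds and
        # policies can be reviewed and adjusted over time.
        # log_incident(model_id, violations)  # hypothetical persistence call


if __name__ == "__main__":
    # Example monitoring window with simulated metrics.
    monitor_once("credit-risk-scorer", {"accuracy": 0.86, "drift_score": 0.22}, Policy())
```

The design point is that the policy thresholds are data, not hard-coded rules, so the governance process can tighten or relax them as incidents accumulate.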
3. AI governance has benefits across the enterprise
64% of leaders questioned in the survey say that modern AI governance can lead to an improved regulatory and compliance risk profile, which is probably to be expected. But they also think it delivers business value in many other ways. 52% think that governance can improve time to value for AI projects, 48% think it can reduce AI bottlenecks due to compliance or ethical reviews and 46% suggest it can increase employee confidence to innovate quickly and responsibly.
46% of leaders also think AI governance can lead to faster approval and launch of AI initiatives and reduce rework that can result from unclear accountability. And 38% think good governance can mean fewer late-stage project failures.
4. Barriers to effective AI governance
The research points to five critical barriers that organizations face in building effective AI governance.
One of the biggest is poor data quality and data management, cited by 76% of the business and IT leaders. If organizations are unsure about the accuracy of their data then the risks of AI initiatives can increase significantly. As IBM's Director of Data Governance and Privacy Engineering, Nicole Jackson, is quoted as saying in the report: "If you have bad data, you're going to have bad AI. It really is that simple."
The report suggests that many organizations also lack the people and structures needed to govern AI effectively. The same proportion of leaders (76%) think that skills gaps and limited resources, particularly in understanding both risk management and AI technologies, are a major barrier. And 70% highlight the difficulties of building consistent compliance frameworks when operating across multiple regions with different legal jurisdictions and regulatory requirements.
Finally, 69% cite inconsistent or non-existent AI policies as a barrier, creating confusion and slowing innovation. The same percentage (69%) also highlight inadequate understanding of AI technologies and governance principles across the organization.
5. Using AI to govern AI
Adaptive governance requires continuous monitoring to enable organizations to detect and manage risk in real time. This is closely linked to the concept of AI-driven observability: using machine learning to interpret monitoring data at scale, revealing patterns, anomalies, and emerging risks that would be difficult to detect manually. In this way, enterprises can move from reacting to governance problems to anticipating and preventing them.
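As an illustration only, the sketch below uses an off-the-shelf anomaly detector (scikit-learn's IsolationForest) to flag unusual monitoring windows for human review. The metrics and values are simulated assumptions, not drawn from the report or from any IBM product.

```python
# A minimal sketch of AI-driven observability: an unsupervised model flags
# anomalous monitoring windows (latency, error rate, drift) so reviewers look
# at the outliers rather than every log line. All data here is simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one monitoring window: [latency_ms, error_rate, drift_score]
rng = np.random.default_rng(42)
normal = rng.normal(loc=[120.0, 0.01, 0.05], scale=[10.0, 0.005, 0.02], size=(200, 3))
spikes = np.array([[310.0, 0.09, 0.30],   # latency and error-rate spike
                   [125.0, 0.01, 0.45]])  # sudden model drift
windows = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.02, random_state=0).fit(windows)
labels = detector.predict(windows)  # -1 marks an anomalous window

for idx in np.where(labels == -1)[0]:
    latency, err, drift = windows[idx]
    print(f"window {idx}: latency={latency:.0f}ms error_rate={err:.3f} drift={drift:.2f} flagged for review")
```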
A prime example is IBM's watsonx.governance solution, which makes it possible to use AI to help govern AI in this way, ensuring always-on monitoring and faster, data-driven decision-making.
6. CEOs need to be accountable for AI governance
More than half (63%) of enterprises in the survey report that their CEO is involved in ensuring effective AI governance. This rises to 81% for organizations with a higher AI governance maturity. In fact, in the sample, around one in three enterprises have a CEO who is directly responsible or directly accountable for AI governance.
This reflects the importance of AI to the business. Today, AI decisions touch every aspect of business operations, from customer interactions to supply chain management to financial planning. The opportunities and risks are too broad and significant to leave to the operational team to oversee.
This article was originally published on the IBM Community.
