AI in the Fast Lane, AI Governance Stuck in Traffic


Artificial intelligence (AI) is accelerating down the enterprise highway. But according to new research, governance is idling in the slow lane, and the gap is already raising breach costs, fueling shadow AI adoption (employees using generative AI tools outside official oversight), and forcing regulators to wave the caution flag.

Global average data breach costs fell in 2025 for the first time in five years, dropping to $4.44 million, according to IBM’s Cost of a Data Breach Report 2025.1 Faster containment—driven in part by AI and automation in defense—helped bend the curve. Yet the United States proved an outlier: average costs rose by 9% to $10.22 million, driven by oversight gaps and regulatory penalties.

Rapid AI Adoption, Rising Security Concern

The speed of AI adoption is striking. Seventy-nine percent of security leaders reported deploying AI or machine learning technologies in the past 12 months, with another 19% planning to do so soon.2 But many are realizing they may have hit the accelerator too hard. More than half (54%) now say they adopted AI too fast and are struggling to scale it back or apply it more responsibly.3

This rapid surge has created blind spots. Thirty-seven percent of organizations observed their employees using generative AI without authorization, and 34% flagged shadow AI as a major emerging threat.4 IBM’s breach data confirms the risk: one in five organizations experienced incidents tied to shadow AI, which added an estimated $670,000 to the breach bill.5

Shadow AI in the Breakdown Lane

Shadow AI is like a runaway truck, often plowing through unsanctioned apps, APIs, or plug-ins. IBM found that 97% of reported AI-related security breaches involved systems lacking proper access controls.6 What’s more, shadow AI incidents frequently spanned multiple environments, exposed personally identifiable information in 65% of cases, and extended detection and containment timelines.7

Much like vehicles weaving into traffic without signaling, shadow AI introduces hidden dangers that can destabilize the entire road. Without clear lanes and guardrails, the cost of containment keeps climbing.

AI Governance Gains Traction — But Unevenly

Even as the AI traffic builds, many organizations remain slow to install rules of the road. Only 21% of respondents to the ISMS.online State of Information Security Report 2025 named establishing or enforcing responsible AI policies as a top priority for the year.8 Yet nearly all—95%—said they plan to invest in AI governance and policy enforcement.9

IBM found an even starker reality: 87% of organizations lack governance policies or processes to mitigate AI risk, and fewer than half of those with policies require strict approvals for deployments.10

At the same time, regulators are tightening their grip on AI, and enterprises are feeling the strain. Fresh laws are stacking onto existing regulations and standards such as GDPR, GLBA, HIPAA, and SOC 2, raising compliance hurdles just as companies scale AI across their systems. Privacy safeguards cannot be compromised, but the complexity of AI models and LLMs makes explaining outcomes and proving transparency a significant challenge.

Regulators Tighten the Rules of the Road

If enterprises are slow to lift their foot off the accelerator, regulators are stepping in with speed limits.

  • European Union. The AI Act entered into force on August 1, 2024, with prohibitions and AI literacy requirements applying from February 2, 2025. Obligations for general-purpose AI began on August 2, 2025, with high-risk systems following in 2026–27.11 Despite lobbying from industry, the European Commission has rejected calls to delay. “There is no stop the clock. There is no grace period. There is no pause,” a spokesperson said.12
  • United States. The White House Office of Management and Budget issued Memorandum M-24-10 in March 2024, requiring federal agencies to adopt minimum governance and risk management practices for AI that affects rights or safety.13
  • Standards bodies. NIST’s AI Risk Management Framework (2023) defines “Govern” as a cross-cutting function in the AI lifecycle,14 and its Cybersecurity Framework 2.0 (2024) elevated governance to a core cybersecurity function.15 ISO/IEC 42001, published in December 2023, established the first AI management-system standard, aligning oversight with approaches such as ISO 27001.16

Meanwhile, the European Union Agency for Cybersecurity (ENISA) has warned that adversaries are already using AI for phishing and disinformation campaigns, reinforcing why lawmakers are holding firm on timelines.17

Cybersecurity Awareness Month: Green Light for Responsible AI

This October’s Cybersecurity Awareness Month highlights AI governance as the next essential step in resilience. Organizations are already reaping the benefits of AI-powered automation, cutting breach lifecycles and lowering global averages. At the same time, attackers are accelerating with phishing, deepfakes, and evasion powered by the same tools.

AI adoption may be in the fast lane, but responsible governance is the green light that keeps progress on track. Just as traffic laws enable safe travel at speed, governance enables innovation with trust—and the sooner it’s applied, the greater the rewards.

  1. IBM, Cost of a Data Breach Report 2025, 2025
  2. ISMS.online, State of Information Security Report 2025, 2025
  3. Ibid.
  4. Ibid.
  5. IBM, Cost of a Data Breach Report 2025, 2025
  6. Ibid.
  7. Ibid.
  8. ISMS.online, State of Information Security Report 2025, 2025
  9. Ibid.
  10. IBM, Cost of a Data Breach Report 2025, 2025
  11. European Commission, “AI Act enters into force”, Aug 1, 2024
  12. Reuters, “EU sticks with timeline for AI rules”, Jul 4, 2025
  13. Office of Management and Budget, Memorandum M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, Mar 28, 2024
  14. National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (AI RMF 1.0), Jan 2023
  15. NIST, The NIST Cybersecurity Framework (CSF 2.0), Feb 26, 2024
  16. ISO/IEC, ISO/IEC 42001:2023 – AI Management Systems, 2023
  17. European Union Agency for Cybersecurity (ENISA), ENISA Threat Landscape 2024, 2024