AI technology is advancing faster than regulators can keep up, exposing people and organizations to extensive data collection, opaque decision-making, and unpredictable model behavior. This creates both opportunities and challenges for governance, risk, and compliance (GRC) practices.
The lag between innovation and regulation is stark: ChatGPT reached 100 million users within two months of its launch in late 2022.1 Although prominent members of the AI community called for a pause in AI training to develop shared safety protocols, including robust governance systems,2 AI regulation remains a patchwork of regional and industry standards.
Amid a broad push for greater oversight and regulation of AI to ensure responsible use and data privacy, AI is both a boon and a burden for GRC practices. As AI’s powerful capabilities are adapted into more GRC workflows, forward-thinking GRC teams are considering how to embed ethical guidelines into their governance policies for AI adoption and use.
A Powerful Advisor for GRC Functions
GRC teams focus on ensuring organizations and individuals follow applicable rules for operating ethically and compliantly, with an acceptable level of risk. Many GRC functions—such as evidence collection, audits, and fraud detection—are driven by the compilation and analysis of huge datasets, which presents opportunities for AI-enabled automation and pattern recognition to ease the strain on GRC teams. The global market for off-the-shelf AI governance software is expected to quadruple by 2030 and account for 7% of all AI software spending.3
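To make the pattern-recognition opportunity concrete, the sketch below flags outlier transactions with scikit-learn’s IsolationForest, a common anomaly-detection technique; the data, feature names, and contamination rate are all hypothetical.

```python
# Minimal sketch: anomaly detection over transaction records, the kind of
# pattern-recognition task GRC teams face in fraud detection.
# The dataset and features below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical features: [amount, hour_of_day, days_since_last_review]
transactions = rng.normal(loc=[100.0, 12.0, 30.0],
                          scale=[20.0, 3.0, 5.0],
                          size=(1000, 3))
transactions[:5] *= 10  # inject a few obvious outliers to detect

model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

flagged = np.flatnonzero(labels == -1)
print(f"Flagged {flagged.size} transactions for human review: {flagged}")
```

A model like this does not replace an investigator; it narrows a massive dataset down to the handful of records a human should look at first.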
Depending on the scope of data collection and analysis, GRC practitioners can use secure, purpose-trained LLMs to receive direct, incisive answers to complex questions in real time. These questions can range from the regulatory landscape and upcoming policy changes to specific use cases such as contract review, compliance monitoring, risk assessment, and threat intelligence.
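As a purely illustrative example of such a query, the sketch below calls a general-purpose model through the OpenAI Python client; the model name and prompts are placeholders, and a real deployment would route the call through a secured, purpose-trained model with access controls and logging.

```python
# Illustrative sketch of querying an LLM for compliance research.
# Model name and prompts are placeholders, not a recommended setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a compliance research assistant. Cite the "
                    "specific regulation and article for every claim."},
        {"role": "user",
         "content": "Which upcoming EU AI Act obligations apply to a "
                    "high-risk credit-scoring system?"},
    ],
)
print(response.choices[0].message.content)
```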
A Vector for Security Risks and Auditability Issues
However, the use of AI in GRC functions carries several associated risks. The primary tradeoff may be the “black-box problem” of auditing AI’s decision-making processes, which are essentially inscrutable to humans. Relying on AI to audit itself can lead to a false sense of security—especially given AI’s well-known tendency to hallucinate—as well as accountability issues for errors. To facilitate transparency, emerging AI-specific regulations like the EU AI Act4 stipulate a “right to explanation” for AI decisions that have significant impacts, requiring audit trails and documentation to support human-understandable answers.
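One way to meet that requirement is to log every AI-assisted decision with enough context to reconstruct and explain it later. The sketch below is a minimal, hypothetical record schema; the field names and storage approach are assumptions, not anything prescribed by the EU AI Act.

```python
# Minimal sketch of an audit-trail record for an AI-assisted decision.
# Field names are hypothetical; the point is that every decision carries
# enough context to support a human-understandable explanation.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_id: str
    model_version: str
    input_summary: str              # what the model was asked or shown
    output: str                     # what the model produced
    rationale: str                  # human-readable explanation
    reviewed_by: str | None = None  # filled in when a human signs off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    decision_id="GRC-2025-0001",
    model_version="risk-screener-v3",
    input_summary="Vendor onboarding questionnaire, 42 fields",
    output="Escalate: incomplete data-processing agreement",
    rationale="Missing DPA clause triggered escalation rule R-17",
)
print(json.dumps(asdict(record), indent=2))  # append to an immutable log
```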
The usual risks of AI implementation—including algorithmic bias and data security—are also concerns in GRC applications. The immense amounts of data that LLMs consume pose significant privacy risks, as confidential data could be exposed through any number of failure points: prompt injection, insecure data pipelines, misconfigured models, or faulty integrations in the software supply chain. Data in AI systems is also susceptible to other types of privacy threats such as data reconstruction or inference attacks, which reverse-engineer AI outputs to uncover sensitive information.
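A basic mitigation for several of these failure points is to redact obviously sensitive values before any text reaches a model. The sketch below uses simple, illustrative regex patterns; a production pipeline would layer dedicated PII-detection tooling and access controls on top.

```python
# Minimal sketch: regex-based redaction of obvious sensitive values before
# text is sent to an LLM. Patterns are illustrative, not exhaustive.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Customer jane.doe@example.com (SSN 123-45-6789) disputes a charge."
print(redact(prompt))
# -> Customer [REDACTED-EMAIL] (SSN [REDACTED-SSN]) disputes a charge.
```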
Managing Risks Amid Full-Tilt Innovation
As the regulatory landscape continues to evolve, GRC teams may not want to wait on the sidelines for comprehensive frameworks to emerge. The gulf between AI’s rapid advancement and regulatory development presents opportunities and risks that invite proactive management today—especially given how rapidly AI is being adopted by workforces, with or without approval from GRC teams.
The most successful GRC teams could be those that recognize AI as both a transformative tool and a new risk category. GRC teams may not be able to wait for AI regulations to be finalized, but they can use existing frameworks, best practices, and ethical guardrails, such as the Organisation for Economic Co-operation and Development (OECD) AI Principles,5 to inform their own approach toward effective, responsible AI governance. This includes developing clear protocols for data handling, audit trails, and human oversight, with cross-functional coordination across IT, data, and legal teams all the way up to the C-suite and governing boards.
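As one concrete shape such a human-oversight protocol can take, the hypothetical sketch below lets AI outputs auto-complete only when confidence is high and the decision is low-impact; the thresholds and impact categories are assumptions, not prescriptions.

```python
# Hypothetical sketch of a human-oversight gate: AI outputs auto-complete
# only when confidence is high and the decision is low-impact; everything
# else is routed to a human reviewer. Thresholds are assumed policy values.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.90                          # assumed policy threshold
HIGH_IMPACT = {"credit", "employment", "legal"}  # assumed categories

@dataclass
class AIDecision:
    category: str
    confidence: float
    output: str

def route(decision: AIDecision) -> str:
    """Return 'auto' or 'human-review' per the oversight policy."""
    if decision.category in HIGH_IMPACT:
        return "human-review"  # significant-impact decisions always reviewed
    if decision.confidence < CONFIDENCE_FLOOR:
        return "human-review"  # low confidence falls back to a person
    return "auto"

print(route(AIDecision("vendor-screening", 0.97, "approve")))  # auto
print(route(AIDecision("credit", 0.99, "deny")))               # human-review
```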
The choice isn’t between perfect regulation and reckless adoption; it’s between proactive and reactive governance. As long as AI continues to evolve rapidly, the greatest risk for GRC teams may be inaction. By building comprehensive AI governance policies now, GRC teams can help their organizations harness AI’s potential while maintaining the risk controls and ethical standards that define effective GRC practices.
1. Reuters, “ChatGPT sets record for fastest-growing user base – analyst note,” February 2023
2. Future of Life Institute, “Pause Giant AI Experiments: An Open Letter,” March 2023
3. Forrester, “AI Governance Software Spend Will See 30% CAGR From 2024 To 2030,” November 2024
4. EU Artificial Intelligence Act, Article 86, “Right to Explanation of Individual Decision-Making,” May 2024
5. Organisation for Economic Co-operation and Development, “OECD AI Principles overview,” May 2024