2024 | A Landmark Year for AI Regulations and Safety

On Tuesday, May 21, 2024, world leaders met virtually at the AI Seoul Summit—a follow-up to the AI Safety Summit held at Bletchley Park in November 2023—to adopt a new AI agreement. The gathering, co-hosted by South Korea and the UK, included leaders from the G7, Australia, Singapore, the UN, the EU, and representatives from global tech organizations. Sixteen leading AI companies, including Google, Meta, Microsoft, OpenAI, and firms from China, South Korea, and the UAE, pledged to develop AI technology safely. The agenda included:

  • Addressing AI regulation concerns such as job loss, disinformation, and privacy
  • Collaborating to set shared approaches to AI safety standards
  • Committing to publishing safety frameworks, refraining from developing or deploying models whose severe risks cannot be adequately mitigated, ensuring governance and transparency, and prioritizing AI safety, innovation, and inclusivity

On the same day, the Council of the European Union (the Council) gave its final approval to the European Union’s Artificial Intelligence Act (AI Act), the last step in the EU legislative process before the Act becomes law.

What Is the EU AI Act?

The EU AI Act is a comprehensive set of rules governing the development, deployment, and use of AI systems within the EU bloc. The Act prioritizes safety, transparency, and accountability, aiming to balance technological innovation with citizen protection. Enforcement will be phased in, beginning in late 2024 and early 2025 and extending to nearly all AI systems by mid-2027.

Why Is the EU AI Act Important?

The EU AI Act provides a unified regulatory framework for AI across the 27 EU Member States, ensuring consistency and clarity. Its extraterritorial reach sets a global standard, impacting AI systems that affect people in the EU regardless of where those systems originate.

The AI Act’s risk-based approach ensures that higher-risk systems face stricter regulations, promoting safer AI deployment. The categories, illustrated in a brief sketch after this list, include:

  • Prohibited: Systems posing “an unacceptable risk to people’s safety, security, and fundamental rights” will be banned—for example, those using social scoring systems.
  • High risk: These include AI used for facial recognition, credit scoring, and autonomous vehicles, which will face stricter regulations.
  • Minimal risk: Beyond the initial risk assessment and some transparency requirements for specific AI systems, the AI Act imposes no additional obligations on these systems but invites companies to voluntarily commit to codes of conduct.
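
For readers who track compliance in software, the tiers can be modeled as a simple data structure. The Python sketch below is a minimal illustration: the tier names follow the list above, but the example obligations are simplified assumptions rather than a legal reading of the Act.

    from enum import Enum

    class RiskTier(Enum):
        """The AI Act's risk tiers, simplified for illustration."""
        PROHIBITED = "prohibited"  # e.g., social scoring systems: banned outright
        HIGH = "high"              # e.g., facial recognition, credit scoring
        MINIMAL = "minimal"        # at most transparency duties; voluntary codes

    # Illustrative, not legally exhaustive, obligations per tier.
    EXAMPLE_OBLIGATIONS = {
        RiskTier.PROHIBITED: ["withdraw the system from the EU market"],
        RiskTier.HIGH: [
            "risk management system",
            "human oversight",
            "technical documentation",
            "conformity assessment",
        ],
        RiskTier.MINIMAL: ["optional code of conduct"],
    }

    print(EXAMPLE_OBLIGATIONS[RiskTier.HIGH])

An enum keeps the tiers a closed set, so a misspelled tier fails loudly in a compliance tool rather than slipping through review.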

Another critical component of the law is substantial financial penalties for noncompliance: fines can reach €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations. Leaders hope this encourages organizations to prioritize AI governance and ethical practices. By working in tandem with other regulations, such as data privacy laws, the AI Act fosters a comprehensive regulatory environment intended to protect users.

What’s Next?

Experts recommend steps that organizations can take to prepare for the new rules.1

  1. Inventory AI systems: List all AI systems running or in development and determine whether they fall under the AI Act.
  2. Assess and categorize: Classify in-scope AI systems by risk level and identify the relevant compliance requirements (see the sketch after this list).
  3. Understand compliance obligations: Determine the company’s role in the AI value chain and the duties attached to that role.
  4. Evaluate risks and opportunities: Consider how the AI Act interacts with other regulations, such as data privacy laws, and explore opportunities including access to AI Act sandboxes for innovators.
  5. Develop a compliance plan: Create and implement a process to establish accountability, governance frameworks, risk management systems, and documentation.
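
As a companion to steps 1 and 2, here is a minimal Python sketch of an AI-system inventory with recorded risk classifications. The record fields, system names, and requirement strings are hypothetical, invented for illustration; in practice, classification is a legal assessment whose result the record stores, not something the code computes.

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        """One inventory entry; every field here is an illustrative assumption."""
        name: str
        purpose: str
        in_scope: bool                 # does the AI Act apply to this system?
        risk_tier: str = "unassessed"  # e.g., "prohibited", "high", "minimal"
        requirements: list[str] = field(default_factory=list)

    # Step 1: inventory every AI system that is running or in development.
    inventory = [
        AISystemRecord("resume-screener", "candidate ranking", in_scope=True),
        AISystemRecord("log-anomaly-detector", "internal ops tooling", in_scope=False),
    ]

    # Step 2: record the outcome of the (legal) risk assessment for each system.
    inventory[0].risk_tier = "high"
    inventory[0].requirements = [
        "risk management system",
        "human oversight",
        "technical documentation",
    ]

    for system in inventory:
        status = system.risk_tier if system.in_scope else "out of scope"
        print(f"{system.name}: {status}")

Keeping the assessment’s outcome in a structured record makes it straightforward to report which in-scope systems still lack a classification, which feeds directly into the compliance plan in step 5.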

This week’s events reflect the increasing global focus on AI regulation and safety. Coordinated efforts by the AI community at both regional and international levels will be needed so that AI technologies can contribute positively to society while protecting fundamental rights and enhancing public trust.

  1. EY, The European Union Artificial Intelligence Act, February 2024