Breakthrough in AI regulation: the AI Act

This text has been translated from Dutch with the help of AI. On December 8, 2023, the European Union reached a preliminary political agreement on the Artificial Intelligence Regulation (hereafter referred to as the AI Act). The final text has been formally approved by the European Parliament, and the AI Act will enter into force 20 days after its official publication. This landmark legislation marks the EU's first step towards regulating artificial intelligence, targeting developers, distributors, and users of AI systems, including systems used for predictions, classifications, and analyses.

The potential impact on governments and businesses deploying AI systems is significant. In this insight, we'll walk you through the key obligations, how they affect your organization, and proactive steps you can take to ensure compliance.


Background: Protecting Citizens and Businesses

With the AI Act, citizens and businesses across the EU are set to benefit from enhanced protection against irresponsible AI use. The regulation aims to mitigate potential risks associated with AI systems by imposing requirements on risky applications and establishing fundamental operational standards. Transparency regarding the development, operation, and data usage of AI systems plays a crucial role.

Additionally, the introduction of a European register for high-risk AI systems will increase visibility and accountability. However, the AI Act also creates opportunities for leveraging AI's potential through regulated innovation frameworks while safeguarding public interests.

Draft agreement on the AI Act (dated January 21st) on the benefits and risks of AI systems, recitals 3 and 4:

“Artificial intelligence is a fast evolving family of technologies that contributes to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, food safety, education and training, media, sports, culture, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, environmental monitoring, the conservation and restoration of biodiversity and ecosystems and climate change mitigation and adaptation.

At the same time, depending on the circumstances regarding its specific application, use, and level of technological development, artificial intelligence may generate risks and cause harm to public interests and fundamental rights that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm.”

Risk-Based Approach: Navigating Three Levels

The AI Act adopts a risk-based approach, categorizing AI applications into three risk levels: unacceptable, high, and low. The principle is simple: the higher the risk, the stricter the regulations.

  • Unacceptable Risk: This category prohibits applications deemed undesirable, such as AI systems used for social scoring. These applications score individuals based on social behavior, socio-economic status, or personal attributes. The prohibition also covers social scoring based on proxy data that resembles personal attributes.
  • High-Risk Applications: Targeting critical domains like infrastructure, education, and training, this category mandates various measures, including:
    • Implementing risk and quality management systems (for example, ISO/IEC 42001);
    • Conducting fundamental rights impact assessments (FRIA);
    • Registration in a European AI register;
    • Ensuring robust data governance;
    • Documenting technical assumptions and descriptions;
    • Providing sufficient transparency;
    • Implementing logging and adequate retention periods (a minimal sketch follows after this list);
    • Implementing human oversight;
    • Implementing security measures;
    • Assessing conformity with the AI Act.
  • Low-Risk Applications: Low-risk AI systems face few additional obligations, but transparency about AI-generated content is mandatory, and organizations must keep a record of the risk assessment, including the weighing and the reasoning behind it.
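
To make the logging obligation concrete, below is a minimal sketch of decision logging with an explicit retention deadline. The record format, field names, and six-month retention period are illustrative assumptions; the AI Act requires logging and adequate retention but does not prescribe this structure.

    import json
    from datetime import datetime, timedelta, timezone

    # Assumed retention period; set this according to your own retention policy.
    RETENTION = timedelta(days=183)

    def log_decision(system_id: str, input_summary: str, outcome: str) -> str:
        """Return one JSON log line recording an automated decision."""
        now = datetime.now(timezone.utc)
        record = {
            "system_id": system_id,
            "timestamp": now.isoformat(),
            "input_summary": input_summary,  # summarize; avoid logging raw personal data
            "outcome": outcome,
            "delete_after": (now + RETENTION).isoformat(),
        }
        return json.dumps(record)

    print(log_decision("admissions-ranking", "application features", "manual review"))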

The AI Act also pays specific attention to general-purpose AI systems and foundation models. Providers of chatbots, deepfake technology, or generative AI systems such as the popular ChatGPT must ensure, among other things, that it is clear to users what these generic AI systems are intended for and what they should not be used for. It must also be clear to end users that they are interacting with AI or that certain content is AI-generated; a minimal sketch of such a disclosure follows below.
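
As one sketch of that transparency requirement, the snippet below attaches an AI-generated label and basic provenance metadata to model output. The function and field names are hypothetical; the AI Act requires the disclosure itself, not this particular format.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class DisclosedOutput:
        text: str
        model_name: str
        generated_at: str
        ai_generated: bool = True

    def disclose(text: str, model_name: str) -> DisclosedOutput:
        """Wrap generated text with the transparency metadata shown to end users."""
        return DisclosedOutput(
            text=text,
            model_name=model_name,
            generated_at=datetime.now(timezone.utc).isoformat(),
        )

    output = disclose("Here is a summary of your request ...", "example-generative-model")
    print(f"[AI-generated by {output.model_name}] {output.text}")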

Impact and Preparation: What You Need to Do

The AI Act's potential impact on both public and private sector organizations is substantial. For example, the Act states that AI systems used by public and semi-public organizations to assign certain advantages or disadvantages to citizens are classified as high risk. This means that AI systems used in public services or law enforcement will likely fall under the 'high-risk' category, necessitating compliance with stringent regulations.

To be compliant in 2026, when the obligations of this legislation fully apply, organizations can start today with the following three steps.

Step 1: Gather and Document Information

Identify and document the AI systems your organization deploys. Gather information on these systems and assess their associated risk levels. Transparency is key, so consider publishing relevant information in your organization's own algorithm register or in a national one. A minimal sketch of an internal inventory entry follows below.
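
The sketch below shows one way to record such an inventory entry, assuming a simple in-house record format; the field names and risk tiers are illustrative assumptions, not terms mandated by the AI Act.

    from dataclasses import dataclass, field
    from enum import Enum

    class RiskLevel(Enum):
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LOW = "low"

    @dataclass
    class AISystemRecord:
        name: str
        purpose: str
        provider: str
        risk_level: RiskLevel
        data_sources: list[str] = field(default_factory=list)
        published_in_register: bool = False  # e.g. in a national algorithm register

    inventory = [
        AISystemRecord(
            name="benefit-eligibility-scoring",
            purpose="Prioritize manual review of benefit applications",
            provider="in-house",
            risk_level=RiskLevel.HIGH,
            data_sources=["application forms", "income records"],
        ),
    ]

    # High-risk systems trigger the heaviest obligations, so surface them first.
    high_risk = [s for s in inventory if s.risk_level is RiskLevel.HIGH]
    print(f"{len(high_risk)} high-risk system(s) to register and assess")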

Step 2: Establish Clear Agreements

Formalize agreements regarding AI development and procurement, integrating them into your algorithm governance framework. Align these agreements with existing governance structures for IT, data, privacy, and information security.

Step 3: Implement Risk and Quality Management

Initiate the setup of an AI management system to mitigate risks and uphold quality standards. International standards like ISO/IEC 42001 can guide an organization in responsible AI development and use.

Act Now: Time to Prepare

The AI Act is expected to come into effect in early 2024, with a possible ban on unacceptable AI just six months later. General-purpose AI and high-risk applications may have a transition period of one to two years, implying full compliance by 2026.

Be ready: start preparing by identifying existing AI systems, embedding management procedures in an AI governance framework, and ensuring regulatory compliance with an AI management system.


Want to know more or need assistance with implementation?

Reach out to our AI Act, AI and digital law experts.
