

AI Act

What is the Artificial Intelligence Act?

With the Artificial Intelligence Act (AI Act), the EU is creating the legal framework for the use of AI systems in business, administration and research. The regulation categorises AI applications into risk classes, each of which is associated with different requirements and obligations.

Who is affected by the AI Act?

The provisions of the AI Act apply to:

  • Providers (including those from third countries) who place their AI systems on the market or put them into service in the EU
  • Importers
  • Operators
  • Distributors
  • Users of AI systems located within the EU
  • Providers & users located in a third country if the output generated by an AI system is used in the EU

What risk classes does the AI Act contain?

The AI Act categorises AI systems into four risk classes, each of which is associated with different legal requirements:

  1. Unacceptable risk
  2. High risk (high-risk AI systems)
  3. Limited risk
  4. Low risk

The higher the potential risk associated with the use of an AI system, the more extensive the associated requirements, such as risk assessments, documentation obligations, EU declarations of conformity or monitoring on the part of the operator.
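
To make this risk-based logic more concrete, the following Python sketch maps the four risk classes to simplified examples of the obligations mentioned in this article. It is purely illustrative: the obligation lists are abbreviations for illustration, not legal text.

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk classes of the AI Act (illustrative labels)."""
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    LOW = "low risk"

# Simplified, non-exhaustive examples of obligations per risk class,
# abbreviated from the descriptions in this article - not legal text.
EXAMPLE_OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: ["prohibited - may not be placed on the market"],
    RiskClass.HIGH: [
        "risk management system",
        "technical documentation",
        "EU declaration of conformity",
        "human oversight and monitoring",
    ],
    RiskClass.LIMITED: ["inform users that they are interacting with an AI system"],
    RiskClass.LOW: ["no specific legal requirements"],
}

def obligations_for(risk_class: RiskClass) -> list[str]:
    """Return the simplified example obligations for a given risk class."""
    return EXAMPLE_OBLIGATIONS[risk_class]

for rc in RiskClass:
    print(f"{rc.value}: {'; '.join(obligations_for(rc))}")
```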
 

The AI Act & other regulations
In addition to the general rules of the AI Act, there are already - and will continue to be - sector-specific regulations that require additional or different risk-based measures, such as the Machinery Directive for industrial plants, the UNECE automotive cybersecurity regulations for motor vehicles or the Medical Device Regulation (MDR) for AI-supported medical devices in the healthcare sector.

AI systems with unacceptable risk

AI systems that fall into this risk class are prohibited under the AI Act, as they harbour considerable potential for violating human rights or fundamental principles. These include applications that

  • manipulate people through subliminal techniques or could cause them physical harm,
  • exploit the vulnerabilities of certain groups of people due to their age or physical or mental impairments in order to deliberately influence them,
  • assess or classify the trustworthiness of people over a certain period of time based on their social behaviour or personality-related characteristics in a way that could lead to a negative social evaluation, or
  • allow real-time remote biometric identification of people in publicly accessible spaces (with narrow exceptions for law enforcement).

High-risk AI systems

An application is considered a high-risk AI system if it poses a potentially high risk to the health, safety or fundamental rights of individuals. Systems that fall into this category include, for example:

  • AI systems used for the biometric identification of individuals,
  • AI systems that draw conclusions about personal characteristics of individuals, including emotion recognition systems,
  • Systems used for the management & operation of critical infrastructure,
  • AI systems in education or vocational training that are used to assess & evaluate exams & educational attainment, and
  • AI systems used for the screening or filtering of job applications or for decisions on employment relationships or task assignment

Obligations for high-risk AI systems

Obligations associated with high-risk AI systems include:

- Establishment, implementation & documentation of a risk management system

- Compliance with data governance & data management requirements, particularly with regard to training, validation and test data sets

- Creation & regular updating of technical documentation

- Automatic recording of events ("logs") during operation (illustrated in the sketch after this list)

- Transparency & provision of information for users

- Human oversight by natural persons

- Fulfilment of an appropriate level of accuracy, robustness & cybersecurity. Where personal data is used or the system is operated in critical infrastructure environments covered by NIS2, organisational and technical cybersecurity measures must be implemented in accordance with the state of the art.
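
As a minimal illustration of the record-keeping obligation above, the following Python sketch writes one structured, timestamped log entry per inference. The field names (model_version, input_ref, output, operator) are assumptions for illustration, not a format prescribed by the AI Act.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of automatic event recording ("logs") for a high-risk AI system.
# Field names are illustrative assumptions, not a format prescribed by the AI Act.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system_events")

def log_inference_event(model_version: str, input_ref: str, output: str, operator: str) -> None:
    """Record one inference as a structured, timestamped log entry."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,  # reference to the input data, not the data itself
        "output": output,
        "operator": operator,    # person or role responsible for the run
    }
    logger.info(json.dumps(event))

# Example: one logged decision of a hypothetical credit-scoring model
log_inference_event("credit-scoring-v1.3", "application-4711", "score=0.42", "case_worker_07")
```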

AI systems with limited risk

Applications with limited risk include AI systems that interact with humans, such as emotion recognition systems or biometric categorisation systems. For these, providers must ensure that natural persons are informed that they are interacting with an AI system, unless this is already apparent to users from the context.
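
A minimal sketch of this transparency obligation, assuming a simple chat-style application: the first reply is prefixed with a notice that the user is talking to an AI system. The wording and function names are illustrative assumptions, not requirements taken from the regulation.

```python
from typing import Callable

# Illustrative only: prefix the first reply of a conversation with an AI disclosure,
# so users know they are interacting with an AI system.
AI_DISCLOSURE = "Please note: you are interacting with an AI system."

def respond(user_message: str, is_first_turn: bool,
            generate_reply: Callable[[str], str]) -> str:
    """Return the model reply, with the disclosure prepended on the first turn."""
    reply = generate_reply(user_message)
    return f"{AI_DISCLOSURE}\n{reply}" if is_first_turn else reply

# Example usage with a placeholder reply generator
print(respond("What are your opening hours?", True, lambda msg: "We are open from 9 to 17."))
```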

Low-risk AI systems

There are currently no legal requirements for applications that fall into the "low" risk category. These include, for example, spam filters or predictive maintenance systems.

However, technical documentation and a risk assessment are still required in order to categorise an application as low-risk in the first place.

Penalties for violations

Non-compliance with the prohibition of an AI system (Article 5) or non-compliance with the defined requirements (Article 10) is punishable by fines of up to EUR 35 million or up to 7% of the total annual worldwide turnover.

Violation of the other requirements and obligations set out in the AI Act - with the exception of Articles 5 & 10 - is punishable by fines of up to EUR 15 million or up to 3% of the total annual worldwide turnover.

The provision of false, incomplete or misleading information to notified bodies and competent national authorities is punishable by fines of up to EUR 7.5 million or up to 1.5% of the total annual worldwide turnover.
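
As a worked example of these caps, the sketch below computes the maximum possible fine per tier; for companies, the higher of the fixed amount and the turnover-based amount applies. The turnover figure used here is a made-up example.

```python
# Fine tiers described above: (fixed cap in EUR, share of total annual worldwide turnover)
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.015),
}

def max_fine(tier: str, annual_worldwide_turnover_eur: float) -> float:
    """Return the maximum possible fine in EUR for a violation in the given tier."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    # For companies, the higher of the two amounts applies.
    return max(fixed_cap, turnover_share * annual_worldwide_turnover_eur)

# Example: a company with EUR 2 billion total annual worldwide turnover
print(max_fine("prohibited_practices", 2_000_000_000))    # 140000000.0 (7% exceeds EUR 35 million)
print(max_fine("misleading_information", 2_000_000_000))  # 30000000.0  (1.5% exceeds EUR 7.5 million)
```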

What is the current status of the AI Act?

  • February 2025: Ban on AI systems with unacceptable risk and obligation for AI literacy
  • August 2025: Provisions for general-purpose AI (GPAI), e.g. large language models
  • August 2026: Provisions for systems with limited and high risk
  • August 2027: Provisions for high-risk systems that are already covered by other EU legislation


How can companies prepare for the AI Act today?

Assign responsibilities

Even though many of the AI Act's requirements do not yet apply, it makes sense for companies to address them at an early stage. It is advisable to define appropriate responsibilities from the outset, so that arrangements are already in place for how new developments, the purchase of AI applications and their use will be handled in future.

 

Gap Analysis & Action Plan

In order to determine the status quo with regard to implementation of the AI Act, it is advisable to carry out a gap analysis. Based on this, an action plan can be developed in the next step. In this context, it is also important for companies to establish which AI applications and solutions are actually in use and which of the risk categories they fall into. This categorisation then determines the basic requirements that need to be observed.
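
A minimal sketch of such an inventory and the resulting action plan; the applications, risk assignments and open actions below are fictitious examples, not recommendations.

```python
from dataclasses import dataclass, field

# Fictitious example inventory for a gap analysis - names, risk classes
# and open actions are illustrative assumptions only.
@dataclass
class AIApplication:
    name: str
    risk_class: str                 # e.g. "high", "limited", "low"
    requirements_met: bool
    open_actions: list[str] = field(default_factory=list)

inventory = [
    AIApplication("CV screening assistant", "high", False,
                  ["set up risk management system", "complete technical documentation"]),
    AIApplication("Customer support chatbot", "limited", True),
    AIApplication("Spam filter", "low", True),
]

# Derive a simple action plan: list open actions for every application with gaps
for app in inventory:
    if not app.requirements_met:
        print(f"{app.name} ({app.risk_class} risk): {', '.join(app.open_actions)}")
```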

 

Considering AI security from the outset

TÜVIT's AI experts support companies during the development of AI systems in order to take aspects such as security, data quality, explainability, data protection, robustness and transparency into account right from the start, with a particular focus on robustness, explainability and security. They also carry out gap analyses and other detailed analyses (e.g. of the model design) and offer security checks using dedicated test software. The best way for companies to prepare for the upcoming requirements is to have the security properties of their AI solution tested and/or certified by an independent third party. This is currently possible, for example, under TÜVIT's own Trusted Product Scheme.

Continuous monitoring

In addition, it is advisable to continuously monitor whether new AI applications are introduced or legal requirements change.