AI regulation
What is the Artificial Intelligence Act?
With the Artificial Intelligence Act (AI Act), the EU Commission is creating the legal framework for the use of AI systems in business, administration and research. The regulation categorises AI applications into risk classes, each of which is associated with different requirements and obligations.
The AI Act categorises AI systems into four risk classes: unacceptable risk, high risk, limited risk and low risk. Each class is associated with different legal requirements.
The higher the potential risk associated with the use of an AI system, the more extensive the associated requirements, such as risk assessments, documentation obligations, EU declarations of conformity or monitoring on the part of the operator.
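As a rough illustration of this tiered structure, the following Python sketch maps the four risk classes to example obligations discussed further below. The `RiskClass` enum, the `OBLIGATIONS` mapping and the wording of the entries are simplified assumptions for illustration, not legal definitions.

```python
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices (Article 5)
    HIGH = "high"                   # extensive requirements apply
    LIMITED = "limited"             # transparency obligations
    LOW = "low"                     # no specific legal requirements

# Simplified, illustrative mapping of risk classes to example obligations.
OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: ["prohibited - may not be placed on the market"],
    RiskClass.HIGH: [
        "risk management system",
        "technical documentation",
        "logging of events",
        "human oversight",
        "accuracy, robustness & cybersecurity",
    ],
    RiskClass.LIMITED: ["inform users that they are interacting with an AI system"],
    RiskClass.LOW: [],
}

def obligations_for(risk_class: RiskClass) -> list[str]:
    """Return the illustrative obligations for a given risk class."""
    return OBLIGATIONS[risk_class]

print(obligations_for(RiskClass.LIMITED))
```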
The AI Act & other regulations
In addition to the general rules of the AI Act, there are already sector-specific regulations, and more will follow, that require additional or different risk-based measures, such as the Machinery Directive for industrial plants, the UNECE cybersecurity regulations for motor vehicles or the Medical Device Regulation (MDR) for AI-supported devices in the healthcare sector.
AI systems that fall into the highest risk class, "unacceptable risk", will be banned under the AI Act because they harbour considerable potential for violating human rights or fundamental principles. This includes, for example, applications for social scoring or systems that manipulate human behaviour to the detriment of individuals.
An application is considered a high-risk AI system if it poses a potentially high risk to the health, safety or fundamental rights of individuals. This category covers, for example, AI systems used in critical infrastructure, in recruitment or in credit scoring.
Obligations associated with high-risk AI systems include:
- Establishment, implementation & documentation of a risk management system
- Compliance with data governance & data management requirements, particularly with regard to training, validation and test data sets
- Creation & regular updating of the technical documentation
- Automatic recording and logging of events ("logs") during operation (see the sketch after this list)
- Transparency & provision of information for users
- Monitoring by human personnel
- Fulfilment of an appropriate level of accuracy, robustness & cyber security
If private data is used, or the system is deployed in critical infrastructure environments covered by NIS2, organisational and technical cybersecurity measures must also be implemented in line with the state of the art.
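To make the logging obligation more concrete, here is a minimal Python sketch of an append-only event log for an AI system. The `InferenceEvent` fields, the `InferenceLog` class and the file format are purely illustrative assumptions and are not prescribed by the AI Act.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class InferenceEvent:
    """One recorded use of the AI system (illustrative fields only)."""
    timestamp: float
    model_version: str
    input_reference: str      # e.g. a hash or ID of the input data
    output_summary: str
    operator_id: str

@dataclass
class InferenceLog:
    """Append-only event log, written to disk for later audits."""
    path: str
    events: list = field(default_factory=list)

    def record(self, event: InferenceEvent) -> None:
        self.events.append(event)
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(event)) + "\n")

# Example usage: log one inference of a hypothetical credit-scoring model.
log = InferenceLog(path="inference_events.jsonl")
log.record(InferenceEvent(
    timestamp=time.time(),
    model_version="credit-scoring-1.4.2",
    input_reference="sha256:ab12 (truncated example hash)",
    output_summary="score=0.73, decision=manual_review",
    operator_id="analyst-042",
))
```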
Applications with a limited risk include AI systems that interact with humans. Examples are emotion recognition systems or biometric categorisation systems. For these, providers must ensure that natural persons are informed that they are interacting with an AI system, unless this is already apparent from the context.
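One possible, purely illustrative way to implement such a disclosure is sketched below; the `AI_DISCLOSURE` text and the `respond` wrapper are assumptions made for the example, not wording required by the regulation.

```python
AI_DISCLOSURE = (
    "Please note: you are interacting with an AI system. "
    "Responses are generated automatically."
)

def respond(user_message: str, generate_reply) -> str:
    """Prepend an AI disclosure to a chatbot reply.

    `generate_reply` is assumed to be any callable that turns a user
    message into a reply string.
    """
    reply = generate_reply(user_message)
    return f"{AI_DISCLOSURE}\n\n{reply}"

# Example usage with a stand-in reply generator.
print(respond("What are my options?", lambda msg: "Here are three options."))
```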
There are currently no legal requirements for applications that fall into the "low" risk category. These include, for example, spam filters or predictive maintenance systems.
However, technical documentation and a risk assessment are still required in order to justify categorisation as a low-risk application.
Non-compliance with the prohibition of certain AI systems (Article 5) or with the defined requirements of Article 10 is punishable by fines of up to EUR 35 million or up to 7% of total annual worldwide turnover.
Violation of the other requirements and obligations set out in the AI Act, i.e. those outside Articles 5 & 10, is punishable by fines of up to EUR 15 million or up to 3% of total annual worldwide turnover.
The provision of false, incomplete or misleading information to notified bodies and competent national authorities is punishable by fines of up to EUR 7.5 million or up to 1.5% of total annual worldwide turnover.
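To give a feeling for the order of magnitude, the following sketch computes both caps (the fixed amount and the turnover-based percentage) for each of the three tiers named above. Which cap applies in a specific case follows from the legal text; the `fine_caps` function and the tier labels are assumptions made for illustration only.

```python
def fine_caps(annual_turnover_eur: float, violation: str) -> tuple[float, float]:
    """Return (fixed cap, turnover-based cap) in EUR for the three tiers
    described above. The figures mirror those in the text; this is an
    illustration, not legal advice."""
    tiers = {
        "article_5_or_10": (35_000_000, 0.07),
        "other_obligations": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.015),
    }
    fixed_cap, pct = tiers[violation]
    return fixed_cap, pct * annual_turnover_eur

# Example: a company with EUR 600 million annual turnover.
print(fine_caps(600_000_000, "article_5_or_10"))   # (35000000, 42000000.0)
```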
Even though the AI Act has not yet been officially adopted, it makes sense for companies to address the upcoming requirements at an early stage. It is advisable to define appropriate responsibilities from the outset. In this way, companies can already establish how new developments, purchases or deployments of AI applications will be handled in future.
To determine the status quo with regard to implementation of the AI Act, it is advisable to carry out a gap analysis. Based on this, an action plan can be developed in the next step. In this context, it is also important for companies to find out which AI applications and solutions are actually in use and which of the various risk categories they belong to. This categorisation then determines the basic requirements that need to be observed.
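A first step towards such an inventory could look like the sketch below; the `AIApplication` data class, the example systems and their assigned risk classes are hypothetical and only illustrate how applications might be catalogued and prioritised during a gap analysis.

```python
from dataclasses import dataclass

@dataclass
class AIApplication:
    name: str
    purpose: str
    risk_class: str          # "unacceptable", "high", "limited" or "low"
    requirements_met: bool   # result of the gap analysis for this system

# Hypothetical inventory built up during a gap analysis.
inventory = [
    AIApplication("spam-filter", "filters incoming mail", "low", True),
    AIApplication("cv-screening", "pre-selects job applicants", "high", False),
    AIApplication("support-chatbot", "answers customer queries", "limited", True),
]

# The action plan starts with the systems whose requirements are not yet met.
for app in inventory:
    if not app.requirements_met:
        print(f"Action needed for '{app.name}' (risk class: {app.risk_class})")
```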
TÜVIT's AI experts support companies during the development of AI systems so that aspects such as security, data quality, explainability, data protection, robustness and transparency are taken into account right from the start. The focus here is on robustness, explainability and security. They also carry out gap analyses and other detailed analyses (e.g. of the model design) and offer security checks using special test software. The best way for companies to prepare for the upcoming requirements is to have the security properties of their AI solution tested and/or certified by an independent third party. This is currently possible, for example, under TÜVIT's own Trusted Product Scheme.
In addition, it is advisable to continuously monitor whether new AI applications are added or legal changes occur.