
AI & data protection - how do they go together?

Artificial intelligence is developing rapidly. With regard to the processing of personal data, however, the use of AI-supported systems repeatedly raises data protection issues. In light of European Data Protection Day, we take a look at what companies should consider when it comes to data protection and AI.

Essen | 26.01.2024

Artificial Intelligence & the GDPR

Artificial intelligence (AI) is not explicitly mentioned as a term within the European General Data Protection Regulation (EU GDPR). However, the GDPR is fundamentally designed to be technology-neutral and therefore also includes new technologies such as AI.

The GDPR applies as soon as a technology processes personal data, i.e. any information relating to an identified or identifiable natural person. If a company wishes to use an AI-supported system in such a context, it must comply with the relevant data protection requirements. AI and data protection should therefore be considered together from the outset and taken into account during development, implementation and use. It is also advisable to involve experienced data protection experts to ensure that the limits set by data protection law are observed.

The question of data protection responsibility

If personal data is processed by an AI-based system, it must first be clarified who is responsible for the processing, because different rights and obligations follow from this role. The following scenarios are generally possible:

  • Sole responsibility: If, for example, an AI-based system is developed in-house or integrated into the company's own processes, the company in question is also responsible for the data processing. The background to this is that the company decides or can decide independently on the purposes and means of processing.
  • Joint controllership: If two or more parties (e.g. company and AI provider) pursue common objectives and decide on the purposes and means or are inextricably linked with regard to the processing operations, there is joint controllership. In this case, an agreement on joint controllership must be concluded.
  • Processing on behalf of a controller: If an AI provider processes personal data on its own servers, but on behalf of and on the instructions of another company (the controller), this is a processor relationship. One example is the use of an existing AI system from a cloud service provider. In this case, a data processing agreement must be concluded.
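The three scenarios above can be sketched as a rough decision helper. This is a hypothetical illustration, not legal logic: the function name, parameters and return values are assumptions introduced here, and the criteria (who decides on purposes and means, whether processing happens only on instruction) are simplified from the list above.

```python
from enum import Enum

class Role(Enum):
    SOLE_CONTROLLER = "sole controller"
    JOINT_CONTROLLER = "joint controllership (agreement on joint controllership required)"
    PROCESSOR_RELATIONSHIP = "processing on behalf (data processing agreement required)"

def classify_responsibility(decides_purposes_and_means: bool,
                            jointly_with_partner: bool,
                            processes_only_on_instruction: bool) -> Role:
    """Hypothetical screening helper mirroring the three scenarios above.

    Real-world classification requires a case-by-case legal assessment.
    """
    if processes_only_on_instruction:
        # Provider acts strictly on the controller's instructions
        return Role.PROCESSOR_RELATIONSHIP
    if decides_purposes_and_means and jointly_with_partner:
        # Purposes and means are decided together with another party
        return Role.JOINT_CONTROLLER
    if decides_purposes_and_means:
        # The company alone decides on purposes and means
        return Role.SOLE_CONTROLLER
    raise ValueError("Responsibility unclear - seek expert advice")
```

In practice the answer determines which agreement must be in place before any data flows: none beyond internal documentation for a sole controller, a joint controllership agreement, or a data processing agreement.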

The responsibilities may vary depending on the phase the AI system is in.

Possible processing stages that must be considered and evaluated individually in this context are:

  • The collection of training data for AI,
  • the processing of data for the training of AI,
  • the provision of AI applications,
  • the use of AI applications and
  • the utilisation of AI results.

Legal basis for data processing

In order to process personal data lawfully, companies require a valid legal basis. This is usually the case if the data subjects have given legally compliant consent for clearly defined purposes. To this end, the data subjects must be informed about which data is processed, how, by whom and for which purposes, and who the recipients are. On this basis, they can then decide whether to give their consent. A legitimate interest alone is not sufficient here. Companies should ensure that the chosen legal basis fulfils the requirements of the GDPR.

Principles of data processing

Rights of data subjects

The rights to information, rectification and erasure of personal data must be respected. The right to erasure can be particularly challenging: deleting data may impair the functionality of the AI system, or the system may be technically unable to "forget" certain data at all.

Transparency

Transparency requirements also apply to the use of AI-based systems, both with regard to how they function and how they process personal data. These requirements arise on the one hand from the GDPR and on the other from the EU Artificial Intelligence Act (AI Act).

Purpose limitation

Artificial intelligence may only be used for legally legitimate purposes. For this reason - and with a view to information obligations - it must be clear for what purpose the data processed by the AI system is being used. If this purpose changes, a new legal basis is required.

Data minimisation

When inputting and outputting data, the use of personal data should be reduced to what is necessary for the purposes of the processing, for example through anonymisation or pseudonymisation. Data minimisation should already be taken into account during development (privacy by design/privacy by default).
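Pseudonymisation, as mentioned above, can for instance be implemented by replacing direct identifiers with a keyed hash before data enters an AI pipeline. The following is a minimal sketch under that assumption; the field names, the record structure and the `pseudonymise` helper are hypothetical examples, and in line with the GDPR's definition of pseudonymisation the key must be kept separately from the data.

```python
import hashlib
import hmac

def pseudonymise(value: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym).

    Without access to the separately stored key, the pseudonym
    cannot be linked back to the original identifier.
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical input record containing a direct identifier
record = {"name": "Jane Doe", "email": "jane@example.com", "age_band": "30-39"}
key = b"store-this-key-separately-from-the-data"

# Data-minimised record: a stable pseudonym plus only the field needed
minimised = {
    "user_id": pseudonymise(record["email"], key),  # pseudonym instead of email
    "age_band": record["age_band"],                 # coarse attribute, no exact age
}
```

Because the same input and key always yield the same pseudonym, records can still be linked across processing steps without exposing the underlying identifier.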

Data protection impact assessment in the context of AI

If the use of AI is associated with a potential risk of discrimination, a data protection impact assessment (DPIA) must be carried out in accordance with Art. 35 GDPR. The same applies if there is a particularly high risk to the rights and freedoms of individuals due to the nature, scope, circumstances or purposes of the data processing.

Whether a DPIA must be carried out is decided on the basis of a risk assessment of the processing operations. If a high risk is to be expected, a DPIA is mandatory. This is usually the case if, for example, an AI system makes automated decisions or systematically and comprehensively evaluates personal aspects of a natural person.
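The screening step described above can be expressed as a simple checklist function. This is a hedged sketch only, not legal advice: the function and its parameters are hypothetical, and it condenses a few typical high-risk indicators discussed in this article rather than the full assessment under Art. 35 GDPR.

```python
def dpia_required(automated_decisions: bool,
                  systematic_profiling: bool,
                  special_category_data: bool,
                  risk_of_discrimination: bool) -> bool:
    """Hypothetical screening heuristic for the DPIA obligation.

    Returns True if any typical high-risk indicator applies; a real
    assessment must weigh nature, scope, circumstances and purposes
    of the processing as a whole.
    """
    return any([
        automated_decisions,       # e.g. AI makes decisions without human review
        systematic_profiling,      # comprehensive evaluation of personal aspects
        special_category_data,     # health, biometric or similar data
        risk_of_discrimination,    # potential discriminatory outcomes
    ])
```

A positive screening result then triggers the full DPIA documentation; a negative result should itself be documented to show the assessment was made.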

The obligation to appoint a data protection officer (DPO) is also linked to the DPIA obligation.

Questions companies should ask themselves about AI and data protection

  • Which phase of data processing is being considered?
    For example: Collection of training data for AI, processing of data for the training of AI, provision of AI applications, use of AI applications, use of AI results
  • Is personal data processed by AI?
    Yes: The scope of the GDPR applies.
    No: The GDPR does not apply.
  • Who is responsible for data processing?
    Sole responsibility? Joint controllership? Processing on behalf of a controller?
  • Is there a legal basis for the data processing (informed consent of the data subjects)? For which phase(s) of data processing does this exist?
  • Is a data protection impact assessment to be carried out?
  • Is an (external) data protection officer required?

Conclusion: Consider data protection when using AI from the outset

In summary, the use of AI-supported systems always brings data protection issues and challenges for companies. The adopted AI Act is not intended to replace the data protection requirements of the GDPR, but should be seen as supplementary regulation. Even where legal requirements already exist, the framework conditions can change at any time as AI technologies progress. For this reason, it makes sense to keep a watchful eye on current developments.

It is also advisable to consider data protection aspects during development, implementation and utilisation from the outset. The specific requirements that need to be met depend on the respective use case. It may be advisable to consult a data protection expert at an early stage.

This article represents a snapshot of the current status of AI and data protection and does not claim to be exhaustive.

About TÜVIT

TÜV Informationstechnik GmbH (TÜVIT, headquartered in Essen) is a renowned IT security service provider and an independent testing institute and laboratory for IT and cyber security. TÜVIT has been accredited worldwide since 1995 and, together with ALTER TECHNOLOGY, forms the Digital & Semiconductor business unit. This business unit is a mainstay of the TÜV NORD GROUP, a knowledge-based company that has stood for security and trust worldwide for over 150 years. Its engineers and IT security experts in more than 100 countries ensure that companies become even more successful in the networked world.