
AI Hallucinations

Help, my AI is hallucinating!

What is an AI Hallucination?

An AI Hallucination occurs when a generative large language model (LLM) produces false information or facts that do not correspond to reality. These hallucinations often appear plausible, at least at first glance, because the model generates fluent, coherent text.

However, it is important to emphasise that LLMs do not deliberately lie; they simply have no understanding of the texts they generate.

Large language models tend to be very confident in inventing new (false) information.

Thora Markert

Head of AI Research and Governance at TÜVIT

Summarised

How do AI Hallucinations arise?

Possible contributing factors

The technical reasons for AI Hallucinations can be manifold:

  • Outdated, poor or inconsistent training data on which the LLM relies
  • Incorrect classification of data
  • Lack of context, or unclear or inconsistent user input
  • Difficulties in recognising colloquial language, sarcasm, etc.
  • Inadequate training and generation methods or programming

LLMs can therefore generate hallucinations even when they are trained on consistent and reliable data sets.

Containing hallucinations is therefore one of the fundamental challenges for AI users and developers, not least because LLMs are usually a black box, which can make it difficult to determine why a particular hallucination was generated.

What are the types of AI Hallucinations?

The term AI Hallucination covers a broad spectrum, from minor inconsistencies to entirely fictitious information. Types of AI Hallucinations include:

Sentence contradictions
Generated sentences contradict previous sentences or parts of the generated response.

Contradictions with the prompt
The generated response or parts of it do not match the user's prompt.

Factual contradictions
Information invented by the LLM is presented as fact.

Random hallucinations
The LLM generates random information that has nothing to do with the actual prompt.

What are the dangers of AI Hallucinations?

If users rely too much on the results of an AI system because they look very convincing and reliable, they may not only believe the false information themselves, but also spread it further.

For companies that use LLM-supported services as part of customer communication, there is also a potential risk that customers will be provided with untrue information. This, in turn, can have a negative impact on the company's reputation.

LLMs are powerful tools, but they also come with challenges such as the phenomenon of AI Hallucination. Through comprehensive audits, we support AI developers in identifying and minimising existing risks in the best possible way and in further strengthening confidence in the technology.

Vasilios Danos

Head of AI Security and Trustworthiness at TÜVIT

Good to know

First aid – what you can do

How can I recognise AI Hallucinations?

The easiest way to recognise or unmask an AI Hallucination is to carefully check the accuracy of the information provided. As a user of a generative AI, you should therefore always bear in mind that it can also make mistakes, and proceed according to the "four-eyes principle" of AI and human: every answer the model gives is reviewed by a person before it is relied on.
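To illustrate this four-eyes principle as a workflow, the sketch below only lets a model's answer pass without human review if it can be confirmed against a trusted source. It is a minimal, hypothetical example in Python: ask_llm and lookup_trusted_source are placeholder functions standing in for whatever model and knowledge base are actually in use, not real APIs.

```python
def ask_llm(question: str) -> str:
    """Placeholder for a call to whatever generative language model is in use."""
    raise NotImplementedError

def lookup_trusted_source(question: str) -> str | None:
    """Placeholder lookup in a curated knowledge base; returns None if nothing is found."""
    raise NotImplementedError

def answer_with_review(question: str) -> tuple[str, bool]:
    """Return the model's answer plus a flag indicating that a human must check it."""
    answer = ask_llm(question)
    reference = lookup_trusted_source(question)
    # If no reference exists, or the reference is not reflected in the answer,
    # the answer is routed to a human reviewer instead of being trusted blindly.
    needs_human_review = reference is None or reference.lower() not in answer.lower()
    return answer, needs_human_review
```

The substring comparison is deliberately crude; the point is the workflow: no answer leaves the system without either a match against a reference or a human check.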

How do I prevent AI Hallucinations?

In order to counteract AI Hallucinations and other challenges posed by AI systems, it is advisable to have appropriate tests carried out by independent third parties. Ideally, vulnerabilities can then be identified and rectified before applications are officially deployed.
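As a rough idea of what such a test can involve, the sketch below measures a hallucination rate on a small set of questions with verified answers. ask_llm is again a hypothetical placeholder for the system under test, and a real third-party audit goes far beyond this kind of spot check.

```python
from typing import Callable

# A tiny illustrative test set of questions with verified ground-truth answers.
TEST_SET = [
    {"question": "At what temperature does water boil at sea level, in degrees Celsius?",
     "answer": "100"},
    {"question": "In which country is the city of Berlin located?",
     "answer": "Germany"},
]

def hallucination_rate(ask_llm: Callable[[str], str]) -> float:
    """Share of test questions whose known answer does not appear in the model's reply."""
    wrong = 0
    for item in TEST_SET:
        reply = ask_llm(item["question"])
        if item["answer"].lower() not in reply.lower():
            wrong += 1
    return wrong / len(TEST_SET)
```

Checking whether the reference answer appears in the reply is only a crude proxy, but it is enough to catch a model that confidently invents a different answer.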
