Fact or fiction? The trouble with AI hallucinations

Nigel Brennan delves into the intricacies of why large language models occasionally veer into the realm of the untrue, exploring the technical, ethical and practical implications of this enigmatic occurrence


In the fast-changing landscape of artificial intelligence (AI), the phenomenon of models hallucinating facts and figures has emerged as a perplexing challenge. Determining the root causes of this curious behaviour requires a deep understanding of the underlying technology.

What is a large language model?

A large language model (LLM) is a type of artificial intelligence system designed to understand and generate human-like language.

It is built on sophisticated neural network architectures, trained on vast datasets and can perform various natural language processing tasks, such as language translation, text completion and answering questions.

LLMs can comprehend context, infer meanings and generate coherent and contextually relevant text based on the input they receive.
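
To make this concrete, the sketch below shows how a pre-trained model can be prompted to complete a piece of text. It is a minimal illustration, assuming the open-source Hugging Face transformers library and using the small, publicly available gpt2 model; production LLMs are far larger, but the interaction pattern is the same.

```python
# Minimal sketch: prompting a pre-trained language model to continue a text.
# Assumes the Hugging Face `transformers` library is installed; `gpt2` is
# used purely as a small, publicly downloadable example model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models are"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

print(result[0]["generated_text"])
```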

However, because no one has a concrete understanding of every learned parameter, these models may occasionally generate seemingly plausible information that is, in fact, a fabrication.

This phenomenon—known as hallucination—occurs when the LLM extrapolates from its training data, creating information that appears accurate but lacks a factual basis.

The black box of neural networks

LLMs are built on a type of neural network architecture called a transformer. Because the behaviour of such a model is encoded in billions of learned parameters, there is no practical way of working out exactly how it arrives at a specific conclusion.

In other words, the output of an LLM is considered to be ‘non-deterministic’: each token is sampled from a probability distribution, so the same input can produce different outputs on different runs.

This also means that a given piece of text cannot be traced back with certainty to a particular prompt or model, so detection of AI-generated content can only be evaluated with a margin of confidence, rather than with a certain ‘true/false’ judgement.
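
The sampling behaviour behind this non-determinism can be illustrated with the sketch below, which assumes only NumPy and uses an invented four-word vocabulary with made-up scores: because the next token is drawn from a probability distribution, the same prompt can yield different continuations each time.

```python
# Minimal sketch of non-deterministic token generation. The vocabulary and
# logits are invented for illustration only; a real LLM produces logits over
# tens of thousands of tokens at every step.
import numpy as np

vocab = ["Paris", "London", "Rome", "Berlin"]   # invented toy vocabulary
logits = np.array([2.0, 1.2, 0.8, 0.3])         # hypothetical model scores


def sample_next_token(logits, temperature=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())       # numerically stable softmax
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]


# The same input (the same logits) can produce different outputs across runs.
print([sample_next_token(logits) for _ in range(5)])
```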

Overfitting challenges

Hallucination in LLMs can be attributed, in part, to the challenges associated with overfitting.

Overfitting occurs when a model learns its training data too closely, capturing noise and quirks rather than general patterns. As a result, the model may hallucinate information that aligns with the peculiarities of the training dataset.

For example, if a machine learning model is trained on a dataset consisting mostly of photos of dogs outside in parks, it may learn to use grass as a feature for classification and fail to recognise a dog inside a room.

When faced with novel scenarios or inputs, LLMs may resort to generating responses based on superficial similarities to the learned data, leading to the production of inaccurate or hallucinated information.
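
The sketch below mirrors the dog-in-the-park example using scikit-learn and an invented toy dataset: a simple classifier latches onto a spurious ‘grass’ feature that happens to separate the training data perfectly, and then misclassifies a dog photographed indoors.

```python
# Minimal sketch of shortcut learning on a spurious feature, assuming
# scikit-learn. The data are invented for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Training set: every dog photo was taken in a park, so the 'has_grass' flag
# correlates perfectly with the 'dog' label, while the genuine signal
# (dog_shape_score) is noisy.
#                   [has_grass, dog_shape_score]
X_train = np.array([[1, 0.70], [1, 0.40], [1, 0.60],
                    [0, 0.50], [0, 0.30], [0, 0.55]])
y_train = np.array([1, 1, 1, 0, 0, 0])   # 1 = dog, 0 = not a dog

clf = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X_train, y_train)

# A dog photographed indoors: no grass, but a strong dog-shape signal.
print(clf.predict([[0, 0.80]]))   # the grass shortcut yields 0, i.e. 'not a dog'
```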

Ethical considerations

There is a fine line between assistance and misinformation.

The implications of LLM hallucination extend beyond technical challenges into ethical territory.

As these systems become integral to decision-making processes in various fields, from healthcare to finance, the potential for disseminating misinformation raises concerns.

When an LLM hallucinates facts or figures, it may inadvertently contribute to the spread of false information, with consequences ranging from misinformation in news articles to inaccuracies in critical decision-making processes.

Striking the delicate balance between providing assistance and avoiding the propagation of misinformation poses a significant ethical challenge for developers, researchers and policymakers.

The quest for explainability and accountability

Addressing the issue of hallucination requires a concerted effort to enhance the explainability of LLMs.

Researchers are exploring methods to make neural networks more interpretable, allowing stakeholders to trace the decision-making processes of these systems.
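
As one illustration of the idea, the sketch below uses PyTorch and a tiny stand-in network to compute input-gradient saliency, a simple interpretability technique that highlights which input features most influenced an output; research on interpreting real LLMs is considerably more involved, but the goal of tracing influence is the same.

```python
# Minimal sketch of input-gradient saliency, assuming PyTorch. The tiny
# network and random input are stand-ins for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small feed-forward network; real LLMs have billions of parameters.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

x = torch.randn(1, 4, requires_grad=True)   # a single four-feature input
model(x).sum().backward()                   # back-propagate to the input

# Larger absolute gradients indicate inputs that influenced the output more.
print(x.grad.abs())
```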

Additionally, accountability measures must be implemented to ensure the responsible development and deployment of LLMs.

The road ahead involves refining algorithms, establishing robust evaluation frameworks and fostering interdisciplinary collaboration to create LLMs that not only perform well but also uphold ethical standards.

As we navigate the evolving landscape of LLMs, a deeper understanding of hallucination paves the way for more transparent, accountable and reliable artificial intelligence systems.

Nigel Brennan is Senior Software Consultant at KPMG