---
title: "Every output of an LLM is a hallucination"
description: "LLMs don't do anything different when they 'make up' information because they always 'make it up'."
date: 2023-12-24
tags: [ai, how-llms-work]
url: https://nem035.com/thoughts/llm-always-hallucinates
---

The commonly shared mental model of LLMs hallucinating _sometimes_ is incorrect.

_Every_ output of an LLM is a [hallucination](/thoughts/llm-dreamer).

LLMs don't do anything different when they "make up" information, because they are always making it up.

They don't perform a rigorous lookup through their training data; they generate their best guess, one token at a time.
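To make the "best guess" point concrete, here is a minimal sketch of how decoding works. The logits and tokens are entirely hypothetical; the point is that the model picks the highest-probability continuation from learned scores, with no database lookup anywhere in the loop.

```python
import math

def softmax(logits):
    # Turn raw scores into a probability distribution over next tokens.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores a model might assign to the next token after
# "The capital of France is". These come from learned weights, not
# from looking anything up.
logits = {"Paris": 8.0, "Lyon": 3.0, "pizza": 0.5}

probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding: best guess wins
print(next_token)
```

Whether the guess is "Paris" or "pizza", the mechanism is identical; a correct answer is just a guess that happens to match reality.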

That guess just happens to coincide with reality much of the time, in part because we fine-tune the models and write careful prompts behind the scenes.