Every output of an LLM is a hallucination

The commonly shared mental model that LLMs hallucinate only sometimes is incorrect.

Every output of an LLM is a hallucination.

LLMs don't do anything different when they "make up" information, because they always "make it up".

They don't do a rigorous lookup through their training data; they lazily recall their best guess.

That guess just happens to coincide with reality a lot of the time, in particular because we fine-tune the models and write clear prompts behind the scenes.
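
To make the point concrete, here is a toy sketch, not a real model: the vocabulary and token scores below are made up purely for illustration. It shows what generation actually is: sampling from a learned next-token probability distribution, never a lookup in a database of facts.

```python
import numpy as np

# Hypothetical scores a model might assign to candidate next tokens
# after a prompt like "The capital of Australia is" (made-up numbers,
# purely for illustration).
vocab = ["Canberra", "Sydney", "Melbourne", "Paris"]
logits = np.array([2.3, 1.9, 0.4, -3.0])

# Softmax turns the scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Generation = sampling from that distribution. There is no separate
# "lookup" mode; a correct answer and a wrong one come out of the
# exact same mechanism, just with different probabilities.
rng = np.random.default_rng(0)
samples = rng.choice(vocab, size=10, p=probs)
print(dict(zip(vocab, np.round(probs, 2))))
print(list(samples))
```

Most of the probability mass lands on the right answer most of the time, which is all that "not hallucinating" really means here.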
