A new and growing problem is surfacing in the world of AI-powered search: second-order hallucinations. While a standard AI hallucination involves a language model making up facts or misrepresenting information, a second-order hallucination occurs when an AI search engine picks up errors that were themselves generated by another AI system and presents them as fact. The issue is subtle, but potentially far more dangerous than a single model's mistake, because it allows misinformation to spread in a feedback loop between machines.
The Rise of AI-Created Sources
The term is gaining traction as AI-driven search tools become more popular and powerful. These tools—such as Perplexity, You.com, and AI-enhanced versions of Bing and Google—use large language models to analyse and synthesise information from across the web. But unlike traditional search engines that list links for users to evaluate, these systems offer complete answers, often citing sources. The trouble begins when those sources are not human-authored, but AI-generated content that already contains hallucinations.
An Investigation into Perplexity
In a recent investigation into Perplexity’s AI search tool, researchers found that the system frequently referenced websites and pages that were themselves created or heavily influenced by other AI models. In some cases, these sources had generated entirely fabricated facts, which were then reprocessed and presented in Perplexity’s output. The result: hallucinations of hallucinations, repackaged as reliable knowledge.
Where the Problem Appears Most
These second-hand errors are especially prevalent in topics related to AI, technology, speculative science, and new trends—areas where online content is both fast-moving and increasingly generated by machines. But they are also creeping into more mainstream areas like travel planning, product comparisons, and even legal or medical summaries.
A Challenge of Knowledge Integrity
The fundamental challenge lies in the nature of how generative AI learns and responds. Language models are trained to predict the next most likely word, not to verify truth. While efforts like retrieval-augmented generation (RAG), knowledge graphs, and human-in-the-loop evaluation aim to improve factual grounding, they don’t eliminate the risk when the data pipeline itself is contaminated with earlier hallucinations.
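To make that contamination concrete, here is a minimal sketch of a retrieval-augmented pipeline in Python. The toy corpus, the Page class and the retrieve and generate_answer functions are illustrative assumptions rather than any vendor's actual code; the point is simply that nothing in the loop checks truth or provenance, so a fabricated claim on an AI-written page flows straight into the cited answer.

```python
# Minimal, illustrative retrieval-augmented generation (RAG) sketch.
# It shows why retrieval alone does not protect against second-order
# hallucinations: if a retrieved page is itself AI-generated and wrong,
# the error flows straight into the synthesised answer.
# The corpus, URLs and claims below are hypothetical.

from dataclasses import dataclass


@dataclass
class Page:
    url: str
    text: str
    ai_generated: bool  # provenance flag that most real pipelines do not have


# Toy "web": one human-written page, one AI-written page with fabricated details.
CORPUS = [
    Page(
        url="https://example.org/human-review",
        text="The Model X API was released in 2023 and supports streaming output.",
        ai_generated=False,
    ),
    Page(
        url="https://example.net/auto-blog",
        text="The Model X API was released in 2021 and includes a built-in fact checker.",
        ai_generated=True,  # fabricated claims, generated by another model
    ),
]


def retrieve(query: str, corpus: list[Page], k: int = 2) -> list[Page]:
    """Naive keyword retriever: rank pages by word overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda page: len(terms & set(page.text.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def generate_answer(query: str, sources: list[Page]) -> str:
    """Stand-in for the LLM step: stitch retrieved text into a cited answer.

    Nothing here verifies truth or provenance, which is the core problem."""
    body = " ".join(page.text for page in sources)
    citations = ", ".join(page.url for page in sources)
    return f"{body}\n\nSources: {citations}"


if __name__ == "__main__":
    query = "When was the Model X API released?"
    print(generate_answer(query, retrieve(query, CORPUS)))
```

In a real system the "generator" is a large language model and the retriever is a web-scale index, but the structural weakness is the same: the answer inherits whatever the retrieved pages assert.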
When AI Feeds on AI
Second-order hallucinations expose a flaw in the circular knowledge economy of AI. If one AI generates falsehoods, another AI adopts them, and a third distributes them to the public, the chain becomes hard to break. This is not just a technical issue but a structural one, calling into question how trust, authorship and verification will work in the future of web search.
Can Anything Be Done?
Some experts argue that AI search needs a more curated index of human-vetted sources, or stricter filters for detecting machine-generated content. Others call for watermarking AI content and full transparency about data provenance. For now, however, users may need to apply a layer of critical thinking even when the AI confidently cites its sources.
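To illustrate the "stricter filters" idea, the sketch below applies a few crude provenance heuristics before a source is allowed to be cited. The signals used here (a byline, a self-declared AI disclosure, a human-curated domain allow-list) are hypothetical stand-ins, and real detection of machine-generated text is far less reliable than this suggests.

```python
# Illustrative pre-citation provenance filter. The heuristics are toy
# assumptions; real machine-generated-text detection is far less reliable.
# The sketch only shows where such a filter would sit in an AI search
# pipeline: between retrieving candidate sources and citing them.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Source:
    url: str
    author: Optional[str] = None   # byline, if the page declares one
    discloses_ai: bool = False     # page self-identifies as AI-generated
    domain_vetted: bool = False    # appears on a human-curated allow-list


def provenance_score(src: Source) -> int:
    """Higher means more trustworthy under these toy heuristics."""
    score = 0
    if src.domain_vetted:
        score += 2
    if src.author:
        score += 1
    if src.discloses_ai:
        score -= 2
    return score


def filter_citable(sources: list[Source], threshold: int = 1) -> list[Source]:
    """Keep only sources that clear the provenance threshold."""
    return [s for s in sources if provenance_score(s) >= threshold]


if __name__ == "__main__":
    candidates = [
        Source("https://example.org/newsroom", author="J. Doe", domain_vetted=True),
        Source("https://example.net/auto-blog", discloses_ai=True),
    ]
    for src in filter_citable(candidates):
        print("citable:", src.url)
```

Even with such a filter, AI-generated pages that do not disclose their origin would pass, which is why several of the proposals above focus on watermarking and data provenance rather than detection alone.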
A Warning for the Future
The story of second-order hallucinations is, ultimately, one of echo chambers in the age of automation: a world where falsehoods don’t just emerge—they replicate, compound and evolve.
Post picture: Created with ChatGPT

