Research · 2026-05-11

LLM hallucinations in the wild: Large-scale evidence from non-existent citations

Source: arXiv cs.AI

arXiv:2605.07723v1 · Announce Type: cross

Abstract: Large language models (LLMs) are known to generate plausible but false information across a wide range of contexts, yet the real-world magnitude and consequences of this hallucination problem remain poorly understood. Here we leverage a uniquely...
