Research · 2026-05-06
LLM Ghostbusters: Surgical Hallucination Suppression via Adaptive Unlearning
Source: arXiv cs.AI
arXiv:2605.01047v1 (Announce Type: cross)
Abstract: Hallucinations, outputs that sound plausible but are factually incorrect, remain an open challenge for deployed LLMs. In code generation, models frequently hallucinate non-existent software packages, recommending imports and installation commands...
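To make the failure mode concrete: this is not the paper's adaptive-unlearning method, but a minimal sketch of the kind of check the abstract's problem statement implies, assuming Python and the public PyPI JSON endpoint (https://pypi.org/pypi/<name>/json), which one could use to vet model-suggested packages before running an install command.

```python
import sys
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` resolves to a real project on PyPI.

    The public JSON endpoint returns 404 for projects that do not exist,
    which is the signature of a hallucinated package name.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors are not evidence either way


if __name__ == "__main__":
    # Example: vet packages an LLM recommended before `pip install`.
    # The package names below are hypothetical examples.
    suggested = sys.argv[1:] or ["requests", "definitely-not-a-real-pkg-xyz"]
    for pkg in suggested:
        verdict = "found" if package_exists_on_pypi(pkg) else "NOT on PyPI (possible hallucination)"
        print(f"{pkg}: {verdict}")
```

Such a lookup only catches non-existent names; the paper's focus, per the abstract, is suppressing these hallucinations in the model itself rather than filtering them after generation.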