Research · 2026-04-20
Why Fine-Tuning Encourages Hallucinations and How to Fix It
Source: arXiv cs.AI
arXiv:2604.15574v1 (cross-listed) Abstract: Large language models are prone to hallucinating factually incorrect statements. A key source of these errors is exposure to new factual information during supervised fine-tuning (SFT), which can increase hallucinations with respect to knowledge acquired...
Tags: arxiv, papers, rag, fine-tuning