Research 2026-05-01
Evaluating Epistemic Guardrails in AI Reading Assistants: A Behavioral Audit of a Minimal Prototype
Source: arXiv cs.AI
arXiv:2604.27275v1 (Announce Type: cross)

Abstract: Large language model (LLM) reading assistants are increasingly used in settings that require interpretation rather than simple retrieval. In these contexts, the central risk is not only error or unsafe output, but interpretive displacement: the...