BeClaude Research
2026-05-14

When Attention Closes: How LLMs Lose the Thread in Multi-Turn Interaction

Source: arXiv cs.AI

arXiv:2605.12922v1 (announce type: new)

Abstract: Large language models can follow complex instructions in a single turn, yet over long multi-turn interactions they often lose the thread of instructions, persona, and rules. This degradation has been measured behaviorally but not mechanistically...

Tags: arxivpapers