Research · 2026-04-20
LLM Reasoning Is Latent, Not the Chain of Thought
Source: Arxiv CS.AI
arXiv:2604.15726v1 · Announce Type: new

Abstract: This position paper argues that large language model (LLM) reasoning should be studied as latent-state trajectory formation rather than as faithful surface chain-of-thought (CoT). This matters because claims about faithfulness, interpretability,...
Tags: arxiv, papers, reasoning