BeClaude Research
2026-04-28

Think-at-Hard: Selective Latent Iterations to Improve Reasoning Language Models

Source: Arxiv CS.AI

arXiv:2511.08577v2 (Announce Type: replace-cross)

Abstract: Improving the reasoning abilities of Large Language Models (LLMs), especially under parameter constraints, is crucial for real-world applications. Looped transformers address this by performing multiple latent iterations to refine each token...
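The core idea of "selective" latent iteration can be sketched roughly as follows: instead of looping a transformer block a fixed number of times over every token, extra refinement passes are applied only to tokens judged "hard". The snippet below is a minimal toy sketch of that control flow, not the paper's actual method; the `transformer_block` stand-in, the confidence-threshold decision rule, and all names (`selective_latent_iterations`, `extra_iters`) are illustrative assumptions.

```python
import numpy as np

def transformer_block(h, W):
    # Toy stand-in for a transformer layer: linear map + nonlinearity.
    return np.tanh(h @ W)

def selective_latent_iterations(h, W, confidences, threshold=0.5, extra_iters=2):
    """Apply extra latent refinement passes only to low-confidence tokens.

    h           : (seq_len, d) hidden states, one row per token
    W           : (d, d) weights of the toy block
    confidences : (seq_len,) per-token confidence scores in [0, 1]

    Illustrative sketch: tokens below `threshold` are treated as "hard"
    and receive `extra_iters` additional latent iterations; the rest
    keep their original hidden state.
    """
    hard = confidences < threshold          # boolean mask of hard tokens
    refined = h.copy()
    for _ in range(extra_iters):
        updated = transformer_block(refined, W)
        refined[hard] = updated[hard]       # only hard tokens are refined
    return refined
```

The hypothetical decision rule here (a confidence threshold) is just one way to flag hard tokens; the point of the sketch is that the iteration budget is spent per token rather than uniformly across the sequence.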

Tags: arxiv, papers, reasoning