BeClaude
Research · 2026-04-27

Introducing Background Temperature to Characterise Hidden Randomness in Large Language Models

Source: arXiv cs.AI

arXiv:2604.22411v1 | Announce Type: new

Abstract: Even when decoding with temperature $T=0$, large language models (LLMs) can produce divergent outputs for identical inputs. Recent work by Thinking Machines Lab highlights implementation-level sources of nondeterminism, including batch-size variation,...
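As a minimal, illustrative sketch (not taken from the paper), the snippet below shows one mechanism behind this kind of implementation-level nondeterminism: the value of a single logit depends on the order in which its dot product is reduced. Since changing the batch size can change which kernel or reduction path a framework selects, identical inputs can yield bit-different logits, and when two candidate tokens are nearly tied, that difference can flip a greedy ($T=0$) argmax. All variable names (`hidden`, `weight`, `chunked_logit`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.standard_normal(4096).astype(np.float32)   # hypothetical hidden state
weight = rng.standard_normal(4096).astype(np.float32)   # hypothetical output-embedding row

def chunked_logit(h, w, chunk):
    """Dot product accumulated in fixed-size chunks; `chunk` sets the summation order,
    standing in for the reduction strategy a batched kernel might choose."""
    parts = [np.dot(h[i:i + chunk], w[i:i + chunk]) for i in range(0, len(h), chunk)]
    return np.float32(sum(parts))

# High-precision reference to measure the float32 rounding differences against.
reference = float(np.dot(hidden.astype(np.float64), weight.astype(np.float64)))

for chunk in (32, 256, 4096):
    val = chunked_logit(hidden, weight, chunk)
    print(f"chunk={chunk:4d}  logit={val:.7f}  diff_vs_fp64={val - reference:+.2e}")
```

The printed logits typically differ in the last few bits across chunk sizes; such differences are harmless for most purposes but are enough to change the sampled token when the top two logits are nearly equal.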

Tags: arxivpapers