Research · 2026-04-28
The Randomness Floor: Measuring Intrinsic Non-Randomness in Language Model Token Distributions
Source: Arxiv CS.AI
arXiv:2604.22771v1 Announce Type: cross Abstract: Language models cannot be random. This paper introduces Entropic Deviation (ED), the normalised KL divergence between a model's token distribution and the uniform distribution, and measures it systematically across 31,200 generations spanning seven...
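The abstract defines Entropic Deviation (ED) as the normalised KL divergence between a model's token distribution and the uniform distribution. Since KL(p ‖ U) over a vocabulary of size V equals log V − H(p), and its maximum (a point mass) is log V, a natural normalisation divides by log V so ED lies in [0, 1]. The paper's exact normalisation is not given in this snippet, so the following is a minimal sketch under that assumption; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def entropic_deviation(p):
    """Sketch of Entropic Deviation (assumed form): KL(p || uniform)
    divided by its maximum value log V, so the result lies in [0, 1].
    0 = perfectly uniform distribution; 1 = a point mass on one token."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                       # ensure a valid probability distribution
    V = p.size
    # KL(p || U) = sum_i p_i * log(p_i * V), with the 0*log(0) terms dropped
    nz = p > 0
    kl = np.sum(p[nz] * np.log(p[nz] * V))
    return kl / np.log(V)
```

A uniform distribution gives ED = 0, while a distribution concentrated on a single token gives ED = 1, matching the intuition that a model forced toward deterministic next-token choices sits far from "random".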