Research · 2026-05-05
The Quantization Trap: Breaking Linear Scaling Laws in Multi-Hop Reasoning
Source: arXiv cs.AI
arXiv:2602.13595v2 (announce type: replace)

Abstract: Neural scaling laws provide a predictable recipe for AI advancement: reducing numerical precision should linearly improve computational efficiency and energy profile ($E \propto \mathrm{bits}$). In this paper, we demonstrate that this scaling law...
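The linear scaling law quoted in the abstract, $E \propto \mathrm{bits}$, predicts that each halving of numerical precision halves the energy cost. A minimal sketch of that assumption (the constant `k` and the bit widths are illustrative choices, not values from the paper):

```python
# Illustrative sketch of the linear energy scaling assumption E = k * bits.
# k and the chosen bit widths are hypothetical, not taken from the paper.
def energy_per_op(bits: int, k: float = 1.0) -> float:
    """Predicted energy per operation under the linear law E = k * bits."""
    return k * bits

# Energy relative to a 32-bit baseline: each precision halving halves the cost.
ratios = [energy_per_op(b) / energy_per_op(32) for b in (32, 16, 8, 4)]
print(ratios)  # [1.0, 0.5, 0.25, 0.125]
```

The paper's claim, per the abstract, is that this linear prediction breaks down in multi-hop reasoning settings; the sketch only shows what the unbroken law would predict.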
Tags: arxiv, papers, reasoning