BeClaude
Research 2026-04-20

Where does output diversity collapse in post-training?

Source: arXiv cs.AI

arXiv:2604.16027v1 (announce type: cross)

Abstract: Post-trained language models produce less varied outputs than their base counterparts. This output diversity collapse undermines inference-time scaling methods that rely on varied samples, and risks homogenizing model outputs on creative and...
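The abstract's point about inference-time scaling can be illustrated with a toy best-of-n sketch (not from the paper; the pools and scoring function below are invented for illustration): when the candidate pool collapses to a few near-identical outputs, the best achievable sample over n draws can only get worse.

```python
# Hedged toy sketch: why output diversity matters for best-of-n
# inference-time scaling. The pools and score are made up.

def best_of_n(samples, score):
    """Return the highest-scoring candidate among the samples."""
    return max(samples, key=score)

# Toy task: score a candidate by closeness to a target value of 7.
score = lambda x: -abs(x - 7)

diverse_pool = [1, 3, 5, 7, 9]      # varied outputs (base-model-like)
collapsed_pool = [4, 4, 5, 4, 5]    # collapsed outputs (post-trained-like)

print(best_of_n(diverse_pool, score))    # 7  (hits the target)
print(best_of_n(collapsed_pool, score))  # 5  (best it can do)
```

With the diverse pool, one of the varied samples lands on the target; the collapsed pool never contains a strong candidate, so extra draws cannot help.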

arxivpapers