2026-05-12
Reasoning Compression with Mixed-Policy Distillation
Source: Arxiv CS.AI
arXiv:2605.08776v1

Abstract: Reasoning-centric large language models (LLMs) achieve strong performance by generating intermediate reasoning trajectories, but often incur excessive token usage and high inference-time decoding cost. We observe that, when solving the same problems,...