2026-05-14

Respecting Self-Uncertainty in On-Policy Self-Distillation for Efficient LLM Reasoning

Source: arXiv cs.AI

arXiv:2605.13255v1 Announce Type: new Abstract: On-policy self-distillation trains a reasoning model on its own rollouts while a teacher, often the same model conditioned on privileged context, provides dense token-level supervision. Existing objectives typically weight the teacher's token-level...
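The setup in the abstract — a student trained on its own rollouts with dense, per-token supervision from a teacher distribution — can be sketched as a mean per-token KL objective. This is a minimal illustration under simplifying assumptions (toy vocabulary, precomputed probability vectors per position; the function names are hypothetical, not from the paper), not the paper's actual weighted objective:

```python
import math

def token_kl(p_teacher, q_student):
    """KL(teacher || student) at one token position over the vocabulary."""
    return sum(p * math.log(p / q) for p, q in zip(p_teacher, q_student) if p > 0)

def self_distill_loss(teacher_dists, student_dists):
    """Mean per-token KL across a rollout: dense token-level supervision.

    teacher_dists / student_dists: one probability vector per generated
    token, e.g. from the same model with and without privileged context.
    """
    kls = [token_kl(p, q) for p, q in zip(teacher_dists, student_dists)]
    return sum(kls) / len(kls)

# Toy rollout of two tokens over a two-symbol vocabulary.
teacher = [[0.9, 0.1], [0.6, 0.4]]
student = [[0.5, 0.5], [0.6, 0.4]]
loss = self_distill_loss(teacher, student)
```

The truncated sentence suggests the paper's contribution lies in how these per-token terms are weighted (respecting the teacher's own uncertainty), which the uniform mean above deliberately omits.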

Tags: arxiv, papers, reasoning