
Multi-Rollout On-Policy Distillation via Peer Successes and Failures

Source: arXiv cs.AI

arXiv:2605.12652v1 Announce Type: cross

Abstract: Large language models are often post-trained with sparse verifier rewards, which indicate whether a sampled trajectory succeeds but provide limited guidance about where reasoning succeeds or fails. On-policy distillation (OPD) offers denser...
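The contrast the abstract draws can be made concrete: a verifier yields one scalar per sampled trajectory, while on-policy distillation scores every token of the student's own rollout against a teacher. The sketch below is a minimal illustration of that difference, not the paper's method; the function names, tensor shapes, and the use of per-token reverse KL are all assumptions for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def sparse_verifier_reward(trajectory_correct: bool) -> float:
    # One scalar for the whole trajectory: says nothing about
    # *where* the reasoning went wrong.
    return 1.0 if trajectory_correct else 0.0

def dense_distillation_signal(student_logits, teacher_logits):
    # Hypothetical per-token signal: reverse KL(student || teacher)
    # at each position of the student-sampled trajectory.
    # Inputs: (T, V) logits; output: (T,) non-negative divergences,
    # one per token instead of one per trajectory.
    s = softmax(student_logits)
    t = softmax(teacher_logits)
    return np.sum(s * (np.log(s) - np.log(t)), axis=-1)

rng = np.random.default_rng(0)
student = rng.normal(size=(5, 8))   # 5 tokens, vocab of 8 (toy sizes)
teacher = rng.normal(size=(5, 8))
per_token = dense_distillation_signal(student, teacher)
print(per_token.shape)          # one divergence per token: (5,)
print(sparse_verifier_reward(False))  # single scalar: 0.0
```

The point of the toy: the verifier collapses the whole rollout to one bit, whereas the distillation signal localizes credit to individual positions, which is the denser guidance the abstract refers to.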
