Research · 2026-05-08
OPSD Compresses What RLVR Teaches: A Post-RL Compaction Stage for Reasoning Models
Source: arXiv cs.AI
arXiv:2605.06188v1 (announce type: new)

Abstract: On-Policy Self-Distillation (OPSD) has recently emerged as an alternative to Reinforcement Learning with Verifiable Rewards (RLVR), promising higher accuracy and shorter responses through token-level credit assignment from a self-teacher conditioned...
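The abstract is truncated, but the mechanism it names is concrete enough to sketch: sample responses on-policy from the current model, score each token against a "self-teacher" (the same model given richer conditioning, e.g., a verified solution), and train on a per-token distillation loss rather than a single scalar reward. Below is a minimal PyTorch sketch of such a token-level loss; the function name, the choice of KL(teacher || student), the detached teacher, and the masking convention are all illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch of a token-level on-policy self-distillation loss.
# Assumption: the teacher is the same model conditioned on extra context
# (the truncated abstract does not specify the conditioning signal or the
# loss direction, so everything below is illustrative).
import torch
import torch.nn.functional as F

def opsd_token_loss(student_logits, teacher_logits, response_mask):
    """Per-token KL(teacher || student), averaged over response tokens.

    student_logits: (B, T, V) logits from the policy on its own samples.
    teacher_logits: (B, T, V) logits from the same model given the richer
                    conditioning context (detached: the teacher provides
                    targets, not gradients, in this sketch).
    response_mask:  (B, T) 1.0 on sampled response tokens, 0.0 elsewhere.
    """
    teacher_logp = F.log_softmax(teacher_logits.detach(), dim=-1)
    student_logp = F.log_softmax(student_logits, dim=-1)
    # KL(p_t || p_s) per token: sum_v p_t(v) * (log p_t(v) - log p_s(v)).
    per_token_kl = (teacher_logp.exp() * (teacher_logp - student_logp)).sum(-1)
    # Each token carries its own signal -- the token-level credit assignment
    # the abstract contrasts with RLVR's single sequence-level reward.
    return (per_token_kl * response_mask).sum() / response_mask.sum().clamp(min=1.0)

if __name__ == "__main__":
    B, T, V = 2, 8, 32
    student = torch.randn(B, T, V, requires_grad=True)
    teacher = torch.randn(B, T, V)
    mask = torch.ones(B, T)
    loss = opsd_token_loss(student, teacher, mask)
    loss.backward()
    print(f"token-level distillation loss: {loss.item():.4f}")
```

The per-token loss is what lets shorter responses emerge naturally: low-value tokens get penalized individually instead of being credited wholesale whenever the final answer verifies.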
Tags: arxiv, papers, reasoning