2026-05-08

Asymmetric On-Policy Distillation: Bridging Exploitation and Imitation at the Token Level

Source: arXiv cs.AI

arXiv:2605.06387v1 Announce Type: cross

Abstract: On-policy distillation (OPD) trains a student on its own trajectories with token-level teacher feedback and often outperforms off-policy distillation and standard reinforcement learning. However, we find that its standard advantage-weighted policy...
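The token-level feedback loop the abstract describes can be illustrated with a minimal sketch. This is not the paper's method, only a common formulation of on-policy distillation: the student samples a trajectory, and at each token position the loss is a divergence between the student's and teacher's next-token distributions. All function names here are hypothetical, and the choice of reverse KL is an assumption for illustration.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def token_kl(student_logits, teacher_logits):
    """Per-token reverse KL(student || teacher) -- an assumed
    divergence choice, not necessarily the paper's objective."""
    p = softmax(student_logits)
    q = softmax(teacher_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def opd_loss(student_traj_logits, teacher_traj_logits):
    """Average token-level divergence over one trajectory that was
    sampled from the student (the 'on-policy' part of OPD)."""
    kls = [token_kl(s, t)
           for s, t in zip(student_traj_logits, teacher_traj_logits)]
    return sum(kls) / len(kls)
```

When the student already matches the teacher at every position, the loss is zero; any disagreement at any token position yields a strictly positive loss, which is what makes the teacher signal dense at the token level rather than sparse at the sequence level.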
