BeClaude Research
2026-05-06

Distilling Long-CoT Reasoning through Collaborative Step-wise Multi-Teacher Decoding

Source: Arxiv CS.AI

arXiv:2605.02290v1 (new)

Abstract: Distilling large reasoning models is essential for making Long-CoT reasoning practical, as full-scale inference remains computationally prohibitive. Existing curation-based approaches select complete reasoning traces post-hoc, overlooking...

Tags: arxiv, papers, reasoning