BeClaude
Research · 2026-05-12

SlimQwen: Exploring the Pruning and Distillation in Large MoE Model Pre-training

Source: arXiv cs.AI

arXiv:2605.08738v1 · Announce Type: cross

Abstract: Structured pruning and knowledge distillation (KD) are typical techniques for compressing large language models, but it remains unclear how they should be applied at pretraining scale, especially to recent mixture-of-experts (MoE) models. In this...
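For context on the two generic techniques the abstract names, the sketch below shows a standard logit-based KD loss and a simple structured-pruning criterion for an MoE layer (keeping the experts with the highest average router utilization on a calibration batch). This is only an illustration of the general ideas, not the paper's method; the function names and the utilization-based selection rule are assumptions for the example.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Logit-based KD: KL divergence between temperature-softened distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradients on a scale comparable to the hard-label
    # cross-entropy term this loss is typically mixed with.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2


def select_experts_to_keep(router_probs: torch.Tensor, num_keep: int) -> torch.Tensor:
    """Illustrative structured pruning of an MoE layer by mean router utilization.

    router_probs: (num_tokens, num_experts) routing probabilities collected on a
    calibration batch. Returns the indices of the `num_keep` most-used experts;
    the remaining experts (and their parameters) would be dropped.
    """
    utilization = router_probs.mean(dim=0)  # average routing weight per expert
    return torch.topk(utilization, k=num_keep).indices
```

In a pruning-plus-distillation pipeline of this kind, the pruned student would then continue (pre)training with the KD loss against the original unpruned model as teacher, usually combined with the standard next-token cross-entropy loss.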
