Research | 2026-04-20
Protecting Language Models Against Unauthorized Distillation through Trace Rewriting
Source: arXiv cs.AI
arXiv:2602.15143v2 (announce type: replace)

Abstract: Knowledge distillation is a widely adopted technique for transferring capabilities from LLMs to smaller, more efficient student models. However, unauthorized use of knowledge distillation takes unfair advantage of the considerable effort and cost...