Research · 2026-05-06

Compress Then Adapt? No, Do It Together via Task-aware Union of Subspaces

Source: arXiv cs.AI

arXiv:2605.02829v1 · Announce Type: new

Abstract: Adapting large pretrained models to diverse tasks is now routine, yet the two dominant strategies, parameter-efficient fine-tuning (PEFT) and low-rank compression, are typically composed in sequence. This decoupled practice first compresses and then...
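Since the abstract is truncated before it describes the paper's joint, task-aware method, the sketch below only illustrates the decoupled "compress, then adapt" baseline it critiques: a pretrained weight is first compressed via truncated SVD, then fine-tuned with a separate LoRA-style adapter on the frozen compressed weight. This is a minimal sketch; the function names and hyperparameters (`rank`, `lora_rank`) are illustrative assumptions, not from the paper.

```python
# Sketch of the decoupled pipeline the abstract contrasts against:
# step 1 compresses, step 2 adapts -- the two steps never interact.
import torch

def compress_weight(W: torch.Tensor, rank: int) -> torch.Tensor:
    """Step 1: low-rank compression of a pretrained weight via truncated SVD."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]

class LoRALinear(torch.nn.Module):
    """Step 2: parameter-efficient fine-tuning (LoRA-style) applied
    afterwards, on top of the frozen, already-compressed weight."""

    def __init__(self, W_compressed: torch.Tensor, lora_rank: int = 8):
        super().__init__()
        out_f, in_f = W_compressed.shape
        # Compressed backbone weight stays frozen during adaptation.
        self.W = torch.nn.Parameter(W_compressed, requires_grad=False)
        # Only the low-rank adapter factors A and B are trainable.
        self.A = torch.nn.Parameter(torch.randn(lora_rank, in_f) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(out_f, lora_rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.W + self.B @ self.A).T

# Toy usage: compress a 512x512 weight to rank 64, then attach a rank-8 adapter.
W = torch.randn(512, 512)
layer = LoRALinear(compress_weight(W, rank=64), lora_rank=8)
y = layer(torch.randn(4, 512))  # gradients flow only through A and B
```

The paper's point, as far as the snippet goes, is that the compression step above is task-agnostic: the SVD subspace is chosen before any task signal is seen, so the adapter must work within (or around) a subspace that may have discarded task-relevant directions.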

arxivpapers