Research 2026-05-06

CoSpaDi: Compressing LLMs via Calibration-Guided Sparse Dictionary Learning

Source: arXiv cs.AI

arXiv:2509.22075v5 | Announce type: replace-cross

Abstract: Post-training compression of large language models (LLMs) often relies on low-rank weight approximations that represent each column of the weight matrix in a shared low-dimensional subspace. This strategy is computationally efficient, but the...
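A minimal sketch of the low-rank baseline the abstract refers to (not the paper's CoSpaDi method): a truncated SVD forces every column of a weight matrix into the same r-dimensional subspace. All sizes and names here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight matrix standing in for one LLM layer (sizes illustrative).
d_out, d_in, r = 64, 128, 8
W = rng.standard_normal((d_out, d_in))

# Low-rank approximation: truncated SVD represents every column of W in the
# SAME r-dimensional subspace spanned by the top-r left singular vectors.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_lr = (U[:, :r] * s[:r]) @ Vt[:r, :]

rel_err = np.linalg.norm(W - W_lr) / np.linalg.norm(W)
print(f"rank-{r} relative error: {rel_err:.3f}")
```

This is the shared-subspace constraint the abstract points to as a limitation; sparse dictionary learning instead lets each column select its own small subset of dictionary atoms.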

arxivpapers