BeClaude
Research · 2026-05-14

Preserve-Then-Quantize: Balancing Rank Budgets for Quantization Error Reconstruction in LLMs

Source: Arxiv CS.AI

arXiv:2602.02001v2 Announce Type: replace-cross Abstract: Quantization Error Reconstruction (QER) reduces accuracy loss in Post-Training Quantization (PTQ) by approximating weights as $\mathbf{W} \approx \mathbf{Q} + \mathbf{L}\mathbf{R}$, using a rank-$r$ correction to reconstruct quantization...
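The decomposition in the abstract can be sketched numerically: quantize $\mathbf{W}$, then take a rank-$r$ truncated SVD of the quantization error $\mathbf{E} = \mathbf{W} - \mathbf{Q}$ to obtain the low-rank correction $\mathbf{L}\mathbf{R}$. This is a minimal illustration assuming a simple symmetric uniform quantizer; the function names (`quantize`, `qer_correction`) and the quantizer itself are illustrative stand-ins, not the paper's actual method.

```python
import numpy as np

def quantize(W, n_bits=4):
    # Illustrative symmetric uniform quantizer (a stand-in for the
    # paper's actual PTQ scheme).
    levels = 2 ** (n_bits - 1) - 1
    scale = np.abs(W).max() / levels
    return np.round(W / scale) * scale

def qer_correction(W, Q, r):
    # Rank-r approximation of the quantization error E = W - Q via
    # truncated SVD, yielding W ≈ Q + L @ R.
    E = W - Q
    U, S, Vt = np.linalg.svd(E, full_matrices=False)
    L = U[:, :r] * np.sqrt(S[:r])          # shape (m, r)
    R = np.sqrt(S[:r])[:, None] * Vt[:r]   # shape (r, n)
    return L, R

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
Q = quantize(W)
L, R = qer_correction(W, Q, r=8)

# The rank-r SVD correction can only reduce the Frobenius-norm error.
err_plain = np.linalg.norm(W - Q)
err_qer = np.linalg.norm(W - (Q + L @ R))
assert err_qer <= err_plain
```

Because the truncated SVD is the best rank-$r$ approximation of $\mathbf{E}$ in Frobenius norm (Eckart–Young), the corrected error is never larger than the uncorrected one; how to allocate the rank budget $r$ is the balancing question the paper addresses.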

arxivpapers