Research 2026-05-12
CoreQ: Learning-Free Mismatch Correction and Successive Rounding for Quantization
Source: arXiv cs.AI
arXiv:2602.05902v2 (replace-cross). Abstract: Post-training quantization (PTQ) enables efficient deployment of large language models by mapping pretrained weights to low-bit formats without retraining, typically using a small calibration set to minimize a layer-wise calibration...
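The abstract's setup can be sketched in code: this is a minimal, hypothetical illustration of uniform round-to-nearest weight quantization and the layer-wise calibration error a PTQ method would minimize on a small calibration set. It is not CoreQ's algorithm (the abstract is truncated before the method is described); the function names, bit width, and data shapes are assumptions for illustration only.

```python
import numpy as np

def quantize_rtn(w: np.ndarray, bits: int = 4):
    """Uniform symmetric round-to-nearest quantization of a weight matrix.

    Hypothetical baseline quantizer, not the paper's method.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax          # per-tensor scale (simplest choice)
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale, scale                  # dequantized weights and scale

def layerwise_calib_error(w: np.ndarray, w_q: np.ndarray, x: np.ndarray) -> float:
    """Layer-wise calibration error ||X W - X W_q||_F^2 over calibration inputs X."""
    return float(np.linalg.norm(x @ w - x @ w_q) ** 2)

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32))            # pretrained layer weights (toy size)
x = rng.standard_normal((16, 64))            # small calibration set: 16 inputs
w_q, scale = quantize_rtn(w, bits=4)
err = layerwise_calib_error(w, w_q, x)
```

A PTQ method in this framing searches for quantized weights that drive `err` down for each layer, using only the calibration inputs and no retraining of the model.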
Tags: arxivpapers