Research 2026-04-23

Task-Stratified Knowledge Scaling Laws for Post-Training Quantized Large Language Models

Source: Arxiv CS.AI

arXiv:2508.18609v4 Announce Type: replace-cross Abstract: Post-Training Quantization (PTQ) is a critical strategy for the efficient deployment of Large Language Models (LLMs). However, existing scaling laws focus primarily on general performance, overlooking crucial fine-grained factors and how...
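To make the PTQ setting concrete, here is a minimal sketch of the core idea behind weight quantization: map float weights to low-bit integers plus a scale, then dequantize at inference. This is a generic symmetric per-tensor int8 scheme for illustration only, not the method or scaling-law analysis from the paper; the function names and the NumPy toy weights are assumptions.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q.

    Illustrative sketch only; real PTQ pipelines typically use per-channel
    scales and calibration data.
    """
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float weight matrix from the int8 codes."""
    return q.astype(np.float32) * scale

# Toy weight matrix standing in for an LLM layer's weights (assumption).
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Rounding error per weight is bounded by half a quantization step.
err = float(np.max(np.abs(w - w_hat)))
print(err <= scale / 2 + 1e-6)
```

The quality loss such a scheme introduces, and how it interacts with model size, data, and task type, is exactly what task-stratified scaling laws for quantized models aim to characterize.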

arxivpapers