BeClaude
Research · 2026-05-11

Amortized-Precision Quantization for Early-Exit Vision Transformers

Source: Arxiv CS.AI

arXiv:2605.07317v1 (Announce Type: cross)

Abstract: Vision Transformers (ViTs) achieve strong performance across vision tasks, yet their deployment with low-precision early exiting remains fragile. Existing quantization methods assume static full-depth execution, making them unstable when exit...
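The paper's details are truncated here, but the core mechanism it builds on, confidence-based early exiting, is easy to illustrate. The sketch below is not the paper's method; it is a minimal, generic example of a model that attaches a classifier head after each block and stops as soon as the softmax confidence clears a threshold. All names (`blocks`, `heads`, `early_exit_inference`) and the toy scalar "blocks" are hypothetical, for illustration only.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def early_exit_inference(blocks, heads, x, threshold=0.9):
    """Run blocks sequentially; after each block, its classifier head
    produces logits. Exit as soon as max softmax probability >= threshold.
    Returns (probabilities, depth_used). Assumes at least one block."""
    for depth, (block, head) in enumerate(zip(blocks, heads), start=1):
        x = block(x)
        probs = softmax(head(x))
        if max(probs) >= threshold:
            return probs, depth  # confident enough: exit early
    return probs, depth  # fell through: full-depth prediction

# Toy demo: each "block" nudges a scalar feature; the head turns it into
# two class logits. Confidence grows with depth, triggering an early exit.
blocks = [lambda v: v + 2.0] * 3
heads = [lambda v: [v, 4.0 - v]] * 3
probs, depth = early_exit_inference(blocks, heads, 0.0, threshold=0.9)
```

In a quantized deployment, the fragility the abstract alludes to comes from the fact that each exit head sees intermediate activations whose error statistics differ by depth, so a single static quantization scheme calibrated for full-depth execution need not hold at every exit point.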

Tags: arxiv, papers, vision