Research 2026-05-08
Taming the Entropy Cliff: Variable Codebook Size Quantization for Autoregressive Visual Generation
Source: Arxiv CS.AI
arXiv:2605.06207v1 Announce Type: cross Abstract: Most discrete visual tokenizers rely on a default design: every position in the sequence shares the same codebook. A common remedy is to scale the codebook size $K$ for better reconstruction performance. Such a constant-codebook design hits a...
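The constant-codebook design the abstract describes can be sketched with standard nearest-neighbor vector quantization, contrasted with a variable-size variant in which each sequence position gets its own codebook size. This is an illustrative sketch only; the sizes, shapes, and the `quantize` helper are assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(z, codebook):
    """Nearest-neighbor lookup: map one latent vector to its closest code."""
    # z: (d,), codebook: (K, d)
    dists = np.sum((codebook - z) ** 2, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

d = 8
# Hypothetical per-position codebook sizes (the variable-K idea); a
# constant-codebook tokenizer would instead use one shared K everywhere.
sizes = [256, 128, 64, 32]  # illustrative values only
codebooks = [rng.normal(size=(K, d)) for K in sizes]

latents = rng.normal(size=(len(sizes), d))  # one latent per sequence position
tokens = [quantize(z, cb)[0] for z, cb in zip(latents, codebooks)]
print(tokens)  # one discrete token index per position, bounded by its own K
```

Each position's token index is drawn from a codebook of a different size, so the per-position entropy budget can vary rather than being fixed by a single global $K$.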