Research · 2026-05-12
Pretraining large language models with MXFP4
Source: Arxiv CS.AI
arXiv:2605.09825v1 (announce type: cross). Abstract: Why does full-pipeline FP4 training of large language models often diverge, even when forward activations and activation gradients remain stable? We address this question through a controlled study of MXFP4 quantization in transformer training,...
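For context on the format the abstract refers to: MXFP4 is the OCP Microscaling variant in which blocks of 32 elements share a single power-of-two scale, and each element is stored as a 4-bit E2M1 float. The sketch below is a generic NumPy fake-quantizer illustrating that block structure; it is an assumption-laden illustration of the MX format itself, not the training method studied in the paper, and the function name `quantize_mxfp4` is hypothetical.

```python
import numpy as np

# Representable magnitudes of the FP4 E2M1 element format (max value 6.0).
FP4_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_GRID = np.concatenate([-FP4_E2M1[::-1], FP4_E2M1])

def quantize_mxfp4(x, block=32):
    """Fake-quantize an array to MXFP4: per-block power-of-two scale,
    elements rounded to the nearest E2M1 value. Illustrative sketch only."""
    shape = x.shape
    xb = x.reshape(-1, block)
    amax = np.abs(xb).max(axis=1, keepdims=True)
    # Shared scale exponent: floor(log2(amax)) minus the element format's
    # max exponent (2 for E2M1), so the largest element maps near 6.0.
    exp = np.floor(np.log2(np.where(amax > 0, amax, 1.0))) - 2
    scale = 2.0 ** exp
    scaled = xb / scale
    # Round each element to the nearest representable E2M1 value.
    idx = np.abs(scaled[..., None] - FP4_GRID).argmin(axis=-1)
    return (FP4_GRID[idx] * scale).reshape(shape)
```

Values already on the grid survive round-trip exactly; everything else snaps to one of 16 codes per block, which is the coarseness the paper's stability question is about.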