Research · 2026-05-14

Pretraining large language models with MXFP4 on Native FP4 Hardware

Source: arXiv cs.AI

arXiv:2605.09825v2 Announce Type: replace-cross

Abstract: Why does full-pipeline FP4 training of large language models often diverge, even when forward activations and activation gradients remain stable? We address this question through a controlled study of MXFP4 quantization in transformer...
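The excerpt only names MXFP4 without showing the paper's recipe. As a minimal sketch of what MXFP4 quantization of a tensor looks like under the OCP Microscaling definition (blocks of 32 FP4 E2M1 elements sharing one power-of-two scale), the NumPy fake-quantizer below is an illustrative assumption, not the authors' implementation; all function names are hypothetical.

```python
import numpy as np

# FP4 E2M1 representable magnitudes (per the OCP Microscaling spec).
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_quantize_block(x):
    """Fake-quantize one block of 32 values to MXFP4: a shared power-of-two
    scale plus per-element FP4 E2M1 codes, returned in dequantized form."""
    amax = np.max(np.abs(x))
    if amax == 0.0:
        return np.zeros_like(x)
    # Shared scale: power of two chosen so the block maximum lands near the
    # top of the FP4 range (largest representable magnitude is 6.0).
    shared_exp = np.floor(np.log2(amax)) - np.floor(np.log2(FP4_GRID[-1]))
    scale = 2.0 ** shared_exp
    scaled = x / scale
    # Round each scaled element to the nearest FP4 grid point; values above
    # 6.0 saturate to the largest code.
    idx = np.abs(np.abs(scaled)[:, None] - FP4_GRID[None, :]).argmin(axis=1)
    return np.sign(scaled) * FP4_GRID[idx] * scale

def mxfp4_quantize(x, block=32):
    """Apply MXFP4 fake-quantization along the last axis in blocks of 32.
    Assumes the total element count is divisible by the block size."""
    flat = x.reshape(-1, block)
    return np.stack([mxfp4_quantize_block(b) for b in flat]).reshape(x.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4, 64)).astype(np.float32)
    w_q = mxfp4_quantize(w)
    print("max abs quantization error:", np.max(np.abs(w - w_q)))
```

In a training pipeline such a fake-quantizer would be applied to weights, activations, or gradients ahead of matrix multiplies to emulate native FP4 hardware; the abstract's question concerns why the full pipeline, rather than forward activations or activation gradients alone, tends to diverge.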

arxivpapers