BeClaude
Research · 2026-05-05

Rethinking Network Topologies for Cost-Effective Mixture-of-Experts LLM Serving

Source: Arxiv CS.AI

arXiv:2605.00254v1 (Announce Type: cross)

Abstract: Mixture-of-experts (MoE) architectures have turned LLM serving into a cluster-scale workload in which communication consumes a considerable portion of serving runtime. This has prompted industry to invest heavily in expensive high-bandwidth...
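To see why communication looms so large in expert-parallel MoE serving, a rough back-of-the-envelope estimate helps: each MoE layer typically performs a dispatch and a combine all-to-all, each moving about top-k activation vectors per token. The sketch below is purely illustrative; the hidden size, top-k, layer count, and throughput figures are assumptions for the example, not values from the paper.

```python
# Back-of-the-envelope estimate of all-to-all traffic in expert-parallel MoE
# inference. All parameters below are illustrative assumptions, not values
# taken from the paper.

def moe_alltoall_bytes_per_token(hidden_dim: int = 4096,
                                 top_k: int = 2,
                                 num_moe_layers: int = 32,
                                 bytes_per_elem: int = 2) -> int:
    """Bytes one token sends over the interconnect in a single forward pass.

    Each MoE layer does two all-to-all exchanges under expert parallelism:
    dispatch (activations routed to the ranks hosting the token's top-k
    experts) and combine (expert outputs routed back). Each exchange moves
    roughly top_k * hidden_dim activation values per token.
    """
    per_layer = 2 * top_k * hidden_dim * bytes_per_elem  # dispatch + combine
    return per_layer * num_moe_layers


if __name__ == "__main__":
    per_token = moe_alltoall_bytes_per_token()
    tokens_per_sec = 100_000  # assumed aggregate decode throughput of a cluster
    print(f"~{per_token / 1024:.0f} KiB of all-to-all traffic per token")
    print(f"~{per_token * tokens_per_sec / 1e9:.1f} GB/s sustained at "
          f"{tokens_per_sec} tokens/s")
```

With these assumed numbers, a single token generates on the order of 1 MiB of all-to-all traffic per forward pass, so a cluster decoding 100k tokens/s would sustain roughly 100 GB/s of routing traffic, which is the kind of pressure that motivates high-bandwidth interconnect investments.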

arxivpapers