BeClaude Research
2026-04-22

$R^2$-dLLM: Accelerating Diffusion Large Language Models via Spatio-Temporal Redundancy Reduction

Source: arXiv cs.AI

arXiv:2604.18995v1 | Announce Type: cross

Abstract: Diffusion Large Language Models (dLLMs) have emerged as a promising alternative to autoregressive generation by enabling parallel token prediction. However, practical dLLM decoding still suffers from high inference latency, which limits deployment. ...
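The "parallel token prediction" the abstract contrasts with autoregressive decoding can be illustrated with a toy sketch: start from a fully masked sequence and, at each denoising step, commit several of the most confident positions at once instead of one token at a time. This is a minimal illustration of the general dLLM decoding idea, not the paper's method; `toy_model`, `MASK`, and `tokens_per_step` are all hypothetical names invented for this sketch.

```python
MASK = -1  # placeholder id for a still-masked position (illustrative)

def toy_model(tokens):
    """Stand-in for a dLLM denoiser: returns a (prediction, confidence)
    pair for every position. Here it trivially predicts the position
    index, with confidence decreasing along the sequence."""
    return [(i, 1.0 / (1 + i)) for i in range(len(tokens))]

def diffusion_decode(length, tokens_per_step=2):
    """Iteratively fill MASK positions, unmasking several per step in
    parallel -- the source of dLLMs' speed advantage over strictly
    left-to-right autoregressive generation."""
    seq = [MASK] * length
    steps = 0
    while MASK in seq:
        preds = toy_model(seq)
        # Rank the still-masked positions by model confidence and
        # commit the top-k of them simultaneously.
        masked = [i for i, t in enumerate(seq) if t == MASK]
        masked.sort(key=lambda i: preds[i][1], reverse=True)
        for i in masked[:tokens_per_step]:
            seq[i] = preds[i][0]
        steps += 1
    return seq, steps
```

With `tokens_per_step=2`, a length-6 sequence resolves in 3 steps rather than the 6 an autoregressive decoder would need; the paper's contribution targets the remaining latency of these iterative steps via spatio-temporal redundancy reduction.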

Tags: arxiv, papers, image-generation