BeClaude
Research 2026-05-07

S2O: Early Stopping for Sparse Attention via Online Permutation

Source: Arxiv CS.AI

arXiv:2602.22575v2 Announce Type: replace-cross

Abstract: Attention scales quadratically with sequence length, fundamentally limiting long-context inference. Existing block-granularity sparsification can reduce latency, but coarse blocks impose an intrinsic sparsity ceiling, making further...
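To make the abstract's two points concrete, the sketch below contrasts dense attention, whose score matrix grows quadratically with sequence length, against a block-granular sparse variant that can only drop whole key blocks, which is the "sparsity ceiling" the abstract refers to. This is a minimal illustration, not the paper's S2O method; the function names, block size, and keep ratio are all illustrative assumptions.

```python
# Illustrative sketch (not from the paper): dense attention vs. a
# block-granular sparsity mask. Names and hyperparameters are assumptions.
import numpy as np

def dense_attention(q, k, v):
    # Full n x n score matrix: the quadratic cost the abstract describes.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def block_sparse_attention(q, k, v, block=64, keep_ratio=0.25):
    # Score whole key blocks and keep only the top fraction per query block.
    # Sparsity can never be finer than one block, hence the intrinsic ceiling.
    n, d = q.shape
    nb = n // block
    qb = q.reshape(nb, block, d).mean(axis=1)   # block-level query summaries
    kb = k.reshape(nb, block, d).mean(axis=1)   # block-level key summaries
    block_scores = qb @ kb.T                    # nb x nb block-to-block scores
    keep = max(1, int(keep_ratio * nb))
    out = np.zeros_like(q)
    for i in range(nb):
        top = np.argsort(block_scores[i])[-keep:]        # selected key blocks
        idx = np.concatenate(
            [np.arange(b * block, (b + 1) * block) for b in top])
        rows = slice(i * block, (i + 1) * block)
        # Dense attention restricted to the kept key/value blocks.
        out[rows] = dense_attention(q[rows], k[idx], v[idx])
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 256, 32
    q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
    print(dense_attention(q, k, v).shape, block_sparse_attention(q, k, v).shape)
```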

arxivpapers