Research · 2026-04-23

LaplacianFormer: Rethinking Linear Attention with Laplacian Kernel

Source: arXiv cs.AI

arXiv:2604.20368v1 (announce type: cross)

Abstract: The quadratic complexity of softmax attention presents a major obstacle to scaling Transformers to high-resolution vision tasks. Existing linear attention variants often replace the softmax with Gaussian kernels to reduce complexity, but such...
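The abstract is cut off before the paper's construction is described, so the following is only a minimal NumPy sketch of the generic kernelized linear-attention trick such work builds on: replace the softmax with a feature map phi and use associativity to compute phi(Q)(phi(K)^T V) in time linear in sequence length. One standard way to connect this to the Laplacian kernel exp(-||x - y||_1 / sigma) is random Fourier features with Cauchy-sampled frequencies (via Bochner's theorem). Every name, hyperparameter, and design choice below is illustrative, not the paper's method.

```python
# Minimal sketch (NOT the paper's implementation): kernelized linear attention.
# By Bochner's theorem, the Laplacian kernel exp(-||x - y||_1 / sigma) has a
# Cauchy spectral density, so Cauchy-sampled random Fourier features
# approximate it. All names and hyperparameters here are assumptions.
import numpy as np

def feature_map(x, w, b):
    # Random Fourier features: E[phi(x) . phi(y)] ~= k(x, y).
    m = w.shape[1]
    return np.sqrt(2.0 / m) * np.cos(x @ w + b)

def linear_attention(q, k, v, num_features=256, sigma=1.0, seed=0):
    # O(n) attention: form phi(K)^T V first instead of the n x n matrix Q K^T.
    d = q.shape[1]
    rng = np.random.default_rng(seed)
    # Cauchy-distributed frequencies correspond to the Laplacian kernel.
    w = rng.standard_cauchy(size=(d, num_features)) / sigma
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)

    phi_q = feature_map(q, w, b)   # (n, m)
    phi_k = feature_map(k, w, b)   # (n, m)

    kv = phi_k.T @ v                      # (m, d_v): linear in sequence length
    z = phi_q @ phi_k.sum(axis=0)         # row normalizers (kernel-sum estimates)
    # The Monte-Carlo kernel estimate can dip below zero; clamp for stability.
    z = np.maximum(z, 1e-6)
    return (phi_q @ kv) / z[:, None]

# Shape check with random data: 128 tokens, 32-dim heads.
q, k, v = (np.random.randn(128, 32) for _ in range(3))
print(linear_attention(q, k, v).shape)  # (128, 32)
```

The cost is O(n * m * d) rather than the O(n^2 * d) of softmax attention. Positive feature maps (e.g., the elu(x) + 1 map of linear Transformers) avoid the clamping step above; cos-based features are used here only because they connect directly to a kernel like the Laplacian one the title names.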

Tags: arxivpapers