BeClaude
Research 2026-04-28

Learning to Rotate: Temporal and Semantic Rotary Encoding for Sequential Modeling

Source: Arxiv CS.AI

arXiv:2604.24717v1 (Announce Type: new)

Abstract: Every Transformer architecture dedicates enormous capacity to learning rich representations in semantic embedding space -- yet the rotation manifold acted on by Rotary Positional Embeddings (RoPE) has been treated as a fixed, hand-crafted structure,...
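For reference, the fixed, hand-crafted structure the abstract contrasts against is standard RoPE: each consecutive pair of embedding dimensions is rotated by an angle proportional to the token position, with per-pair frequencies following the usual base^(-2i/d) schedule. A minimal NumPy sketch of that baseline (not the paper's learned variant; the function name and shapes are illustrative assumptions):

```python
import numpy as np

def rope_rotate(x, positions, base=10000.0):
    """Standard RoPE: rotate each consecutive dimension pair of x
    by angle positions * base**(-2i/d) for pair index i.

    x:         (seq, d) array with d even
    positions: (seq,) array of token positions
    """
    d = x.shape[-1]
    assert d % 2 == 0, "RoPE rotates dimension pairs, so d must be even"
    half = d // 2
    freqs = base ** (-np.arange(half) * 2.0 / d)   # (half,) per-pair frequencies
    angles = positions[:, None] * freqs[None, :]   # (seq, half) rotation angles
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]            # even/odd halves of each pair
    out = np.empty_like(x, dtype=float)
    out[..., 0::2] = x1 * cos - x2 * sin           # 2D rotation applied pairwise
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

The defining property, which motivates treating the rotation structure as learnable at all, is that the inner product of two rotated vectors depends only on their relative offset: rotating q at position m and k at position n gives the same score as positions m+s and n+s for any shift s.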

arxivpapers