Research · 2026-04-17
Can Cross-Layer Transcoders Replace Vision Transformer Activations? An Interpretable Perspective on Vision
Source: arXiv cs.AI
arXiv:2604.13304v1 Announce Type: cross Abstract: Understanding the internal activations of Vision Transformers (ViTs) is critical for building interpretable and trustworthy models. While Sparse Autoencoders (SAEs) have been used to extract human-interpretable features, they operate on individual...
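For readers unfamiliar with the Sparse Autoencoders the abstract mentions, a minimal sketch is shown below: an overcomplete, L1-penalized autoencoder that reconstructs activations from a single ViT layer. The class name, dimensions, and loss coefficient are illustrative assumptions, not details from the paper.

```python
# Minimal sparse autoencoder over ViT token activations (illustrative sketch only).
# d_model, d_dict, and l1_coef are assumed values, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 768, d_dict: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)   # overcomplete feature dictionary
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, acts: torch.Tensor):
        # acts: (batch, tokens, d_model) activations from one ViT layer
        features = F.relu(self.encoder(acts))        # sparse, inspectable feature codes
        recon = self.decoder(features)               # reconstruction of the activations
        return recon, features

def sae_loss(recon, acts, features, l1_coef: float = 1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparse features
    return F.mse_loss(recon, acts) + l1_coef * features.abs().mean()
```

As the abstract notes, such an autoencoder operates on activations from an individual layer; the paper asks whether cross-layer transcoders can go further and stand in for the ViT activations themselves.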
Tags: arxiv · papers · vision