Research · 2026-04-30
reward-lens: A Mechanistic Interpretability Library for Reward Models
Source: arXiv cs.AI
arXiv:2604.26130v1 (Announce Type: cross)

Abstract: Every RLHF-trained language model is shaped by a reward model, yet the mechanistic interpretability toolkit -- logit lens, direct logit attribution, activation patching, sparse autoencoders -- was built for generative LLMs whose primitives all...
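To make the contrast concrete: a generative LLM's logit lens projects intermediate activations through the unembedding matrix, whereas a reward model ends in a scalar value head. The sketch below illustrates one way a logit-lens-style readout can be adapted to that setting by reusing the reward head on every layer's hidden state. It is an illustration of the general idea only, not the reward-lens library's API; the GPT-2 backbone with a freshly initialized `num_labels=1` head stands in for a real reward-model checkpoint, and `model.score` is the head attribute of Hugging Face's `GPT2ForSequenceClassification`.

```python
# Sketch: per-layer reward estimates, a logit-lens analogue for a scalar reward head.
# Assumptions: GPT-2 stand-in backbone, sequence-classification packaging of the
# reward model (num_labels=1), single unbatched input. Not the reward-lens API.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "gpt2"  # stand-in; a trained reward checkpoint would be loaded in practice
tok = AutoTokenizer.from_pretrained(name)
tok.pad_token = tok.eos_token
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=1)
model.config.pad_token_id = tok.pad_token_id
model.eval()

enc = tok("Prompt and candidate response to be scored.", return_tensors="pt")
with torch.no_grad():
    out = model(**enc, output_hidden_states=True)

# GPT2ForSequenceClassification scores the final token with a single linear head
# (`model.score`). Reusing that head on each layer's hidden state shows how the
# reward estimate develops with depth. (A fuller lens would also apply the final
# layer norm to intermediate states before the head.)
last_idx = enc["attention_mask"].sum(dim=1).item() - 1  # position of final token
for layer, h in enumerate(out.hidden_states):
    reward = model.score(h[0, last_idx])  # scalar reward estimate at this layer
    print(f"layer {layer:2d}: reward estimate {reward.item():+.3f}")
```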