Research · 2026-04-28
Pref-CTRL: Preference Driven LLM Alignment using Representation Editing
Source: arXiv cs.AI
arXiv:2604.23543v1 Announce Type: cross Abstract: Test-time alignment methods offer a promising alternative to fine-tuning by steering the outputs of large language models (LLMs) at inference time with lightweight interventions on their internal representations. Recently, a prominent and effective...
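The abstract describes steering an LLM at inference time by applying lightweight interventions to its internal representations. As a generic illustration of that idea (not Pref-CTRL's actual algorithm, which the abstract does not detail), the sketch below adds a preference-derived steering vector to a layer's hidden activations during the forward pass; the function names, the linear toy layer, and the scaling coefficient `alpha` are all illustrative assumptions.

```python
# Generic sketch of test-time representation editing: instead of fine-tuning
# weights, we intercept a layer's hidden state and shift it along a steering
# vector. All names here (toy_layer, steering vector values, alpha) are
# hypothetical; real methods derive the vector from preference data.

def toy_layer(x):
    """Stand-in for one transformer layer: a fixed elementwise transform."""
    return [2.0 * v + 1.0 for v in x]

def edit_representation(hidden, steer, alpha=1.0):
    """Shift hidden activations along a steering direction: h' = h + alpha * s."""
    return [h + alpha * s for h, s in zip(hidden, steer)]

def forward_with_steering(x, steer, alpha=1.0):
    """Run the toy layer, then apply the test-time intervention to its output."""
    hidden = toy_layer(x)
    return edit_representation(hidden, steer, alpha)

if __name__ == "__main__":
    x = [0.0, 1.0, -1.0]
    steer = [0.5, -0.5, 0.0]        # hypothetical preference direction
    print(forward_with_steering(x, steer, alpha=2.0))
```

Because the model weights are untouched, the intervention is cheap and reversible: setting `alpha=0` recovers the unmodified forward pass, which is why such edits are attractive as a test-time alternative to fine-tuning.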