Research · 2026-05-05
TUR-DPO: Topology- and Uncertainty-Aware Direct Preference Optimization
Source: arXiv cs.AI
arXiv:2605.00224v1

Abstract: Aligning large language models (LLMs) with human preferences is commonly done via reinforcement learning from human feedback (RLHF) with Proximal Policy Optimization (PPO) or, more simply, via Direct Preference Optimization (DPO). While DPO is stable...
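For context on the baseline the abstract builds on: standard DPO (Rafailov et al., 2023) trains directly on preference pairs by maximizing the log-sigmoid of the gap between implicit rewards of the preferred and rejected responses, each measured as a log-probability ratio against a frozen reference model. The sketch below is a minimal PyTorch rendering of that standard objective, not of the paper's TUR-DPO variant (which is not detailed in the truncated abstract); the function name and signature are illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss (Rafailov et al., 2023) -- illustrative sketch.

    Each argument is a batch of summed token log-probabilities
    log pi(y|x) for the chosen (preferred) or rejected response,
    under the trainable policy or the frozen reference model.
    """
    # Implicit reward of each response, up to the factor beta:
    # log(pi_theta(y|x) / pi_ref(y|x)).
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * (reward_chosen - reward_rejected)),
    # averaged over the batch of preference pairs.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

Because the loss only needs per-response log-probabilities under two models, DPO avoids the reward model and on-policy sampling loop that PPO-based RLHF requires, which is the stability and simplicity advantage the abstract alludes to.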