BeClaude
Research · 2026-05-12

RLearner-LLM: Balancing Logical Grounding and Fluency in Large Language Models via Hybrid Direct Preference Optimization

Source: arXiv cs.AI

arXiv:2605.04539v2 Announce Type: replace-cross Abstract: Direct Preference Optimization (DPO), an efficient alternative to PPO-based RLHF, falls short on knowledge-intensive generation: standard preference signals from human annotators or LLM judges exhibit a systematic verbosity bias that...
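For background on the objective the paper builds on: the sketch below shows the standard DPO loss (Rafailov et al., 2023), which trains the policy directly on preference pairs without a separate reward model. The paper's "hybrid" variant is not detailed in the truncated abstract, so this reflects only vanilla DPO; the function name, tensor shapes, and toy values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss, not the paper's hybrid variant.

    Each argument is a 1-D tensor of summed token log-probabilities for a
    batch of (chosen, rejected) completion pairs, under the trained policy
    and under the frozen reference model respectively.
    """
    # Implicit rewards: beta-scaled log-ratio of policy to reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the reward margin pushes the policy to prefer
    # the chosen completion over the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities (hypothetical values).
torch.manual_seed(0)
lp = torch.randn(4)
print(dpo_loss(lp, lp - 0.5, lp - 0.1, lp - 0.2).item())
```

The verbosity bias the abstract mentions arises upstream of this loss: if annotators or LLM judges systematically prefer longer answers, the (chosen, rejected) pairs fed into it encode length rather than correctness, and the margin term optimizes for that bias.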
