Research · 2026-05-12
Pseudo-Deliberation in Language Models: When Reasoning Fails to Align Values and Actions
Source: arXiv cs.AI
arXiv:2605.09893v1 Announce Type: cross Abstract: Large language models (LLMs) are often evaluated on their stated values, yet these values do not reliably translate into their actions, a discrepancy termed the "value-action gap." In this work, we argue that this gap persists even under explicit...
Tags: arxiv, papers, reasoning