Research · 2026-05-12
When Agents Say One Thing and Do Another: Validating Elicited Beliefs from LLMs
Source: arXiv cs.AI
arXiv:2602.06286v2 (announce type: replace)

Abstract: Large language models (LLMs) are increasingly deployed in high-stakes settings where good decisions require forming beliefs over the probability of unknown outcomes. However, it is unclear whether LLMs act as if they hold coherent beliefs when...
Tags: arxiv, papers, agents