Research · 2026-05-12
The Grounding Gap: How LLMs Anchor the Meaning of Abstract Concepts Differently from Humans
Source: arXiv cs.AI
arXiv:2605.08837v1 (announce type: cross)

Abstract: Abstract concepts - justice, theory, availability - have no single perceivable referent; in the human brain, their meaning emerges from a web of experiences, affect, and social context. Do large language models (LLMs) ground abstract concepts in a...