2026-05-01
Compliance versus Sensibility: On the Reasoning Controllability in Large Language Models
Source: arXiv cs.AI
arXiv:2604.27251v1 | Announce Type: cross
Abstract: Large Language Models (LLMs) are known to acquire reasoning capabilities through shared inference patterns in pre-training data, which are further elicited via Chain-of-Thought (CoT) practices. However, whether fundamental reasoning patterns, such...
Tags: arxiv, papers, reasoning