Research · 2026-05-14
Learning Transferable Latent User Preferences for Human-Aligned Decision Making
Source: arXiv cs.AI
arXiv:2605.12682v1 Abstract: Large language models (LLMs) are increasingly used as reasoning modules in many applications. While they are effective at certain tasks, LLMs often struggle to produce human-aligned solutions. Human-aligned decision making requires accounting for both...