BeClaude
Research · 2026-04-28

The Alignment Target Problem: Divergent Moral Judgments of Humans, AI Systems, and Their Designers

Source: arXiv cs.AI

arXiv:2604.24155v1 (cross-listed)

Abstract: The quest to align machine behavior with human values raises fundamental questions about the moral frameworks that should govern AI decision-making. Much alignment research assumes that the appropriate benchmark is how humans themselves would act in...

Tags: arxivpapers