BeClaude Research, 2026-04-20

Polarization by Default: Auditing Recommendation Bias in LLM-Based Content Curation

Source: arXiv cs.AI

arXiv:2604.15937v1 Announce Type: cross Abstract: Large Language Models (LLMs) are increasingly deployed to curate and rank human-created content, yet the nature and structure of their biases in these tasks remain poorly understood: which biases are robust across providers and platforms, and which...
