Research · 2026-05-05
Bring Your Own Prompts: Use-Case-Specific Bias and Fairness Evaluation for LLMs
Source: arXiv cs.AI
arXiv:2407.10853v5 (announce type: replace-cross)

Abstract: Bias and fairness risks in Large Language Models (LLMs) vary substantially across deployment contexts, yet existing approaches lack systematic guidance for selecting appropriate evaluation metrics. We present a decision framework that maps...
Tags: arxiv, papers, prompting