Research 2026-05-12
MESD: A Risk-Sensitive Metric for Explanation Fairness Across Intersectional Subgroups
Source: Arxiv CS.AI
arXiv:2603.13452v2 Announce Type: replace Abstract: Fairness in machine learning is predominantly evaluated through outcome-oriented metrics, such as demographic parity, which measure whether predictions are statistically consistent across protected groups. However, these metrics cannot detect...
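As context for the outcome-oriented metrics the abstract contrasts against, a minimal sketch of one such metric, the demographic parity difference, is shown below. This is a standard definition (the gap between groups' positive-prediction rates), not part of the paper's proposed MESD metric; the function name and toy data are illustrative.

```python
def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate, P(yhat=1 | group),
    across the protected groups present in `groups`."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    vals = list(rates.values())
    return max(vals) - min(vals)

# Toy example: group "a" receives positives at rate 0.75, group "b" at 0.25.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

A value of 0 means statistically identical outcomes across groups; the abstract's point is that such outcome statistics can be equal even when other aspects of the model (here, its explanations) differ across subgroups.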