Research 2026-05-14
LLMs as annotators of credibility assessment in Danish asylum decisions: evaluating classification performance and errors beyond aggregated metrics
Source: Arxiv CS.AI
arXiv:2605.13412v1 | Announce Type: cross

Abstract: Off-the-shelf large language models (LLMs) are increasingly used to automate text annotation, yet their effectiveness remains underexplored for underrepresented languages and specialized domains where the class definition requires subtle expert...