Research · 2026-04-22
SCURank: Ranking Multiple Candidate Summaries with Summary Content Units for Enhanced Summarization
Source: Arxiv CS.AI
arXiv:2604.19185v1 | Announce Type: cross
Abstract: Small language models (SLMs), such as BART, can achieve summarization performance comparable to large language models (LLMs) via distillation. However, existing LLM-based ranking strategies for summary candidates suffer from instability, while...
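The abstract is truncated, so the paper's actual ranking procedure is not shown here. As a rough illustration of the general idea of ranking candidate summaries by Summary Content Unit (SCU) coverage, the following is a minimal sketch under the assumption that each SCU is a short reference fact and that coverage can be approximated by token overlap; the function names and the overlap threshold are hypothetical, and a real system would use a learned entailment or matching model instead.

```python
# Hypothetical sketch of SCU-based candidate ranking (NOT the paper's
# actual SCURank method; the abstract above is truncated). Each candidate
# summary is scored by how many Summary Content Units (SCUs) it covers,
# using a naive token-overlap proxy in place of a learned matcher.

def scu_covered(summary: str, scu: str, threshold: float = 0.6) -> bool:
    """Treat an SCU as covered if enough of its tokens appear in the summary."""
    summary_tokens = set(summary.lower().split())
    scu_tokens = scu.lower().split()
    overlap = sum(tok in summary_tokens for tok in scu_tokens)
    return overlap / len(scu_tokens) >= threshold

def rank_candidates(candidates: list[str], scus: list[str]) -> list[str]:
    """Rank candidate summaries by the number of SCUs each one covers."""
    def score(candidate: str) -> int:
        return sum(scu_covered(candidate, scu) for scu in scus)
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    scus = ["the cat sat on the mat", "the dog barked"]
    candidates = [
        "Nothing of note happened.",
        "A cat sat on a mat while the dog barked loudly.",
    ]
    print(rank_candidates(candidates, scus))
```

In practice the token-overlap proxy would be replaced with an NLI or QA-based SCU checker, since lexical overlap cannot distinguish paraphrases from contradictions.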