Research · 2026-05-01

Position-Aware Drafting for Inference Acceleration in LLM-Based Generative List-Wise Recommendation

Source: Arxiv CS.AI

arXiv:2604.27747v1 Announce Type: cross Abstract: Large language model (LLM)-based generative list-wise recommendation has advanced rapidly, but decoding remains sequential and thus latency-prone. To accelerate inference without changing the target distribution, speculative decoding (SD) uses a...
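The abstract is truncated before describing the paper's position-aware drafting mechanism, but the speculative decoding (SD) baseline it builds on is standard: a cheap draft model proposes several tokens, and the target model verifies them in parallel with an accept/resample rule that leaves the target distribution unchanged. The sketch below illustrates only that generic verification step (function and variable names are illustrative, not from the paper), assuming per-position draft and target distributions are given as dictionaries.

```python
import random

def verify_drafted(drafted, draft_dists, target_dists, rng):
    """Generic speculative-decoding verification (toy sketch).

    For each drafted token t at position i, accept with probability
    min(1, p_target(t) / p_draft(t)); on the first rejection, resample
    from the renormalized residual max(0, p_target - p_draft) and stop.
    This acceptance rule makes the output exactly target-distributed.
    """
    accepted = []
    for i, t in enumerate(drafted):
        p = target_dists[i].get(t, 0.0)   # target probability of drafted token
        q = draft_dists[i][t]             # draft probability of drafted token
        if rng.random() < min(1.0, p / q):
            accepted.append(t)            # token survives verification
        else:
            # Residual distribution: mass the target assigns beyond the draft.
            residual = {v: max(0.0, target_dists[i][v] - draft_dists[i].get(v, 0.0))
                        for v in target_dists[i]}
            z = sum(residual.values())
            r, cum = rng.random() * z, 0.0
            for v, w in residual.items():
                cum += w
                if r <= cum:
                    accepted.append(v)    # corrected token from the residual
                    break
            break                         # stop at the first rejection
    return accepted
```

When draft and target distributions coincide, every token is accepted, which is why a well-aligned draft model yields multiple target-quality tokens per verification pass; the paper's contribution presumably concerns how drafting is conditioned on list position, which this generic sketch does not capture.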
