Research · 2026-04-24
Stream2LLM: Overlap Context Streaming and Prefill for Reduced Time-to-First-Token (TTFT)
Source: arXiv cs.AI
arXiv:2604.16395v2 (replace-cross). Abstract: Context retrieval systems for LLM inference face a critical challenge: high retrieval latency creates a fundamental tension between waiting for complete context (poor time-to-first-token) and proceeding without it (reduced quality)....
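The title suggests the core idea: instead of waiting for the full retrieved context before starting prefill, prefill of already-received chunks is overlapped with retrieval of the remaining ones. The abstract excerpt does not describe the actual mechanism, so the following is only a minimal timing sketch of that general pipelining pattern, with made-up per-chunk latencies (`RETRIEVE`, `PREFILL`) and stand-in coroutines, not the paper's implementation:

```python
import asyncio
import time

CHUNKS = 4
RETRIEVE = 0.05  # assumed per-chunk retrieval latency (s)
PREFILL = 0.05   # assumed per-chunk prefill cost (s)

async def retrieve(i):
    """Stand-in for fetching one context chunk from the retriever."""
    await asyncio.sleep(RETRIEVE)
    return f"chunk-{i}"

async def prefill(chunk):
    """Stand-in for running prefill over one context chunk."""
    await asyncio.sleep(PREFILL)

async def sequential():
    """Baseline: wait for all context, then prefill everything."""
    t0 = time.perf_counter()
    chunks = [await retrieve(i) for i in range(CHUNKS)]
    for c in chunks:
        await prefill(c)
    return time.perf_counter() - t0

async def overlapped():
    """Pipelined: prefill chunk i while chunk i+1 is still in flight."""
    t0 = time.perf_counter()
    pending = None
    for i in range(CHUNKS):
        fetch = asyncio.create_task(retrieve(i))
        if pending is not None:
            await pending        # prefill of previous chunk overlaps this fetch
        chunk = await fetch
        pending = asyncio.create_task(prefill(chunk))
    await pending                # finish prefill of the last chunk
    return time.perf_counter() - t0

if __name__ == "__main__":
    print(f"sequential: {asyncio.run(sequential()):.3f}s")
    print(f"overlapped: {asyncio.run(overlapped()):.3f}s")
```

With these toy numbers the sequential path costs roughly `CHUNKS * (RETRIEVE + PREFILL)` while the overlapped path hides most prefill time behind retrieval, which is the mechanism by which time-to-first-token would drop: the first decode step can begin as soon as the last chunk's prefill finishes, rather than after a serial retrieve-then-prefill phase.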