Research · 2026-04-22
From Verbatim to Gist: Distilling Pyramidal Multimodal Memory via Semantic Information Bottleneck for Long-Horizon Video Agents
Source: Arxiv CS.AI
arXiv:2603.01455v3 · Announce Type: replace-cross

Abstract: While multimodal large language models have demonstrated impressive short-term reasoning, they struggle with long-horizon video understanding due to limited context windows and static memory mechanisms that fail to mirror human cognitive...
Tags: arxiv, papers, agents, multimodal