BeClaude
Research · 2026-04-24

Preserving Knowledge in Large Language Models with Model-Agnostic Self-Decompression

Source: arXiv cs.AI

arXiv:2406.11354v3 (announce type: replace-cross)

Abstract: Humans can retain old knowledge while learning new information, but Large Language Models (LLMs) often suffer from catastrophic forgetting when post-pretrained or supervised fine-tuned (SFT) on domain-specific data. Moreover, for Multimodal...
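The excerpt cuts off before describing the method itself, but the title suggests "decompressing" the model's own parametric knowledge back into text. As a rough illustration of that general idea only, the sketch below mixes self-generated samples into a domain SFT corpus, a common replay-style mitigation for forgetting. The model name, seed prompts, and replay ratio are all hypothetical placeholders, and this is not the paper's actual algorithm.

```python
# Hypothetical sketch of a replay-style mitigation for catastrophic
# forgetting: sample text from the base model itself, then mix it into
# the domain SFT corpus so general knowledge is rehearsed during
# fine-tuning. Illustrative only; not the paper's method.
import random
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # stand-in; any causal LM works
REPLAY_RATIO = 0.2    # fraction of self-generated data (assumed value)

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
tok.pad_token = tok.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def self_decompress(prompts, max_new_tokens=64):
    """Turn the model's parametric knowledge back into text samples."""
    samples = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=max_new_tokens,
                             do_sample=True, top_p=0.9)
        samples.append(tok.decode(out[0], skip_special_tokens=True))
    return samples

def build_sft_corpus(domain_data, seed_prompts):
    """Mix domain examples with self-generated 'replay' text."""
    n_replay = int(len(domain_data) * REPLAY_RATIO)
    replay = self_decompress(seed_prompts)[:n_replay]
    corpus = domain_data + replay
    random.shuffle(corpus)
    return corpus
```

The mixed corpus would then feed a standard SFT loop; the intuition is that rehearsing the model's own generations anchors general capabilities while the domain data teaches the new skill.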
