Research 2026-05-12
Towards Understanding Continual Factual Knowledge Acquisition of Language Models: From Theory to Algorithm
Source: Arxiv CS.AI
arXiv:2605.10640v1 Announce Type: cross Abstract: Continual Pre-Training (CPT) is essential for enabling Language Models (LMs) to integrate new knowledge without erasing old knowledge. While classical CPT techniques like data replay have become the standard paradigm, the mechanisms underlying how LMs acquire...
arxivpapers