Research · 2026-04-22
ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning
Source: arXiv cs.AI
arXiv:2604.19254v1 Announce Type: cross Abstract: Parameter-efficient fine-tuning (PEFT) reduces the training cost of full-parameter fine-tuning for large language models (LLMs) by training only a small set of task-specific parameters while freezing the pretrained backbone. However, existing...
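The abstract describes the generic PEFT setup: the pretrained backbone is frozen and only a small set of task-specific parameters is trained. The truncated abstract does not specify ShadowPEFT's own mechanism, so the following is only a minimal sketch of that generic idea using a LoRA-style low-rank adapter (an assumption, not this paper's method); all dimensions and names are illustrative.

```python
import numpy as np

# Sketch of the generic PEFT idea (NOT ShadowPEFT's specific method,
# which the truncated abstract does not describe): freeze a pretrained
# weight W and train only a small low-rank update B @ A, LoRA-style.
rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 2             # rank << d keeps trainable params few
W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # small trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero-initialized

def forward(x):
    # Frozen backbone path plus the low-rank task-specific update.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# With B = 0, the adapted model reproduces the frozen backbone exactly,
# so fine-tuning starts from the pretrained behavior.
assert np.allclose(forward(x), W @ x)

trainable_fraction = (A.size + B.size) / W.size
print(f"trainable fraction: {trainable_fraction:.2%}")  # → 6.25%
```

Only `A` and `B` would receive gradient updates during fine-tuning; `W` stays fixed, which is what makes the approach parameter-efficient.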
Tags: arxiv · papers · fine-tuning