BeClaude
Research · 2026-05-01

BoostLoRA: Growing Effective Rank by Boosting Adapters

Source: Arxiv CS.AI

arXiv:2604.27308v1 (cross-listed). Abstract: Parameter-efficient fine-tuning (PEFT) methods face a tradeoff between adapter size and expressivity: ultra-low-parameter adapters are confined to fixed low-rank subspaces, capping performance even with extended training. We propose BoostLoRA, a...
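The abstract is truncated, so BoostLoRA's own mechanism is not shown here. As background for the limitation it describes, the following minimal NumPy sketch illustrates a standard LoRA-style adapter, where the update to a frozen weight is the product of two thin matrices and therefore can never exceed rank r, no matter how long training runs. All sizes and names below are illustrative assumptions, not from the paper.

```python
# Sketch of a plain LoRA-style low-rank update (background only; this is
# NOT the paper's BoostLoRA method). The adapter adds delta = B @ A to a
# frozen weight W, so the update lives in a fixed rank-r subspace.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4        # illustrative sizes; r is the adapter rank

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init)

delta = B @ A            # the adapter's update to W
effective = W + delta    # weight actually used at inference

# Rank of the update is bounded by r regardless of training:
print(np.linalg.matrix_rank(delta))   # 0 at init, since B is all zeros
B = rng.standard_normal((d_out, r))   # stand-in for a trained B
print(np.linalg.matrix_rank(B @ A))   # at most r = 4
```

Because rank(B @ A) ≤ r by construction, extra optimization steps cannot enlarge the subspace the adapter explores, which is the expressivity ceiling the abstract refers to.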

arxivpapers