Research · 2026-05-14
LoRA-Mixer: Coordinate Modular LoRA Experts Through Serial Attention Routing
Source: arXiv cs.AI
arXiv:2507.00029v2 | Announce Type: replace-cross

Abstract: Recent attempts to combine low-rank adaptation (LoRA) with mixture-of-experts (MoE) for multi-task adaptation of large language models (LLMs) often replace whole attention/FFN layers with switch experts or append parallel expert branches, ...
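The abstract contrasts two common LoRA-MoE integration patterns: replacing whole attention/FFN layers with switch experts, or attaching parallel expert branches. As a point of reference for the second pattern, here is a minimal, hypothetical PyTorch sketch of a frozen base linear layer augmented with router-weighted parallel LoRA experts. All class and parameter names (`LoRAExpert`, `LoRAMixture`, `rank`, `n_experts`) are illustrative assumptions, and this does not implement LoRA-Mixer's serial attention routing, which the truncated abstract does not specify.

```python
# Illustrative sketch only: parallel LoRA expert branches on a frozen linear
# layer, mixed by a learned softmax router. Not the LoRA-Mixer method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAExpert(nn.Module):
    """One low-rank adapter: delta_W = B @ A with rank r << d."""
    def __init__(self, d_in: int, d_out: int, rank: int = 8):
        super().__init__()
        self.A = nn.Linear(d_in, rank, bias=False)   # down-projection
        self.B = nn.Linear(rank, d_out, bias=False)  # up-projection
        nn.init.zeros_(self.B.weight)                # expert starts as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.B(self.A(x))


class LoRAMixture(nn.Module):
    """Frozen base linear plus a router-weighted sum of parallel LoRA experts."""
    def __init__(self, base: nn.Linear, n_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                  # base weights stay frozen
        d_in, d_out = base.in_features, base.out_features
        self.experts = nn.ModuleList(
            LoRAExpert(d_in, d_out, rank) for _ in range(n_experts)
        )
        self.router = nn.Linear(d_in, n_experts)     # token-wise gate logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = F.softmax(self.router(x), dim=-1)              # (..., E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)
        mixed = (expert_out * gates.unsqueeze(-2)).sum(dim=-1)  # (..., d_out)
        return self.base(x) + mixed


# Usage: wrap an existing projection, e.g. a 512x512 linear layer.
layer = LoRAMixture(nn.Linear(512, 512), n_experts=4, rank=8)
out = layer(torch.randn(2, 16, 512))                 # (batch, seq, d_model)
print(out.shape)                                     # torch.Size([2, 16, 512])
```

Zero-initializing each expert's up-projection keeps the wrapped layer's initial behavior identical to the frozen base, a standard LoRA convention.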