Research · 2026-04-22
Decomposed Trust: Privacy, Adversarial Robustness, Ethics, and Fairness in Low-Rank LLMs
Source: arXiv cs.AI
arXiv:2511.22099v3 (replace-cross)

Abstract: Large language models (LLMs) have driven major advances across domains, yet their massive size hinders deployment in resource-constrained settings. Low-rank factorization addresses this challenge by compressing models to effectively reduce...
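As context for the compression technique the abstract names, here is a minimal sketch of low-rank factorization applied to a single weight matrix via truncated SVD. The function name `low_rank_factorize`, the rank, and the layer shape are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: compress one dense weight matrix with a truncated SVD.
# Rank and shapes are illustrative; the paper's actual method may differ.
import torch

def low_rank_factorize(weight: torch.Tensor, rank: int):
    """Approximate weight (out_dim x in_dim) as B @ A,
    with B: out_dim x rank and A: rank x in_dim."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    U_r = U[:, :rank]        # out_dim x rank
    S_r = S[:rank]           # top-rank singular values
    Vh_r = Vh[:rank, :]      # rank x in_dim
    B = U_r * S_r            # fold singular values into the left factor
    A = Vh_r
    return B, A

# Usage: the two factors replace the original weight, cutting parameter count.
W = torch.randn(4096, 4096)              # e.g. an attention projection weight
B, A = low_rank_factorize(W, rank=256)
approx = B @ A
print("params before:", W.numel(), "after:", B.numel() + A.numel())
print("relative error:",
      (torch.linalg.norm(W - approx) / torch.linalg.norm(W)).item())
```

The parameter count drops from `out_dim * in_dim` to `rank * (out_dim + in_dim)`, which is the basic trade-off the abstract refers to when it says low-rank factorization compresses the model.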