Research · 2026-05-08

PACZero: PAC-Private Fine-Tuning of Language Models via Sign Quantization

Source: arXiv cs.AI

arXiv:2605.06505v1 Abstract: We introduce PACZero, a family of PAC-private zeroth-order mechanisms for fine-tuning large language models that delivers usable utility at $I(S^*; Y_{1:T})=0$. This privacy regime bounds the membership-inference attack (MIA) posterior success rate...
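The abstract does not specify PACZero's mechanism, but its title combines two known ingredients: zeroth-order (two-point) gradient estimation and sign quantization, where each released coordinate update carries a single bit. A minimal, hypothetical sketch of that combination (SPSA-style probing plus sign-only updates; the function names and parameters below are illustrative, not from the paper):

```python
import numpy as np


def spsa_sign_step(params, loss_fn, eps=1e-3, lr=1e-2, rng=None):
    """One zeroth-order update with sign quantization (illustrative sketch).

    Hypothetical: PACZero's actual mechanism is not given in the abstract.
    This shows the generic pattern of estimating a directional derivative
    from two loss queries and releasing only the sign of each coordinate
    update, so the output alphabet per coordinate is {-1, +1}.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Rademacher probe direction (each coordinate is +/-1)
    z = rng.choice([-1.0, 1.0], size=params.shape)
    # Two-point finite-difference estimate of the directional derivative
    g = (loss_fn(params + eps * z) - loss_fn(params - eps * z)) / (2 * eps)
    # Sign quantization: only one bit per coordinate leaves the estimator
    update = np.sign(g * z)
    return params - lr * update
```

On a simple quadratic loss, repeated steps of this form drive the parameters toward the minimum even though each step communicates only signs; the appeal for privacy is that the released trace is heavily quantized, though bounding $I(S^*; Y_{1:T})$ would require additional mechanism design not shown here.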

Tags: arxiv, papers, fine-tuning