BeClaude
Research · 2026-05-06

Where Do Prompt Perturbations Break Generation? A Segment-Level View of Robustness in LoRA-Tuned Language Models

Source: arXiv cs.AI

arXiv:2605.01605v1 · Announce Type: cross

Abstract: Large language models are sensitive to minor prompt perturbations, yet existing robustness methods usually enforce consistency at the whole-sequence level. This holistic view can hide an important failure mode: a perturbed response may remain...

arxiv · papers · prompting