Research · 2026-05-11
Is Your Prompt Poisoning Code? Defect Induction Rates and Security Mitigation Strategies
Source: Arxiv CS.AI
arXiv:2510.22944v2 Announce Type: replace-cross Abstract: Large language models (LLMs) have become indispensable for automated code generation, yet the quality and security of their outputs remain a critical concern. Existing studies predominantly concentrate on adversarial attacks or inherent...
arxiv, papers, prompting