Research 2026-05-12

Parameter-Efficient Neuroevolution for Diverse LLM Generation: Quality-Diversity Optimization via Prompt Embedding Evolution

Source: Arxiv CS.AI

arXiv:2605.09781v1 (cross-listed)

Abstract: Large Language Models exhibit mode collapse, producing homogeneous outputs that fail to explore valid solution spaces. We present QD-LLM, a framework for parameter-efficient neuroevolution that evolves prompt embeddings, compact neural interfaces...
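The quality-diversity framing the abstract describes is typically realized with a MAP-Elites-style loop: maintain an archive keyed by a behavior descriptor, and keep the highest-fitness individual per archive cell. Below is a minimal sketch of that loop over toy "prompt embeddings" (plain float vectors). The fitness function and behavior descriptor are placeholder stand-ins, not the paper's: in QD-LLM they would presumably be derived from scoring and characterizing the LLM generations each embedding conditions.

```python
import random

def evolve_archive(dim=8, iters=500, bins=10, seed=0):
    """MAP-Elites-style quality-diversity loop over toy embedding
    vectors. Archive maps a discretized behavior bin to the best
    (fitness, embedding) pair found for that bin."""
    rng = random.Random(seed)

    def fitness(e):
        # Placeholder quality score (peak at the zero vector);
        # a real system would score an LLM's output here.
        return -sum(x * x for x in e)

    def descriptor(e):
        # Placeholder behavior descriptor: mean coordinate,
        # discretized into `bins` buckets over [-1, 1].
        m = sum(e) / len(e)
        return min(bins - 1, max(0, int((m + 1.0) / 2.0 * bins)))

    archive = {}  # bin index -> (fitness, embedding)
    for _ in range(iters):
        if archive and rng.random() < 0.9:
            # Mutate an elite sampled from the archive.
            parent = rng.choice(list(archive.values()))[1]
            child = [x + rng.gauss(0, 0.1) for x in parent]
        else:
            # Occasionally inject a fresh random embedding.
            child = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        f, b = fitness(child), descriptor(child)
        # Replace the cell's elite only if the child is fitter.
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, child)
    return archive
```

The archive, rather than a single best individual, is the output: each filled cell is a distinct behavior with its best-known embedding, which is how quality-diversity methods counter the mode collapse the abstract describes.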

Tags: arxiv, papers, prompting