Research · 2026-05-14

Correcting Influence: Unboxing LLM Outputs with Orthogonal Latent Spaces

Source: arXiv cs.AI

arXiv:2605.12809v1 (cross-listed). Abstract: A critical step toward the reliable use of large language models (LLMs) in healthcare is attributing predictions to their training data, akin to a medical case study. This requires token-level precision: pinpointing not just which training examples influence...
