Overthinking the truth: Understanding how language models process false demonstrations

D Halawi, JS Denain, J Steinhardt - arXiv preprint arXiv:2307.09476, 2023 - arxiv.org
Modern language models can imitate complex patterns through few-shot learning, enabling them to complete challenging tasks without fine-tuning. However, imitation can also lead models to reproduce inaccuracies or harmful content present in the context. We study harmful imitation through the lens of a model's internal representations and identify two related phenomena: overthinking and false induction heads. The first phenomenon, overthinking, appears when we decode predictions from intermediate layers given correct vs. incorrect few-shot demonstrations. At early layers, both kinds of demonstrations induce similar model behavior, but the behavior diverges sharply at some "critical layer", after which accuracy under incorrect demonstrations progressively decreases. The second phenomenon, false induction heads, is a possible mechanistic cause of overthinking: these are heads in late layers that attend to and copy false information from earlier demonstrations, and ablating them reduces overthinking. Beyond scientific understanding, our results suggest that studying intermediate model computations could be a promising avenue for understanding and guarding against harmful model behaviors.
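
To illustrate the early-decoding setup the abstract describes, the sketch below (not the authors' code) applies a logit-lens-style readout to GPT-2 using the Hugging Face transformers library: each layer's residual stream at the final position is passed through the model's final layer norm and unembedding to get a per-layer next-token prediction, which can then be compared under correct vs. deliberately flipped few-shot labels. The prompts and the helper name layerwise_top_tokens are illustrative assumptions, not taken from the paper.

# Hedged sketch: decode next-token predictions from each intermediate layer of GPT-2
# to look for a "critical layer" where behavior under correct vs. incorrect
# few-shot demonstrations diverges. Assumes the Hugging Face `transformers` library.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def layerwise_top_tokens(prompt: str):
    """Return the top next-token prediction decoded from each layer's residual stream."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    preds = []
    for h in out.hidden_states:          # one tensor per layer (plus the embedding layer)
        last = h[0, -1]                  # residual stream at the final token position
        logits = model.lm_head(model.transformer.ln_f(last))
        preds.append(tok.decode(logits.argmax().item()))
    return preds

# Hypothetical few-shot prompts: same task, correct vs. deliberately flipped labels.
correct = "great -> positive\nawful -> negative\nwonderful ->"
flipped = "great -> negative\nawful -> positive\nwonderful ->"
for name, prompt in [("correct", correct), ("flipped", flipped)]:
    print(name, layerwise_top_tokens(prompt))

Comparing the two printed lists layer by layer is one simple way to see where the predictions under flipped demonstrations start to depart from those under correct ones; the paper's ablation of false induction heads is a separate intervention not shown here.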