[2604.03877] When Models Know More Than They Say: Probing Analogical Reasoning in LLMs
Computer Science > Computation and Language
arXiv:2604.03877 (cs.CL) [Submitted on 4 Apr 2026]

Title: When Models Know More Than They Say: Probing Analogical Reasoning in LLMs
Authors: Hope McGovern, Caroline Craig, Thomas Lippincott, Hale Sirin

Abstract: Analogical reasoning is a core cognitive faculty essential for narrative understanding. While LLMs perform well when surface and structural cues align, they struggle when an analogy is not apparent on the surface but requires latent information, suggesting limitations in abstraction and generalisation. In this paper we compare a model's probed representations with its prompted performance at detecting narrative analogies, revealing an asymmetry: for rhetorical analogies, probing significantly outperforms prompting in open-source models, while for narrative analogies the two achieve similarly low performance. This suggests that the relationship between internal representations and prompted behavior is task-dependent and may reflect limitations in how prompting accesses available information.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2604.03877 [cs.CL] (or arXiv:2604.03877v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.03877
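The abstract contrasts prompted performance with a probe trained on a model's internal representations. As a minimal sketch of what such a probe looks like, the snippet below fits a linear (logistic-regression) classifier on hidden-state vectors for passage pairs. The paper's actual probing setup, layer choice, and data are not described on this page, so the hidden states here are simulated with a weak injected signal; all names and dimensions are illustrative.

```python
# Sketch of a linear probe over LLM hidden states (simulated here).
# Assumption: in a real setup, X would hold hidden-state vectors extracted
# from a frozen model for each passage pair, and y would mark whether the
# pair is analogous (1) or not (0).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_dim = 768   # illustrative hidden-state size
n_pairs = 400      # illustrative number of passage pairs

# Simulated "hidden states" and binary analogy labels.
X = rng.normal(size=(n_pairs, hidden_dim))
y = rng.integers(0, 2, size=n_pairs)

# Inject a weak linear direction correlated with the label, standing in
# for information the model encodes but may not surface when prompted.
direction = rng.normal(size=hidden_dim)
X += np.outer(y - 0.5, direction)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = probe.score(X_test, y_test)
print(f"probe accuracy: {acc:.2f}")
```

Because the probe reads representations directly, it can exceed prompted accuracy whenever the relevant information is linearly decodable but not elicited by the prompt, which is the asymmetry the abstract reports for rhetorical analogies.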