[2604.03877] When Models Know More Than They Say: Probing Analogical Reasoning in LLMs

Computer Science > Computation and Language

arXiv:2604.03877 (cs.CL), submitted on 4 Apr 2026

Title: When Models Know More Than They Say: Probing Analogical Reasoning in LLMs
Authors: Hope McGovern, Caroline Craig, Thomas Lippincott, Hale Sirin

Abstract: Analogical reasoning is a core cognitive faculty essential for narrative understanding. While LLMs perform well when surface and structural cues align, they struggle in cases where an analogy is not apparent on the surface but requires latent information, suggesting limitations in abstraction and generalisation. In this paper we compare a model's probed representations with its prompted performance at detecting narrative analogies, revealing an asymmetry: for rhetorical analogies, probing significantly outperforms prompting in open-source models, while for narrative analogies, the two achieve similarly low performance. This suggests that the relationship between internal representations and prompted behavior is task-dependent and may reflect limitations in how prompting accesses available information.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2604.03877 [cs.CL] (or arXiv:2604.03877v1 [cs.CL] for this version), https://doi.org/10.48550/arXiv.2604.03877

Originally published on April 07, 2026. Curated by AI News.

Related Articles

[2603.07475] A Comparative analysis of Layer-wise Representational Capacity in AR and Diffusion LLMs (arXiv - Machine Learning)
[2601.22925] BEAR: Towards Beam-Search-Aware Optimization for Recommendation with Large Language Models (arXiv - Machine Learning)
[2512.10551] LLM-Auction: Generative Auction towards LLM-Native Advertising (arXiv - Machine Learning)
[2511.17411] SPEAR-1: Scaling Beyond Robot Demonstrations via 3D Understanding (arXiv - Machine Learning)