[2603.00883] Knowledge without Wisdom: Measuring Misalignment between LLMs and Intended Impact

arXiv - Machine Learning

About this article

Computer Science > Machine Learning — arXiv:2603.00883 (cs) [Submitted on 1 Mar 2026]

Title: Knowledge without Wisdom: Measuring Misalignment between LLMs and Intended Impact

Authors: Michael Hardy, Yunsung Kim

Abstract: LLMs increasingly excel on AI benchmarks, but doing so does not guarantee validity for downstream tasks. This study evaluates the performance of leading foundation models (FMs, i.e., generative pre-trained base LLMs) on out-of-distribution (OOD) tasks drawn from the teaching and learning of schoolchildren. Across all FMs, inter-model behaviors on disparate tasks correlate more strongly with one another than with expert human behaviors on the target tasks. These biases shared across LLMs are poorly aligned with downstream measures of teaching quality and often *negatively aligned with learning outcomes*. Further, we find that multi-model ensembles, using both unanimous model voting and expert-weighting by benchmark performance, further exacerbate misalignment with learning. We measure that 50% of the variation in misalignment error is shared across foundation models, suggesting that common pretraining accounts for much of the misalignment in these tasks. We demonstrate methods for robustly measuring alignment on complex tasks and provide unique insights into both educational applications of foundation models and to un...
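The abstract's central comparison (inter-model agreement exceeding model-human agreement) can be illustrated with a minimal sketch. This is not the paper's actual method or data: the synthetic scores, the shared-bias construction, and the use of Spearman rank correlation are all assumptions made here purely to show how such a comparison might be computed on item-level ratings.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic item-level scores (illustrative only, not from the paper).
# A shared component across "models" mimics a common pretraining bias;
# "human" expert ratings are generated independently of that bias.
rng = np.random.default_rng(0)
n_items = 200
shared_bias = rng.normal(size=n_items)
model_scores = {
    f"model_{i}": shared_bias + 0.5 * rng.normal(size=n_items)
    for i in range(3)
}
human_scores = rng.normal(size=n_items)

def mean_inter_model_corr(scores):
    """Average pairwise Spearman correlation between model score vectors."""
    names = list(scores)
    rhos = [
        spearmanr(scores[a], scores[b]).correlation
        for i, a in enumerate(names)
        for b in names[i + 1:]
    ]
    return float(np.mean(rhos))

def mean_model_human_corr(scores, human):
    """Average Spearman correlation of each model with the human ratings."""
    return float(np.mean(
        [spearmanr(s, human).correlation for s in scores.values()]
    ))

inter = mean_inter_model_corr(model_scores)
model_human = mean_model_human_corr(model_scores, human_scores)
print(f"inter-model: {inter:.2f}, model-human: {model_human:.2f}")
```

Under this construction the inter-model correlation is high (the models share a bias component) while the model-human correlation hovers near zero, mirroring the qualitative pattern the abstract reports.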

Originally published on March 03, 2026. Curated by AI News.

