[2603.21854] Reasoning or Rhetoric? An Empirical Analysis of Moral Reasoning Explanations in Large Language Models


arXiv - AI · 4 min read

About this article


Computer Science > Artificial Intelligence
arXiv:2603.21854 (cs)
[Submitted on 23 Mar 2026]

Title: Reasoning or Rhetoric? An Empirical Analysis of Moral Reasoning Explanations in Large Language Models
Authors: Aryan Kasat, Smriti Singh, Aman Chadha, Vinija Jain

Abstract: Do large language models reason morally, or do they merely sound like they do? We investigate whether LLM responses to moral dilemmas exhibit genuine developmental progression through Kohlberg's stages of moral development, or whether alignment training instead produces reasoning-like outputs that superficially resemble mature moral judgment without the underlying developmental trajectory. Using an LLM-as-judge scoring pipeline validated across three judge models, we classify more than 600 responses from 13 LLMs spanning a range of architectures, parameter scales, and training regimes across six classical moral dilemmas, and conduct ten complementary analyses to characterize the nature and internal coherence of the resulting patterns. Our results reveal a striking inversion: responses overwhelmingly correspond to post-conventional reasoning (Stages 5-6) regardless of model size, architecture, or prompting strategy, the effective inverse of human developmental norms, where Stage 4 dominates. Most strikingly, a subse...
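The paper's LLM-as-judge pipeline is not described in detail on this page. As a rough illustration only, a stage-classification step with a three-judge majority vote might look like the sketch below; the prompt wording, the function names (`build_judge_prompt`, `consensus_stage`), and the tie-breaking rule are all assumptions for illustration, not the authors' implementation.

```python
from collections import Counter

# Kohlberg's six stages of moral development (short labels).
KOHLBERG_STAGES = {
    1: "Obedience and punishment orientation",
    2: "Self-interest orientation",
    3: "Interpersonal accord and conformity",
    4: "Authority and social-order maintaining",
    5: "Social contract orientation",
    6: "Universal ethical principles",
}


def build_judge_prompt(dilemma: str, response: str) -> str:
    """Hypothetical judge prompt: ask a model to assign one stage (1-6)."""
    stage_list = "\n".join(f"{k}: {v}" for k, v in KOHLBERG_STAGES.items())
    return (
        "Classify the moral reasoning in the response below into exactly one "
        "of Kohlberg's stages.\n"
        f"Stages:\n{stage_list}\n\n"
        f"Dilemma: {dilemma}\n"
        f"Response: {response}\n"
        "Answer with a single digit from 1 to 6."
    )


def consensus_stage(judge_votes: list[int]) -> int:
    """Majority vote across judge models.

    Assumed tie-break: prefer the lower (more conservative) stage.
    """
    counts = Counter(judge_votes)
    # Rank by vote count, then by lower stage number on ties.
    stage, _ = max(counts.items(), key=lambda kv: (kv[1], -kv[0]))
    return stage
```

In use, each of the three judge models would receive the prompt from `build_judge_prompt` and return a digit; `consensus_stage` then reduces the three votes to one label per response, e.g. `consensus_stage([5, 5, 4])` yields stage 5.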

Originally published on March 24, 2026. Curated by AI News.

Related Articles

Llms

We hit 150 stars on our AI setup tool!

yo folks, we just hit 150 stars on our open source tool that auto makes AI context files. got 90 PRs merged and 20 issues that ppl are pi...

Reddit - Artificial Intelligence · 1 min ·
Llms

Is AI getting dumber?

Over the past month, it feels like GPT and Gemini have been giving wrong answers a lot. Do you feel the same, or am I exaggerating? submi...

Reddit - Artificial Intelligence · 1 min ·
Llms

If AI is really making us more productive... why does it feel like we are working more, not less...?

The promise of AI was the ultimate system optimisation: Efficiency. On paper, the tools are delivering something similar to what they pro...

Reddit - Artificial Intelligence · 1 min ·
Llms

[R] GPT-5.4-mini regressed 22pp on vanilla prompting vs GPT-5-mini. Nobody noticed because benchmarks don't test this. Recursive Language Models solved it.

GPT-5.4-mini produces shorter, terser outputs by default. Vanilla accuracy dropped from 69.5% to 47.2% across 12 tasks (1,800 evals). The...

Reddit - Machine Learning · 1 min ·

