[2603.20899] Mitigating Shortcut Reasoning in Language Models: A Gradient-Aware Training Approach

arXiv - AI 3 min read

About this article

Computer Science > Computation and Language
arXiv:2603.20899 (cs.CL) [Submitted on 21 Mar 2026]

Title: Mitigating Shortcut Reasoning in Language Models: A Gradient-Aware Training Approach
Authors: Hongyu Cao, Kunpeng Liu, Dongjie Wang, Yanjie Fu

Abstract: Large language models exhibit strong reasoning capabilities, yet often rely on shortcuts such as surface pattern matching and answer memorization rather than genuine logical inference. We propose Shortcut-Aware Reasoning Training (SART), a gradient-aware framework that detects and mitigates shortcut-promoting samples via ShortcutScore and gradient surgery. Our method identifies shortcut signals through gradient misalignment with validation objectives and answer-token concentration, and modifies training dynamics accordingly. Experiments on controlled reasoning benchmarks show that SART achieves +16.5% accuracy and +40.2% robustness over the strongest baseline, significantly improving generalization under distribution shifts. Code is available at: this https URL.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.20899 [cs.CL] (or arXiv:2603.20899v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.20899 (arXiv-issued DOI via DataCite, pending registration)
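The abstract does not give the exact formulas, but the two signals it names can be sketched concretely. Below is a hypothetical NumPy illustration: `shortcut_score` combines (a) gradient misalignment, measured as one minus the cosine similarity between a sample's gradient and a validation-set gradient, with (b) answer-token concentration, the probability mass placed on the answer tokens; `project_out_conflict` shows a common PCGrad-style form of gradient surgery. The function names, the weighting scheme, and the projection rule are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def shortcut_score(sample_grad, val_grad, answer_token_probs, alpha=0.5):
    """Hypothetical ShortcutScore: higher means more shortcut-like.

    sample_grad        -- flattened gradient of the loss on one training sample
    val_grad           -- flattened gradient of the validation objective
    answer_token_probs -- model probabilities assigned to the answer tokens
    alpha              -- assumed mixing weight between the two signals
    """
    # Gradient misalignment: 1 - cos(sample_grad, val_grad).
    # A sample whose gradient points away from the validation objective
    # is treated as promoting a shortcut.
    cos = np.dot(sample_grad, val_grad) / (
        np.linalg.norm(sample_grad) * np.linalg.norm(val_grad) + 1e-12
    )
    misalignment = 1.0 - cos

    # Answer-token concentration: probability mass on the answer tokens
    # alone, a proxy for answer memorization.
    concentration = float(np.sum(answer_token_probs))

    return alpha * misalignment + (1.0 - alpha) * concentration

def project_out_conflict(sample_grad, val_grad):
    """PCGrad-style gradient surgery (an assumed stand-in for the paper's):
    if the sample gradient conflicts with the validation gradient
    (negative dot product), remove the conflicting component."""
    dot = np.dot(sample_grad, val_grad)
    if dot < 0:
        return sample_grad - (dot / (np.dot(val_grad, val_grad) + 1e-12)) * val_grad
    return sample_grad
```

Under this sketch, a sample whose gradient opposes the validation gradient scores higher than an aligned one, and surgery leaves a conflicting gradient orthogonal to the validation direction rather than opposing it.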

Originally published on March 24, 2026. Curated by AI News.

