[2512.04808] Setting up for failure: automatic discovery of the neural mechanisms of cognitive errors

arXiv - AI 4 min read Article

Summary

This article presents a novel approach to uncovering the neural mechanisms behind cognitive errors: recurrent neural networks (RNNs) are trained to reproduce behavioral data, including the characteristic errors that humans and animals make.

Why It Matters

Understanding cognitive errors is crucial for advancing neuroscience and artificial intelligence. This research automates the discovery of neural mechanisms, potentially leading to better models of cognition and improved AI systems that mimic human behavior.

Key Takeaways

  • Introduces an automated method for discovering neural mechanisms of cognitive errors.
  • Uses a non-parametric generative model to produce surrogate behavioral data, compensating for the limited amount of experimental data available for training RNNs.
  • Demonstrates the effectiveness of fitting RNNs to rich behavioral patterns.
  • Provides insights into swap errors in visual working memory tasks.
  • Offers predictions that can be empirically tested in future experiments.
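
The surrogate-data idea in the takeaways above can be sketched in a few lines. The snippet below uses simple kernel-density resampling as a stand-in for the paper's non-parametric generative model; the "observed" responses, the bandwidth, and every name are hypothetical, and the RNN training stage itself is omitted.

```python
import random

random.seed(0)

# Hypothetical "observed" behavioral responses: report errors from a
# working-memory task, with a small cluster of swap errors.
# (Illustrative numbers only -- not data from the paper.)
observed = ([random.gauss(0.0, 0.3) for _ in range(40)] +
            [random.gauss(1.5, 0.3) for _ in range(10)])

def sample_surrogate(responses, bandwidth, n):
    """Kernel-density resampling: draw an observed response at random,
    then jitter it with Gaussian kernel noise of width `bandwidth`."""
    return [random.choice(responses) + random.gauss(0.0, bandwidth)
            for _ in range(n)]

# A large surrogate dataset that preserves the shape of the observed
# response distribution (including the swap-error cluster). In a pipeline
# like the paper's, data of this kind would serve as RNN training targets.
surrogate = sample_surrogate(observed, bandwidth=0.1, n=5000)

mean = sum(surrogate) / len(surrogate)
print(len(surrogate), round(mean, 3))
```

The point of the resampling step is purely statistical: the RNN can then be optimized against an effectively unlimited stream of samples whose error structure matches the experiment, rather than the few dozen trials actually collected.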

Quantitative Biology > Neurons and Cognition

arXiv:2512.04808 (q-bio) [Submitted on 4 Dec 2025 (v1), last revised 22 Feb 2026 (this version, v2)]

Title: Setting up for failure: automatic discovery of the neural mechanisms of cognitive errors

Authors: Puria Radmard, Paul M. Bays, Máté Lengyel

Abstract: Discovering the neural mechanisms underpinning cognition is one of the grand challenges of neuroscience. However, previous approaches for building models of RNN dynamics that explain behaviour required iterative refinement of architectures and/or optimisation objectives, resulting in a piecemeal, and mostly heuristic, human-in-the-loop process. Here, we offer an alternative approach that automates the discovery of viable RNN mechanisms by explicitly training RNNs to reproduce behaviour, including the same characteristic errors and suboptimalities, that humans and animals produce in a cognitive task. Achieving this required two main innovations. First, as the amount of behavioural data that can be collected in experiments is often too limited to train RNNs, we use a non-parametric generative model of behavioural responses to produce surrogate data for training RNNs. Second, to capture all relevant statistical aspects of the data, we developed a novel diffusion model-based approa...

