[2512.04808] Setting up for failure: automatic discovery of the neural mechanisms of cognitive errors
Summary
This article presents a novel approach to uncovering neural mechanisms behind cognitive errors using recurrent neural networks (RNNs) trained on behavioral data.
Why It Matters
Understanding cognitive errors is crucial for advancing neuroscience and artificial intelligence. This research automates the discovery of neural mechanisms, potentially leading to better models of cognition and improved AI systems that mimic human behavior.
Key Takeaways
- Introduces an automated method for discovering neural mechanisms of cognitive errors.
- Utilizes a non-parametric generative model to produce surrogate behavioral data for training RNNs.
- Demonstrates that RNNs can be fitted directly to rich behavioral patterns, including characteristic errors and suboptimalities.
- Provides insights into swap errors in visual working memory tasks.
- Offers predictions that can be empirically tested in future experiments.
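The surrogate-data idea in the takeaways above can be sketched with a simple kernel resampler: draw a real trial at random and jitter it with Gaussian noise, which is equivalent to sampling from a kernel density estimate of the behavioral response distribution. This is only an illustrative stand-in, not the authors' method (the abstract indicates their generative model is diffusion-based); the function names and the synthetic swap-error data are hypothetical.

```python
import numpy as np

def fit_kde_surrogate(responses, bandwidth=0.1, rng=None):
    """Return a sampler over observed behavioural responses.

    Sampling picks a real trial at random and adds Gaussian jitter,
    i.e. it draws from a Gaussian-kernel density estimate of the
    response distribution (a crude non-parametric generative model).
    """
    rng = np.random.default_rng() if rng is None else rng
    responses = np.asarray(responses, dtype=float)

    def sample(n):
        idx = rng.integers(0, len(responses), size=n)
        return responses[idx] + rng.normal(0.0, bandwidth, size=n)

    return sample

# Hypothetical data: 200 angular report errors (radians), mostly near 0,
# with occasional "swap" errors clustered near +pi/2.
rng = np.random.default_rng(0)
observed = np.concatenate([
    rng.normal(0.0, 0.2, 160),        # roughly correct reports
    rng.normal(np.pi / 2, 0.2, 40),   # swap errors
])
sample = fit_kde_surrogate(observed, bandwidth=0.1, rng=rng)
surrogate = sample(10_000)  # far more trials than were ever recorded
print(surrogate.shape)
```

Because the sampler preserves the full shape of the empirical distribution (including the error cluster), an RNN trained on the surrogate trials is pushed to reproduce the errors, not just the average behavior.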
Quantitative Biology > Neurons and Cognition
arXiv:2512.04808 (q-bio)
[Submitted on 4 Dec 2025 (v1), last revised 22 Feb 2026 (this version, v2)]
Authors: Puria Radmard, Paul M. Bays, Máté Lengyel

Abstract: Discovering the neural mechanisms underpinning cognition is one of the grand challenges of neuroscience. However, previous approaches for building models of RNN dynamics that explain behaviour required iterative refinement of architectures and/or optimisation objectives, resulting in a piecemeal, and mostly heuristic, human-in-the-loop process. Here, we offer an alternative approach that automates the discovery of viable RNN mechanisms by explicitly training RNNs to reproduce behaviour, including the same characteristic errors and suboptimalities, that humans and animals produce in a cognitive task. Achieving this required two main innovations. First, as the amount of behavioural data that can be collected in experiments is often too limited to train RNNs, we use a non-parametric generative model of behavioural responses to produce surrogate data for training RNNs. Second, to capture all relevant statistical aspects of the data, we developed a novel diffusion model-based approa...
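The swap errors the summary refers to are trials in visual working memory tasks where a participant reports a non-target item instead of the cued target. A minimal, assumption-laden sketch of how a swap rate can be estimated is nearest-item assignment: attribute each response to whichever item it is closest to in circular distance. The function name and synthetic data below are hypothetical, and real analyses typically use probabilistic mixture models rather than this hard assignment.

```python
import numpy as np

def swap_error_rate(responses, targets, nontargets):
    """Fraction of trials whose response lies closer (in circular
    distance) to some non-target than to the cued target."""
    def circ_dist(a, b):
        # signed angular difference wrapped to (-pi, pi]
        return np.angle(np.exp(1j * (a - b)))

    d_target = np.abs(circ_dist(responses, targets))
    d_nontarget = np.abs(circ_dist(responses[:, None], nontargets)).min(axis=1)
    return float(np.mean(d_nontarget < d_target))

# Synthetic task: one target and two non-targets per trial, all drawn
# uniformly on the circle; 15% of trials are true swaps.
rng = np.random.default_rng(1)
n = 1000
targets = rng.uniform(-np.pi, np.pi, n)
nontargets = rng.uniform(-np.pi, np.pi, (n, 2))
swap = rng.random(n) < 0.15
reported = np.where(swap, nontargets[:, 0], targets)
responses = reported + rng.normal(0.0, 0.15, n)  # report noise
rate = swap_error_rate(responses, targets, nontargets)
```

An RNN trained as described in the abstract should reproduce not just the average error but also this swap rate, which is the kind of behavioral statistic its predictions could be tested against.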