[2602.13017] Synaptic Activation and Dual Liquid Dynamics for Interpretable Bio-Inspired Models
Summary
This paper presents a unified framework for bio-inspired models that enhances interpretability in recurrent neural networks (RNNs) through synaptic activation and dual liquid dynamics.
Why It Matters
Understanding the structural and functional differences in bio-inspired models is crucial for advancing artificial intelligence. This research provides insights that could lead to more interpretable and accurate AI systems, particularly in complex tasks like lane-keeping control.
Key Takeaways
- Introduces a framework for bio-inspired models to improve interpretability.
- Demonstrates that liquid-capacitance-extended models produce interpretable behavior even in dense, all-to-all RNN policies.
- Combining chemical synapses with synaptic activation yields the most accurate and interpretable RNN models.
- Evaluates model performance using multiple metrics in lane-keeping tasks.
- Findings could influence future AI model design and safety.
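One of the evaluation metrics listed above is the robustness of saliency maps measured by the structural similarity index (SSIM). As a rough illustration of that metric (not the paper's implementation), the sketch below computes a simplified single-window SSIM between a saliency map and a perturbed copy; the constants c1 and c2, the map sizes, and the noise model are illustrative assumptions, and the standard metric uses a sliding local window instead:

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two maps (simplified; the usual
    metric averages SSIM over sliding local windows)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den

# Hypothetical saliency map and a noise-perturbed copy (illustrative data).
rng = np.random.default_rng(0)
saliency = rng.random((32, 64))
perturbed = np.clip(saliency + 0.1 * rng.standard_normal((32, 64)), 0.0, 1.0)

# Robustness score: 1.0 means the saliency map is unchanged by the
# perturbation; lower values mean the attention pattern is less stable.
robustness = global_ssim(saliency, perturbed)
```

A map compared against itself scores exactly 1.0, so the score directly reads as "how much of the network's attention pattern survives the perturbation."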
Computer Science > Neural and Evolutionary Computing
arXiv:2602.13017 (cs) [Submitted on 13 Feb 2026]
Title: Synaptic Activation and Dual Liquid Dynamics for Interpretable Bio-Inspired Models
Authors: Mónika Farsang, Radu Grosu
Abstract: In this paper, we present a unified framework for various bio-inspired models to better understand their structural and functional differences. We show that liquid-capacitance-extended models lead to interpretable behavior even in dense, all-to-all recurrent neural network (RNN) policies. We further demonstrate that incorporating chemical synapses improves interpretability and that combining chemical synapses with synaptic activation yields the most accurate and interpretable RNN models. To assess the accuracy and interpretability of these RNN policies, we consider the challenging lane-keeping control task and evaluate performance across multiple metrics, including turn-weighted validation loss, neural activity during driving, absolute correlation between neural activity and road trajectory, saliency maps of the networks' attention, and the robustness of their saliency maps measured by the structural similarity index.
Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2602.13017 [cs.NE]
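The liquid dynamics and chemical synapses in the abstract belong to the liquid time-constant family of neuron models. The paper's exact equations are not reproduced here, so the following is only a hedged sketch: one Euler step of a dense, all-to-all layer in which a sigmoid of the presynaptic potential gates a conductance that pulls each neuron toward a synaptic reversal potential. All parameter names, shapes, and values are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def liquid_step(x, w, mu, gamma, e_rev, tau, c_m, dt):
    """One Euler step of a liquid-style all-to-all layer (sketch).

    x      : (N,)   neuron potentials
    w      : (N, N) maximal synaptic conductances, w[i, j] from i to j
    mu, gamma : (N, N) sigmoid offset/slope gating each chemical synapse
    e_rev  : (N, N) synaptic reversal potentials
    tau    : (N,)   leak time constants
    c_m    : (N,)   membrane capacitances (the "capacitance" extension)
    """
    gate = sigmoid(gamma * (x[:, None] - mu))          # presynaptic gating
    g = w * gate                                       # graded conductances
    i_syn = (g * (e_rev - x[None, :])).sum(axis=0)     # drive toward e_rev
    dx = (-x / tau + i_syn) / c_m                      # leak + synaptic input
    return x + dt * dx

# Tiny demo with random illustrative parameters.
rng = np.random.default_rng(0)
N = 4
w = 0.5 * rng.random((N, N))
mu = rng.standard_normal((N, N))
gamma = np.full((N, N), 2.0)
e_rev = rng.choice([-1.0, 1.0], size=(N, N))    # excitatory / inhibitory
tau = np.ones(N)
c_m = np.ones(N)

x = np.zeros(N)
for _ in range(200):
    x = liquid_step(x, w, mu, gamma, e_rev, tau, c_m, dt=0.05)
```

Because each synaptic term drives the potential toward a bounded reversal potential, the state stays within the range spanned by `e_rev`, which is one reason these models are attractive for stable, interpretable control policies.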