[2601.02850] Sample-Efficient Neurosymbolic Deep Reinforcement Learning
Computer Science > Artificial Intelligence
arXiv:2601.02850 (cs)
[Submitted on 6 Jan 2026 (v1), last revised 10 Apr 2026 (this version, v2)]

Title: Sample-Efficient Neurosymbolic Deep Reinforcement Learning
Authors: Celeste Veronese, Alessandro Farinelli, Daniele Meli

Abstract: Reinforcement Learning (RL) is a well-established framework for sequential decision-making in complex environments. However, state-of-the-art Deep RL (DRL) algorithms typically require large training datasets and often struggle to generalize beyond small-scale training scenarios, even within standard benchmarks. We propose a neuro-symbolic DRL approach that integrates background symbolic knowledge to improve sample efficiency and generalization to more challenging, unseen tasks. Partial policies defined for simple domain instances, where high performance is easily attained, are transferred as useful priors to accelerate learning in more complex settings and avoid tuning DRL parameters from scratch. To do so, partial policies are represented as logical rules, and online reasoning is performed to guide the training process through two mechanisms: (i) biasing the action distribution during exploration, and (ii) rescaling Q-values during exploitation. This neuro-symbolic integration enhances interpretability and trustworthiness while accelerating co...
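The two guidance mechanisms named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the multiplicative `bias` and `scale` hyperparameters, and the representation of rule-endorsed actions as a plain set of indices are all hypothetical, and the sketch assumes nonnegative Q-values so that multiplicative rescaling up-weights the endorsed actions.

```python
import numpy as np

def biased_exploration_probs(q_values, rule_actions, bias=2.0, temperature=1.0):
    """Mechanism (i): softmax over Q-values with extra probability mass
    on actions endorsed by the symbolic rules during exploration.
    `bias` is a hypothetical multiplicative weight, not from the paper."""
    logits = np.asarray(q_values, dtype=float) / temperature
    probs = np.exp(logits - logits.max())  # shift for numerical stability
    for a in rule_actions:
        probs[a] *= bias                   # up-weight rule-endorsed actions
    return probs / probs.sum()

def rescaled_greedy_action(q_values, rule_actions, scale=1.5):
    """Mechanism (ii): greedy action after rescaling the Q-values of
    rule-endorsed actions during exploitation. Assumes Q-values >= 0;
    `scale` is a hypothetical hyperparameter."""
    q = np.asarray(q_values, dtype=float).copy()
    for a in rule_actions:
        q[a] *= scale
    return int(np.argmax(q))

# Example: rules endorse action 0, which overtakes action 1 after rescaling.
probs = biased_exploration_probs([1.0, 2.0, 0.5], rule_actions={0})
action = rescaled_greedy_action([1.5, 2.0, 0.5], rule_actions={0})
```

In a full agent, the rule-endorsed action set would come from an online logical-reasoning step over the current symbolic state; here it is supplied directly for illustration.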