[2603.27058] Liquid Networks with Mixture Density Heads for Efficient Imitation Learning

arXiv - Machine Learning

About this article

Computer Science > Machine Learning
arXiv:2603.27058 (cs) [Submitted on 28 Mar 2026]

Title: Liquid Networks with Mixture Density Heads for Efficient Imitation Learning
Authors: Nikolaus Correll

Abstract: We compare liquid neural networks with mixture density heads against diffusion policies on Push-T, RoboMimic Can, and PointMaze under a shared-backbone comparison protocol that isolates policy-head effects under matched inputs, training budgets, and evaluation settings. Across tasks, liquid policies use roughly half the parameters (4.3M vs. 8.6M), achieve 2.4x lower offline prediction error, and run 1.8x faster at inference. In sample-efficiency experiments spanning 1% to 46.42% of the training data, liquid models remain consistently more robust, with especially large gains in the low-data and medium-data regimes. Closed-loop results on Push-T and PointMaze are directionally consistent with the offline rankings but noisier, indicating that strong offline density modeling helps deployment without fully determining closed-loop success. Overall, liquid recurrent multimodal policies provide a compact and practical alternative to iterative denoising for imitation learning.

Subjects: Machine Learning (cs.LG); Robotics (cs.RO)
Cite as: arXiv:2603.27058 [cs.LG] (or arXiv:2603.27058v1 [cs.LG] for this version) https://doi.org/10.48550/ar...
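For readers unfamiliar with the head architecture the abstract compares against diffusion policies: a mixture density head maps a backbone feature vector to the parameters of a K-component Gaussian mixture over a continuous action, and is trained by minimizing the mixture negative log-likelihood. The sketch below is a minimal, illustrative numpy version with randomly initialized weights and made-up sizes; it is not the paper's implementation, and all names (`mdn_head`, `mdn_nll`, `H`, `A`, `K`) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
H, A, K = 16, 2, 5  # hidden size, action dim, mixture components (illustrative)

# Randomly initialized "head" weights: one linear map per output group.
W_pi = rng.standard_normal((H, K)) * 0.1        # mixture-weight logits
W_mu = rng.standard_normal((H, K * A)) * 0.1    # component means
W_sig = rng.standard_normal((H, K * A)) * 0.1   # component log-stds

def mdn_head(h):
    """Map a feature vector h to mixture weights, means, and stds."""
    logits = h @ W_pi
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                           # softmax over components
    mu = (h @ W_mu).reshape(K, A)            # per-component action means
    sigma = np.exp(h @ W_sig).reshape(K, A)  # exp keeps stds positive
    return pi, mu, sigma

def mdn_nll(h, a):
    """Negative log-likelihood of action a under the predicted mixture."""
    pi, mu, sigma = mdn_head(h)
    # Diagonal-Gaussian log density of a under each component.
    log_comp = -0.5 * (((a - mu) / sigma) ** 2
                       + 2 * np.log(sigma)
                       + np.log(2 * np.pi)).sum(axis=1)
    # Numerically stable log-sum-exp over mixture components.
    m = log_comp.max()
    return -(m + np.log((pi * np.exp(log_comp - m)).sum()))

h = rng.standard_normal(H)  # stand-in for a backbone feature vector
a = rng.standard_normal(A)  # stand-in for a demonstration action
loss = mdn_nll(h, a)        # scalar training loss for this (h, a) pair
```

Unlike a single-Gaussian regression head, the K weighted components let the policy represent multimodal demonstrations (e.g. "push left" vs. "push right"), which is what makes this head a candidate alternative to iterative denoising.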

Originally published on March 31, 2026. Curated by AI News.
