[2603.02854] CoFL: Continuous Flow Fields for Language-Conditioned Navigation
Computer Science > Robotics

arXiv:2603.02854 (cs) [Submitted on 3 Mar 2026]

Title: CoFL: Continuous Flow Fields for Language-Conditioned Navigation

Authors: Haokun Liu, Zhaoqi Ma, Yicheng Chen, Masaki Kitagawa, Wentao Zhang, Jinjie Li, Moju Zhao

Abstract: Language-conditioned navigation pipelines often rely on brittle modular components or costly action-sequence generation. To address these limitations, we present CoFL, an end-to-end policy that directly maps a bird's-eye-view (BEV) observation and a language instruction to a continuous flow field for navigation. Instead of predicting discrete action tokens or sampling action chunks via iterative denoising, CoFL outputs instantaneous velocities that can be queried at arbitrary 2D projected locations. Trajectories are obtained by numerical integration of the predicted field, producing smooth motion that remains reactive under closed-loop execution. To enable large-scale training, we build a dataset of over 500k BEV image-instruction pairs, each procedurally annotated with a flow field and a trajectory derived from BEV semantic maps built on Matterport3D and ScanNet. By training on a mixed distribution, CoFL significantly outperforms modular Vision-Language Model (VLM)-based planners and generative policy baselines on strictly unseen scenes. Finally, we deploy CoFL zero-...
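The abstract describes querying instantaneous velocities from a flow field and obtaining trajectories by numerical integration. The paper does not specify the integrator or the policy interface; the sketch below is a minimal illustration under assumptions, using a hand-written toy flow field (a goal-directed unit-velocity field) as a stand-in for the learned policy, and forward-Euler integration as one simple choice of numerical integrator.

```python
import numpy as np

def toy_flow_field(xy):
    """Toy stand-in for the learned policy: unit velocity toward a fixed goal.

    In CoFL the velocity at an arbitrary 2D projected location would instead be
    predicted by the network from the BEV observation and the instruction.
    """
    goal = np.array([5.0, 5.0])
    v = goal - xy
    n = np.linalg.norm(v)
    return v / n if n > 1e-6 else np.zeros(2)

def integrate_trajectory(flow, start, dt=0.1, steps=100):
    """Forward-Euler integration of a velocity field into a trajectory.

    Each step queries the field at the current location and advances by
    dt * velocity, yielding a smooth path that could be re-queried under
    closed-loop execution.
    """
    traj = [np.asarray(start, dtype=float)]
    for _ in range(steps):
        traj.append(traj[-1] + dt * flow(traj[-1]))
    return np.array(traj)

trajectory = integrate_trajectory(toy_flow_field, start=[0.0, 0.0])
```

Because the field is queried continuously rather than committed to as a fixed action sequence, re-integrating from the robot's current pose at each control tick gives the reactive closed-loop behavior the abstract mentions.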