NVIDIA Isaac GR00T N1.7: Open Reasoning VLA Model for Humanoid Robots
A Blog post by NVIDIA on Hugging Face
Published April 17, 2026 · Edith Llontop (ellontop, NVIDIA) and Kalyan Vadrevu (kalyanvadrevu, NVIDIA)

We are releasing NVIDIA Isaac GR00T N1.7 (Early Access) – an open-source, commercially licensed Vision-Language-Action model for humanoid robots, built on a simple premise: human data is the most scalable source of robot intelligence.

TL;DR

- GR00T N1.7 – open-source, commercially licensed humanoid foundation model, available now on Hugging Face and GitHub
- Factory-floor ready – commercial licensing enables production deployments today, across material handling, packaging, and inspection
- Reasoning built for multi-step tasks – task- and subtask-level reasoning improves reliability on complex workflows
- Expanded dexterous manipulation – finger-level control enables contact-rich tasks like small-parts assembly
- First-ever dexterity scaling law – trained on 20,000+ hours of human egocentric video; more human data directly and predictably improves robot dexterity, without mass teleoperation
- GitHub | Hugging Face | Supports the LeRobot dataset format

What is GR00T N1.7?

GR00T N1.7 is a 3B-parameter Vision-Language-Action (VLA) model that maps visual observations and natural language instructions to continuous robot actions. It uses an Action Cascade architecture – a dual-system design that separates high-level reasoning from low-level motor control: System 2 ...
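To make the dual-system idea concrete, here is a minimal sketch of such a control loop: a slow "System 2" planner decomposes an instruction into subtasks, and a fast "System 1" action head emits short chunks of continuous actions for each subtask. The class and function names (`PlannerStub`, `ActionHeadStub`, `control_loop`) are illustrative placeholders, not the actual GR00T N1.7 API.

```python
import numpy as np

class PlannerStub:
    """Stands in for System 2: maps an instruction and an image to a
    list of subtasks. (A real VLM reasons over pixels and text; this
    placeholder just splits the instruction on commas.)"""
    def plan(self, instruction: str, image: np.ndarray) -> list[str]:
        return [step.strip() for step in instruction.split(",")]

class ActionHeadStub:
    """Stands in for System 1: maps a subtask and an observation to a
    short chunk of continuous joint-space actions. (A real action head
    would be conditioned on the planner's output; this placeholder
    returns a zero-motion chunk of the right shape.)"""
    def __init__(self, action_dim: int = 7, chunk: int = 8):
        self.action_dim = action_dim
        self.chunk = chunk

    def act(self, subtask: str, image: np.ndarray) -> np.ndarray:
        return np.zeros((self.chunk, self.action_dim))

def control_loop(instruction: str, image: np.ndarray,
                 planner: PlannerStub, head: ActionHeadStub) -> np.ndarray:
    """Run the slow planner once per subtask, the fast head once per
    action chunk, and stitch the chunks into one trajectory."""
    chunks = [head.act(subtask, image) for subtask in planner.plan(instruction, image)]
    return np.concatenate(chunks, axis=0)

image = np.zeros((224, 224, 3), dtype=np.uint8)  # dummy camera frame
traj = control_loop("pick the gear, insert it into the slot",
                    image, PlannerStub(), ActionHeadStub())
print(traj.shape)  # (16, 7): two subtasks, each an 8-step chunk of 7-DoF actions
```

The point of the split is latency: the planner can run at a low rate while the action head streams chunks at control frequency, which is the general motivation behind dual-system VLA designs like the one described above.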