NVIDIA brings agents to life with DGX Spark and Reachy Mini
*Published January 5, 2026 by Jeff Boudier, Nader Khalil, and Alec Fong*

Today at CES 2026, NVIDIA unveiled a world of new open models to power the future of agents, both online and in the real world. From the recently released NVIDIA Nemotron reasoning LLMs to the new NVIDIA Isaac GR00T N1.6 open reasoning VLA and the NVIDIA Cosmos world foundation models, all the building blocks are available today for AI builders to create their own agents.

But what if you could bring your own agent to life right at your desk? An AI buddy that is useful to you and processes your data privately? In today's CES keynote, Jensen Huang showed how to do exactly that: using the processing power of NVIDIA DGX Spark with Reachy Mini to create your own little office R2-D2 you can talk to and collaborate with.

This blog post is a step-by-step guide to replicating that experience at home with a DGX Spark and a Reachy Mini. Let's dive in!

## Ingredients

If you want to start cooking right away, here's the source code of the demo. We'll be using the following:

- A reasoning model: the demo uses NVIDIA Nemotron 3 Nano
- A vision model: the demo uses NVIDIA Nemotron Nano 2 VL
- A text-to-speech model: the demo uses ElevenLabs
- Reachy Mini (or the Reachy Mini simulation)
- A Python 3.10+ environment with uv

Feel free to adapt the recipe and make it your own ...
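To make the roles of these ingredients concrete, here is a minimal sketch of how they compose into a single perceive-reason-speak step on the robot. The function bodies below are hypothetical placeholders, not the demo's actual model calls; in the real recipe, `vision_describe` would run Nemotron Nano 2 VL, `reason` would run Nemotron 3 Nano, and `speak` would call ElevenLabs and play audio through Reachy Mini.

```python
# Sketch of one agent step: camera frame -> vision model -> reasoning model -> TTS.
# All three functions are placeholders standing in for the demo's real inference calls.

def vision_describe(frame: str) -> str:
    # Placeholder for NVIDIA Nemotron Nano 2 VL: turn a camera frame
    # into a text description the reasoning model can consume.
    return f"The camera sees: {frame}."

def reason(observation: str) -> str:
    # Placeholder for NVIDIA Nemotron 3 Nano: decide what to say or do
    # given the current observation.
    return f"I notice something. {observation}"

def speak(text: str) -> str:
    # Placeholder for ElevenLabs text-to-speech played on Reachy Mini.
    return f"[speaking] {text}"

def agent_step(frame: str) -> str:
    """Run one full perceive -> reason -> speak cycle."""
    observation = vision_describe(frame)
    response = reason(observation)
    return speak(response)

if __name__ == "__main__":
    print(agent_step("a coffee mug on the desk"))
```

The point of the structure is that each ingredient is swappable: replace any of the three placeholder functions with a different model or API and the loop is unchanged.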