Granite 4.0 Nano: Just how small can you go?


A blog post by IBM Granite on the Hugging Face Blog
Published October 28, 2025 · Kate Soule and Rameswar Panda

Today we are excited to share Granite 4.0 Nano, our smallest models yet, released as part of IBM's Granite 4.0 model family. Designed for the edge and on-device applications, these models demonstrate excellent performance for their size and represent IBM's continued commitment to developing powerful, useful models that don't require hundreds of billions of parameters to get the job done.

Like all Granite 4.0 models, the Nano models are released under an Apache 2.0 license with native architecture support on popular runtimes like vLLM, llama.cpp, and MLX. The models were trained with the same improved training methodologies, pipelines, and over 15T tokens of training data developed for the original Granite 4.0 models. This release includes variants benefiting from Granite 4.0's new, efficient hybrid architecture, and like all Granite language models, the Granite 4.0 Nano models also carry IBM's ISO 42001 certification for responsible model development, giving users added confidence that the models are built and governed to global standards.

Specifically, Granite 4.0 Nano comprises four instruct models and their base model counterparts:

- Granite 4.0 H 1B – A ~1.5B-parameter dense LLM featuring a hybrid-SSM-based architecture.
- Granite 4.0 H 350M – A ~350M-parameter...
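Because the models ship with native support on common runtimes, trying one locally is straightforward. Below is a minimal sketch using Hugging Face `transformers`; the model ID `ibm-granite/granite-4.0-h-350m` is an assumption based on the naming in this announcement, so check the ibm-granite organization on Hugging Face for the exact repository name:

```python
# Minimal sketch: running a Granite 4.0 Nano instruct model with Hugging Face
# transformers. The model ID below is assumed from the announcement's naming;
# verify the exact repo under the ibm-granite org on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-h-350m"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~350M params fits easily on modest hardware
    device_map="auto",
)

# Instruct models expect chat-formatted input; the chat template
# ships with the tokenizer.
messages = [{"role": "user", "content": "Summarize what an SSM layer does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same checkpoint should also be servable through vLLM or converted for llama.cpp and MLX, per the runtime support noted above; the `transformers` path is shown here simply because it is the most compact to sketch.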
