Welcome EmbeddingGemma, Google's new efficient embedding model
Published September 4, 2025

Authors: Tom Aarsen, Joshua (Xenova), Alvaro Bartolome, Aritra Roy Gosthipaty, Pedro Cuenca, Sergio Paniego

TL;DR

Today, Google releases EmbeddingGemma, a state-of-the-art multilingual embedding model built for on-device use cases. Designed for speed and efficiency, the model features a compact size of 308M parameters and a 2K-token context window, unlocking new possibilities for mobile RAG pipelines, agents, and more. EmbeddingGemma is trained to support over 100 languages and is, at the time of writing, the highest-ranking text-only multilingual embedding model under 500M parameters on the Massive Text Embedding Benchmark (MTEB).

Table of Contents

- Introduction
- Evaluation
- Demo
- Usage
  - Sentence Transformers
  - Retrieval
  - LangChain
  - LlamaIndex
  - Haystack
  - txtai
  - Transformers.js
  - Text Embeddings Inference
  - ONNX Runtime
- Finetuning
  - Full Finetuning Script
  - Training
  - Finetuned Evaluation
- Further Reading

Introduction

Text embeddings have become the backbone of modern natural-language applications, turning words, sentences, and documents into dense vectors that capture meaning, sentiment, and intent. These vectors enable fast similarity search, clustering, classification, and retrieval across massive corpora, powering everything from recommendation engines and semantic search to retrieval-augmented generation (RAG).
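The retrieval idea behind all of this can be sketched with toy vectors: once texts are embedded, cosine similarity ranks documents against a query. The 3-dimensional vectors below are made up for illustration, not real EmbeddingGemma outputs (which are much higher-dimensional):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings": an embedding model maps each text to a dense vector,
# placing semantically similar texts close together. Values are invented.
query = np.array([0.9, 0.1, 0.0])
docs = {
    "doc_about_same_topic": np.array([0.8, 0.2, 0.1]),
    "doc_about_other_topic": np.array([0.0, 0.1, 0.9]),
}

# Rank documents by similarity to the query, most similar first.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # → doc_about_same_topic
```

In a real RAG pipeline the vectors come from the model itself and the top-ranked documents are fed to a generator as context; the ranking step stays exactly this simple.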