Sentence Transformers in the Hugging Face Hub
Published June 28, 2021, by Omar Sanseviero (osanseviero) and Nils Reimers (nreimers)

Over the past few weeks, we've built collaborations with many open-source frameworks in the machine learning ecosystem. One that gets us particularly excited is Sentence Transformers.

Sentence Transformers is a framework for sentence, paragraph, and image embeddings. It lets you derive semantically meaningful embeddings (1), which are useful for applications such as semantic search or multilingual zero-shot classification. The Sentence Transformers v2 release brings a lot of cool new features:

- Sharing your models in the Hub easily.
- Widgets and the Inference API for sentence embeddings and sentence similarity.
- Better sentence-embedding models (benchmark and models in the Hub).

With over 90 pretrained Sentence Transformers models for more than 100 languages in the Hub, anyone can benefit from them and use them easily. Pretrained models can be loaded and used directly with a few lines of code:

```python
from sentence_transformers import SentenceTransformer

sentences = ["Hello World", "Hallo Welt"]

model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

But that's not all. People will probably want to either demo their models or play with other models easily, so we're happy to announce the release of two new widgets in th...
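The embeddings returned by `model.encode` are plain NumPy arrays, so semantic similarity between two sentences can be measured with, for example, cosine similarity (this is the metric behind the semantic-search use case mentioned above). A minimal sketch with NumPy; the vectors below are toy stand-ins for real model output, not actual embeddings:

```python
import numpy as np

def cos_sim(a, b):
    # cosine similarity: dot product of the vectors divided by
    # the product of their norms; close to 1.0 for similar sentences
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy 4-dimensional "embeddings" standing in for model.encode(...) output
emb_hello = np.array([0.10, 0.30, -0.20, 0.70])
emb_hallo = np.array([0.12, 0.28, -0.25, 0.65])

print(cos_sim(emb_hello, emb_hallo))
```

In practice you would pass the rows of the array returned by `encode` to such a function (the library also ships its own similarity utilities) and rank candidate sentences by the resulting score.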