CPU Optimized Embeddings with 🤗 Optimum Intel and fastRAG
Published March 15, 2024

Peter Izsak, Moshe Berchansky, Daniel Fleischer, Ella Charlaix, Morgan Funtowicz, Moshe Wasserblat

Embedding models are useful for many applications such as retrieval, reranking, clustering, and classification. In recent years the research community has made significant advances in embedding models, leading to substantial improvements in all applications built on semantic representation. Models such as BGE, GTE, and E5 sit at the top of the MTEB benchmark and in some cases outperform proprietary embedding services. Hugging Face's Model Hub hosts a variety of model sizes, from lightweight models (100-350M parameters) to 7B models such as Salesforce/SFR-Embedding-Mistral. The lightweight, encoder-based models are ideal candidates for optimization and deployment on CPU backends running semantic-search applications such as Retrieval Augmented Generation (RAG). In this blog, we will show how to unlock a significant performance boost on Xeon-based CPUs, and how easy it is to integrate optimized models into existing RAG pipelines using fastRAG.

Information Retrieval with Embedding Models

Embedding models encode textual data into dense vectors, capturing semantic and contextual meaning. Thi...
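Retrieval with embedding models reduces to ranking documents by vector similarity, most commonly cosine similarity between the query vector and each document vector. A minimal stdlib sketch of this ranking step, using toy 4-dimensional vectors as stand-ins for real model outputs (the vectors and document names here are purely illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for real encoder outputs.
query = [0.1, 0.3, 0.5, 0.1]
docs = {
    "doc_a": [0.1, 0.3, 0.5, 0.1],  # same direction as the query
    "doc_b": [0.5, 0.1, 0.1, 0.3],
}

# Rank documents by similarity to the query, highest first.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]),
                reverse=True)
print(ranked)
```

In practice the vectors come from an encoder model and the documents are pre-embedded and stored in a vector index, but the ranking principle is the same.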