Blazing Fast SetFit Inference with 🤗 Optimum Intel on Xeon
Published April 3, 2024

By Daniel Korat (Intel), Tom Aarsen, Oren Pereg (Intel), Moshe Wasserblat (Intel), Ella Charlaix, and Abirami Prabhakaran (Intel)

SetFit is a promising solution to a common modeling problem: how to deal with a lack of labeled training data. Developed with Hugging Face's research partners at Intel Labs and the UKP Lab, SetFit is an efficient framework for few-shot fine-tuning of Sentence Transformers models. SetFit achieves high accuracy with little labeled data: on the Banking 77 financial intent dataset, for example, 3-shot SetFit outperforms 3-shot prompting of GPT-3.5, and 5-shot SetFit also outperforms 3-shot GPT-4.

Compared to LLM-based methods, SetFit has two unique advantages:

🗣 No prompts or verbalisers: few-shot in-context learning with LLMs requires handcrafted prompts, which makes the results brittle, sensitive to phrasing, and dependent on user expertise. SetFit dispenses with prompts altogether by generating rich embeddings directly from a small number of labeled text examples.

🏎 Fast to train: SetFit doesn't rely on LLMs such as GPT-3.5 or Llama 2 to achieve high accuracy. As a result, it is typically an order of magnitude (or more) faster to train and run inference with.

For more details on SetFit, check out our paper, blog, code, and data.

SetFit has been wide...