Deploy Hugging Face models easily with Amazon SageMaker
Published July 8, 2021 · Philipp Schmid (philschmid)

Earlier this year we announced a strategic collaboration with Amazon to make it easier for companies to use Hugging Face in Amazon SageMaker, and to ship cutting-edge machine learning features faster. We introduced new Hugging Face Deep Learning Containers (DLCs) to train Hugging Face Transformer models in Amazon SageMaker.

Today, we are excited to share a new inference solution that makes it easier than ever to deploy Hugging Face Transformers with Amazon SageMaker! With the new Hugging Face Inference DLCs, you can deploy your trained models for inference with just one more line of code, or select any of the 10,000+ publicly available models from the Model Hub and deploy them with Amazon SageMaker. Deploying models in SageMaker provides you with production-ready endpoints that scale easily within your AWS environment, with built-in monitoring and a ton of enterprise features. It's been an amazing collaboration and we hope you will take advantage of it!

Here's how to use the new SageMaker Hugging Face Inference Toolkit to deploy Transformers-based models:

```python
from sagemaker.huggingface import HuggingFaceModel

# create a Hugging Face Model class and deploy it as a SageMaker endpoint
huggingface_model = HuggingFaceModel(...).deploy()
```

That's it! 🚀

To learn more about accessing and using the new Hugging Face DLCs with the A...
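To make the one-liner above concrete, here is a fuller sketch of deploying a Model Hub checkpoint directly to an endpoint. It is a sketch under stated assumptions, not a definitive recipe: the model ID, IAM role ARN, instance type, and framework versions below are illustrative placeholders you would replace with values valid for your account and for the DLC versions available in your region. Running it creates (and bills for) a real SageMaker endpoint, so it is not something to execute blindly.

```python
from sagemaker.huggingface import HuggingFaceModel

# The Inference DLC reads these environment variables to know which
# Hub model to download and which pipeline task to serve.
hub_env = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",  # placeholder model
    "HF_TASK": "text-classification",                                  # placeholder task
}

huggingface_model = HuggingFaceModel(
    env=hub_env,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role ARN
    transformers_version="4.6",  # versions are illustrative; use a supported combination
    pytorch_version="1.7",
    py_version="py36",
)

# deploy() provisions the endpoint and returns a predictor bound to it.
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",  # placeholder instance type
)

# The toolkit expects a JSON payload with an "inputs" key, mirroring
# the transformers pipelines API.
result = predictor.predict({"inputs": "I love using Hugging Face on SageMaker!"})
```

When you are done experimenting, remember to call `predictor.delete_endpoint()` so the instance stops accruing charges.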