Bringing serverless GPU inference to Hugging Face users
Published April 2, 2024
By Philipp Schmid (philschmid), Jeff Boudier (jeffboudier), Rita Kozlov (rita3ko, guest), and Nikhil Kothari (nkothariCF, guest)

Update (November 2024): The integration is no longer available. Please switch to the Hugging Face Inference API, Inference Endpoints, or other deployment options for your AI model needs.

Today, we are thrilled to announce the launch of Deploy on Cloudflare Workers AI, a new integration on the Hugging Face Hub. Deploy on Cloudflare Workers AI makes it easy to use open models as a serverless API, powered by state-of-the-art GPUs deployed in Cloudflare edge data centers. Starting today, we are integrating some of the most popular open models on Hugging Face into Cloudflare Workers AI, powered by our production solutions, like Text Generation Inference.

With Deploy on Cloudflare Workers AI, developers can build robust Generative AI applications without managing GPU infrastructure or servers, and at a very low operating cost: they pay only for the compute they use, not for idle capacity.

Generative AI for Developers

This new experience expands on the strategic partnership we announced last year to simplify the access and deployment of open Generative AI models. One of the main problems developers and organizations face is the scarcity of GPU availability and the fixed cost of deploying servers before they can start building. Deploy on Cloudflare Workers AI …
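To make the serverless model concrete, here is a minimal sketch of what calling an open model from a Cloudflare Worker looks like with the Workers AI binding. The model ID and the `AI` binding name are illustrative (the Hugging Face integration is no longer available, per the update above), but the `env.AI.run()` pattern is the standard Workers AI API:

```ts
// A minimal sketch of serverless inference with Cloudflare Workers AI.
// Assumes a Workers AI binding named "AI" configured in wrangler.toml:
//   [ai]
//   binding = "AI"
// Types like `Ai` come from the @cloudflare/workers-types package.

export interface Env {
  AI: Ai;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Read the prompt from the incoming JSON request body.
    const { prompt } = (await request.json()) as { prompt: string };

    // Run inference on a serverless GPU in a Cloudflare edge data center.
    // The model ID below is illustrative of the "@hf/..." IDs used by the
    // Hugging Face integration at the time.
    const result = await env.AI.run("@hf/thebloke/zephyr-7b-beta-awq", {
      prompt,
    });

    return Response.json(result);
  },
};
```

A client would then call this Worker with a POST request carrying a JSON body such as `{"prompt": "What is serverless inference?"}`, and pay only for the compute consumed by that request.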