Introducing Modular Diffusers - Composable Building Blocks for Diffusion Pipelines

Hugging Face Blog · 9 min read

Published March 5, 2026

Authors: YiYi Xu (YiYiXu), Alvaro Somoza (OzzyGT), Dhruv Nair (dn6), Sayak Paul (sayakpaul)

Modular Diffusers introduces a new way to build diffusion pipelines by composing reusable blocks. Instead of writing entire pipelines from scratch, you can mix and match blocks to create workflows tailored to your needs! This complements the existing DiffusionPipeline class with a more flexible, composable alternative.

In this post, we'll walk through how Modular Diffusers works: from the familiar API to run a modular pipeline, to building fully custom blocks and composing them into your own workflow. We'll also show how it integrates with Mellon, a node-based visual workflow interface that you can use to wire Modular Diffusers blocks together.

Table of contents

- Quickstart
- Custom Blocks
- Modular Repositories
- Community Pipelines
- Integration with Mellon

Quickstart

Here is a simple example of how to run inference with FLUX.2 Klein 4B using pre-built blocks:

```python
import torch
from diffusers import ModularPipeline

# Create a modular pipeline - this only defines the workflow;
# model weights have not been loaded yet
pipe = ModularPipeline.from_pretrained(
    "black-forest-labs/FLUX.2-klein-4B"
)

# Now load the model weights; configure dtype, quantization, etc. in this step
pipe.load_components(torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Gene...
```
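The core idea behind Modular Diffusers, pipelines assembled from reusable blocks that pass shared state between them, can be sketched in plain Python. This is an illustrative toy, not the actual diffusers API: the `Block` and `Pipeline` classes and their `run`/`add` methods below are invented for this example; the real library's block interface differs.

```python
from dataclasses import dataclass, field
from typing import Callable

# Toy sketch of composable pipeline blocks (NOT the diffusers API).
# Each block reads inputs from a shared state dict and writes its
# outputs back; a pipeline is just an ordered chain of blocks.

@dataclass
class Block:
    name: str
    fn: Callable[[dict], dict]

    def run(self, state: dict) -> dict:
        # Merge this block's outputs into the shared state.
        state.update(self.fn(state))
        return state

@dataclass
class Pipeline:
    blocks: list = field(default_factory=list)

    def add(self, block: Block) -> "Pipeline":
        self.blocks.append(block)
        return self  # allow chaining: pipe.add(a).add(b)

    def __call__(self, **inputs) -> dict:
        state = dict(inputs)
        for block in self.blocks:
            state = block.run(state)
        return state

# Blocks can be mixed and matched to build different workflows.
encode = Block("encode", lambda s: {"tokens": s["prompt"].split()})
count = Block("count", lambda s: {"n_tokens": len(s["tokens"])})

pipe = Pipeline().add(encode).add(count)
result = pipe(prompt="a photo of a cat")
print(result["n_tokens"])  # 5
```

The real library follows the same shape at a much larger scale: text encoding, denoising, and decoding are separate blocks operating on shared pipeline state, which is what lets you swap or rearrange them without rewriting the whole pipeline.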
