[2601.06338] Circuit Mechanisms for Spatial Relation Generation in Diffusion Transformers

arXiv - AI 4 min read

About this article

Computer Science > Artificial Intelligence
arXiv:2601.06338 (cs) [Submitted on 9 Jan 2026 (v1), last revised 4 Apr 2026 (this version, v2)]

Title: Circuit Mechanisms for Spatial Relation Generation in Diffusion Transformers
Authors: Binxu Wang, Jingxuan Fan, Xu Pan

Abstract: Diffusion Transformers (DiTs) have greatly advanced text-to-image generation, but models still struggle to generate the correct spatial relations between objects as specified in the text prompt. In this study, we adopt a mechanistic interpretability approach to investigate how a DiT can generate correct spatial relations between objects. We train, from scratch, DiTs of different sizes with different text encoders to learn to generate images containing two objects whose attributes and spatial relations are specified in the text prompt. We find that, although all the models can learn this task to near-perfect accuracy, the underlying mechanisms differ drastically depending on the choice of text encoder. When using random text embeddings, we find that the spatial-relation information is passed to image tokens through a two-stage circuit, involving two cross-attention heads that separately read the spatial relation and single-object attributes in the text prompt. When using a pretrained text encoder (T5), we find that the DiT uses a differe...
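
The two-stage circuit described in the abstract is the kind of finding that activation patching on individual cross-attention heads can surface. Below is a minimal, hypothetical sketch (not the authors' released code) of head-level patching in PyTorch: cache each head's output on a "left of" prompt, swap one head's cached output into a "right of" run, and measure how much the image tokens shift. All module names, dimensions, and the random stand-in embeddings are illustrative assumptions.

```python
# Minimal sketch of cross-attention head patching, under assumed toy sizes.
import torch
import torch.nn as nn


class CrossAttention(nn.Module):
    """Toy multi-head cross-attention: image tokens attend to text tokens."""

    def __init__(self, dim, n_heads):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.q = nn.Linear(dim, dim)       # queries from image tokens
        self.kv = nn.Linear(dim, 2 * dim)  # keys/values from text tokens
        self.out = nn.Linear(dim, dim)

    def forward(self, img, txt, patch=None):
        B, N, D = img.shape
        q = self.q(img).view(B, N, self.n_heads, self.head_dim).transpose(1, 2)
        k, v = self.kv(txt).chunk(2, dim=-1)
        k = k.view(B, -1, self.n_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, -1, self.n_heads, self.head_dim).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        per_head = attn.softmax(dim=-1) @ v      # (B, heads, N, head_dim)
        if patch is not None:                    # overwrite selected heads
            for h, cached in patch.items():
                per_head[:, h] = cached
        self.head_cache = per_head.detach()      # keep per-head outputs
        return self.out(per_head.transpose(1, 2).reshape(B, N, D))


# Usage: cache heads on the "left of" prompt, patch head 3 into the
# "right of" run, and compare against the unpatched "right of" run.
layer = CrossAttention(dim=64, n_heads=4)
img = torch.randn(1, 16, 64)                     # 16 image tokens
txt_left, txt_right = torch.randn(1, 8, 64), torch.randn(1, 8, 64)

_ = layer(img, txt_left)                         # clean run fills the cache
clean_head3 = layer.head_cache[:, 3]
patched = layer(img, txt_right, patch={3: clean_head3})
baseline = layer(img, txt_right)
print("effect of patching head 3:", (patched - baseline).norm().item())
```

In the paper's setting one would presumably score the patched run with a relation-accuracy metric on generated images rather than a raw norm; the norm above is only a cheap stand-in for "how much this head moves the image tokens."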

Originally published on April 07, 2026. Curated by AI News.
