[2601.06338] Circuit Mechanisms for Spatial Relation Generation in Diffusion Transformers
Computer Science > Artificial Intelligence
arXiv:2601.06338 (cs)
[Submitted on 9 Jan 2026 (v1), last revised 4 Apr 2026 (this version, v2)]

Title: Circuit Mechanisms for Spatial Relation Generation in Diffusion Transformers
Authors: Binxu Wang, Jingxuan Fan, Xu Pan

Abstract: Diffusion Transformers (DiTs) have greatly advanced text-to-image generation, but models still struggle to generate the correct spatial relations between objects as specified in the text prompt. In this study, we adopt a mechanistic interpretability approach to investigate how a DiT can generate correct spatial relations between objects. We train DiTs of different sizes from scratch, with different text encoders, to generate images containing two objects whose attributes and spatial relations are specified in the text prompt. We find that, although all the models learn this task to near-perfect accuracy, the underlying mechanisms differ drastically depending on the choice of text encoder. When using random text embeddings, we find that the spatial-relation information is passed to image tokens through a two-stage circuit, involving two cross-attention heads that separately read the spatial relation and single-object attributes in the text prompt. When using a pretrained text encoder (T5), we find that the DiT uses a differe...
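The circuit described in the abstract routes text information into image tokens via cross-attention heads, where image tokens act as queries and text tokens as keys/values. A minimal NumPy sketch of a single such head (all names and dimensions here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_head(img_tokens, txt_tokens, Wq, Wk, Wv):
    """One cross-attention head: image tokens (queries) read
    information from text tokens (keys/values)."""
    Q = img_tokens @ Wq          # (n_img, d_head)
    K = txt_tokens @ Wk          # (n_txt, d_head)
    V = txt_tokens @ Wv          # (n_txt, d_head)
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (n_img, n_txt)
    return attn @ V, attn        # per-image-token readout, attention map

# Toy shapes for illustration only
rng = np.random.default_rng(0)
d_model, d_head = 16, 8
img = rng.normal(size=(4, d_model))   # 4 image tokens
txt = rng.normal(size=(6, d_model))   # 6 text tokens (e.g. prompt)
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out, attn = cross_attention_head(img, txt, Wq, Wk, Wv)
print(out.shape, attn.shape)  # (4, 8) (4, 6)
```

Inspecting which text tokens each head's `attn` map attends to (relation words vs. object attributes) is the kind of analysis a mechanistic study of such a circuit would perform.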