[2602.20520] How Do Inpainting Artifacts Propagate to Language?
Summary
This paper investigates how visual artifacts from diffusion-based inpainting affect language generation in vision-language models, revealing consistent, measurable associations between image reconstruction quality and captioning performance.
Why It Matters
Understanding the impact of visual artifacts on language generation is crucial for improving multimodal AI systems. This research provides insights into how reconstruction fidelity influences the effectiveness of language models, which is vital for applications in computer vision and natural language processing.
Key Takeaways
- Inpainting artifacts systematically affect language generation quality.
- There is a measurable relationship between reconstruction fidelity and caption performance.
- The study employs a two-stage diagnostic setup for controlled comparisons.
- Artifacts induce systematic, layer-dependent changes in intermediate visual representations and attention patterns.
- Results provide a framework for assessing visual reconstruction's impact on language models.
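The two-stage diagnostic described above can be sketched in a few lines. The sketch below is illustrative, not the authors' code: it assumes a pixel-level MSE as a stand-in for the paper's reconstruction-fidelity metrics, a toy token-overlap score as a stand-in for lexical caption metrics, and a Pearson correlation across samples to test whether worse reconstructions yield worse captions. All function names and data are hypothetical.

```python
import math


def mse(original, reconstructed):
    """Pixel-level reconstruction error between two equal-length pixel sequences.
    Stand-in for the paper's fidelity metrics (e.g. PSNR/perceptual scores)."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)


def caption_overlap(ref_caption, hyp_caption):
    """Toy lexical similarity: fraction of reference tokens also present in the
    hypothesis caption. Stand-in for proper lexical/semantic caption metrics."""
    ref = set(ref_caption.lower().split())
    hyp = set(hyp_caption.lower().split())
    return len(ref & hyp) / len(ref) if ref else 0.0


def pearson(xs, ys):
    """Pearson correlation between per-sample reconstruction error and
    per-sample caption-quality drop."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Hypothetical data: three images as flat pixel lists, with inpainted versions
# of increasing degradation, plus captions from original vs. reconstructed input.
originals = [[10, 20, 30], [10, 20, 30], [10, 20, 30]]
inpainted = [[10, 20, 30], [12, 18, 33], [20, 10, 45]]
captions_orig = ["a red car on a road"] * 3
captions_inp = ["a red car on a road", "a red car on a street", "a vehicle outside"]

# Stage 1: reconstruction fidelity per image.
errors = [mse(o, r) for o, r in zip(originals, inpainted)]
# Stage 2: caption-quality drop per image (1 - overlap with the original caption).
drops = [1 - caption_overlap(c_o, c_i) for c_o, c_i in zip(captions_orig, captions_inp)]

print(pearson(errors, drops))  # positive: worse reconstruction, worse captions
```

In the paper's actual setup, the fidelity stage uses masked-region reconstruction by a diffusion inpainter, and the metrics are real pixel-level, perceptual, lexical, and semantic scores; this sketch only shows the shape of the controlled comparison.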
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.20520 (cs) [Submitted on 24 Feb 2026]
Title: How Do Inpainting Artifacts Propagate to Language?
Authors: Pratham Yashwante, Davit Abrahamyan, Shresth Grover, Sukruth Rao
Abstract: We study how visual artifacts introduced by diffusion-based inpainting affect language generation in vision-language models. We use a two-stage diagnostic setup in which masked image regions are reconstructed and then provided to captioning models, enabling controlled comparisons between captions generated from original and reconstructed inputs. Across multiple datasets, we analyze the relationship between reconstruction fidelity and downstream caption quality. We observe consistent associations between pixel-level and perceptual reconstruction metrics and both lexical and semantic captioning performance. Additional analysis of intermediate visual representations and attention patterns shows that inpainting artifacts lead to systematic, layer-dependent changes in model behavior. Together, these results provide a practical diagnostic framework for examining how visual reconstruction quality influences language generation in multimodal systems.
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.20520 [cs.CV]