[2603.08639] UNBOX: Unveiling Black-box visual models with Natural-language
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.08639 (cs)

[Submitted on 9 Mar 2026 (v1), last revised 14 Apr 2026 (this version, v2)]

Title: UNBOX: Unveiling Black-box visual models with Natural-language

Authors: Simone Carnemolla, Chiara Russo, Simone Palazzo, Quentin Bouniot, Daniela Giordano, Zeynep Akata, Matteo Pennisi, Concetto Spampinato

Abstract: Ensuring trustworthiness in open-world visual recognition requires models that are interpretable, fair, and robust to distribution shifts. Yet modern vision systems are increasingly deployed as proprietary black-box APIs, exposing only output probabilities and hiding architecture, parameters, gradients, and training data. This opacity prevents meaningful auditing, bias detection, and failure analysis. Existing explanation methods assume white- or gray-box access or knowledge of the training distribution, making them unusable in these real-world settings. We introduce UNBOX, a framework for class-wise model dissection under fully data-free, gradient-free, and backpropagation-free constraints. UNBOX leverages Large Language Models and text-to-image diffusion models to recast activation maximization as a purely semantic search driven by output probabilities. The method produces human-interpretable text descriptors that maximally activate each c...
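The abstract describes recasting activation maximization as a semantic search: candidate text descriptors are proposed, rendered into images, and scored only by the black box's output probabilities. Below is a minimal, hedged sketch of that loop; all function names and the toy proposer, generator, and scorer are illustrative stand-ins, not the paper's actual components.

```python
# Hypothetical stand-ins for the pipeline sketched in the abstract:
# an LLM that proposes descriptors, a text-to-image generator, and a
# black-box classifier exposing only output probabilities.

def propose_descriptors(class_name, n=4):
    """Stub LLM: propose candidate text descriptors for a target class."""
    templates = ["a photo of a {}", "a close-up of a {}",
                 "a {} in the wild", "a sketch of a {}"]
    return [t.format(class_name) for t in templates[:n]]

def generate_image(descriptor):
    """Stub diffusion model: a real system would return pixels;
    here we pass the text through unchanged."""
    return descriptor

def black_box_probability(image, target_class):
    """Stub black-box API: returns a toy, deterministic proxy for
    P(target_class | image). No gradients or internals are exposed."""
    return len(set(image) & set(target_class)) / (len(image) + 1)

def semantic_activation_maximization(target_class):
    """Gradient-free, data-free search: keep the descriptor whose
    generated image maximizes the black box's class probability."""
    best, best_score = None, -1.0
    for descriptor in propose_descriptors(target_class):
        score = black_box_probability(generate_image(descriptor), target_class)
        if score > best_score:
            best, best_score = descriptor, score
    return best, best_score

desc, score = semantic_activation_maximization("zebra")
print(desc)
```

In a real instantiation, `propose_descriptors` would query an LLM, `generate_image` a diffusion model, and `black_box_probability` the deployed API; the selection loop itself needs nothing beyond output probabilities, which is the constraint the abstract emphasizes.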