[2601.07315] VLM-CAD: VLM-Optimized Collaborative Agent Design Workflow for Analog Circuit Sizing
Computer Science > Multiagent Systems
arXiv:2601.07315 (cs)
[Submitted on 12 Jan 2026 (v1), last revised 24 Mar 2026 (this version, v4)]

Title: VLM-CAD: VLM-Optimized Collaborative Agent Design Workflow for Analog Circuit Sizing
Authors: Guanyuan Pan, Shuai Wang, Yugui Lin, Tiansheng Zhou, Pietro Liò, Zhenxin Zhao, Yaqi Wang

Abstract: Vision Language Models (VLMs) have demonstrated remarkable potential in multimodal reasoning, yet they inherently suffer from spatial blindness and logical hallucinations when interpreting densely structured engineering content, such as analog circuit schematics. To address these challenges, we propose a Vision Language Model-Optimized Collaborative Agent Design Workflow for Analog Circuit Sizing (VLM-CAD), designed for robust, step-by-step reasoning over multimodal evidence. VLM-CAD bridges the modality gap by integrating a neuro-symbolic structural parsing module, Image2Net, which transforms raw pixels into explicit topological graphs and structured JSON representations to anchor VLM interpretation in deterministic facts. To ensure the reliability required for engineering decisions, we further propose ExTuRBO, an Explainable Trust Region Bayesian Optimization method. ExTuRBO serves as an explainable grounding engine, employing agent-generated semantic seeds to warm-...
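The trust-region mechanism the abstract alludes to can be illustrated with a minimal sketch. This is not the paper's ExTuRBO: the objective `sphere`, the seed points, and all hyperparameters are illustrative placeholders, and the Gaussian-process surrogate used by real trust-region Bayesian optimization is omitted, with candidates scored by direct evaluation for brevity. What the sketch does show is the warm-start from seed points and the TuRBO-style expand/shrink rule on the trust region.

```python
import numpy as np

def sphere(x):
    # Toy objective standing in for a circuit performance metric (hypothetical).
    return float(np.sum((x - 0.3) ** 2))

def trust_region_search(f, seeds, bounds, iters=40, batch=8, seed=0):
    """Minimal trust-region minimization warm-started from seed points.

    Sketches the shrink/expand rule of trust-region BO; the surrogate
    model is replaced by direct evaluation of f for brevity.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Warm start: evaluate the seed designs and center the region on the best.
    evals = [(np.asarray(s, float), f(np.asarray(s, float))) for s in seeds]
    center, best = min(evals, key=lambda t: t[1])
    length = 0.4 * (hi - lo)          # initial trust-region half-width
    succ = fail = 0
    for _ in range(iters):
        # Sample a batch of candidates inside the (clipped) trust region.
        cand = rng.uniform(np.maximum(lo, center - length),
                           np.minimum(hi, center + length),
                           size=(batch, center.size))
        vals = [f(c) for c in cand]
        i = int(np.argmin(vals))
        if vals[i] < best:            # success: recenter on the improvement
            center, best = cand[i], vals[i]
            succ, fail = succ + 1, 0
        else:
            succ, fail = 0, fail + 1
        if succ >= 3:                 # expand after consecutive successes
            length, succ = min(length * 2.0, hi - lo), 0
        elif fail >= 3:               # shrink after consecutive failures
            length, fail = length / 2.0, 0
    return center, best

x_best, v_best = trust_region_search(sphere,
                                     seeds=[[0.0, 0.0], [0.8, 0.8]],
                                     bounds=(0.0, 1.0))
```

Seeding the loop with plausible designs, as the abstract describes for the agent-generated semantic seeds, lets the trust region start near a good basin instead of spending its early budget on global exploration.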