[2506.19420] Commander-GPT: Dividing and Routing for Multimodal Sarcasm Detection
Computer Science > Artificial Intelligence
arXiv:2506.19420 (cs)
[Submitted on 24 Jun 2025 (v1), last revised 8 Apr 2026 (this version, v2)]

Title: Commander-GPT: Dividing and Routing for Multimodal Sarcasm Detection
Authors: Yazhou Zhang, Chunwang Zou, Bo Wang, Jing Qin, Prayag Tiwari

Abstract: Multimodal sarcasm understanding is a high-order cognitive task. Although large language models (LLMs) have shown impressive performance on many downstream NLP tasks, growing evidence suggests that they struggle with sarcasm understanding. In this paper, we propose Commander-GPT, a modular decision-routing framework inspired by military command theory. Rather than relying on a single LLM's capability, Commander-GPT orchestrates a team of specialized LLM agents, each selectively assigned a focused sub-task such as keyword extraction or sentiment analysis. Their outputs are then routed back to the commander, which integrates the information and performs the final sarcasm judgment. To coordinate these agents, we introduce three types of centralized commanders: (1) a trained lightweight encoder-based commander (e.g., multimodal BERT); (2) four small autoregressive language models serving as moderately capable commanders (e.g., DeepSeek-VL); (3) two large LLM-based commanders (Gemini Pro and GPT-4o) t...
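The divide-and-route pattern the abstract describes can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: the agent and commander names are hypothetical, and the toy heuristics stand in for the LLM agents and LLM commander the paper actually uses.

```python
from typing import Callable, Dict, List

Agent = Callable[[str], str]

def keyword_agent(text: str) -> str:
    # Stand-in for an LLM keyword-extraction agent.
    words = [w.strip(".,!") for w in text.split()]
    return "keywords: " + ", ".join(w for w in words if len(w) > 5)

def sentiment_agent(text: str) -> str:
    # Stand-in for an LLM sentiment agent (naive lexicon lookup).
    positive = {"love", "great", "wonderful"}
    tone = "positive" if any(
        w.lower().strip(".,!") in positive for w in text.split()
    ) else "neutral"
    return f"sentiment: {tone}"

class Commander:
    """Selects sub-tasks, dispatches them to specialist agents,
    and integrates their reports into a final sarcasm judgment."""

    def __init__(self, agents: Dict[str, Agent]):
        self.agents = agents

    def route(self, text: str, sub_tasks: List[str]) -> Dict[str, str]:
        # Route each selected sub-task to its specialist agent.
        return {task: self.agents[task](text) for task in sub_tasks}

    def judge(self, text: str) -> str:
        reports = self.route(text, ["keywords", "sentiment"])
        # Toy integration rule: positive surface sentiment paired with a
        # negative situation cue suggests sarcasm. In the paper, this
        # integration step is performed by an LLM commander.
        if "positive" in reports["sentiment"] and "delayed" in text.lower():
            return "sarcastic"
        return "not sarcastic"

commander = Commander({"keywords": keyword_agent, "sentiment": sentiment_agent})
print(commander.judge("I just love it when my flight is delayed for five hours!"))
# → sarcastic
```

The key design point mirrored here is that no single component does everything: each agent sees only its own sub-task, and only the commander sees all reports and makes the final call.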