[2604.06205] Tool-MCoT: Tool Augmented Multimodal Chain-of-Thought for Content Safety Moderation



Computer Science > Computation and Language

arXiv:2604.06205 (cs) [Submitted on 15 Mar 2026]

Title: Tool-MCoT: Tool Augmented Multimodal Chain-of-Thought for Content Safety Moderation

Authors: Shutong Zhang, Dylan Zhou, Yinxiao Liu, Yang Yang, Huiwen Luo, Wenfei Zou

Abstract: The growth of online platforms and user-generated content requires strong content moderation systems that can handle complex inputs across various media types. While large language models (LLMs) are effective, their high computational cost and latency present significant challenges for scalable deployment. To address this, we introduce Tool-MCoT, a small language model (SLM) fine-tuned for content safety moderation that leverages an external tool framework. By training our model on tool-augmented chain-of-thought data generated by an LLM, we demonstrate that the SLM can learn to use these tools effectively to improve its reasoning and decision-making. Our experiments show that the fine-tuned SLM achieves significant performance gains. Furthermore, we show that the model can learn to use these tools selectively, balancing moderation accuracy against inference efficiency by calling tools only when necessary.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)

Cite as: arXiv:2604.06205 [cs.CL]

Originally published on April 09, 2026. Curated by AI News.

