[2512.20821] Divided We Fall: Defending Against Adversarial Attacks via Soft-Gated Fractional Mixture-of-Experts with Randomized Adversarial Training



Summary

The paper presents a novel defense against adversarial attacks in machine learning: a soft-gated fractional mixture-of-experts architecture trained with randomized adversarial training, which demonstrates improved robustness against white-box evasion attacks over existing defenses.

Why It Matters

As machine learning models become increasingly integral to various applications, their vulnerability to adversarial attacks poses significant risks. This research offers a promising solution that enhances model security, which is crucial for real-world deployment in sensitive areas like finance and healthcare.

Key Takeaways

  • Introduces a defense system utilizing a mixture-of-experts architecture.
  • Demonstrates superior performance against white-box evasion attacks compared to existing methods.
  • Employs randomized adversarial training to enhance model robustness.
  • Utilizes nine pre-trained classifiers to optimize performance.
  • Validates effectiveness on benchmark datasets CIFAR-10 and SVHN.
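The "randomized adversarial training" takeaway can be illustrated with a minimal pure-Python sketch: rather than crafting every adversarial example with one fixed perturbation budget, the budget is sampled per example. The FGSM-style sign step and the `eps_range` values here are illustrative assumptions, not the paper's exact scheme.

```python
import random

def fgsm_perturb(x, grad, eps):
    """FGSM-style step: shift each coordinate by eps in the sign of the
    loss gradient (toy version with the gradient supplied analytically)."""
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

def randomized_adv_example(x, grad, eps_range=(0.01, 0.1), rng=random):
    """Sample the perturbation budget per example instead of fixing it --
    one plausible reading of 'randomized' adversarial training (assumed
    interpretation for illustration)."""
    eps = rng.uniform(*eps_range)
    return fgsm_perturb(x, grad, eps), eps

# toy usage: a 3-feature input and a hand-supplied loss gradient
x = [0.2, -0.5, 0.1]
grad = [0.3, -0.7, 0.0]
x_adv, eps = randomized_adv_example(x, grad)
```

In a real pipeline the gradient would come from backpropagation through the model under attack; the randomized budget is what keeps the defense from overfitting to a single attack strength.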

Computer Science > Machine Learning
arXiv:2512.20821 (cs) [Submitted on 23 Dec 2025 (v1), last revised 22 Feb 2026 (this version, v2)]

Title: Divided We Fall: Defending Against Adversarial Attacks via Soft-Gated Fractional Mixture-of-Experts with Randomized Adversarial Training
Authors: Mohammad Meymani, Roozbeh Razavi-Far

Abstract: Machine learning is a powerful tool that enables full automation of a large number of tasks without explicit programming. Despite recent progress of machine learning across domains, these models have shown vulnerabilities when exposed to adversarial threats. Adversarial threats aim to prevent machine learning models from satisfying their objectives. They can create adversarial perturbations that are imperceptible to the human eye yet cause misclassification at inference time. In this paper, we propose a defense system that devises an adversarial training module within a mixture-of-experts architecture to enhance robustness against white-box evasion attacks. In our proposed defense system, we use nine pre-trained classifiers (experts) with ResNet-18 as their backbone. During end-to-end training, the parameters of all experts and the gating mechanism are jointly updated, allowing ...
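The soft-gating idea described in the abstract, a gate that assigns each expert a continuous weight and blends their outputs, can be sketched in a few lines of pure Python. The expert and gate functions below are toy stand-ins, not the paper's ResNet-18 experts or its actual gating network.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def soft_gated_moe(x, experts, gate):
    """Soft-gated mixture: the gate scores every expert, scores are
    softmaxed into weights, and expert logits are blended per class."""
    weights = softmax(gate(x))           # one weight per expert, sums to 1
    outputs = [f(x) for f in experts]    # each expert returns class logits
    n_classes = len(outputs[0])
    return [sum(w * out[c] for w, out in zip(weights, outputs))
            for c in range(n_classes)]

# toy usage: three "experts" producing 2-class logits, a trivial gate
experts = [lambda x, b=b: [x + b, -x + b] for b in (0.0, 0.5, 1.0)]
gate = lambda x: [x, 0.0, -x]
blended = soft_gated_moe(1.0, experts, gate)
```

Because the gate output is a softmax rather than a hard argmax, every expert contributes a fraction of the final prediction, and gradients flow to all experts during end-to-end training, consistent with the joint update of experts and gate described in the abstract.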
