[2603.01361] MixerCSeg: An Efficient Mixer Architecture for Crack Segmentation via Decoupled Mamba Attention

About this article

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.01361 (cs) · Submitted on 2 Mar 2026

Title: MixerCSeg: An Efficient Mixer Architecture for Crack Segmentation via Decoupled Mamba Attention

Authors: Zilong Zhao, Zhengming Ding, Pei Niu, Wenhao Sun, Feng Guo

Abstract: Feature encoders play a key role in pixel-level crack segmentation by shaping the representation of fine textures and thin structures. Existing CNN-, Transformer-, and Mamba-based models each capture only part of the required spatial or structural information, leaving clear gaps in modeling complex crack patterns. To address this, we present MixerCSeg, a mixer architecture designed like a coordinated team of specialists, where CNN-like pathways focus on local textures, Transformer-style paths capture global dependencies, and Mamba-inspired flows model sequential context within a single encoder. At the core of MixerCSeg is the TransMixer, which explores Mamba's latent attention behavior while establishing dedicated pathways that naturally express both locality and global awareness. To further enhance structural fidelity, we introduce a spatial block processing strategy and a Direction-guided Edge Gated Convolution (DEGConv) that strengthens edge sensitivity under irre...
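The abstract describes a single encoder that mixes three specialist pathways: a CNN-like local path, a Transformer-style global-attention path, and a Mamba-inspired sequential path. The paper's actual TransMixer and DEGConv designs are not detailed in the abstract, so the following is only a minimal, hypothetical NumPy sketch of the general three-pathway mixing idea (all function names, the averaging kernel, the single-head attention, and the decay-based recurrence are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def local_path(x):
    # CNN-like pathway (illustrative): depthwise 3x3 averaging over an
    # (H, W, C) feature map to mix local texture information.
    H, W, C = x.shape
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += padded[i:i + H, j:j + W, :]
    return out / 9.0

def global_path(x):
    # Transformer-style pathway (illustrative): single-head self-attention
    # over flattened spatial tokens to capture global dependencies.
    H, W, C = x.shape
    tokens = x.reshape(H * W, C)
    scores = tokens @ tokens.T / np.sqrt(C)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return (attn @ tokens).reshape(H, W, C)

def sequential_path(x, decay=0.9):
    # Mamba-inspired pathway (illustrative): a simple linear recurrence
    # (state-space-style scan) over the row-major token sequence.
    H, W, C = x.shape
    tokens = x.reshape(H * W, C)
    state = np.zeros(C)
    out = np.empty_like(tokens)
    for t in range(H * W):
        state = decay * state + (1.0 - decay) * tokens[t]
        out[t] = state
    return out.reshape(H, W, C)

def mixer_block(x):
    # Combine the three pathways with a residual connection, in the spirit
    # of a "coordinated team of specialists" within one encoder block.
    return x + local_path(x) + global_path(x) + sequential_path(x)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 16))
out = mixer_block(feat)
print(out.shape)  # (8, 8, 16)
```

The sketch only shows how heterogeneous pathways can share one input and be fused additively; the paper's decoupled Mamba attention presumably replaces these toy components with learned, parameterized operators.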

Originally published on March 03, 2026. Curated by AI News.

