[2602.17700] MIDAS: Mosaic Input-Specific Differentiable Architecture Search
Summary
MIDAS introduces a differentiable neural architecture search method that replaces static architecture parameters with input-specific parameters computed via self-attention, improving model robustness and performance.
Why It Matters
This research addresses the limitations of existing neural architecture search methods by conditioning the architecture on each input rather than fixing it globally. MIDAS's ability to tailor architectures to specific inputs could lead to significant advancements in machine learning applications, particularly in areas requiring high accuracy and adaptability.
Key Takeaways
- MIDAS modernizes DARTS by integrating dynamic, input-specific parameters.
- The method achieves state-of-the-art results on multiple benchmark datasets.
- Patchwise attention enhances the selection process of candidate operations.
- MIDAS simplifies architecture selection through a parameter-free search space.
- The approach demonstrates improved robustness and class-aware parameter distributions.
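The core contrast in the first takeaway can be sketched in a few lines: classic DARTS learns one static weight vector over candidate operations per edge, whereas MIDAS-style search derives those weights from the input itself. The sketch below is a minimal illustration under stated assumptions; `W_attn` is a hypothetical linear scorer standing in for the paper's self-attention module, not the actual MIDAS architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

N_OPS = 4   # candidate operations on one edge of the cell
FEAT = 8    # dimension of a pooled input embedding (illustrative)

# Classic DARTS: one static parameter vector per edge, shared by all inputs.
alpha_static = rng.normal(size=N_OPS)
w_static = softmax(alpha_static)   # the same operation mixture for every input

# MIDAS-style (sketch): architecture weights are a function of the input.
# `W_attn` is a stand-in for the paper's self-attention computation.
W_attn = rng.normal(size=(FEAT, N_OPS))

def input_specific_weights(x):
    """Map an input embedding to a softmax over candidate operations."""
    return softmax(x @ W_attn)

x1, x2 = rng.normal(size=FEAT), rng.normal(size=FEAT)
w1, w2 = input_specific_weights(x1), input_specific_weights(x2)

assert np.allclose(w1.sum(), 1.0) and np.allclose(w2.sum(), 1.0)
assert not np.allclose(w1, w2)   # different inputs induce different mixtures
```

The point of the sketch is only the functional dependence: `w_static` is fixed after search, while `input_specific_weights` yields a different operation mixture for each input at inference time.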
arXiv:2602.17700 (cs) — Computer Science > Machine Learning
Submitted on 6 Feb 2026
Title: MIDAS: Mosaic Input-Specific Differentiable Architecture Search
Authors: Konstanty Subbotko
Abstract: Differentiable Neural Architecture Search (NAS) provides efficient, gradient-based methods for automatically designing neural networks, yet its adoption remains limited in practice. We present MIDAS, a novel approach that modernizes DARTS by replacing static architecture parameters with dynamic, input-specific parameters computed via self-attention. To improve robustness, MIDAS (i) localizes the architecture selection by computing it separately for each spatial patch of the activation map, and (ii) introduces a parameter-free, topology-aware search space that models node connectivity and simplifies selecting the two incoming edges per node. We evaluate MIDAS on the DARTS, NAS-Bench-201, and RDARTS search spaces. In DARTS, it reaches 97.42% top-1 on CIFAR-10 and 83.38% on CIFAR-100. In NAS-Bench-201, it consistently finds globally optimal architectures. In RDARTS, it sets the state of the art on two of four search spaces on CIFAR-10. We further analyze why MIDAS works, showing that patchwise attention improves discrimination among candidate operations, and the resulting input-specific parameter distributions are class-aware and predominantly u...
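The abstract's point (i), localizing architecture selection per spatial patch, can be illustrated with a short sketch: split an activation map into patches and compute a separate softmax over candidate operations for each patch. This is a minimal illustration, not the paper's implementation; `W_proj` is a hypothetical per-patch scorer standing in for the patchwise attention described above, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

C, H, W = 8, 4, 4   # channels and spatial dims of an activation map
P = 2               # patch size, giving (H//P) * (W//P) patches
N_OPS = 3           # candidate operations to choose among

fmap = rng.normal(size=(C, H, W))
W_proj = rng.normal(size=(C * P * P, N_OPS))   # hypothetical patch scorer

# One softmax over candidate operations per spatial patch, instead of a
# single global architecture decision for the whole activation map.
weights = np.zeros((H // P, W // P, N_OPS))
for i in range(H // P):
    for j in range(W // P):
        patch = fmap[:, i * P:(i + 1) * P, j * P:(j + 1) * P].reshape(-1)
        weights[i, j] = softmax(patch @ W_proj)

assert weights.shape == (2, 2, 3)
assert np.allclose(weights.sum(axis=-1), 1.0)   # each patch sums to one
```

Each patch thus carries its own operation mixture, which is the localization the abstract credits with improving discrimination among candidate operations.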