[2602.12469] Regularized Meta-Learning for Improved Generalization
Summary
The paper presents a regularized meta-learning framework aimed at improving generalization in ensemble methods by addressing redundancy, instability, and overfitting.
Why It Matters
This research matters because it tackles three recurring failure modes of ensemble methods, which are widely used for predictive modeling: redundancy among base models, unstable weighting, and overfitting. By improving generalization while lowering computational cost, the proposed framework could make stacked ensembles more efficient and reliable across domains.
Key Takeaways
- Introduces a four-stage regularized meta-learning framework.
- Addresses issues of redundancy and overfitting in ensemble methods.
- Achieves improved predictive performance with lower computational costs.
- Demonstrates a significant reduction in the effective condition number of the meta-design matrix.
- Provides a stable stacking strategy for high-dimensional ensemble systems.
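The redundancy-aware de-duplication named in the takeaways above can be illustrated with a short sketch. This is our own minimal reconstruction, not the paper's code: the greedy column scan and the function name `deduplicate_predictions` are assumptions, while the 0.95 correlation threshold comes from the abstract. Dropping near-collinear prediction columns directly improves the conditioning of the meta-design matrix:

```python
import numpy as np

def deduplicate_predictions(P, tau_corr=0.95):
    """Greedily drop prediction columns whose absolute Pearson
    correlation with an already-kept column exceeds tau_corr."""
    corr = np.abs(np.corrcoef(P, rowvar=False))
    kept = []
    for j in range(P.shape[1]):
        if all(corr[j, k] < tau_corr for k in kept):
            kept.append(j)
    return P[:, kept], kept

# Two nearly collinear "base models" plus one diverse model.
rng = np.random.default_rng(0)
a = rng.normal(size=1000)
P = np.column_stack([a,
                     a + 1e-3 * rng.normal(size=1000),
                     rng.normal(size=1000)])
P_red, kept = deduplicate_predictions(P)
print(kept)  # the near-duplicate second column is dropped
print(np.linalg.cond(P_red) < np.linalg.cond(P))  # conditioning improves
```

The same idea extends to the paper's multi-metric variant, which additionally screens columns by an MSE threshold.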
Computer Science > Machine Learning
arXiv:2602.12469 (cs) [Submitted on 12 Feb 2026]
Authors: Noor Islam S. Mohammad, Md Muntaqim Meherab
Abstract: Deep ensemble methods often improve predictive performance, yet they suffer from three practical limitations: redundancy among base models that inflates computational cost and degrades conditioning, unstable weighting under multicollinearity, and overfitting in meta-learning pipelines. We propose a regularized meta-learning framework that addresses these challenges through a four-stage pipeline combining redundancy-aware projection, statistical meta-feature augmentation, and cross-validated regularized meta-models (Ridge, Lasso, and ElasticNet). Our multi-metric de-duplication strategy removes near-collinear predictors using correlation and MSE thresholds ($\tau_{\text{corr}}=0.95$), reducing the effective condition number of the meta-design matrix while preserving predictive diversity. Engineered ensemble statistics and interaction terms recover higher-order structure unavailable to raw prediction columns. A final inverse-RMSE blending stage mitigates regularizer-selection variance. On the Playground Series S6E1 benchmark (100K samples, 72 base models), the proposed framework achieves an out-of-fold RMSE of 8.58...
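The meta-feature augmentation stage described in the abstract, which adds engineered ensemble statistics and interaction terms to the raw prediction columns, can be sketched as follows. This is an illustrative guess at what such features look like (row-wise statistics plus pairwise products); the exact feature set and the name `augment_meta_features` are our assumptions:

```python
import numpy as np

def augment_meta_features(P):
    """Append row-wise ensemble statistics and pairwise interaction
    terms to a base-model prediction matrix P of shape (n_samples, k)."""
    stats = np.column_stack([P.mean(axis=1), P.std(axis=1),
                             P.min(axis=1), P.max(axis=1)])
    pairs = [P[:, i] * P[:, j]
             for i in range(P.shape[1])
             for j in range(i + 1, P.shape[1])]
    return np.column_stack([P, stats] + pairs)

P = np.random.default_rng(1).normal(size=(100, 3))
Z = augment_meta_features(P)
print(Z.shape)  # 3 raw + 4 stats + 3 interactions -> (100, 10)
```

These engineered columns expose higher-order structure, such as ensemble disagreement, that a linear meta-model cannot recover from the raw prediction columns alone.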
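The later pipeline stages, cross-validated regularized meta-models followed by inverse-RMSE blending, can be sketched with scikit-learn. This is a minimal stand-in, not the authors' implementation: the synthetic data replaces the real meta-design matrix, and the hyperparameter grids are our assumptions. The blending step weights each regularizer's out-of-fold predictions by the inverse of its RMSE, so better models contribute more:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for the meta-design matrix of base-model predictions.
X, y = make_regression(n_samples=500, n_features=10, noise=5.0,
                       random_state=0)

models = {
    "ridge": RidgeCV(alphas=np.logspace(-3, 3, 13)),
    "lasso": LassoCV(cv=5, random_state=0),
    "enet": ElasticNetCV(cv=5, random_state=0),
}

# Out-of-fold predictions and per-model RMSE.
oof, rmse = {}, {}
for name, model in models.items():
    oof[name] = cross_val_predict(model, X, y, cv=5)
    rmse[name] = np.sqrt(np.mean((y - oof[name]) ** 2))

# Inverse-RMSE blending: weights proportional to 1 / RMSE.
w = {name: 1.0 / r for name, r in rmse.items()}
total = sum(w.values())
blend = sum((w[name] / total) * oof[name] for name in models)
blend_rmse = np.sqrt(np.mean((y - blend) ** 2))
print(blend_rmse <= max(rmse.values()))  # convex blend never beats the worst model's error
```

Because the blend is a convex combination of the three prediction vectors, the triangle inequality guarantees its RMSE is no worse than the largest individual RMSE, which is how this stage hedges against picking the wrong regularizer.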