[2509.04583] Instance-Wise Adaptive Sampling for Dataset Construction in Approximating Inverse Problem Solutions


Summary

This article presents a novel instance-wise adaptive sampling framework that constructs compact, informative training datasets for supervised learning of inverse problem solutions.

Why It Matters

The proposed method addresses the challenges of high data collection costs and inefficiencies in traditional training approaches. By dynamically adjusting the dataset based on specific test instances, it offers a scalable solution that can significantly improve accuracy and reduce resource expenditure in machine learning applications, particularly in complex inverse problems.

Key Takeaways

  • Introduces an adaptive sampling framework for dataset construction.
  • Improves sample efficiency by tailoring datasets to specific test instances.
  • Demonstrates effectiveness in inverse scattering problems with structured priors.
  • Offers a scalable alternative to conventional fixed-dataset training methods.
  • Applicable to a variety of inverse problems beyond the initial focus.

Computer Science > Machine Learning

arXiv:2509.04583 (cs) [Submitted on 4 Sep 2025 (v1), last revised 19 Feb 2026 (this version, v2)]

Title: Instance-Wise Adaptive Sampling for Dataset Construction in Approximating Inverse Problem Solutions

Authors: Jiequn Han, Kui Ren, Nathan Soedjak

Abstract: We propose an instance-wise adaptive sampling framework for constructing compact and informative training datasets for supervised learning of inverse problem solutions. Typical learning-based approaches aim to learn a general-purpose inverse map from datasets drawn from a prior distribution, with the training process independent of the specific test instance. When the prior has a high intrinsic dimension or when high accuracy of the learned solution is required, a large number of training samples may be needed, resulting in substantial data collection costs. In contrast, our method dynamically allocates sampling effort based on the specific test instance, enabling significant gains in sample efficiency. By iteratively refining the training dataset conditioned on the latest prediction, the proposed strategy tailors the dataset to the geometry of the inverse map around each test instance. We demonstrate the effectiveness of our approach in the inverse scattering problem under two types of structu...
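The iterative refinement described in the abstract can be sketched in a few lines of Python. Everything below is an illustrative assumption, not the paper's actual method: the toy forward operator, the linear least-squares "learner" standing in for a trained network, and the shrinking Gaussian sampling radius are all placeholders chosen to show the loop structure (sample around the current prediction, refit a local inverse map, update the prediction).

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x):
    # Toy monotone forward operator standing in for, e.g., a scattering model.
    return x + 0.3 * np.sin(x)

def fit_inverse(X, Y):
    # Fit a simple linear inverse map y -> x by least squares; a placeholder
    # for the supervised learner trained on the sampled pairs (X, Y).
    A = np.vstack([Y, np.ones_like(Y)]).T
    coef, *_ = np.linalg.lstsq(A, X, rcond=None)
    return lambda y: coef[0] * y + coef[1]

def adaptive_solve(y_obs, n_rounds=5, n_samples=50, radius=2.0, shrink=0.5):
    x_hat = 0.0  # initial guess (assumed prior mean)
    for _ in range(n_rounds):
        # Sample training inputs concentrated around the current prediction,
        # then evaluate the forward operator to build labeled pairs.
        X = x_hat + radius * rng.standard_normal(n_samples)
        Y = forward(X)
        inv = fit_inverse(X, Y)
        x_hat = inv(y_obs)   # refine the prediction with the local inverse map
        radius *= shrink     # focus the next round near the latest estimate
    return x_hat

x_true = 0.8
y_obs = forward(x_true)
x_est = adaptive_solve(y_obs)  # converges toward x_true as the radius shrinks
```

The key design choice the sketch illustrates is that each round's dataset is conditioned on the latest prediction, so sampling effort concentrates where the inverse map must be accurate for this particular measurement, rather than uniformly over the prior.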


