[2602.16690] Synthetic-Powered Multiple Testing with FDR Control

arXiv - Machine Learning

Summary

The paper presents SynthBH, a novel method for multiple hypothesis testing that integrates synthetic data to enhance statistical inference while controlling the false discovery rate (FDR).

Why It Matters

This research addresses a key challenge in statistical inference: leveraging auxiliary or synthetic data to improve the sample efficiency and power of hypothesis testing without sacrificing error control. The implications are significant for fields such as genomics and drug screening, where false discoveries are costly.

Key Takeaways

  • SynthBH method enhances multiple testing by incorporating synthetic data.
  • Guarantees finite-sample FDR control under specific conditions.
  • Adapts to the quality of synthetic data, improving sample efficiency.
  • Demonstrated effectiveness in outlier detection and genomic analyses.
  • Provides a framework for safer statistical inference in various applications.

Statistics > Methodology · arXiv:2602.16690 (stat) · Submitted on 18 Feb 2026

Title: Synthetic-Powered Multiple Testing with FDR Control

Authors: Yonghoon Lee, Meshi Bashari, Edgar Dobriban, Yaniv Romano

Abstract: Multiple hypothesis testing with false discovery rate (FDR) control is a fundamental problem in statistical inference, with broad applications in genomics, drug screening, and outlier detection. In many such settings, researchers may have access not only to real experimental observations but also to auxiliary or synthetic data -- from past, related experiments or generated by generative models -- that can provide additional evidence about the hypotheses of interest. We introduce SynthBH, a synthetic-powered multiple testing procedure that safely leverages such synthetic data. We prove that SynthBH guarantees finite-sample, distribution-free FDR control under a mild PRDS-type positive dependence condition, without requiring the pooled-data p-values to be valid under the null. The proposed method adapts to the (unknown) quality of the synthetic data: it enhances the sample efficiency and may boost the power when synthetic data are of high quality, while controlling the FDR at a user-specified level regardless of their quality. We demonstrate the empirical performance of SynthBH on tabular outlier detection benchmarks and on...
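For context, SynthBH belongs to the family of Benjamini-Hochberg (BH) style procedures, which control the FDR under the same PRDS-type positive dependence condition mentioned in the abstract. The sketch below implements the classical BH step-up procedure only, not the paper's SynthBH method (its synthetic-data mechanism is not reproduced here); the function name and example p-values are illustrative.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    """Classical Benjamini-Hochberg step-up procedure.

    Returns a boolean mask of rejected hypotheses, controlling the FDR
    at level alpha under independence or PRDS positive dependence.
    """
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    sorted_p = p[order]
    # Find the largest k with p_(k) <= (k / m) * alpha.
    thresholds = alpha * np.arange(1, m + 1) / m
    below = sorted_p <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))  # last sorted index meeting the bound
        reject[order[: k + 1]] = True  # reject all hypotheses up to that index
    return reject

# Example: six p-values tested at FDR level 0.05.
pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.6]
print(benjamini_hochberg(pvals, alpha=0.05))
```

SynthBH's contribution, per the abstract, is to adapt this kind of procedure to incorporate pooled real-plus-synthetic evidence while keeping the finite-sample FDR guarantee even when the synthetic data are of poor quality.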
