[2603.24213] Uncovering Memorization in Timeseries Imputation models: LBRM Membership Inference and its link to attribute Leakage


arXiv - Machine Learning 4 min read

About this article


Computer Science > Machine Learning

arXiv:2603.24213 (cs) [Submitted on 25 Mar 2026]

Title: Uncovering Memorization in Timeseries Imputation models: LBRM Membership Inference and its link to attribute Leakage

Authors: Faiz Taleb, Ivan Gazeau, Maryline Laurent

Abstract: Deep learning models for time series imputation are now essential in fields such as healthcare, the Internet of Things (IoT), and finance. However, their deployment raises critical privacy concerns. Beyond the well-known issue of unintended memorization, which has been extensively studied in generative models, we demonstrate that time series models are vulnerable to inference attacks in a black-box setting. In this work, we introduce a two-stage attack framework comprising: (1) a novel membership inference attack based on a reference model that improves detection accuracy, even for models robust to overfitting-based attacks, and (2) the first attribute inference attack that predicts sensitive characteristics of the training data for time series imputation models. We evaluate these attacks on attention-based and autoencoder architectures in two scenarios: models trained from scratch, and fine-tuned models where the adversary has access to the initial weights. Our experimental results demonstrate t...
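The reference-model membership inference the abstract describes can be sketched in a generic form: compare the target model's imputation loss on a candidate series against the loss of a reference model trained on disjoint data, so that examples that are simply easy to impute do not trigger false positives. The sketch below is an illustrative assumption, not the paper's exact LBRM procedure; the function names, toy models, loss-ratio test, and threshold are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def imputation_loss(model, series, mask):
    """Mean squared error of the model's imputed values at masked positions."""
    pred = model(series, mask)
    return float(np.mean((pred[mask] - series[mask]) ** 2))

def reference_based_mia(target_model, reference_model, series, mask, threshold=0.8):
    """Flag `series` as a likely training member if the target model reconstructs
    it markedly better than a reference model trained on disjoint data.
    Dividing by the reference loss calibrates away intrinsically easy examples."""
    l_target = imputation_loss(target_model, series, mask)
    l_ref = imputation_loss(reference_model, series, mask)
    ratio = l_target / (l_ref + 1e-12)
    return ratio < threshold, ratio

# Toy stand-ins: a "memorizing" target that imputes near-perfectly,
# and a reference that falls back to the mean of the observed values.
series = np.sin(np.linspace(0, 4 * np.pi, 100)) + 0.05 * rng.normal(size=100)
mask = rng.random(100) < 0.2  # positions hidden from the model

target = lambda s, m: s + 0.01 * rng.normal(size=s.shape)  # near-memorized output
reference = lambda s, m: np.full_like(s, np.where(~m, s, 0).sum() / (~m).sum())

is_member, ratio = reference_based_mia(target, reference, series, mask)
print(is_member, ratio)
```

In this toy setup the target's loss is far below the reference's, so the ratio test flags the series as a member; in a real black-box attack both models would be trained imputation networks queried only through their outputs.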

Originally published on March 26, 2026. Curated by AI News.

Related Articles

LLMs

[R] GPT-5.4-mini regressed 22pp on vanilla prompting vs GPT-5-mini. Nobody noticed because benchmarks don't test this. Recursive Language Models solved it.

GPT-5.4-mini produces shorter, terser outputs by default. Vanilla accuracy dropped from 69.5% to 47.2% across 12 tasks (1,800 evals). The...

Reddit - Machine Learning · 1 min

AI Startups

Top 10 AI certifications and courses for 2026

This article reviews the top 10 AI certifications and courses for 2026, highlighting their significance in a rapidly evolving field and t...

AI Events · 15 min

Machine Learning

Hub Group Using AI, Machine Learning for Real-Time Visibility of Shipments

Hub Group says it’s using artificial intelligence and machine learning to leverage data from its GPS-equipped container fleet to give cus...

AI Events · 4 min

AI Infrastructure

UMKC Announces New Master of Science in Artificial Intelligence

UMKC announces a new Master of Science in Artificial Intelligence program aimed at addressing workforce demand for AI expertise, set to l...

AI News - General · 4 min

