[2510.24215] What Can Be Recovered Under Sparse Adversarial Corruption? Assumption-Free Theory for Linear Measurements

arXiv - Machine Learning 4 min read Article

Summary

This paper characterizes what can be recovered from linear measurements corrupted by a sparse adversarial vector, presenting an assumption-free theory that applies to arbitrary measurement matrices and signals.

Why It Matters

Understanding how to recover signals under adversarial corruption is crucial in fields like machine learning and signal processing. Because this theory makes no structural assumptions on the measurement matrix or the signal, it provides a foundation for robust algorithms that handle real-world data corruption, and it delineates exactly which part of the signal is recoverable and which is irretrievably lost.

Key Takeaways

  • Introduces an assumption-free theory for recovering signals from linear measurements under q-sparse adversarial corruption.
  • Identifies the smallest robust solution set as x⋆ + ker(U), where U is the projection onto the intersection of the row spaces of all submatrices of A with 2q rows deleted.
  • Demonstrates that every minimizer of the ℓ0-norm of the residual y − Ax lies in this robust set, yielding a constructive recovery approach.
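The last takeaway can be illustrated with a tiny numerical sketch. The brute-force enumeration below is not an algorithm from the paper (the function name `l0_decode` and the example data are hypothetical); it simply finds an x minimizing the number of nonzero entries of y − Ax by searching for the largest subset of measurements that x can fit exactly:

```python
import itertools

import numpy as np

def l0_decode(A, y, tol=1e-8):
    """Brute-force l0 decoder: return an x minimizing ||y - A x||_0.

    Tries residual sparsity k = 0, 1, ... and, for each k, every choice
    of m - k rows assumed clean. Exponential in m; illustration only.
    """
    m, n = A.shape
    for k in range(m + 1):
        for clean in itertools.combinations(range(m), m - k):
            rows = list(clean)
            # Least-squares fit on the rows assumed clean ...
            x, *_ = np.linalg.lstsq(A[rows], y[rows], rcond=None)
            # ... accepted only if it fits those rows exactly.
            if np.max(np.abs(A[rows] @ x - y[rows])) < tol:
                return x
    raise RuntimeError("unreachable")

# Hypothetical example: 5 measurements of a 2-dimensional signal,
# one entry corrupted adversarially (q = 1).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))
x_star = np.array([1.0, -2.0])
e = np.zeros(5)
e[2] = 3.0  # the sparse corruption
x_hat = l0_decode(A, A @ x_star + e)
```

For a generic 5×2 matrix, ker(U) is trivial (see the abstract below), so here the ℓ0 minimizer coincides with x⋆ itself.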

Computer Science > Information Theory · arXiv:2510.24215 (cs)
[Submitted on 28 Oct 2025 (v1), last revised 16 Feb 2026 (this version, v3)]

Title: What Can Be Recovered Under Sparse Adversarial Corruption? Assumption-Free Theory for Linear Measurements
Authors: Vishal Halder, Alexandre Reiffers-Masson, Abdeldjalil Aïssa-El-Bey, Gugan Thoppe

Abstract: Let $A \in \mathbb{R}^{m \times n}$ be an arbitrary, known matrix and $e$ a $q$-sparse adversarial vector. Given $y = A x^\star + e$ and $q$, we seek the smallest robust solution set containing $x^\star$ that is uniformly recoverable from $y$ without knowing $e$. While exact recovery of $x^\star$ via strong (and often impractical) structural assumptions on $A$ or $x^\star$ (e.g., restricted isometry, sparsity) is well studied, recoverability for arbitrary $A$ and $x^\star$ remains open. Our main result shows that the smallest robust solution set is $x^\star + \ker(U)$, where $U$ is the unique projection matrix onto the intersection of rowspaces of all possible submatrices of $A$ obtained by deleting $2q$ rows. Moreover, we prove that every $x$ that minimizes the $\ell_0$-norm of $y - A x$ lies in $x^\star + \ker(U)$, which then gives a constructive approach to recover this set.

Subjects: Information Theory (cs.IT); Machine...
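The projection $U$ from the abstract can be computed directly on small instances. The sketch below (my own construction, not code from the paper; `projection_U` is a hypothetical name) uses the fact that the intersection of subspaces is the orthogonal complement of the span of their orthogonal complements, i.e., of the kernels of the kept submatrices:

```python
import itertools

import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal columns spanning ker(M), via SVD."""
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vh[rank:].T

def projection_U(A, q):
    """Orthogonal projection onto the intersection of the row spaces of
    all submatrices of A with 2q rows deleted. Combinatorial in m; tiny
    instances only."""
    m, n = A.shape
    # Kernel of each kept submatrix = orthogonal complement of its row space.
    complements = [null_space(A[list(keep), :])
                   for keep in itertools.combinations(range(m), m - 2 * q)]
    C = np.hstack(complements)
    if C.shape[1] == 0:   # every kept row space is all of R^n
        return np.eye(n)
    W = null_space(C.T)   # orthonormal basis of the intersection
    return W @ W.T
```

On a generic 5×2 matrix with q = 1 this returns the identity (ker(U) = {0}, so x⋆ is exactly recoverable), while on a 3×2 matrix with q = 1 it returns the zero matrix (ker(U) = R², so nothing is recoverable): deleting 2q = 2 of 3 rows leaves single-row submatrices whose row spaces intersect trivially.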
