[2602.10449] A Unified Theory of Random Projection for Influence Functions

arXiv - Machine Learning

Summary

This paper develops a unified theory of random projection (sketching) for influence functions, characterizing when sketches preserve influence scores exactly or approximately and addressing the cost of influence computation in overparametrized models.

Why It Matters

Influence functions underpin data attribution, but computing them requires inverting a curvature matrix whose size scales with the number of model parameters. Understanding when random projections preserve influence scores is therefore essential for sketching safely in high-dimensional settings, and the paper's theoretical framework gives practitioners concrete guidance for choosing the sketch size.

Key Takeaways

  • Without regularization, projection preserves influence functions exactly if and only if the sketch P is injective on the range of the curvature operator F, which requires a sketch size m ≥ rank(F).
  • Ridge regularization changes the picture: exactness is lost, and approximation guarantees instead depend on the effective dimension of the regularized curvature.
  • Together, these results form a novel theory of influence preservation and yield practical guidance for sketch-size selection in machine learning.
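The unregularized claim can be checked numerically: a Gaussian sketch with m ≥ rank(F) is almost surely injective on range(F), so the sketched score (Pg)ᵀ(PFPᵀ)⁺(Pg′) should match gᵀF⁺g′ up to round-off. The construction below is a minimal NumPy sketch of that check (our own illustration, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 200, 10, 12  # ambient dim, rank(F), sketch size with m >= rank(F)

# Low-rank PSD curvature F = B B^T, so range(F) = range(B) and rank(F) = r.
B = rng.standard_normal((d, r))
F = B @ B.T

# Gradients constrained to range(F), matching the theorem's hypothesis.
g = B @ rng.standard_normal(r)
g_prime = B @ rng.standard_normal(r)

# Ground-truth influence; pseudoinverse because F is singular (rank r < d).
exact = g @ np.linalg.pinv(F) @ g_prime

# A Gaussian sketch is almost surely injective on the r-dimensional
# range(F) whenever m >= r, so preservation should be exact here.
P = rng.standard_normal((m, d)) / np.sqrt(m)
sketched = (P @ g) @ np.linalg.pinv(P @ F @ P.T) @ (P @ g_prime)

print(abs(sketched - exact))  # tiny: floating-point round-off only
```

Repeating the experiment with m < r breaks injectivity on range(F), and the two scores no longer agree, consistent with the necessity of m ≥ rank(F).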

Computer Science > Machine Learning — arXiv:2602.10449 (cs)

[Submitted on 11 Feb 2026 (v1), last revised 13 Feb 2026 (this version, v2)]

Title: A Unified Theory of Random Projection for Influence Functions

Authors: Pingbang Hu, Yuzheng Hu, Jiaqi W. Ma, Han Zhao

Abstract: Influence functions and related data attribution scores take the form $g^{\top}F^{-1}g^{\prime}$, where $F\succeq 0$ is a curvature operator. In modern overparametrized models, forming or inverting $F\in\mathbb{R}^{d\times d}$ is prohibitive, motivating scalable influence computation via random projection with a sketch $P \in \mathbb{R}^{m\times d}$. This practice is commonly justified via the Johnson--Lindenstrauss (JL) lemma, which ensures approximate preservation of Euclidean geometry for a fixed dataset. However, JL does not address how sketching behaves under inversion. Furthermore, there is no existing theory that explains how sketching interacts with other widely-used techniques, such as ridge regularization and structured curvature approximations. We develop a unified theory characterizing when projection provably preserves influence functions. When $g,g^{\prime}\in\text{range}(F)$, we show that: 1) Unregularized projection: exact preservation holds iff $P$ is injective on $\text{range}(F)$, which necessitates $m\geq \text{rank}(F)$; 2) Regulari...
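In the regularized case, one natural sketched estimator replaces $g^{\top}(F+\lambda I_d)^{-1}g^{\prime}$ with $(Pg)^{\top}(PFP^{\top}+\lambda I_m)^{-1}(Pg^{\prime})$; this form is our reading of the abstract's setup, not necessarily the paper's exact construction. Under it, preservation is only approximate, with error that typically shrinks as the sketch size m grows relative to the effective dimension of the curvature. An illustrative NumPy experiment (all constants and the spectrum are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, lam = 10, 1.0

# PSD curvature with a fast-decaying spectrum, i.e. small effective dimension.
eigvals = 1.0 / (1.0 + np.arange(d)) ** 2
U = np.linalg.qr(rng.standard_normal((d, d)))[0]
F = (U * eigvals) @ U.T

g = rng.standard_normal(d)
g_prime = rng.standard_normal(d)

# Exact ridge-regularized influence score.
exact = g @ np.linalg.solve(F + lam * np.eye(d), g_prime)

errs = {}
for m in (10, 100, 4000):
    # Gaussian sketch; the ridge term now lives in the m-dimensional space.
    P = rng.standard_normal((m, d)) / np.sqrt(m)
    approx = (P @ g) @ np.linalg.solve(
        P @ F @ P.T + lam * np.eye(m), P @ g_prime
    )
    errs[m] = abs(approx - exact) / max(abs(exact), 1.0)
    print(m, errs[m])  # error typically shrinks as m grows
```

The contrast with the unregularized case is the point: no finite m gives exact equality here, so the relevant question becomes how fast the error decays, which the paper ties to the effective dimension rather than the rank.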
