[2602.14397] LRD-MPC: Efficient MPC Inference through Low-rank Decomposition


Summary

The paper presents LRD-MPC, a method that enhances the efficiency of secure multi-party computation (MPC) in machine learning by utilizing low-rank decomposition to reduce computational overhead.

Why It Matters

As secure multi-party computation becomes increasingly relevant in cloud-based machine learning applications, optimizing its efficiency is crucial. This research addresses significant computational and communication challenges, potentially improving the viability of secure inference services across various platforms.

Key Takeaways

  • LRD-MPC reduces computational costs in MPC by applying low-rank decomposition.
  • The method introduces optimizations like truncation skipping to enhance efficiency.
  • Experiments show significant speedups and energy savings in MPC protocols.

Computer Science > Cryptography and Security
arXiv:2602.14397 (cs) [Submitted on 16 Feb 2026]

Title: LRD-MPC: Efficient MPC Inference through Low-rank Decomposition
Authors: Tingting Tang, Yongqin Wang, Murali Annavaram

Abstract: Secure Multi-party Computation (MPC) enables untrusted parties to jointly compute a function without revealing their inputs. Its application to machine learning (ML) has gained significant attention, particularly for secure inference services deployed across multiple cloud virtual machines (VMs), where each VM acts as an MPC party. Model providers secret-share model weights, and users secret-share inputs, ensuring that each server operates only on random shares. While MPC provides strong cryptographic guarantees, it incurs substantial computational and communication overhead. Deep neural networks rely heavily on convolutional and fully connected layers, which require costly matrix multiplications in MPC. To reduce this cost, we propose leveraging low-rank decomposition (LRD) for linear layers, replacing one large matrix multiplication with two smaller ones. Each matrix multiplication in MPC incurs a round of communication, meaning decomposing one matrix multiplication into two leads to an additional communication round. Second, the added matrix multiplication requires an additional truncation…
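The core idea, replacing one large matrix multiplication with two smaller ones by factoring the weight matrix, can be sketched in plain NumPy. This is an illustrative sketch of the decomposition alone, not the paper's MPC protocol: the SVD-based factorization, the choice of rank r, and the multiplication-count comparison are assumptions added here for illustration.

```python
import numpy as np

# Hypothetical sketch of low-rank decomposition (LRD) for a linear layer.
# A weight matrix W (m x n) is approximated by two factors A (m x r) and
# B (r x n), so one large product x @ W.T becomes two smaller products
# (x @ B.T) @ A.T. In MPC each multiplication also costs a communication
# round, which is the trade-off the paper's optimizations address.

rng = np.random.default_rng(0)
m, n, r = 512, 512, 32

# Construct a weight matrix that is genuinely low-rank so the rank-r
# factorization is (numerically) exact in this toy example.
W = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Truncated SVD gives the rank-r factors A and B.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]          # shape (m, r)
B = Vt[:r, :]                 # shape (r, n)

x = rng.standard_normal((1, n))
y_full = x @ W.T              # one large matmul: m * n scalar mults
y_lrd = (x @ B.T) @ A.T       # two smaller matmuls: r * (m + n) mults

print("full-rank mults:", m * n)
print("low-rank  mults:", r * (m + n))
print("relative error :",
      np.linalg.norm(y_full - y_lrd) / np.linalg.norm(y_full))
```

With m = n = 512 and r = 32, the multiplication count drops from 512 × 512 to 32 × 1024, an 8x reduction, at the price of one extra multiplication (and hence, in MPC, one extra communication round and truncation), which is exactly the overhead the paper's truncation-skipping optimization targets.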
