[2603.19949] TAPAS: Efficient Two-Server Asymmetric Private Aggregation Beyond Prio(+)

arXiv - Machine Learning 4 min read

About this article

Abstract page for arXiv paper 2603.19949: TAPAS: Efficient Two-Server Asymmetric Private Aggregation Beyond Prio(+)

Computer Science > Cryptography and Security
arXiv:2603.19949 (cs) [Submitted on 20 Mar 2026]

Title: TAPAS: Efficient Two-Server Asymmetric Private Aggregation Beyond Prio(+)
Authors: Harish Karthikeyan, Antigoni Polychroniadou

Abstract: Privacy-preserving aggregation is a cornerstone for AI systems that learn from distributed data without exposing individual records, especially in federated learning and telemetry. Existing two-server protocols (e.g., Prio and its successors) set a practical baseline by validating inputs while preventing any single party from learning users' values, but they impose symmetric costs on both servers, and their communication scales with the per-client input dimension $L$. Modern learning tasks routinely involve dimensionalities $L$ in the tens to hundreds of millions of model parameters. We present TAPAS, a two-server asymmetric private aggregation scheme that addresses these limitations along four dimensions: (i) no trusted setup or preprocessing, (ii) server-side communication that is independent of $L$, (iii) post-quantum security based solely on standard lattice assumptions (LWE, SIS), and (iv) stronger robustness with identifiable abort and full malicious security for the servers. A key design choice is intentional asymmetry: one server bears the $O(L)$ aggregation a...
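To make the two-server setting concrete, here is a minimal sketch of additive secret sharing, the basic primitive behind Prio-style aggregation that TAPAS builds on. All names and the choice of modulus are illustrative, not taken from the paper; TAPAS's actual protocol adds input validation, lattice-based security, and its asymmetric server roles on top of this idea.

```python
# Minimal sketch of two-server additive secret sharing over Z_p.
# Each client splits its value into two shares; each share alone is
# uniformly random, so neither server learns anything on its own.
import secrets

P = 2**61 - 1  # illustrative prime modulus


def share(value: int) -> tuple[int, int]:
    """Split `value` into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (value - r) % P


def aggregate(shares: list[int]) -> int:
    """Each server locally sums the shares it received."""
    return sum(shares) % P


def reconstruct(agg0: int, agg1: int) -> int:
    """Combining the two per-server aggregates reveals only the total."""
    return (agg0 + agg1) % P


# Three clients submit private values; the servers jointly learn the sum.
values = [5, 12, 30]
client_shares = [share(v) for v in values]
total = reconstruct(
    aggregate([s0 for s0, _ in client_shares]),
    aggregate([s1 for _, s1 in client_shares]),
)
assert total == sum(values)  # 47
```

In this basic scheme each client sends one share per coordinate, so communication grows with the input dimension $L$; the abstract's point is that TAPAS keeps server-side communication independent of $L$.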

Originally published on March 23, 2026. Curated by AI News.

Related Articles

AI Infrastructure

[D] thoughts on the controversy about Google's new paper?

Openreview: https://openreview.net/forum?id=tO3ASKZlok It's sad to see almost no one mention this on Reddit and people are being mean to ...

Reddit - Machine Learning · 1 min ·
Machine Learning

[D] MXFP8 GEMM: Up to 99% of cuBLAS performance using CUDA + PTX

New blog post by Daniel Vega-Myhre (Meta/PyTorch) illustrating GEMM design for FP8, including deep-dives into all the constraints and des...

Reddit - Machine Learning · 1 min ·
AI Infrastructure

UMKC Announces New Master of Science in Artificial Intelligence

UMKC announces a new Master of Science in Artificial Intelligence program aimed at addressing workforce demand for AI expertise, set to l...

AI News - General · 4 min ·
LLMs

[2603.15159] To See is Not to Master: Teaching LLMs to Use Private Libraries for Code Generation

Abstract page for arXiv paper 2603.15159: To See is Not to Master: Teaching LLMs to Use Private Libraries for Code Generation

arXiv - AI · 4 min ·
