[2602.12630] TensorCommitments: A Lightweight Verifiable Inference for Language Models

Summary

The paper introduces TensorCommitments, a novel proof-of-inference scheme designed to enhance the security of large language model (LLM) inference by ensuring verifiability without requiring extensive computational resources.

Why It Matters

As reliance on cloud-based LLMs grows, ensuring their integrity and security becomes critical. TensorCommitments addresses the challenge of verifying inference results efficiently, which is vital for applications where trust and security are paramount, such as in sensitive data processing.

Key Takeaways

  • TensorCommitments provides a lightweight method for verifiable LLM inference.
  • The proposed scheme improves robustness against tailored LLM attacks by up to 48%.
  • It adds minimal overhead to inference times (0.97% for prover, 0.12% for verifier).
  • Existing cryptographic methods do not scale to LLM-sized models, which motivates TensorCommitments (TCs).
  • The approach utilizes multivariate Terkle Trees for effective commitment management.

Computer Science > Cryptography and Security
arXiv:2602.12630 (cs) [Submitted on 13 Feb 2026]

Title: TensorCommitments: A Lightweight Verifiable Inference for Language Models

Authors: Oguzhan Baser, Elahe Sadeghi, Eric Wang, David Ribeiro Alves, Sam Kazemian, Hong Kang, Sandeep P. Chinchali, Sriram Vishwanath

Abstract: Most large language models (LLMs) run on external clouds: users send a prompt, pay for inference, and must trust that the remote GPU executes the LLM without any adversarial tampering. We critically ask how to achieve verifiable LLM inference, where a prover (the service) must convince a verifier (the client) that an inference was run correctly without rerunning the LLM. Existing cryptographic works are too slow at the LLM scale, while non-cryptographic ones require a strong verifier GPU. We propose TensorCommitments (TCs), a tensor-native proof-of-inference scheme. TC binds the LLM inference to a commitment, an irreversible tag that breaks under tampering, organized in our multivariate Terkle Trees. For LLaMA2, TC adds only 0.97% prover and 0.12% verifier time over inference while improving robustness to tailored LLM attacks by up to 48% over the best prior work requiring a verifier GPU.

Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI)
Cite as: arXiv:260...
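To make the commitment idea concrete: a minimal sketch of hash-based tamper detection over tensor data, in the spirit of what the abstract describes. This is a generic Merkle-tree construction, not the paper's actual multivariate Terkle Tree scheme; the function names (`merkle_root`, `commit_tensor`) and the row-wise chunking are illustrative assumptions.

```python
import hashlib

def _h(data: bytes) -> bytes:
    # SHA-256 as the collision-resistant hash underpinning the commitment
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    """Compute a Merkle root over a list of byte chunks."""
    level = [_h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def commit_tensor(tensor_rows):
    """Commit to a tensor (here: a list of rows of floats) via a Merkle root.

    Illustrative only -- the paper's Terkle Trees organize commitments
    differently; this just shows the bind-then-break property.
    """
    chunks = [b"".join(f"{x:.6f}".encode() for x in row) for row in tensor_rows]
    return merkle_root(chunks)

# Prover commits to an activation tensor alongside the inference result.
activations = [[0.12, -0.5, 1.0], [2.25, 0.0, -3.5]]
commitment = commit_tensor(activations)

# Verifier recomputes the commitment from the claimed tensor; it matches.
assert commit_tensor(activations) == commitment

# Tampering with a single value breaks the commitment.
tampered = [[0.12, -0.5, 1.0], [2.25, 0.0, -3.4]]
assert commit_tensor(tampered) != commitment
```

The key property, as in any commitment scheme, is that the tag is cheap to compute and recheck but infeasible to preserve under tampering; the paper's contribution is organizing such tags so that verification costs a small fraction of inference time.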

Related Articles

  • [2601.22451] Countering the Over-Reliance Trap: Mitigating Object Hallucination for LVLMs via a Self-Validation Framework
  • [2601.21463] Unifying Speech Editing Detection and Content Localization via Prior-Enhanced Audio LLMs
  • [2601.16206] Computer Environments Elicit General Agentic Intelligence in LLMs
  • [2601.15356] Q-Probe: Scaling Image Quality Assessment to High Resolution via Context-Aware Agentic Probing