[2512.07703] PVeRA: Probabilistic Vector-Based Random Matrix Adaptation


arXiv - Machine Learning 4 min read

About this article

Abstract page for arXiv paper 2512.07703: PVeRA: Probabilistic Vector-Based Random Matrix Adaptation

Computer Science > Computer Vision and Pattern Recognition

arXiv:2512.07703 (cs) [Submitted on 8 Dec 2025 (v1), last revised 30 Apr 2026 (this version, v2)]

Title: PVeRA: Probabilistic Vector-Based Random Matrix Adaptation

Authors: Leo Fillioux, Enzo Ferrante, Paul-Henry Cournède, Maria Vakalopoulou, Stergios Christodoulidis

Abstract: Large foundation models have emerged in recent years and are pushing performance boundaries across a variety of tasks. Training or even finetuning such models demands vast datasets and computational resources, which are often scarce and costly. Adaptation methods provide a computationally efficient way around these limitations: new trainable modules, containing only a fraction of the model's parameters, are appended to a frozen backbone, and only these modules are fitted on novel tasks, so the model can be adapted with small amounts of data and compute. Recently, the VeRA adapter was shown to excel at parameter-efficient adaptation by utilizing a pair of frozen random low-rank matrices shared across all layers. In this paper, we propose PVeRA, a probabilistic version of the VeRA adapter, which modifies the low-rank matrices of VeRA in a probabilistic manner. This modification naturally allows handling inherent ambiguities in the input and allows for different sampling configurations…
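To make the adapter idea concrete, here is a minimal numpy sketch of a VeRA-style linear layer as the abstract describes it: a frozen pretrained weight, a pair of frozen random low-rank matrices that would be shared across layers, and small trainable scaling vectors as the only adapted parameters. The dimensions, variable names, and zero-initialization of `b` are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2  # illustrative sizes; r is the low rank

# Frozen pretrained weight of one linear layer
W0 = rng.standard_normal((d_out, d_in))

# Frozen random low-rank matrices, shared across all adapted layers
A = rng.standard_normal((r, d_in))
B = rng.standard_normal((d_out, r))

# Per-layer trainable scaling vectors -- the only parameters fitted
# on the new task. Initializing b to zero makes the adapter a no-op
# at the start of finetuning (an assumed but common initialization).
b = np.zeros(d_out)
d_vec = 0.1 * rng.standard_normal(r)

def vera_forward(x):
    # h = W0 x + diag(b) B diag(d_vec) A x
    return W0 @ x + b * (B @ (d_vec * (A @ x)))

x = rng.standard_normal(d_in)
# With b = 0, the adapted layer matches the frozen layer exactly
assert np.allclose(vera_forward(x), W0 @ x)
```

A probabilistic variant in the spirit of PVeRA would replace the deterministic low-rank path with sampled quantities (e.g. drawing from a learned distribution at each forward pass), which is what enables the different sampling configurations the abstract mentions; the exact parameterization is not specified in this excerpt.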

Originally published on May 01, 2026. Curated by AI News.

