[2602.12533] AMPS: Adaptive Modality Preference Steering via Functional Entropy
Summary
The paper presents AMPS, a method for Adaptive Modality Preference Steering in Multimodal Large Language Models (MLLMs). It addresses two linked problems: modality preference (the tendency to over-rely on one input modality) and the difficulty of calibrating steering intensity, which varies from instance to instance.
Why It Matters
As MLLMs become increasingly prevalent, understanding and managing their modality preferences is crucial for improving their accuracy and reliability. This research offers a way to improve performance by adapting steering strength to each instance rather than applying one global intensity, which could make steering-based interventions more reliable in downstream AI systems.
Key Takeaways
- Introduces an instance-aware diagnostic metric for MLLMs.
- Proposes a scaling strategy to adjust modality preference sensitivity.
- Demonstrates that instance-aware steering outperforms conventional methods.
- Addresses the limitations of uniform steering intensity in MLLMs.
- Highlights the importance of balancing steering to minimize generation errors.
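To make the takeaways concrete, here is a minimal sketch of instance-aware steering scaling. The paper's actual functional-entropy metric and learnable scaling module are not reproduced in this digest, so the entropy diagnostic, the function names, and the linear attenuation rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def modality_contribution_entropy(scores):
    """Entropy-style diagnostic over per-modality contribution scores.

    `scores` is a hypothetical vector of nonnegative contributions
    (e.g., one value for vision, one for text). High entropy means the
    modalities contribute in a balanced way; low entropy means one
    modality dominates.
    """
    p = np.asarray(scores, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def adaptive_steering_scale(scores, base_strength=1.0):
    """Map the diagnostic to a per-instance steering strength in [0, base].

    Assumption for illustration: skewed contributions (low entropy) mark
    a sample as more susceptible to over-steering, so we attenuate the
    global strength toward zero for those samples.
    """
    h = modality_contribution_entropy(scores)
    h_max = np.log(len(scores))  # entropy of a uniform distribution
    return base_strength * (h / h_max)

def steer(hidden, direction, scores):
    """Apply a steering vector to a hidden state with an instance-specific scale."""
    alpha = adaptive_steering_scale(scores)
    return hidden + alpha * np.asarray(direction, dtype=float)
```

For example, a sample with balanced contributions `[0.5, 0.5]` keeps (nearly) the full steering strength, while a fully skewed sample `[1.0, 0.0]` has its steering suppressed to zero. A learned module, as the paper proposes, would replace this fixed entropy-to-scale rule with inferred scaling patterns.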
Computer Science > Machine Learning
arXiv:2602.12533 (cs) [Submitted on 13 Feb 2026]
Title: AMPS: Adaptive Modality Preference Steering via Functional Entropy
Authors: Zihan Huang, Xintong Li, Rohan Surana, Tong Yu, Rui Wang, Julian McAuley, Jingbo Shang, Junda Wu
Abstract: Multimodal Large Language Models (MLLMs) often exhibit significant modality preference, a tendency to favor one modality over another. Depending on the input, they may over-rely on linguistic priors relative to visual evidence, or conversely over-attend to visually salient cues while neglecting facts in textual contexts. Prior work has applied a uniform steering intensity to adjust the modality preference of MLLMs. However, strong steering can impair standard inference and increase error rates, whereas weak steering is often ineffective. In addition, because steering sensitivity varies substantially across multimodal instances, a single global strength is difficult to calibrate. To address this limitation with minimal disruption to inference, we introduce an instance-aware diagnostic metric that quantifies each modality's information contribution and reveals sample-specific susceptibility to steering. Building on these insights, we propose a scaling strategy that reduces steering for sensitive samples and a learnable module that infers scaling patterns, e...