[2602.17677] Reducing Text Bias in Synthetically Generated MCQAs for VLMs in Autonomous Driving
Summary
This paper addresses text bias in synthetically generated multiple-choice question answering (MCQA) data for Vision Language Models (VLMs) in autonomous driving, presenting a method that shifts model reliance from linguistic shortcuts to visual context.
Why It Matters
As autonomous driving technology advances, ensuring that VLMs interpret visual data accurately rather than being misled by textual cues is crucial for safety and reliability. This research addresses a significant flaw in synthetically generated MCQA benchmarks, making evaluations of such AI systems more trustworthy in real-world applications.
Key Takeaways
- Synthetically generated MCQAs often allow models to exploit linguistic patterns instead of visual context.
- The proposed method reduces blind (text-only) accuracy from +66.9% above random chance to +2.9%, indicating that most exploitable textual shortcuts are removed (see the probe sketch after this list).
- Decoupling answers from linguistic artifacts encourages models to focus on visual grounding.
- A curriculum learning strategy during fine-tuning further encourages the model to rely on visual grounding.
- This research contributes to safer AI applications in autonomous driving.
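To make the blind-accuracy diagnostic concrete, here is a minimal sketch of a text-only probe: each question is answered with the image withheld, and the result is compared against random chance. The `model.answer` interface and the `question`/`options`/`answer` field names are assumptions made for this sketch, not the paper's actual code or data schema.

```python
# Hedged sketch of a text-only ("blind") probe: answer each question without
# the image and report accuracy relative to random chance. The model
# interface and dataset fields below are illustrative assumptions.
def blind_accuracy(model, mcqa_items):
    """Return (text-only accuracy, gap above random chance)."""
    correct = 0
    chance_total = 0.0
    for item in mcqa_items:
        # Only the question and answer options are passed; image=None
        # withholds all visual context from the model.
        prediction = model.answer(question=item["question"],
                                  options=item["options"],
                                  image=None)
        correct += int(prediction == item["answer"])
        chance_total += 1.0 / len(item["options"])
    n = len(mcqa_items)
    accuracy = correct / n
    # A large positive gap indicates exploitable textual shortcuts
    # in the benchmark itself.
    return accuracy, accuracy - chance_total / n
```

A gap near zero, as the paper reports after de-biasing, suggests the questions can no longer be answered from text alone.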
Computer Science > Machine Learning
arXiv:2602.17677 (cs) [Submitted on 28 Jan 2026]
Title: Reducing Text Bias in Synthetically Generated MCQAs for VLMs in Autonomous Driving
Authors: Sutej Kulgod, Sean Ye, Sanchit Tanwar, Christoffer Heckman
Abstract: Multiple Choice Question Answering (MCQA) benchmarks are an established standard for measuring Vision Language Model (VLM) performance in driving tasks. However, we observe the known phenomenon that synthetically generated MCQAs are highly susceptible to hidden textual cues that allow models to exploit linguistic patterns rather than visual context. Our results show that a VLM fine-tuned on such data can achieve accuracy comparable to human-validated benchmarks even without visual input. Our proposed method reduces blind accuracy from +66.9% above random to +2.9%, eliminating the vast majority of exploitable textual shortcuts. By decoupling the correct answer from linguistic artifacts and employing a curriculum learning strategy, we force the model to rely on visual grounding, ensuring that performance accurately reflects perceptual understanding.
Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL); Robotics (cs.RO)
Cite as: arXiv:2602.17677 [cs.LG], https://doi.org/10.4855...
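The abstract names two levers: decoupling the correct answer from linguistic artifacts, and a curriculum learning strategy. As a rough illustration of the second lever, below is a minimal easy-to-hard curriculum schedule; the per-sample `difficulty` field and the staging scheme are assumptions made for this sketch, not the authors' published training recipe.

```python
# Minimal sketch of an easy-to-hard curriculum schedule, assuming each
# training sample carries a precomputed "difficulty" score (a hypothetical
# field; the paper's actual curriculum criterion may differ).
def curriculum_batches(samples, num_stages=3, batch_size=8):
    """Yield mini-batches stage by stage, starting from the easiest samples."""
    ordered = sorted(samples, key=lambda s: s["difficulty"])
    stage_size = max(1, len(ordered) // num_stages)
    for stage in range(num_stages):
        # Each stage draws from everything seen so far plus the next slice,
        # so harder samples are introduced gradually.
        pool = ordered[: stage_size * (stage + 1)]
        for start in range(0, len(pool), batch_size):
            yield pool[start:start + batch_size]

# Example usage with toy samples.
if __name__ == "__main__":
    toy = [{"id": i, "difficulty": d}
           for i, d in enumerate([0.9, 0.1, 0.5, 0.3, 0.7])]
    for batch in curriculum_batches(toy, num_stages=2, batch_size=2):
        print([s["id"] for s in batch])
```

In this kind of schedule, early stages expose the model only to samples it can ground easily, and later stages mix in progressively harder ones, which is one common way curriculum learning is applied during fine-tuning.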