[2602.13469] How Multimodal Large Language Models Support Access to Visual Information: A Diary Study With Blind and Low Vision People


arXiv - AI · 4 min read · Article

Summary

This article explores how multimodal large language models (MLLMs) enhance access to visual information for blind and low vision individuals, based on a two-week diary study with 20 participants.

Why It Matters

The study highlights the potential of MLLMs to improve daily life for blind and low vision users by providing conversational assistance for visual interpretation. Understanding their effectiveness and limitations is crucial for developing better assistive technologies.

Key Takeaways

  • MLLMs can improve access to visual information for blind and low vision users.
  • Participants found MLLM-generated visual interpretations somewhat trustworthy and satisfying.
  • The accuracy of MLLM responses is limited: 22.2% of answers were incorrect.
  • The study proposes a 'visual assistant' skill to enhance the reliability of MLLM applications.
  • Practical guidelines are suggested for future MLLM-enabled visual interpretation tools.

Computer Science > Human-Computer Interaction
arXiv:2602.13469 (cs) [Submitted on 13 Feb 2026]

Title: How Multimodal Large Language Models Support Access to Visual Information: A Diary Study With Blind and Low Vision People
Authors: Ricardo E. Gonzalez Penuela, Crescentia Jung, Sharon Y Lin, Ruiying Hu, Shiri Azenkot

Abstract: Multimodal large language models (MLLMs) are changing how Blind and Low Vision (BLV) people access visual information in their daily lives. Unlike traditional visual interpretation tools that provide access through captions and OCR (text recognition through camera input), MLLM-enabled applications support access through conversational assistance, where users can ask questions to obtain goal-relevant details. However, evidence about their performance in the real world and their implications for BLV people's everyday lives remains limited. To address this, we conducted a two-week diary study in which we captured 20 BLV participants' use of an MLLM-enabled visual interpretation application. Although participants rated the visual interpretations of the application as "somewhat trustworthy" (mean=3.76 out of 5, max=very trustworthy) and "somewhat satisfying" (mean=4.13 out of 5, max=very satisfying), the AI often produced incorrect answers (22.2%) or a...

Related Articles


Have Companies Begun Adopting Claude Co-Work at an Enterprise Level?

Hi Guys, My company is considering purchasing the Claude Enterprise plan. The main two constraints are: - Being able to block usage of Cl...

Reddit - Artificial Intelligence · 1 min

What I learned about multi-agent coordination running 9 specialized Claude agents

I've been experimenting with multi-agent AI systems and ended up building something more ambitious than I originally planned: a fully ope...

Reddit - Artificial Intelligence · 1 min

[D] The problem with comparing AI memory system benchmarks — different evaluation methods make scores meaningless

I've been reviewing how various AI memory systems evaluate their performance and noticed a fundamental issue with cross-system comparison...

Reddit - Machine Learning · 1 min

Shifting to AI model customization is an architectural imperative | MIT Technology Review

In the early days of large language models (LLMs), we grew accustomed to massive 10x jumps in reasoning and coding capability with every ...

MIT Technology Review · 6 min

