[2507.10587] Anthropomimetic Uncertainty: What Verbalized Uncertainty in Language Models is Missing

arXiv - AI · 4 min read

Summary

The paper discusses the concept of anthropomimetic uncertainty in language models, emphasizing the need for these models to express confidence levels to enhance trustworthiness and collaboration with human users.

Why It Matters

As language models become integral to human-computer interaction, understanding and communicating uncertainty is crucial. This research highlights the importance of mimicking human uncertainty communication to improve user trust and the effectiveness of AI systems.

Key Takeaways

  • Language models often exhibit overconfidence, reducing trust.
  • Verbalized uncertainty can improve human-machine collaboration.
  • Anthropomimetic uncertainty aims to replicate human communication of uncertainty.
  • The paper reviews existing research on human uncertainty communication.
  • Future research directions are outlined for implementing these concepts.

Computer Science > Computation and Language

arXiv:2507.10587 (cs)
[Submitted on 11 Jul 2025 (v1), last revised 20 Feb 2026 (this version, v2)]

Title: Anthropomimetic Uncertainty: What Verbalized Uncertainty in Language Models is Missing
Authors: Dennis Ulmer, Alexandra Lorson, Ivan Titov, Christian Hardmeier

Abstract: Human users increasingly communicate with large language models (LLMs), but LLMs suffer from frequent overconfidence in their output, even when its accuracy is questionable, which undermines their trustworthiness and perceived legitimacy. Therefore, there is a need for language models to signal their confidence in order to reap the benefits of human-machine collaboration and mitigate potential harms. Verbalized uncertainty is the expression of confidence with linguistic means, an approach that integrates perfectly into language-based interfaces. Most recent research in natural language processing (NLP) overlooks the nuances surrounding human uncertainty communication and the biases that influence the communication of and with machines. We argue for anthropomimetic uncertainty, the principle that intuitive and trustworthy uncertainty communication requires a degree of imitation of human linguistic behaviors. We present a thorough overview of the research in human uncertainty communication, survey ongoi...
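As an illustrative sketch only (not from the paper), verbalized uncertainty can be thought of as mapping a model's confidence score onto a natural-language hedge. The thresholds and phrases below are invented for this example:

```python
# Illustrative only: a toy mapping from a confidence score in [0, 1]
# to a hedging phrase, loosely in the spirit of "verbalized uncertainty".
# Thresholds and phrases are invented for this sketch, not from the paper.

HEDGES = [
    (0.95, "almost certainly"),
    (0.75, "probably"),
    (0.50, "possibly"),
    (0.25, "probably not"),
    (0.00, "almost certainly not"),
]

def verbalize(confidence: float) -> str:
    """Return a hedging phrase for a confidence score in [0, 1]."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    # Walk thresholds from highest to lowest; return the first match.
    for threshold, phrase in HEDGES:
        if confidence >= threshold:
            return phrase
    return HEDGES[-1][1]

print(verbalize(0.97))  # almost certainly
print(verbalize(0.60))  # possibly
```

A real system would derive the confidence score from the model itself (e.g. calibrated token probabilities), and, as the paper argues, the choice of phrasing would need to reflect how humans actually communicate uncertainty rather than an arbitrary scale like this one.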

Related Articles

LLMs

I think we’re about to have a new kind of “SEO”… and nobody is talking about it.

More people are asking ChatGPT things like: “what’s the best CRM?” “is this tool worth it?” “alternatives to X” And they just… trust the ...

Reddit - Artificial Intelligence · 1 min ·
LLMs

Why would Claude give me the same response over and over and give others different replies?

I asked Claude to "generate me a random word" so I could do some word play. Then I asked it again in a new prompt window on desktop after...

Reddit - Artificial Intelligence · 1 min ·
LLMs

Anthropic essentially bans OpenClaw from Claude by making subscribers pay extra | The Verge

The popular combination of OpenClaw and Claude Code is being severed now that Anthropic has announced it will start charging subscribers ...

The Verge - AI · 4 min ·
LLMs

wtf bro did what? arc 3 2026

The Physarum Explorer is a high-speed, bio-inspired neural model designed specifically for ARC geometry. Here is the snapshot of its curr...

Reddit - Artificial Intelligence · 1 min ·