[2602.17445] ABCD: All Biases Come Disguised

arXiv - Machine Learning · 4 min read

Summary

The paper 'ABCD: All Biases Come Disguised' examines the biases LLMs exhibit in multiple-choice question (MCQ) evaluations and proposes an evaluation protocol that is robust to permutations of the answer options.

Why It Matters

Understanding biases in language models is crucial for improving their reliability and performance. This research introduces a novel evaluation method that minimizes bias, which can lead to more accurate assessments of LLM capabilities, ultimately benefiting AI applications across various domains.

Key Takeaways

  • LLMs exhibit biases tied to answer position, option labels, and the distribution of correct answers in few-shot prompts.
  • A new evaluation protocol reduces these biases and improves robustness.
  • The proposed method reduces accuracy variance across answer permutations with only a minimal performance drop.
  • Robustness is crucial for reliable AI applications in real-world scenarios.
  • Ablation studies validate the effectiveness of the new evaluation approach.

Computer Science > Computation and Language
arXiv:2602.17445 (cs) [Submitted on 19 Feb 2026]
Title: ABCD: All Biases Come Disguised
Authors: Mateusz Nowak, Xavier Cadet, Peter Chin

Abstract: Multiple-choice question (MCQ) benchmarks have been a standard evaluation practice for measuring LLMs' ability to reason and answer knowledge-based questions. Through a synthetic NonsenseQA benchmark, we observe that different LLMs exhibit varying degrees of label-position-few-shot-prompt bias, where the model uses the answer position, the label in front of the answer, the distribution of correct answers in the few-shot prompt, or a combination of all three to answer each question. We propose a simple bias-reduced evaluation protocol that replaces the labels of each question with uniform, unordered labels and prompts the LLM to use the whole answer presented. With a simple sentence similarity model, we demonstrate improved robustness and lower standard deviation across different permutations of answers with a minimal drop in LLM performance, exposing the LLM's capabilities under reduced evaluation artifacts, without any help from the prompt examples or the option labels. Across multiple benchmarks and models, this protocol substantially improves robustness to answer permutations, reducing mean accuracy variance $3\times$ with only a minimal...
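To make the protocol concrete: the idea from the abstract is to strip the A/B/C/D labels, present the options in shuffled, uniform form, ask the model to answer with the full option text, and then match its free-form reply back to an option with a sentence-similarity model. The paper's exact prompt template and similarity model are not given here, so this is a minimal sketch of that idea; `ask_model` is a hypothetical callable standing in for an LLM, and Python's `difflib` stands in for the paper's sentence-similarity model.

```python
import difflib
import random

def bias_reduced_mcq(question, options, ask_model, seed=0):
    """Sketch of a bias-reduced MCQ protocol: options are shuffled and
    shown without A/B/C/D labels, and the model's free-form reply is
    matched back to an option by text similarity."""
    shuffled = options[:]
    random.Random(seed).shuffle(shuffled)  # remove positional cues
    prompt = (
        f"{question}\n"
        "Answer with the full text of the correct option:\n"
        + "\n".join(f"- {opt}" for opt in shuffled)  # uniform, unordered bullets
    )
    reply = ask_model(prompt)
    # Stand-in for a sentence-similarity model: pick the option whose
    # text is closest to the model's reply.
    def score(opt):
        return difflib.SequenceMatcher(None, reply.lower(), opt.lower()).ratio()
    return max(options, key=score)

# Toy "model" that repeats one option verbatim, to exercise the matching step.
fake_model = lambda prompt: "the mitochondria"
pred = bias_reduced_mcq(
    "Which organelle produces ATP?",
    ["the nucleus", "the mitochondria", "the ribosome", "the Golgi apparatus"],
    fake_model,
)
```

Because scoring is done against the option text rather than a label, reshuffling the options changes the prompt but not which option the reply maps to, which is the robustness property the paper measures across permutations.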
