[2602.04674] Overstating Attitudes, Ignoring Networks: LLM Biases in Simulating Misinformation Susceptibility


arXiv - AI 4 min read

About this article


Computer Science > Social and Information Networks — arXiv:2602.04674 (cs)

[Submitted on 4 Feb 2026 (v1), last revised 10 Apr 2026 (this version, v2)]

Title: Overstating Attitudes, Ignoring Networks: LLM Biases in Simulating Misinformation Susceptibility

Authors: Eun Cheol Choi, Lindsay E. Young, Emilio Ferrara

Abstract: Large language models (LLMs) are increasingly used as proxies for human judgment in computational social science, yet their ability to reproduce patterns of susceptibility to misinformation remains unclear. We test whether LLM-simulated survey respondents, prompted with participant profiles drawn from social survey data measuring network, demographic, attitudinal, and behavioral features, can reproduce human patterns of misinformation belief and sharing. Using three online surveys as baselines, we evaluate whether LLM outputs match observed response distributions and recover the feature-outcome associations present in the original survey data. LLM-generated responses capture broad distributional tendencies and show modest correlation with human responses, but consistently overstate the association between belief and sharing. Linear models fit to simulated responses exhibit substantially higher explained variance and place disproportionate weight on attitudinal and behavioral...

Originally published on April 13, 2026. Curated by AI News.
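The abstract's evaluation idea — fitting the same linear model to human and to LLM-simulated outcomes and comparing explained variance — can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's actual pipeline; the feature names (`belief`, `age`) and effect sizes are assumptions chosen only to reproduce the qualitative finding that simulated responses tie sharing too tightly to belief.

```python
# Hypothetical sketch: fit OLS to human vs. LLM-simulated sharing responses
# and compare R^2. All data below are synthetic (assumption, not paper data).
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic respondent features: belief in a misinformation claim, plus a
# demographic covariate. Column of ones is the intercept.
belief = rng.normal(size=n)
age = rng.normal(size=n)
X = np.column_stack([np.ones(n), belief, age])

# Human sharing is noisy and only moderately tied to belief; the simulated
# "respondent" overstates the belief-sharing association (the paper's bias).
sharing_human = 0.4 * belief + rng.normal(scale=1.0, size=n)
sharing_sim = 0.9 * belief + rng.normal(scale=0.2, size=n)

def r_squared(X, y):
    """Ordinary least squares fit; return the coefficient of determination."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return 1.0 - ss_res / ss_tot

print(f"R^2 (human responses):     {r_squared(X, sharing_human):.2f}")
print(f"R^2 (simulated responses): {r_squared(X, sharing_sim):.2f}")
```

Under these assumptions, the model fit to simulated responses shows much higher explained variance, mirroring the abstract's claim that linear models on LLM output overstate feature-outcome associations.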

Related Articles

[2511.05168] Another BRIXEL in the Wall: Towards Cheaper Dense Features

Abstract page for arXiv paper 2511.05168: Another BRIXEL in the Wall: Towards Cheaper Dense Features

arXiv - Machine Learning · 3 min ·
[2604.07361] BLEG: LLM Functions as Powerful fMRI Graph-Enhancer for Brain Network Analysis

Abstract page for arXiv paper 2604.07361: BLEG: LLM Functions as Powerful fMRI Graph-Enhancer for Brain Network Analysis

arXiv - Machine Learning · 4 min ·
[2601.18150] FP8-RL: A Practical and Stable Low-Precision Stack for LLM Reinforcement Learning

Abstract page for arXiv paper 2601.18150: FP8-RL: A Practical and Stable Low-Precision Stack for LLM Reinforcement Learning

arXiv - Machine Learning · 4 min ·
[2604.09418] Automated Instruction Revision (AIR): A Structured Comparison of Task Adaptation Strategies for LLM

Abstract page for arXiv paper 2604.09418: Automated Instruction Revision (AIR): A Structured Comparison of Task Adaptation Strategies for...

arXiv - Machine Learning · 3 min ·

