[2603.20229] Characterizing the ability of LLMs to recapitulate Americans' distributional responses to public opinion polling questions across political issues


Computer Science > Computers and Society

arXiv:2603.20229 (cs) [Submitted on 6 Mar 2026]

Title: Characterizing the ability of LLMs to recapitulate Americans' distributional responses to public opinion polling questions across political issues

Authors: Eric Gong, Nathan E. Sanders, Bruce Schneier

Abstract: Traditional survey-based political issue polling is becoming less tractable due to increasing costs and the risk of bias associated with growing non-response rates and declining coverage of key demographic groups. With researchers and pollsters seeking alternatives, Large Language Models have drawn attention for their potential to augment human population studies in polling contexts. We propose and implement a new framework for anticipating human responses to multiple-choice political issue polling questions by directly prompting an LLM to predict a distribution of responses. By comparison to a large and high-quality issue poll of the US population, the Cooperative Election Study, we evaluate how the accuracy of this framework varies across a range of demographics and questions on a variety of topics, as well as how this framework compares to previously proposed frameworks where LLMs are repeatedly queried to simulate individual respondents. ...
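The framework described in the abstract prompts an LLM once for a full distribution over answer options, rather than sampling many simulated individual respondents, and then scores that prediction against observed survey shares. A minimal sketch of that evaluation loop is below; the `predict_distribution` stub, the question text, and the observed shares are all hypothetical placeholders (not from the paper or the Cooperative Election Study), and the comparison metric shown here (total variation distance) is one plausible choice, not necessarily the one the authors use.

```python
# Hypothetical stand-in for a single LLM call that returns a
# distribution over answer options directly. A real implementation
# would prompt a model and parse its output into option -> probability.
def predict_distribution(question: str, options: list[str]) -> dict[str, float]:
    # Placeholder: uniform guess over the options.
    p = 1.0 / len(options)
    return {opt: p for opt in options}

def total_variation(p: dict[str, float], q: dict[str, float]) -> float:
    """Total variation distance between two distributions on the same options."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Illustrative observed shares for one poll question (invented numbers,
# not CES data), compared against the model's predicted distribution.
observed = {"Support": 0.55, "Oppose": 0.35, "Not sure": 0.10}
predicted = predict_distribution("Do you support policy X?", list(observed))
print(round(total_variation(predicted, observed), 3))  # → 0.233
```

The same scoring function would also work for the repeated-sampling baseline the abstract mentions: simulate many individual respondents, tally their answers into empirical shares, and compare those shares to the observed distribution with the identical metric.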

Originally published on March 24, 2026. Curated by AI News.
