[2603.00024] Personalization Increases Affective Alignment but Has Role-Dependent Effects on Epistemic Independence in LLMs


Computer Science > Computation and Language

arXiv:2603.00024 (cs) [Submitted on 3 Feb 2026]

Title: Personalization Increases Affective Alignment but Has Role-Dependent Effects on Epistemic Independence in LLMs

Authors: Sean W. Kelley, Christoph Riedl

Abstract: Large Language Models (LLMs) are prone to sycophantic behavior, uncritically conforming to user beliefs. As models increasingly condition responses on user-specific context (personality traits, preferences, conversation history), they gain information to tailor agreement more effectively. Understanding how personalization modulates sycophancy is critical, yet systematic evaluation across models and contexts remains limited. We present a rigorous evaluation of personalization's impact on LLM sycophancy across nine frontier models and five benchmark datasets spanning advice, moral judgment, and debate contexts. We find that personalization generally increases affective alignment (emotional validation, hedging/deference), but affects epistemic alignment (belief adoption, position stability, resistance to influence) with context-dependent role modulation. When the LLM's role is to give advice, personalization strengthens epistemic independence (models challenge user presuppositions). When its role is that of a social ...
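The core measurement the abstract describes, whether adding user-specific context shifts a model's answers toward the user's stated belief, can be sketched as a simple before/after comparison. The scoring scale, data, and function below are hypothetical illustrations for this summary, not the paper's actual benchmarks or metrics:

```python
# Hedged sketch of an agreement-shift measurement, assuming each benchmark
# item has been scored on a -1..1 scale (1.0 = full agreement with the
# user's stated belief, -1.0 = full disagreement), once with a neutral
# prompt and once with personalization context added.

def agreement_shift(baseline_scores, personalized_scores):
    """Mean change in agreement-with-user score when user-specific
    context is added to the prompt. A positive value indicates the
    model moved toward the user's belief (more sycophantic)."""
    assert len(baseline_scores) == len(personalized_scores)
    diffs = [p - b for b, p in zip(baseline_scores, personalized_scores)]
    return sum(diffs) / len(diffs)

# Toy scores for five benchmark items (illustrative values only).
baseline = [0.1, -0.2, 0.0, 0.3, -0.1]
personalized = [0.4, 0.1, 0.2, 0.5, 0.0]

print(round(agreement_shift(baseline, personalized), 2))  # → 0.22
```

Under this framing, the paper's role-dependent finding would appear as a shift metric that is negative in advice contexts (personalization increases pushback) but positive in others.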

Originally published on March 03, 2026. Curated by AI News.

