[2603.27057] Debiasing Large Language Models toward Social Factors in Online Behavior Analytics through Prompt Knowledge Tuning
Computer Science > Computation and Language
arXiv:2603.27057 (cs)
[Submitted on 28 Mar 2026]

Title: Debiasing Large Language Models toward Social Factors in Online Behavior Analytics through Prompt Knowledge Tuning
Authors: Hossein Salemi, Jitin Krishnan, Hemant Purohit

Abstract: Attribution theory explains how individuals interpret and attribute others' behavior in a social context by employing personal (dispositional) and impersonal (situational) causality. Large Language Models (LLMs), trained on human-generated corpora, may implicitly mimic this social attribution process. However, the extent to which LLMs utilize these causal attributions in their reasoning remains underexplored. Although reasoning paradigms such as Chain-of-Thought (CoT) have shown promising results in various tasks, ignoring social attribution in reasoning could lead to biased responses by LLMs in social contexts. In this study, we investigate the impact on LLM performance of incorporating a user's goal as knowledge to infer dispositional causality and message context to infer situational causality. To this end, we introduce a scalable method to mitigate such biases by enriching the instruction prompts for LLMs with two prompt aids using social-attribution knowledge,...
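The abstract describes enriching instruction prompts with two prompt aids drawn from social-attribution knowledge: the user's goal (to infer dispositional causality) and the message context (to infer situational causality). The paper's actual prompt templates are not given in the abstract; the sketch below is a minimal illustration of that idea, assuming a simple templated concatenation. All field names and wording are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch of enriching an instruction prompt with two
# social-attribution prompt aids, as described in the abstract.
# The template wording below is assumed, not taken from the paper.

def build_enriched_prompt(task_instruction: str,
                          message: str,
                          user_goal: str,
                          message_context: str) -> str:
    """Attach dispositional and situational prompt aids to a base instruction."""
    # Dispositional aid: the author's goal, used to reason about personal causality.
    dispositional_aid = (
        f"Dispositional knowledge (user's goal): {user_goal}\n"
        "Consider how the author's own goal may explain their behavior."
    )
    # Situational aid: the message context, used to reason about impersonal causality.
    situational_aid = (
        f"Situational knowledge (message context): {message_context}\n"
        "Consider how the surrounding situation may explain the behavior."
    )
    return (
        f"{task_instruction}\n\n"
        f"{dispositional_aid}\n\n"
        f"{situational_aid}\n\n"
        f"Message: {message}\n"
        "Answer:"
    )


if __name__ == "__main__":
    # Hypothetical online-behavior-analytics example.
    prompt = build_enriched_prompt(
        task_instruction="Classify the intent of the following online message.",
        message="We need volunteers at the shelter tonight!",
        user_goal="coordinate disaster relief volunteers",
        message_context="posted during a local flood emergency",
    )
    print(prompt)
```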