[2605.00012] Exploring LLM biases to manipulate AI search overview
Computer Science > Information Retrieval
arXiv:2605.00012 (cs)
[Submitted on 30 Mar 2026]

Title: Exploring LLM biases to manipulate AI search overview
Authors: Roman Smirnov

Abstract: Modern large language models (LLMs) are used in many business applications, and in particular in web search systems and applications that generate overviews of search results (LLM Overview systems). Such systems use an LLM to select the most relevant sources from the search results and to generate an answer to the user's query. Many studies have shown that LLMs exhibit various biases; in an LLM Overview application, both the source-selection and answer-generation stages may be affected by these biases (here we focus mainly on the selection stage). This research investigates the presence of biases in LLM Overview systems and the exploitation of those biases to manipulate LLM Overview results. We train a small language model with reinforcement learning to rewrite search snippets so as to increase their likelihood of being preferred by an LLM Overview. Our experimental setup intentionally restricts the policy to operate only on snippets and limits reward-hacking strategies, reflecting realistic constraints of web search environments. The results prove that LLM Overview systems have biases and that reinforcement learning in most of the cases ...
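The training loop described in the abstract hinges on a selection-based reward: the policy rewrites a snippet, and the reward reflects whether the LLM Overview picks that snippet as a source. A minimal sketch of such a reward signal follows; the function names are hypothetical, and the keyword-overlap selector is a stand-in for the actual LLM Overview judge, whose behavior (and the paper's anti-reward-hacking constraints) are far richer than this toy.

```python
def overview_selects(query, snippets, k=1):
    """Stand-in for the LLM Overview selection stage: the real system asks
    an LLM to rank search snippets and pick the top-k sources for its
    answer. Here we mock the judge with a keyword-overlap score
    (hypothetical, for illustration only)."""
    def score(snippet):
        return len(set(query.lower().split()) & set(snippet.lower().split()))
    ranked = sorted(snippets, key=score, reverse=True)
    return ranked[:k]

def selection_reward(query, rewritten, competitors):
    """RL reward sketch: +1 if the policy's rewritten snippet is among the
    sources the overview selects against competing snippets, else 0."""
    selected = overview_selects(query, [rewritten] + competitors)
    return 1.0 if rewritten in selected else 0.0

# A rewritten snippet that the mock judge prefers earns reward 1.0;
# one it ignores earns 0.0.
r = selection_reward(
    "best budget laptops 2026",
    "best budget laptops 2026 ranked",
    ["our laptop store", "gaming news"],
)
```

In the paper's setting this binary signal would drive policy-gradient updates of the small rewriting model, with the policy constrained to edit only the snippet text.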