[2510.18914] Fairness Evaluation and Inference Level Mitigation in LLMs
Computer Science > Computation and Language

arXiv:2510.18914 (cs)

[Submitted on 21 Oct 2025 (v1), last revised 7 Apr 2026 (this version, v3)]

Title: Fairness Evaluation and Inference Level Mitigation in LLMs

Authors: Afrozah Nadeem, Mark Dras, Usman Naseem

Abstract: Large language models often exhibit undesirable behaviors embedded in their internal representations, including fairness violations, inconsistency drift, amplification of harmful content, and the propagation of unwanted patterns during extended dialogue. Although training-time and data-centric methods attempt to reduce these effects, they are computationally expensive, irreversible once deployed, and slow to adapt to new conversational contexts. Pruning-based methods offer a flexible and transparent way to reduce bias by adjusting the neurons responsible for certain behaviors. However, most existing approaches are static: once a neuron is removed, the model loses the ability to adapt when the conversation or context changes. To address this, we propose a dynamic, reversible, pruning-based framework that detects context-aware neuron activations and applies adaptive masking to modulate their influence during generation. Our inference-time solution provides fine-grained, memory-aware mitigation that preserves knowledge and yields more coherent behavior across mu...
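The abstract's core mechanism, attenuating the influence of flagged neurons at inference time via a reversible mask rather than permanently removing them, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; it assumes a simple forward hook on a hypothetical MLP sublayer (stood in for here by an identity `nn.Linear`), a hand-picked list of flagged neuron indices, and a scalar attenuation strength. Reversibility comes for free: removing the hook restores the original model.

```python
import torch
import torch.nn as nn

def make_soft_mask(num_neurons, flagged, strength=0.1):
    # Soft mask: flagged neurons are scaled down by `strength`,
    # all other neurons pass through unchanged (mask value 1.0).
    mask = torch.ones(num_neurons)
    mask[flagged] = strength
    return mask

class MaskedActivation:
    """Reversible, inference-time attenuation of selected neurons via a
    forward hook -- a sketch of dynamic pruning-as-masking, not the paper's
    actual method."""
    def __init__(self, module, mask):
        self.mask = mask
        self.handle = module.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        # Returning a tensor from a forward hook replaces the module output.
        return output * self.mask

    def remove(self):
        # Removing the hook fully restores the unmodified model.
        self.handle.remove()

# Stand-in for one MLP sublayer: identity weights so outputs are easy to read.
layer = nn.Linear(8, 8, bias=False)
nn.init.eye_(layer.weight)

x = torch.ones(1, 8)
mask = make_soft_mask(8, flagged=[2, 5], strength=0.0)

ctx = MaskedActivation(layer, mask)
y_masked = layer(x)      # flagged neurons 2 and 5 are zeroed out
ctx.remove()
y_restored = layer(x)    # original behavior is back after unmasking
```

In an actual LLM the hook would sit on a transformer block's MLP, the flagged indices would come from the context-aware detection step the abstract describes, and `strength` could vary per turn of the conversation rather than being fixed.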