[2605.04177] Are LLMs Ready for Conflict Monitoring? Empirical Evidence from West Africa
Computer Science > Computation and Language
arXiv:2605.04177 (cs) [Submitted on 5 May 2026]

Title: Are LLMs Ready for Conflict Monitoring? Empirical Evidence from West Africa
Authors: Hoffmann Muki, Olukunle Owolabi

Abstract: As LLMs enter conflict monitoring, understanding systematic distortions in their outputs is critical for humanitarian accountability. We evaluate four vanilla open-weight models (Gemma 3 4B, Llama 3.2 3B, Mistral 7B, and OLMo 2 7B) and two domain-adapted models (AfroConfliBERT and AfroConfliLLAMA) on Nigeria and Cameroon conflict-event classification against ACLED, a gold-standard dataset with multi-stage verification. We find a bifurcated divergence in normative directionality. Open-weight models exhibit statistically significant False Illegitimation bias: Gemma misclassifies 18.29% of legitimate battles as civilian-targeted violence while making zero False Legitimation errors. By contrast, AfroConfliBERT and AfroConfliLLAMA achieve near-directional neutrality, with Legitimization Bias differences indistinguishable from zero. Yet domain adaptation does not eliminate actor-based selection bias. Both adapted models show statistically significant actor bias comparable to vanilla LLMs; in Nigeria, state actors are legitimized 36.5% more often than non-state actors in identical tactical contexts. Open-weig...
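The directional error metrics in the abstract can be read as a signed difference between two misclassification rates. A minimal sketch of that arithmetic follows; the exact metric definitions and the event counts are assumptions for illustration, not taken from the paper.

```python
# Sketch of the directional bias arithmetic implied by the abstract.
# The definitions and counts below are illustrative assumptions only.

def false_illegitimation_rate(n_battles, n_battles_as_civilian):
    """Share of legitimate battles misclassified as civilian-targeted violence."""
    return n_battles_as_civilian / n_battles

def false_legitimation_rate(n_civilian_events, n_civilian_as_battles):
    """Share of civilian-targeted events misclassified as legitimate battles."""
    return n_civilian_as_battles / n_civilian_events

def legitimization_bias(fi_rate, fl_rate):
    """Signed difference: positive = net delegitimizing, ~0 = directionally neutral."""
    return fi_rate - fl_rate

# Hypothetical counts echoing the Gemma figures quoted in the abstract:
fi = false_illegitimation_rate(1000, 183)  # ~18.3% of battles mislabeled
fl = false_legitimation_rate(800, 0)       # zero False Legitimation errors
print(round(legitimization_bias(fi, fl), 3))
```

Under these assumed counts the bias is strongly positive; a directionally neutral model would yield a difference statistically indistinguishable from zero.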