[2604.08970] Litmus (Re)Agent: A Benchmark and Agentic System for Predictive Evaluation of Multilingual Models
Computer Science > Computation and Language

arXiv:2604.08970 (cs) [Submitted on 10 Apr 2026]

Title: Litmus (Re)Agent: A Benchmark and Agentic System for Predictive Evaluation of Multilingual Models

Authors: Avni Mittal, Shanu Kumar, Sandipan Dandapat, Monojit Choudhury

Abstract: We study predictive multilingual evaluation: estimating how well a model will perform on a task in a target language when direct benchmark results are missing. This problem is common in multilingual deployment, where evaluation coverage is sparse and published evidence is uneven across languages, tasks, and model families. We introduce a controlled benchmark of 1,500 questions spanning six tasks and five evidence scenarios. The benchmark separates accessible evidence from ground truth, enabling evaluation of systems that must infer missing results from incomplete literature evidence. We also present Litmus (Re)Agent, a DAG-orchestrated agentic system that decomposes queries into hypotheses, retrieves evidence, and synthesises predictions through feature-aware aggregation. Across six systems, Litmus (Re)Agent achieves the best overall performance, with the largest gains in transfer-heavy scenarios where direct evidence is weak or absent. These results show that structured agentic reasoning is a promising approach…
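The abstract describes a three-stage pipeline (hypothesis decomposition, evidence retrieval, feature-aware aggregation) coordinated as a DAG. A minimal sketch of such DAG orchestration is below; the node names, stage stubs, and mean-based aggregation rule are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    """One pipeline stage; deps name the nodes whose outputs it consumes."""
    name: str
    func: Callable
    deps: List[str] = field(default_factory=list)

def run_dag(nodes, query):
    """Execute nodes in dependency order, feeding each its deps' results."""
    results, remaining = {}, {n.name: n for n in nodes}
    while remaining:
        ready = [n for n in remaining.values()
                 if all(d in results for d in n.deps)]
        if not ready:
            raise ValueError("cycle or unsatisfied dependency in DAG")
        for n in ready:
            results[n.name] = n.func(query, *(results[d] for d in n.deps))
            del remaining[n.name]
    return results

# Stage stubs (assumed behaviour, stand-ins for the real agent tools):
def decompose(query):
    # Split the query into per-scenario hypotheses (direct vs. transfer).
    return [f"{query}:{h}" for h in ("direct", "transfer")]

def retrieve(query, hypotheses):
    # Look up literature evidence per hypothesis (scores stubbed here).
    return {h: 0.5 + 0.1 * i for i, h in enumerate(hypotheses)}

def aggregate(query, evidence):
    # Feature-aware aggregation collapses evidence into one prediction;
    # a plain mean is used here purely as a placeholder.
    return sum(evidence.values()) / len(evidence)

dag = [
    Node("hypotheses", decompose),
    Node("evidence", retrieve, ["hypotheses"]),
    Node("prediction", aggregate, ["evidence"]),
]
out = run_dag(dag, "XNLI:swahili")
print(out["prediction"])  # mean of the stubbed evidence scores
```

Because `run_dag` only schedules a node once all of its dependencies have results, the same driver extends to wider DAGs (e.g. several retrieval branches feeding one aggregator) without changing the orchestration code.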