[2604.08947] MuTSE: A Human-in-the-Loop Multi-use Text Simplification Evaluator
Computer Science > Computation and Language
arXiv:2604.08947 (cs)
[Submitted on 10 Apr 2026]

Title: MuTSE: A Human-in-the-Loop Multi-use Text Simplification Evaluator
Authors: Rares-Alexandru Roscan, Gabriel Petre, Adrian-Marius Dumitran, Angela-Liliana Dumitran

Abstract: As Large Language Models (LLMs) become increasingly prevalent in text simplification, systematically evaluating their outputs across diverse prompting strategies and architectures remains a critical methodological challenge in both NLP research and Intelligent Tutoring Systems (ITS). Developing robust prompts is often hindered by the absence of structured, visual frameworks for comparative text analysis. While researchers typically rely on static computational scripts, educators are constrained to standard conversational interfaces; neither paradigm supports systematic multi-dimensional evaluation of prompt-model permutations. To address these limitations, we introduce MuTSE (the project code and demo have been made available for peer review at the following anonymized URL: this https URL), an interactive human-in-the-loop web application designed to streamline the evaluation of LLM-generated text simplifications across arbitrary CEFR proficiency targets. The sy...
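The core workflow the abstract describes, namely evaluating LLM-generated simplifications across prompt-model permutations and CEFR targets, can be illustrated with a minimal sketch. All names here (`PROMPTS`, `MODELS`, `simplify`) are hypothetical stand-ins, not part of the MuTSE codebase; a real system would replace the placeholder `simplify` with calls to an actual model API.

```python
from itertools import product

# Hypothetical prompt templates, model identifiers, and CEFR targets;
# these are illustrative assumptions, not the paper's actual configuration.
PROMPTS = {
    "direct": "Simplify the following text to CEFR level {level}: {text}",
    "guided": "Rewrite for a {level}-level reader, keeping the meaning: {text}",
}
MODELS = ["model-a", "model-b"]
CEFR_TARGETS = ["A2", "B1"]

def simplify(model: str, prompt: str) -> str:
    # Placeholder: stands in for a call to the model's API.
    return f"[{model}] {prompt}"

def evaluate_grid(text: str) -> dict:
    """Collect one output per (prompt, model, CEFR level) permutation,
    keyed so a human reviewer can compare outputs side by side."""
    results = {}
    for (p_name, template), model, level in product(
        PROMPTS.items(), MODELS, CEFR_TARGETS
    ):
        prompt = template.format(level=level, text=text)
        results[(p_name, model, level)] = simplify(model, prompt)
    return results

grid = evaluate_grid("Photosynthesis converts light into chemical energy.")
print(len(grid))  # 2 prompts x 2 models x 2 levels = 8 outputs
```

Enumerating the full Cartesian product this way is what makes the comparison "multi-dimensional": every output is addressable by its (prompt, model, level) coordinates, so human reviewers can slice along any single axis.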