[2603.03095] Compact Prompting in Instruction-tuned LLMs for Joint Argumentative Component Detection
Computer Science > Computation and Language
arXiv:2603.03095 (cs)
[Submitted on 3 Mar 2026]
Title: Compact Prompting in Instruction-tuned LLMs for Joint Argumentative Component Detection
Authors: Sofiane Elguendouze, Erwan Hain, Elena Cabrio, Serena Villata

Abstract: Argumentative component detection (ACD) is a core subtask of Argument(ation) Mining (AM) and one of its most challenging aspects, as it requires jointly delimiting argumentative spans and classifying them into components such as claims and premises. While research on this subtask remains relatively limited compared to other AM tasks, most existing approaches formulate it as a simplified sequence labeling problem, as component classification, or as a pipeline of component segmentation followed by classification. In this paper, we propose a novel approach based on instruction-tuned Large Language Models (LLMs) using compact instruction-based prompts, and we reframe ACD as a language generation task, enabling arguments to be identified directly from plain text without relying on pre-segmented components. Experiments on standard benchmarks show that our approach outperforms state-of-the-art systems. To the best of our knowledge, this is one of the first attempts to fully model ACD as a generative task, highlig...
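The abstract does not specify the prompt format, but the core idea of reframing ACD as generation can be illustrated with a minimal sketch. The instruction wording, the tag scheme (`<claim>`/`<premise>`), and the model output below are all hypothetical stand-ins, not the authors' actual prompt or data: the model is asked to rewrite the input with components wrapped in tags, and the tagged output is parsed back into (type, span) pairs, performing segmentation and classification jointly.

```python
# Illustrative sketch only: a compact instruction-based prompt for joint
# argumentative component detection (ACD) cast as text generation.
# Tag scheme and instruction wording are assumptions, not the paper's.
import re

INSTRUCTION = (
    "Identify every argumentative component in the text. "
    "Wrap each claim in <claim>...</claim> and each premise in "
    "<premise>...</premise>; leave non-argumentative text unchanged."
)

def build_prompt(text: str) -> str:
    """Compose the compact prompt for one input text."""
    return f"{INSTRUCTION}\n\nText: {text}\n\nOutput:"

def parse_components(generation: str) -> list[tuple[str, str]]:
    """Extract (component_type, span) pairs from the tagged generation."""
    pattern = re.compile(r"<(claim|premise)>(.*?)</\1>", re.DOTALL)
    return [(m.group(1), m.group(2).strip()) for m in pattern.finditer(generation)]

# Hypothetical model output for a one-sentence input:
generation = (
    "<claim>Remote work should stay optional</claim> because "
    "<premise>productivity varies widely across individuals</premise>."
)
print(parse_components(generation))
# → [('claim', 'Remote work should stay optional'),
#    ('premise', 'productivity varies widely across individuals')]
```

Because spans are produced directly in the generated text, no pre-segmented component boundaries are needed at inference time, which is the property the abstract highlights.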