[2604.08752] LLMs Underperform Graph-Based Parsers on Supervised Relation Extraction for Complex Graphs
Computer Science > Computation and Language
arXiv:2604.08752 (cs) [Submitted on 9 Apr 2026]

Title: LLMs Underperform Graph-Based Parsers on Supervised Relation Extraction for Complex Graphs
Authors: Paolo Gajo, Domenic Rosati, Hassan Sajjad, Alberto Barrón-Cedeño

Abstract: Relation extraction is a fundamental component of knowledge-graph construction, among other applications. Large language models (LLMs) have been adopted as a promising tool for relation extraction, in both supervised and in-context learning settings. In this work, however, we show that their performance still lags behind that of much smaller architectures when the linguistic graph underlying a text is highly complex. To demonstrate this, we evaluate four LLMs against a graph-based parser on six relation extraction datasets with sentence graphs of varying sizes and complexities. Our results show that the graph-based parser increasingly outperforms the LLMs as the number of relations in the input documents increases, making the much lighter graph-based parser the superior choice in the presence of complex linguistic graphs.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2604.08752 [cs.CL] (or arXiv:2604.08752v1 [cs.CL] for this version) https://doi.org/...
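The abstract treats relation extraction as recovering a set of relations from text. As an illustrative sketch (not the paper's actual evaluation code; the entities and relation names below are invented), such output is commonly scored by comparing predicted (head, relation, tail) triples against gold triples with micro-averaged F1:

```python
# Hypothetical scoring sketch for relation extraction: relations are
# (head, relation, tail) triples, and a prediction counts as correct
# only if the full triple matches a gold triple exactly.

def micro_f1(gold: set, pred: set) -> float:
    """Micro-averaged F1 over sets of (head, relation, tail) triples."""
    tp = len(gold & pred)  # exact-match true positives
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Invented example annotations, for illustration only.
gold = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "physics"),
    ("Marie Curie", "spouse", "Pierre Curie"),
}
pred = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "chemistry"),  # wrong tail: hurts precision
}
print(round(micro_f1(gold, pred), 3))  # → 0.4
```

As the number of gold relations per document grows, exact-match recall becomes harder to sustain, which is the regime where the paper reports LLMs falling behind the graph-based parser.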