[2603.01930] From Variance to Invariance: Qualitative Content Analysis for Narrative Graph Annotation
NLP


arXiv - AI 4 min read

About this article

Abstract page for arXiv paper 2603.01930: From Variance to Invariance: Qualitative Content Analysis for Narrative Graph Annotation

Computer Science > Computation and Language

arXiv:2603.01930 (cs) [Submitted on 2 Mar 2026]

Title: From Variance to Invariance: Qualitative Content Analysis for Narrative Graph Annotation

Authors: Junbo Huang, Max Weinig, Ulrich Fritsche, Ricardo Usbeck

Abstract: Narratives in news discourse play a critical role in shaping public understanding of economic events, such as inflation. Annotating and evaluating these narratives in a structured manner remains a key challenge for Natural Language Processing (NLP). In this work, we introduce a narrative graph annotation framework that integrates principles from qualitative content analysis (QCA) to prioritize annotation quality by reducing annotation errors. We present a dataset of inflation narratives annotated as directed acyclic graphs (DAGs), where nodes represent events and edges encode causal relations. To evaluate annotation quality, we employed a $6\times3$ factorial experimental design to examine the effects of narrative representation (six levels) and distance metric type (three levels) on inter-annotator agreement (Krippendorff's $\alpha$), capturing the presence of human label variation (HLV) in narrative interpretations. Our analysis shows that (1) lenient metrics (overlap-based distance) overestimate reliability, and (2) locally-constrai...
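To make the abstract's first finding concrete, here is a minimal sketch of why a lenient, overlap-based distance can report much higher agreement than a strict one when two annotators produce different causal DAGs for the same article. The edge-set representation, the example events, and both metric implementations are illustrative assumptions, not the paper's actual annotation scheme or distance functions.

```python
# Illustrative sketch: two annotators encode causal edges (cause -> effect)
# for the same news article. A lenient overlap-based distance rewards the
# shared edge; a strict exact-match distance treats any disagreement as
# total disagreement. All names and data here are hypothetical.

def jaccard_distance(edges_a: set, edges_b: set) -> float:
    """Lenient, overlap-based distance: 1 - |intersection| / |union|."""
    union = edges_a | edges_b
    if not union:
        return 0.0  # two empty annotations agree perfectly
    return 1.0 - len(edges_a & edges_b) / len(union)

def exact_distance(edges_a: set, edges_b: set) -> float:
    """Strict distance: identical edge sets agree, anything else does not."""
    return 0.0 if edges_a == edges_b else 1.0

# Annotator 1 and 2 share one causal edge but differ on the rest.
ann1 = {("supply shock", "inflation"), ("inflation", "rate hike")}
ann2 = {("supply shock", "inflation"), ("wage growth", "inflation")}

print(jaccard_distance(ann1, ann2))  # 0.666..., partial credit for overlap
print(exact_distance(ann1, ann2))    # 1.0, no credit
```

Plugged into a chance-corrected coefficient such as Krippendorff's $\alpha$, the lenient metric would systematically shrink observed disagreement, which is one way an overlap-based distance can overestimate reliability.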

Originally published on March 03, 2026. Curated by AI News.

Related Articles

LLMs

Agents Can Now Propose and Deploy Their Own Code Changes

150 clones yesterday. 43 stars in 3 days. Every agent framework you've used (LangChain, LangGraph, Claude Code) assumes agents are tools ...

Reddit - Artificial Intelligence · 1 min ·
LLMs

[2603.17839] How do LLMs Compute Verbal Confidence

Abstract page for arXiv paper 2603.17839: How do LLMs Compute Verbal Confidence

arXiv - AI · 4 min ·
LLMs

[2602.03584] $V_0$: A Generalist Value Model for Any Policy at State Zero

Abstract page for arXiv paper 2602.03584: $V_0$: A Generalist Value Model for Any Policy at State Zero

arXiv - AI · 4 min ·
LLMs

[2601.04448] Merging Triggers, Breaking Backdoors: Defensive Poisoning for Instruction-Tuned Language Models

Abstract page for arXiv paper 2601.04448: Merging Triggers, Breaking Backdoors: Defensive Poisoning for Instruction-Tuned Language Models

arXiv - AI · 3 min ·

