[2604.02171] Do Lexical and Contextual Coreference Resolution Systems Degrade Differently under Mention Noise? An Empirical Study on Scientific Software Mentions


arXiv - Machine Learning 4 min read

About this article


Computer Science > Computation and Language
arXiv:2604.02171 (cs) · Submitted on 2 Apr 2026

Title: Do Lexical and Contextual Coreference Resolution Systems Degrade Differently under Mention Noise? An Empirical Study on Scientific Software Mentions

Authors: Atilla Kaan Alkan, Felix Grezes, Jennifer Lynn Bartlett, Anna Kelbert, Kelly Lockhart, Alberto Accomazzi

Abstract: We present our participation in the SOMD 2026 shared task on cross-document software mention coreference resolution, where our systems ranked second across all three subtasks. We compare two fine-tuning-free approaches: Fuzzy Matching (FM), a lexical string-similarity method, and Context Aware Representations (CAR), which combines mention-level and document-level embeddings. Both achieve competitive performance across all subtasks (CoNLL F1 of 0.94-0.96), with CAR consistently outperforming FM by 1 point on the official test set, consistent with the high surface regularity of software names, which reduces the need for complex semantic reasoning. A controlled noise-injection study reveals complementary failure modes: as boundary noise increases, CAR loses only 0.07 F1 points from clean to fully corrupted input, compared to 0.20 for FM, whereas under mention substitu...
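To make the Fuzzy Matching (FM) baseline concrete: a lexical string-similarity approach to software-mention coreference can be sketched as greedy clustering of mentions whose normalized string similarity exceeds a threshold. This is a minimal illustrative sketch, not the authors' actual system; the function names, the `difflib`-based similarity, and the 0.85 threshold are all assumptions.

```python
# Hypothetical sketch of a lexical fuzzy-matching coreference baseline:
# software mentions are clustered greedily (single-link) whenever their
# case-insensitive string similarity exceeds a threshold. The threshold
# and helper names are illustrative, not taken from the paper.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def cluster_mentions(mentions, threshold=0.85):
    """Greedy single-link clustering of mentions by string similarity."""
    clusters = []
    for m in mentions:
        for c in clusters:
            # Join the first cluster containing a sufficiently similar mention.
            if any(similarity(m, x) >= threshold for x in c):
                c.append(m)
                break
        else:
            clusters.append([m])
    return clusters


mentions = ["AstroPy", "astropy", "NumPy", "numpy", "SciPy"]
print(cluster_mentions(mentions))
# → [['AstroPy', 'astropy'], ['NumPy', 'numpy'], ['SciPy']]
```

A sketch like this also makes the paper's noise-injection finding intuitive: corrupting mention boundaries (e.g. truncating "astropy" to "astro") directly lowers string similarity, so a purely lexical method degrades faster than one that can fall back on surrounding context.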

Originally published on April 03, 2026. Curated by AI News.

Related Articles

Machine Learning

GPU Compass – open-source, real-time GPU pricing across 20+ clouds [P]

We maintain an open-source catalog of cloud GPU offerings (skypilot-catalog, Apache 2.0). It auto-fetches pricing from 20+ cloud APIs eve...

Reddit - Machine Learning · 1 min ·
Machine Learning

5 AI Models Tried to Scam Me. Some of Them Were Scary Good | WIRED

The cyber capabilities of AI models have experts rattled. AI’s social skills may be just as dangerous.

Wired - AI · 8 min ·
Machine Learning

“AI engineers” today are just prompt engineers with better branding?

Hot take: A lot of what’s being called “AI engineering” right now feels like: prompt tweaking chaining APIs adding retries/guardrails Not...

Reddit - Artificial Intelligence · 1 min ·
Machine Learning

Anthropic’s Mythos rollout has missed America’s cybersecurity agency | The Verge

The Cybersecurity and Infrastructure Security Agency (CISA) doesn’t have access to Anthropic’s Mythos Preview, Axios reported.

The Verge - AI · 5 min ·
