[2603.01690] QIME: Constructing Interpretable Medical Text Embeddings via Ontology-Grounded Questions
NLP


arXiv - AI


Computer Science > Computation and Language

arXiv:2603.01690 (cs) [Submitted on 2 Mar 2026]

Title: QIME: Constructing Interpretable Medical Text Embeddings via Ontology-Grounded Questions

Authors: Yixuan Tang, Zhenghong Lin, Yandong Sun, Anthony K.H. Tung

Abstract: While dense biomedical embeddings achieve strong performance, their black-box nature limits their utility in clinical decision-making. Recent question-based interpretable embeddings represent text as binary answers to natural-language questions, but these approaches often rely on heuristic or surface-level contrastive signals and overlook specialized domain knowledge. We propose QIME, an ontology-grounded framework for constructing interpretable medical text embeddings in which each dimension corresponds to a clinically meaningful yes/no question. By conditioning on cluster-specific medical concept signatures, QIME generates semantically atomic questions that capture fine-grained distinctions in biomedical text. Furthermore, QIME supports a training-free embedding construction strategy that eliminates per-question classifier training while further improving performance. Experiments across biomedical semantic similarity, clustering, and retrieval benchmarks show that QIME consistently outperforms prior interpretable embedding methods ...
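The core idea of a question-based interpretable embedding can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's pipeline: QIME derives its questions from ontology-grounded concept signatures and answers them with a model, whereas this toy uses a hard-coded question list and a keyword-matching stand-in for the answerer. The point is only the representation: each dimension of the vector is the yes/no answer to one human-readable question.

```python
# Toy sketch of a question-based interpretable embedding.
# Each dimension is the binary answer to one clinically phrased
# yes/no question, so every coordinate of the vector is directly
# readable by a human. The questions and the keyword heuristic
# below are made-up stand-ins for QIME's ontology-grounded setup.

QUESTIONS = [
    ("Does the text mention a cardiovascular condition?",
     {"heart", "cardiac", "hypertension"}),
    ("Does the text mention a medication or dosage?",
     {"mg", "dose", "tablet", "aspirin"}),
    ("Does the text describe an imaging procedure?",
     {"mri", "ct", "x-ray", "ultrasound"}),
]

def embed(text: str) -> list[int]:
    """Return a binary vector: one yes/no answer per question."""
    tokens = set(text.lower().split())
    return [int(bool(tokens & keywords)) for _, keywords in QUESTIONS]

note = "Patient started on aspirin 81 mg daily; no imaging ordered."
vec = embed(note)  # one 0/1 entry per question, in QUESTIONS order
```

Because each coordinate names its own question, any downstream decision (e.g. why two notes were judged similar) can be explained by pointing at the specific questions whose answers agree, which is the interpretability property the abstract contrasts with dense black-box embeddings.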

Originally published on March 03, 2026. Curated by AI News.

