[2512.21106] Semantic Refinement with LLMs for Graph Representations
Abstract page for arXiv paper 2512.21106: Semantic Refinement with LLMs for Graph Representations
Alignment, bias, regulation, and responsible AI
Abstract page for arXiv paper 2511.22294: Structure is Supervision: Multiview Masked Autoencoders for Radiology
Abstract page for arXiv paper 2511.18123: Bias Is a Subspace, Not a Coordinate: A Geometric Rethinking of Post-hoc Debiasing in Vision-La...
The paper presents CS-Aligner, a novel framework for vision-language alignment that integrates Cauchy-Schwarz divergence with mutual info...
The paper introduces Geodesic Integrated Gradients (GIG), a new method for attributing importance scores in deep networks, addressing fla...
This article presents a novel toolchain for implementing safe reinforcement learning in real-world engine control, specifically for trans...
This paper addresses the challenge of partitioning input variables in attribution methods for Explainable AI, proposing new metrics to re...
This article explores the potential of large language models (LLMs) to act as mediators in online conflicts, moving beyond moderation to ...
This article evaluates biases in Large Language Models (LLMs) used as judges in communication systems, assessing their reliability and pr...
This article presents a framework for evaluating AI agent behavior through consumer choice experiments, highlighting biases in decision-m...
The paper presents UbiQTree, a method for decomposing uncertainty in SHAP values used in explainable AI, focusing on aleatoric and episte...
This paper presents STPR, a framework that utilizes large language models to convert complex natural language constraints into executable...
This article presents a novel approach called Reflective Test-Time Planning for embodied LLMs, enabling robots to learn from mistakes thr...
XMorph presents a novel framework for explainable brain tumor analysis, achieving 96% accuracy while addressing interpretability and comp...
This study investigates human vulnerability to deception by large language model (LLM) agents, revealing significant trust issues in high...
The paper introduces VAUQ, a framework for vision-aware uncertainty quantification in large vision-language models (LVLMs), enhancing sel...
This paper presents MMHNet, a novel multimodal hierarchical network that enhances video-to-audio generation by enabling models to general...
This paper explores the relationship between the law of robustness and robust generalization in machine learning, providing a framework t...
This paper presents a novel system that integrates depth camera measurements and deep learning for accurate distance estimation in UAV-as...
This article explores the economic implications of Artificial General Intelligence (AGI), focusing on the transition from human cognition...
This paper explores agentic skills in LLM agents, focusing on reusable procedural capabilities that enhance long-horizon workflows. It pr...
The paper presents AdapTools, a novel framework for adaptive indirect prompt injection attacks on agentic large language models (LLMs), h...
This paper presents an AI-driven methodology for segmenting straylight effects in space camera sensors, enhancing image analysis in resou...