[2512.21106] Semantic Refinement with LLMs for Graph Representations
Abstract page for arXiv paper 2512.21106: Semantic Refinement with LLMs for Graph Representations
Alignment, bias, regulation, and responsible AI
Abstract page for arXiv paper 2511.22294: Structure is Supervision: Multiview Masked Autoencoders for Radiology
Abstract page for arXiv paper 2511.18123: Bias Is a Subspace, Not a Coordinate: A Geometric Rethinking of Post-hoc Debiasing in Vision-La...
The CGSTA framework enhances multivariate time-series anomaly detection by utilizing dynamic layered graphs and stability-aware alignment...
This paper explores the online alignment of large language models (LLMs) under misspecified preference feedback, proposing a robust optim...
The paper introduces CREDIT, a method for certified ownership verification of deep neural networks to combat model extraction attacks, en...
The paper presents CITED, a novel framework for defending Graph Neural Networks (GNNs) against Model Extraction Attacks (MEAs) by providi...
The Truthfulness Spectrum Hypothesis explores how large language models (LLMs) represent truthfulness across various domains, revealing a...
This article presents a federated framework using a CTMC hazard model for assessing bridge deterioration, allowing municipalities to coll...
The paper presents KBVQ-MoE, a novel framework for improving vector quantization in Mixture of Experts (MoE) large language models, addre...
The paper introduces SAS-Net, a novel framework for robust spatiotemporal registration in bidirectional photoacoustic microscopy, address...
This paper introduces the Persona Brainstorm Audit (PBA), a method for assessing bias in Large Language Models (LLMs) used in creative ap...
This paper investigates safety alignment in large language models (LLMs) and large reasoning models (LRMs), identifying key factors that ...
The paper presents HiGR, a novel framework for generative slate recommendation that enhances efficiency and user preference alignment thr...
The paper introduces Refusal Steering, a method for controlling Large Language Models' refusal behavior on sensitive topics without retra...
This article evaluates the security of large language models (LLMs) used in AI agents, introducing a framework for identifying vulnerabil...
The paper proposes a scalable oversight framework for AI systems using partitioned human supervision, addressing challenges in obtaining ...
This article explores how rationales generated by large language models (LLMs) influence human judgments of plausibility in commonsense r...
This paper presents a novel approach to image transmission using multi-hop deep joint source-channel coding (DeepJSCC) combined with deep...
This paper evaluates the robustness of Vision-Language-Action (VLA) models against various multi-modal perturbations, proposing a new met...
The paper introduces Proportionate Credit Policy Optimization (PCPO), a novel framework aimed at improving the stability and quality of t...
HSSBench introduces a benchmark for evaluating Multimodal Large Language Models (MLLMs) in Humanities and Social Sciences, addressing gap...
This article examines how linguistic and contextual factors influence the accuracy of AI-generated health advice, revealing significant d...