Alignment, bias, regulation, and responsible AI
Semantic Refinement with LLMs for Graph Representations (arXiv:2512.21106)
Structure is Supervision: Multiview Masked Autoencoders for Radiology (arXiv:2511.22294)
Bias Is a Subspace, Not a Coordinate: A Geometric Rethinking of Post-hoc Debiasing in Vision-Language Models (arXiv:2511.18123)
This paper investigates the impact of encoder-side poisoning on text-to-image models, revealing that traditional evaluations of backdoor...
The paper introduces CAGE, a framework for culturally adaptive red-teaming benchmark generation, addressing the limitations of existing benchmarks...
This article explores the ownership rules surrounding AI-generated outputs, examining how they are linked to their creators and the implications...
This article presents a benchmarking framework for early deterioration prediction in emergency triage, comparing hospital-rich settings with...
The paper presents ConceptRM, a novel method aimed at reducing alert fatigue in intelligent agents by improving data cleaning processes for...
The paper presents a novel training paradigm for AI that integrates concepts from affective neuroscience, focusing on a dual-model framework...
The paper explores how Large Language Models (LLMs) can achieve superintelligence through the Diligent Learner framework, emphasizing the...
This article introduces Vision-Language Causal Graphs (VLCGs) to enhance causal reasoning in Large Vision-Language Models (LVLMs), addressing the...
This paper presents a novel evaluation framework for assessing the alignment of language models under realistic pressure, revealing behaviors...
This paper presents a pipeline for verifying mathematical solutions generated by Large Language Models (LLMs), emphasizing both automatic...
The paper introduces Counterfactual Simulation Training (CST), a method designed to enhance Chain-of-Thought (CoT) faithfulness in large language models...
The paper introduces ICON, a novel framework designed to defend Large Language Model (LLM) agents against Indirect Prompt Injection (IPI)...
The paper presents PromptCD, a method for enhancing AI behavior at test time using polarity-prompt contrastive decoding, improving alignment...
This paper explores the challenges of ensuring safety in AI systems using untrusted monitoring. It develops a taxonomy of collusion strategies...
This paper explores the cross-modal bias in multimodal large language models (MLLMs) through a physics-based phenomenological approach...
The paper presents an evaluation framework called Implicit Intelligence, which assesses AI agents' ability to understand unstated user requirements...
The article discusses new tools that analyze the energy consumption of various AI models, highlighting the importance of understanding power...
Anthropic has announced the discontinuation of its flagship safety pledge, raising concerns about AI safety commitments in the industry.
The article discusses a Pentagon meeting involving Defense Secretary Pete Hegseth, former Uber executive Emil Michael, and private equity...
Concerns over AI spending are causing volatility on Wall Street, as investors question profitability. Major companies like IBM and Mastercard...