[2512.21106] Semantic Refinement with LLMs for Graph Representations
Abstract page for arXiv paper 2512.21106: Semantic Refinement with LLMs for Graph Representations
Alignment, bias, regulation, and responsible AI
The paper presents PRECTR-V2, an advanced framework for improving search relevance and click-through rate (CTR) prediction by addressing ...
The paper introduces CAMEL, a confidence-gated reflection framework for reward modeling in AI, achieving state-of-the-art performance wit...
This article explores the use of vision-language models (VLMs) for non-invasive ergonomic assessment of manual lifting tasks, estimating ...
This article evaluates various machine learning models for hate speech detection on social media, comparing traditional and advanced tech...
The paper presents OptiLeak, a framework utilizing reinforcement learning to enhance prompt reconstruction efficiency in multi-tenant LLM...
This article examines the issue of personal information memorization in language models, highlighting the risks and proposing a detection...
This paper explores fair allocation of indivisible goods through limited cost-sensitive sharing, demonstrating how controlled sharing can...
This article explores a hybrid dialogue system that integrates Large Language Models (LLMs) within a rule-based framework to enhance lear...
This article explores the limitations of diversity in ideas generated by large language models (LLMs) compared to human creativity, ident...
This article discusses three significant challenges and two potential solutions for improving the safety of unsupervised elicitation in l...
The paper introduces QueryBandits, a model-agnostic framework designed to mitigate hallucinations in large language models (LLMs) by opti...
This article presents a framework for circuit tracing in vision-language models (VLMs), aiming to enhance understanding of their internal...
This article examines how specific linguistic features of queries impact the performance of Large Language Models (LLMs), particularly in...
This article examines the expectation-realisation gap in agentic AI systems, revealing discrepancies between anticipated productivity gai...
This paper proposes the 'Right to History,' a principle ensuring individuals have a verifiable record of AI agent actions on personal har...
CodeHacker is an automated framework designed to generate test cases that identify vulnerabilities in competitive programming solutions, ...
This paper explores the concept of 'Epistemic Debt' in novice programming using generative AI, proposing metacognitive scripts to enhance...
This article discusses the concept of 'golden layers' in large language models (LLMs) and presents a novel method, Layer Gradient Analysi...
This paper evaluates the reliability of digital forensic evidence identified by large language models (LLMs), proposing a structured fram...
The OpenPort Protocol introduces a governance-first approach for AI agents, ensuring secure access to application tools while addressing ...