[2510.14628] RLAIF-SPA: Structured AI Feedback for Semantic-Prosodic Alignment in Speech Synthesis
Abstract page for arXiv paper 2510.14628: RLAIF-SPA: Structured AI Feedback for Semantic-Prosodic Alignment in Speech Synthesis
Alignment, bias, regulation, and responsible AI