AI Safety & Ethics

Alignment, bias, regulation, and responsible AI

Top This Week

[2604.03356] Evaluating Artificial Intelligence Through a Christian Understanding of Human Flourishing
LLMs

arXiv - AI · 3 min ·
[2602.01528] Making Bias Non-Predictive: Training Robust LLM Reasoning via Reinforcement Learning
LLMs

arXiv - Machine Learning · 4 min ·
[2510.27584] Image Hashing via Cross-View Code Alignment in the Age of Foundation Models
LLMs

arXiv - Machine Learning · 4 min ·

All Content

[2602.15061] Safe-SDL: Establishing Safety Boundaries and Control Mechanisms for AI-Driven Self-Driving Laboratories
Robotics

The paper presents Safe-SDL, a framework for ensuring safety in AI-driven Self-Driving Laboratories, addressing the critical 'Syntax-to-S...

arXiv - AI · 4 min ·
[2602.15568] Scenario Approach with Post-Design Certification of User-Specified Properties
Data Science

This paper introduces a scenario approach for post-design certification of user-specified properties, enhancing reliability without addit...

arXiv - Machine Learning · 3 min ·
[2602.15552] Latent Regularization in Generative Test Input Generation
Machine Learning

This paper explores the effects of latent space regularization on the quality of generative test inputs for deep learning classifiers, de...

arXiv - Machine Learning · 3 min ·
[2602.15055] Beyond Context Sharing: A Unified Agent Communication Protocol (ACP) for Secure, Federated, and Autonomous Agent-to-Agent (A2A) Orchestration
LLMs

The paper introduces the Agent Communication Protocol (ACP), a framework for secure and efficient agent-to-agent orchestration, addressin...

arXiv - AI · 3 min ·
[2602.15037] CircuChain: Disentangling Competence and Compliance in LLM Circuit Analysis
LLMs

The paper introduces CircuChain, a benchmark for evaluating large language models (LLMs) in electrical circuit analysis, focusing on thei...

arXiv - AI · 4 min ·
[2602.15423] GaiaFlow: Semantic-Guided Diffusion Tuning for Carbon-Frugal Search
Machine Learning

GaiaFlow presents a novel framework for carbon-efficient search, employing semantic-guided diffusion tuning to balance retrieval accuracy...

arXiv - Machine Learning · 3 min ·
[2602.15785] This human study did not involve human subjects: Validating LLM simulations as behavioral evidence
LLMs

This article discusses the use of large language models (LLMs) as synthetic participants in social science experiments, evaluating their ...

arXiv - AI · 4 min ·
[2602.15368] GMAIL: Generative Modality Alignment for generated Image Learning
Machine Learning

The paper presents GMAIL, a novel framework for aligning generated images with real images in machine learning, enhancing performance in ...

arXiv - Machine Learning · 4 min ·
[2602.15326] SCENE OTA-FD: Self-Centering Noncoherent Estimator for Over-the-Air Federated Distillation
AI Safety

The paper presents SCENE, a novel estimator for over-the-air federated distillation that enhances aggregation without requiring pilot sig...

arXiv - Machine Learning · 3 min ·
[2602.15645] CARE Drive: A Framework for Evaluating Reason-Responsiveness of Vision Language Models in Automated Driving
LLMs

The article presents CARE Drive, a framework for evaluating the reason-responsiveness of vision language models in automated driving, add...

arXiv - AI · 4 min ·
[2602.15323] Unforgeable Watermarks for Language Models via Robust Signatures
LLMs

The paper presents a novel watermarking scheme for language models that ensures unforgeability and recoverability, enhancing content prov...

arXiv - Machine Learning · 4 min ·
[2602.15553] RUVA: Personalized Transparent On-Device Graph Reasoning
NLP

The paper presents RUVA, a novel architecture for personalized on-device graph reasoning that enhances user control over AI-generated con...

arXiv - AI · 3 min ·
[2602.15259] Knowing Isn't Understanding: Re-grounding Generative Proactivity with Epistemic and Behavioral Insight
Generative AI

This paper discusses the limitations of generative AI agents that equate understanding with resolving explicit queries, highlighting the ...

arXiv - Machine Learning · 4 min ·
[2602.15532] Quantifying construct validity in large language model evaluations
LLMs

This paper presents a structured capabilities model to improve the construct validity of large language model (LLM) evaluations, addressi...

arXiv - Machine Learning · 4 min ·
[2602.15252] Decision Making under Imperfect Recall: Algorithms and Benchmarks
Machine Learning

This paper presents a benchmark suite for decision-making under imperfect recall in game theory, introducing regret matching algorithms t...

arXiv - Machine Learning · 4 min ·
[2602.15195] Weight-Space Detection of Backdoors in LoRA Adapters
LLMs

This article presents a novel method for detecting backdoors in LoRA adapters by analyzing their weight matrices, achieving high accuracy...

arXiv - Machine Learning · 3 min ·
[2602.15391] Improving LLM Reliability through Hybrid Abstention and Adaptive Detection
LLMs

The paper presents a novel adaptive abstention system for Large Language Models (LLMs) that balances safety and utility by dynamically ad...

arXiv - AI · 4 min ·
[2602.15384] World-Model-Augmented Web Agents with Action Correction
LLMs

The paper presents WAC, a web agent that enhances task execution by integrating model collaboration, consequence simulation, and action r...

arXiv - AI · 3 min ·
[2602.15161] Exploiting Layer-Specific Vulnerabilities to Backdoor Attack in Federated Learning
Machine Learning

This paper presents the Layer Smoothing Attack (LSA), a novel backdoor attack exploiting layer-specific vulnerabilities in federated lear...

arXiv - Machine Learning · 4 min ·
[2602.15298] X-MAP: eXplainable Misclassification Analysis and Profiling for Spam and Phishing Detection
Machine Learning

The paper presents X-MAP, a framework for analyzing and profiling misclassifications in spam and phishing detection, enhancing interpreta...

arXiv - AI · 3 min ·