AI Safety & Ethics

Alignment, bias, regulation, and responsible AI

Top This Week

[2603.14267] DiFlowDubber: Discrete Flow Matching for Automated Video Dubbing via Cross-Modal Alignment and Synchronization
Machine Learning

arXiv - AI · 4 min
[2601.22440] AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Values from Casual Conversations
LLMs

arXiv - AI · 4 min
[2601.13622] CARPE: Context-Aware Image Representation Prioritization via Ensemble for Large Vision-Language Models
LLMs

arXiv - AI · 3 min

All Content

[2505.12186] Self-Destructive Language Model
LLMs

arXiv - Machine Learning · 4 min
[2505.12096] When Bias Meets Trainability: Connecting Theories of Initialization
Machine Learning

arXiv - Machine Learning · 4 min
[2307.14025] Topological Inductive Bias fosters Multiple Instance Learning in Data-Scarce Scenarios
Machine Learning

arXiv - Machine Learning · 4 min
[2404.17768] Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization
Machine Learning

arXiv - Machine Learning · 4 min
[2602.11661] Quark Medical Alignment: A Holistic Multi-Dimensional Alignment and Collaborative Optimization Paradigm
LLMs

arXiv - AI · 4 min
[2512.12411] Detecting the Disturbance: A Nuanced View of Introspective Abilities in LLMs
LLMs

arXiv - AI · 4 min
[2603.01214] Reasoning Boosts Opinion Alignment in LLMs
LLMs

arXiv - Machine Learning · 3 min
[2509.01938] EigenBench: A Comparative Behavioral Measure of Value Alignment
LLMs

arXiv - Machine Learning · 4 min
[2508.15030] Collab-REC: An LLM-based Agentic Framework for Balancing Recommendations in Tourism
LLMs

arXiv - AI · 3 min
[2506.05619] Beyond RLHF and NLHF: Population-Proportional Alignment under an Axiomatic Framework
Robotics

arXiv - Machine Learning · 4 min
[2505.19965] Adaptive Location Hierarchy Learning for Long-Tailed Mobility Prediction
Machine Learning

arXiv - AI · 4 min
[2505.19653] Token-Importance Guided Direct Preference Optimization
LLMs

arXiv - AI · 3 min
[2505.16448] The First Impression Problem: Internal Bias Triggers Overthinking in Reasoning Models
Machine Learning

arXiv - AI · 4 min
[2503.11832] Safety Mirage: How Spurious Correlations Undermine VLM Safety Fine-Tuning and Can Be Mitigated by Machine Unlearning
LLMs

arXiv - Machine Learning · 4 min
[2603.02128] LLMs as Strategic Actors: Behavioral Alignment, Risk Calibration, and Argumentation Framing in Geopolitical Simulations
LLMs

arXiv - AI · 3 min
[2603.00233] Scaling Quantum Machine Learning without Tricks: High-Resolution and Diverse Image Generation
Machine Learning

arXiv - Machine Learning · 4 min
[2603.02019] Selection as Power: Constrained Reinforcement for Bounded Decision Authority
AI Safety

arXiv - Machine Learning · 4 min
[2603.01945] When Numbers Tell Half the Story: Human-Metric Alignment in Topic Model Evaluation
Machine Learning

arXiv - Machine Learning · 4 min
[2603.01792] ALTER: Asymmetric LoRA for Token-Entropy-Guided Unlearning of LLMs
LLMs

arXiv - AI · 4 min
[2603.01784] Co-Evolutionary Multi-Modal Alignment via Structured Adversarial Evolution
LLMs

arXiv - AI · 3 min