AI Safety & Ethics

Alignment, bias, regulation, and responsible AI

Top This Week

Washington needs AI guardrails — now | Opinion
AI Safety

We need legislation that draws clear lines on what AI systems may and may not do on behalf of the United States government.

AI Tools & Products · 3 min
[2601.12910] SciCoQA: Quality Assurance for Scientific Paper--Code Alignment
AI Safety

arXiv - AI · 3 min
[2509.21385] Debugging Concept Bottleneck Models through Removal and Retraining
Machine Learning

arXiv - Machine Learning · 4 min

All Content

[2603.22819] TDATR: Improving End-to-End Table Recognition via Table Detail-Aware Learning and Cell-Level Visual Alignment
Machine Learning

arXiv - AI · 4 min
[2603.22954] Privacy-Preserving EHR Data Transformation via Geometric Operators: A Human-AI Co-Design Technical Report
Machine Learning

arXiv - Machine Learning · 4 min
[2603.22779] KARMA: Knowledge-Action Regularized Multimodal Alignment for Personalized Search at Taobao
LLMs

arXiv - AI · 4 min
[2603.22855] TorR: Towards Brain-Inspired Task-Oriented Reasoning via Cache-Oriented Algorithm-Architecture Co-design
Computer Vision

arXiv - Machine Learning · 4 min
[2603.22690] WiFi2Cap: Semantic Action Captioning from Wi-Fi CSI via Limb-Level Semantic Alignment
AI Safety

arXiv - AI · 3 min
[2603.23268] SafeSeek: Universal Attribution of Safety Circuits in Language Models
LLMs

arXiv - AI · 4 min
[2603.22335] Causal Direct Preference Optimization for Distributionally Robust Generative Recommendation
LLMs

arXiv - AI · 3 min
[2603.23101] SpecXMaster Technical Report
AI Safety

arXiv - Machine Learning · 3 min
[2603.22882] TreeTeaming: Autonomous Red-Teaming of Vision-Language Models via Hierarchical Strategy Exploration
LLMs

arXiv - Machine Learning · 4 min
[2603.22824] Towards The Implicit Bias on Multiclass Separable Data Under Norm Constraints
Machine Learning

arXiv - Machine Learning · 3 min
[2603.22364] MCLR: Improving Conditional Modeling in Visual Generative Models via Inter-Class Likelihood-Ratio Maximization and Establishing the Equivalence between Classifier-Free Guidance and Alignment Objectives
Machine Learning

arXiv - AI · 4 min
[2603.22346] First-Mover Bias in Gradient Boosting Explanations: Mechanism, Detection, and Resolution
AI Safety

arXiv - AI · 4 min
[2603.22339] Problems with Chinchilla Approach 2: Systematic Biases in IsoFLOP Parabola Fits
LLMs

arXiv - Machine Learning · 4 min
[2603.22829] Improving Safety Alignment via Balanced Direct Preference Optimization
LLMs

arXiv - AI · 3 min
[2603.22721] HyFI: Hyperbolic Feature Interpolation for Brain-Vision Alignment
Machine Learning

arXiv - AI · 4 min
[2603.22322] AEGIS: An Operational Infrastructure for Post-Market Governance of Adaptive Medical AI Under US and EU Regulations
Machine Learning

arXiv - AI · 4 min
[2603.22314] Enhancing AI-Based Tropical Cyclone Track and Intensity Forecasting via Systematic Bias Correction
AI Safety

arXiv - AI · 4 min
[2603.22305] CN-Buzz2Portfolio: A Chinese-Market Dataset and Benchmark for LLM-Based Macro and Sector Asset Allocation from Daily Trending Financial News
LLMs

arXiv - AI · 4 min
LLMs

I mapped how Reddit actually talks about AI safety: 6,374 posts, 23 clusters, some surprising patterns

I collected Reddit posts between Jan 29 - Mar 1, 2026 using 40 keyword-based search terms ("AI safety", "AI alignment", "EU AI Act", "AI ...

Reddit - Artificial Intelligence · 1 min
NLP

What if your AI agent could fix its own hallucinations without being told what's wrong?

Every autonomous AI agent has three problems: it contradicts itself, it can't decide, and it says things confidently that aren't true. Cu...

Reddit - Artificial Intelligence · 1 min

