[2603.22920] The EU AI Act and the Rights-based Approach to Technological Governance

arXiv - AI · 3 min read

About this article

Computer Science > Computers and Society
arXiv:2603.22920 (cs) [Submitted on 24 Mar 2026]

Title: The EU AI Act and the Rights-based Approach to Technological Governance
Authors: Georgios Pavlidis

Abstract: The EU AI Act constitutes an important development in shaping the Union's digital regulatory architecture. The Act places fundamental rights at the heart of a risk-based governance framework. The article examines how the AI Act institutionalises a human-centric approach to AI, and how its provisions explicitly and implicitly embed the protection of rights enshrined in the EU Charter of Fundamental Rights. It argues that fundamental rights function not merely as aspirational goals, but as legal thresholds and procedural triggers across the lifecycle of an AI system. The analysis suggests that the AI Act has the potential to serve as a model for rights-preserving AI systems, while acknowledging that challenges will emerge at the level of implementation.

Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.22920 [cs.CY] (or arXiv:2603.22920v1 [cs.CY] for this version), https://doi.org/10.48550/arXiv.2603.22920
arXiv-issued DOI via DataCite (pending registration)
Journal reference: Review of European and Comparative Law (2026)
Related DOI: https://doi.org/10.31743/recl.19283

Originally published on March 25, 2026. Curated by AI News.

Related Articles

Washington needs AI guardrails — now | Opinion (AI Safety)
We need legislation that draws clear lines on what AI systems may and may not do on behalf of the United States government.
AI Tools & Products · 3 min

[2601.12910] SciCoQA: Quality Assurance for Scientific Paper--Code Alignment (AI Safety)
arXiv - AI · 3 min

[2509.21385] Debugging Concept Bottleneck Models through Removal and Retraining (Machine Learning)
arXiv - Machine Learning · 4 min

[2512.00804] Epistemic Bias Injection: Biasing LLMs via Selective Context Retrieval (LLMs)
arXiv - AI · 4 min