[2602.13244] Responsible AI in Business
Summary
The paper introduces the concept of Responsible AI for organizational practice, with a particular focus on small and medium-sized enterprises (SMEs). It structures the topic along four focal areas: regulatory frameworks (the EU AI Act), transparency through Explainable AI, sustainability through Green AI, and data sovereignty through locally operated models.
Why It Matters
As AI technologies become integral to business operations, understanding Responsible AI is crucial for compliance, sustainability, and ethical practices. This paper provides a structured approach for organizations to navigate these challenges, particularly for SMEs facing resource constraints.
Key Takeaways
- Responsible AI is essential for legal compliance and ethical operations in businesses.
- The EU AI Act outlines key obligations for AI providers and deployers, emphasizing risk assessment and transparency.
- Green AI principles advocate for evaluating AI systems based on energy efficiency and resource consumption.
- Local AI models enhance data protection and operational independence.
- A structured governance and implementation roadmap is vital for successful AI integration.
Computer Science > Computers and Society
arXiv:2602.13244 (cs)
[Submitted on 31 Jan 2026]
Title: Responsible AI in Business
Authors: Stephan Sandfuchs, Diako Farooghi, Janis Mohr, Sarah Grewe, Markus Lemmen, Jörg Frochte
Abstract: Artificial intelligence (AI) and Machine Learning (ML) have moved from research and pilot projects into everyday business operations, with generative AI accelerating adoption across processes, products, and services. This paper introduces the concept of Responsible AI for organizational practice, with a particular focus on small and medium-sized enterprises. It structures Responsible AI along four focal areas that are central for introducing and operating AI systems in a legally compliant, comprehensible, sustainable, and data-sovereign manner. First, it discusses the EU AI Act as a risk-based regulatory framework, including the distinction between provider and deployer roles and the resulting obligations such as risk assessment, documentation, transparency requirements, and AI literacy measures. Second, it addresses Explainable AI as a basis for transparency and trust, clarifying key notions such as transparency, interpretability, and explainability and summarizing practical approaches to make model behavior and decisions more understandable. Third, it covers Green AI, emphasizing that AI systems should be evaluated not only by performance b...