Democratizing AI Safety with RiskRubric.ai
Published September 18, 2025, by Gal Moyal (Noma Security)

Building trust in the open model ecosystem through standardized risk assessment

More than 500,000 models are available on the Hugging Face Hub, but it is not always clear how users should choose the best model for their needs, particularly when it comes to security. Developers might find a model that perfectly fits their use case, yet have no systematic way to evaluate its security posture, privacy implications, or potential failure modes. As models become more powerful and adoption accelerates, we need equally rapid progress in AI safety and security reporting.

We're therefore excited to announce RiskRubric.ai, a new initiative led by the Cloud Security Alliance and Noma Security, with contributions from Haize Labs and Harmonic Security, for standardized and transparent risk assessment in the AI model ecosystem.

Risk Rubric, a new standardized assessment of risk for models

RiskRubric.ai provides consistent, comparable risk scores across the entire model landscape by evaluating models across six pillars: transparency, reliability, security, privacy, safety, and reputation. The platform's approach aligns with open-source values: rigorous, transparent, and reproducible. Using Noma Security's capabilities to automate the effort, each model undergoes:

- 1,000+ reliability tests checking consistency and edge-case handling
- 200+ adversarial s...
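To make the six-pillar idea concrete, here is a minimal sketch of what a per-model scorecard could look like. The pillar names come from the article; the 0–100 scale, the `RiskScorecard` class, and the unweighted average are illustrative assumptions, not RiskRubric.ai's actual scoring methodology.

```python
from dataclasses import dataclass
from statistics import mean

# The six pillars named by RiskRubric.ai
PILLARS = ("transparency", "reliability", "security",
           "privacy", "safety", "reputation")

@dataclass
class RiskScorecard:
    """Hypothetical per-model scorecard (illustrative structure only)."""
    model_id: str
    scores: dict[str, int]  # pillar -> score on an assumed 0-100 scale

    def overall(self) -> float:
        # Unweighted average across the six pillars; RiskRubric.ai's
        # real aggregation method is not specified in this article.
        return mean(self.scores[p] for p in PILLARS)

# Example with made-up numbers for a fictional model
card = RiskScorecard(
    model_id="example-org/example-model",
    scores={"transparency": 80, "reliability": 75, "security": 70,
            "privacy": 85, "safety": 90, "reputation": 65},
)
print(round(card.overall(), 1))  # prints 77.5
```

A structure like this is what makes scores "consistent and comparable": every model is assessed on the same pillars, so two scorecards can be compared field by field.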