Framework coverage by principle — each principle's coverage score and the frameworks that address it (the principle labels themselves were not preserved in this export):

| # | Coverage | Frameworks (status) |
|---|---|---|
| 1 | 98% | EU AI Act (Mandatory) · NIST AI RMF (Full) · ISO 42001 (Full) · OECD (Full) · UNESCO (Full) · Canada SAD (Mandatory) · Law 25 QC (Mandatory) · Microsoft RAI (Standard) · Google AI (Commitment) |
| 2 | 95% | EU AI Act (Mandatory) · OECD (Full) · UNESCO (Full) · NIST AI RMF (Full) · Google AI (Commitment) · Microsoft RAI (Standard) · NYC LL 144 (Mandatory) · ISO 42001 (Partial) · MAS FEAT (Full) |
| 3 | 96% | EU AI Act (Mandatory) · NIST AI RMF (Full) · ISO 42001 (Full) · OECD (Full) · Anthropic RSP (Central) · OpenAI Prep. (Central) · CSA Singapore (Full) · DoD AI (Mandatory) · FDA SaMD (Mandatory) |
| 4 | 97% | GDPR + EU AI (Mandatory) · Law 25 QC (Mandatory) · OECD (Full) · UNESCO (Full) · ISO 42001 (Full) · Canada SAD (Mandatory) · Microsoft RAI (Standard) |
| 5 | 100% | China CAC (Mandatory) · EU AI Act (Mandatory) · ISO 42001 (Full) · NIST AI RMF (Full) · OECD (Full) · UNESCO (Full) · Canada SAD (Mandatory) · Microsoft RAI (Standard) |
| 6 | 88% | CoE Treaty (Treaty) · EU AI Act (Mandatory) · UNESCO (Full) · OECD (Full) · Canada SAD (Mandatory) · NIST AI RMF (Full) · DoD AI (Mandatory) |
| 7 | 42% | Google/MS (Partial) · UNESCO (Full) · EU AI Act GPAI (Partial) · OECD 2024 (Emerging) · Paris Decl. (Commitment) · NIST AI RMF (Absent) · ISO 42001 (Absent) |
| 8 | 75% | Google/MS (Partial) · UNESCO (Full) · OECD (Full) · Google AI (Commitment) · Microsoft RAI (Standard) · EU AI Act (Partial) · UN Res. 2024 (Resolution) · NIST AI RMF (Partial) · ISO 42001 (Limited) |
| 9 | 93% | ISO 42001 (Central) · NIST AI RMF (Central) · EU AI Act (Mandatory) · Microsoft RAI (Central) · Canada SAD (Mandatory) · MAS Guidelines (Mandatory) · OSFI Canada (Mandatory) · G7 Hiroshima (Commitment) |
Coverage matrix — principles (rows) × frameworks (columns):

| # | EU AI Act | NIST RMF | ISO 42001 | OECD | UNESCO | Canada SAD | Law 25 QC | Google AI | Microsoft | Anthropic |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ● | ● | ● | ● | ● | ● | ● | ● | ● | ● |
| 2 | ● | ● | ◐ | ● | ● | ◐ | ● | ● | ● | ◐ |
| 3 | ● | ● | ● | ● | ◐ | ◐ | ◐ | ● | ● | ● |
| 4 | ● | ● | ● | ● | ● | ● | ● | ● | ● | ◐ |
| 5 | ● | ● | ● | ● | ● | ● | ● | ● | ● | ● |
| 6 | ● | ● | ◐ | ● | ● | ● | ◐ | ◐ | ◐ | ● |
| 7 | ◐ | ○ | ○ | ◐ | ● | ○ | ○ | ○ | ○ | ○ |
| 8 | ◐ | ◐ | ○ | ● | ● | ◐ | ◐ | ● | ● | ◐ |
| 9 | ● | ● | ● | ◐ | ◐ | ● | ● | ◐ | ● | ◐ |

Legend: ● Full coverage · ◐ Partial coverage · ○ Not covered
(A second nine-row coverage matrix appeared here; its row and column labels were not preserved in this export.)
Programme timeline — parallel tracks

Phase 1 · Foundation — M1–M6
Phase 2 · Implementation — M7–M13
Phase 3 · Maturity — M14–M18

Four tracks run in parallel across all three phases: ISO 42001 · NIST AI RMF · AI TPRM · RAI Training
Axis 01 · Governance Backbone
ISO/IEC 42001 — AI Management System
Certifiable structure · Compatible with ISO 27001
Foundation
M1 – M6
- Gap analysis vs. ISO 42001 requirements
- Scope definition and context of the organisation
- Stakeholder identification and requirements mapping
- Draft AI Policy (signed by executive leadership)
- AI roles and responsibilities matrix (RACI)
Key Deliverable
ISO 42001 Gap Report + AI Policy
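The gap analysis can be tracked as a simple machine-readable register. This is a sketch only: the clause areas follow the harmonised high-level structure common to ISO management system standards (clauses 4–10), but the statuses shown are placeholder examples, not real findings.

```python
# Minimal ISO/IEC 42001 gap register sketch. Clause areas follow the
# harmonised structure of ISO management system standards (clauses 4-10);
# the statuses are illustrative placeholders.
GAP_REGISTER = [
    {"clause": "4", "area": "Context of the organisation", "status": "partial"},
    {"clause": "5", "area": "Leadership and AI policy",     "status": "missing"},
    {"clause": "6", "area": "Planning",                     "status": "partial"},
    {"clause": "7", "area": "Support and competence",       "status": "missing"},
    {"clause": "8", "area": "Operation",                    "status": "missing"},
    {"clause": "9", "area": "Performance evaluation",       "status": "missing"},
    {"clause": "10", "area": "Improvement",                 "status": "missing"},
]

WEIGHTS = {"implemented": 1.0, "partial": 0.5, "missing": 0.0}

def readiness(register: list) -> float:
    """Overall readiness as a 0-1 fraction of weighted clause coverage."""
    return sum(WEIGHTS[row["status"]] for row in register) / len(register)

def open_gaps(register: list) -> list:
    """Clause areas still needing work, feeding the remediation backlog."""
    return [row["area"] for row in register if row["status"] != "implemented"]
```

A register like this makes the Gap Report reproducible: re-running `readiness` after each remediation cycle gives a trend line for management review.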
Implementation
M7 – M13
- AI system inventory and risk classification
- AI Impact Assessment procedures (aligned with ISO/IEC 42005)
- Operational controls per risk level
- Internal audit programme setup
- Management review process
Key Deliverable
AIMS Documentation Package
Maturity
M14 – M18
- Internal pre-certification audit
- Corrective actions and remediation
- Stage 1 + Stage 2 certification audit
- Continuous improvement cycle (PDCA)
Key Deliverable
ISO 42001 Certificate
Axis 02 · Operational Risk Management
NIST AI RMF — Govern · Map · Measure · Manage
Actionable playbook · System-level risk controls
Foundation
M1 – M6
- Complete AI system inventory (all environments)
- Risk scoring model (impact × likelihood × bias)
- Mapping to NIST AI RMF functions (GOVERN)
- Define risk tolerance thresholds by system category
Key Deliverable
AI Risk Register + Scoring Matrix
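The scoring model above (impact × likelihood × bias) can be sketched as a small function. The 1–5 scales, the multiplicative bias factor and the tier thresholds are illustrative assumptions to be calibrated per organisation, not something prescribed by the NIST AI RMF:

```python
def ai_risk_score(impact: int, likelihood: int, bias_exposure: int) -> int:
    """Composite risk score on illustrative 1-5 scales.

    impact:        severity of harm if the system misbehaves
    likelihood:    probability of that harm materialising
    bias_exposure: degree to which the system affects people unevenly
    """
    for name, value in (("impact", impact), ("likelihood", likelihood),
                        ("bias_exposure", bias_exposure)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be between 1 and 5")
    return impact * likelihood * bias_exposure

def risk_tier(score: int) -> str:
    """Map a composite score (1-125) to a tolerance tier.

    Thresholds are assumptions; the Foundation phase defines the real
    ones per system category.
    """
    if score >= 60:
        return "high"
    if score >= 20:
        return "medium"
    return "low"

# Example: a customer-facing credit model -- severe impact, moderate
# likelihood, notable bias exposure.
print(risk_tier(ai_risk_score(5, 3, 3)))  # 45 -> "medium"
```

Keeping the scoring in code rather than a spreadsheet makes the risk register auditable and lets threshold changes be reviewed like any other change.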
Implementation
M7 – M13
- Deploy RMF Playbook actions (MAP + MEASURE)
- Fairness, robustness and explainability testing
- AI incident response playbook
- Integration into existing GRC tooling
- Red team / adversarial testing for high-risk systems
Key Deliverable
AI Controls Library + Incident Playbook
Maturity
M14 – M18
- Automated risk monitoring dashboards
- Post-deployment drift detection pipelines
- Quarterly risk review cadence
- Metrics reporting to executive committee
Key Deliverable
Continuous AI Risk Dashboard
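One common way to implement the drift-detection pipelines above is to compare live score or feature distributions against a training-time baseline using the Population Stability Index. PSI is a standard metric, but the epsilon floor and the 0.2 alert threshold below are conventional rules of thumb, not part of the RMF:

```python
import math

def psi(expected: list, actual: list) -> float:
    """Population Stability Index between two binned distributions,
    each given as bin proportions summing to ~1. Empty bins are
    floored at a small epsilon to avoid log(0)."""
    eps = 1e-4
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

def drift_alert(baseline: list, live: list, threshold: float = 0.2) -> bool:
    """Rule of thumb: PSI < 0.1 stable, 0.1-0.2 moderate shift,
    > 0.2 significant drift worth investigating."""
    return psi(baseline, live) > threshold

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time score quartiles
live     = [0.10, 0.15, 0.25, 0.50]  # post-deployment proportions
print(round(psi(baseline, live), 3), drift_alert(baseline, live))  # 0.362 True
```

Run on a schedule against each high-risk system, a check like this feeds the quarterly risk review with concrete, comparable numbers.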
Axis 03 · Third-Party AI Risk
AI TPRM — Vendor & Supply Chain Risk Management
Extension of existing TPRM programme · AI-specific criteria
Foundation
M1 – M6
- Map all AI vendor relationships (models, APIs, data)
- Define AI-specific risk assessment criteria
- Classify vendors by risk tier (critical, high, medium)
- AI vendor questionnaire design (security, fairness, privacy)
Key Deliverable
AI Vendor Registry + Risk Taxonomy
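Vendor tiering can be bootstrapped from a minimal machine-readable registry. The criteria fields and tier rules below are illustrative assumptions — a real programme would derive them from the questionnaire — and the vendor names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AIVendor:
    name: str
    provides_foundation_model: bool  # e.g. a hosted LLM API
    processes_personal_data: bool    # vendor touches personal data
    feeds_high_risk_system: bool     # output consumed by a high-risk system

def vendor_tier(v: AIVendor) -> str:
    """Illustrative tiering rule: anything in the critical path of a
    high-risk system is critical; model provision or personal-data
    processing alone is high; everything else starts at medium."""
    if v.feeds_high_risk_system:
        return "critical"
    if v.provides_foundation_model or v.processes_personal_data:
        return "high"
    return "medium"

registry = [  # hypothetical vendors
    AIVendor("llm-api-co", True, True, True),
    AIVendor("ocr-widget", False, True, False),
    AIVendor("chart-lib", False, False, False),
]
for v in registry:
    print(v.name, vendor_tier(v))
```

Encoding the rule makes tier assignments consistent across assessors and lets the Implementation phase re-tier the whole registry when criteria change.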
Implementation
M7 – M13
- Conduct assessments for critical-tier vendors
- AI clauses integration in vendor contracts
- Data processing agreements update for AI use
- Concentration risk analysis (cloud AI providers)
- Escalation and remediation process
Key Deliverable
Vendor Assessment Reports + AI Contract Clauses
Maturity
M14 – M18
- Annual AI vendor reassessment cycle
- Continuous monitoring of vendor AI incidents
- AI supply chain risk in Board reporting
- Alignment with EU AI Act supply chain requirements
Key Deliverable
AI TPRM Annual Review Programme
Axis 04 · Capability Building
RAI Training — Three-level Awareness Programme
All employees · Technical teams · Risk & Compliance leaders
Foundation
M1 – M6
- Training needs analysis by role and business unit
- Design 3-tier curriculum (awareness / practitioner / expert)
- Identify internal subject matter experts
- Select / develop e-learning and workshop content
Key Deliverable
RAI Training Curriculum + Content
Implementation
M7 – M13
- Tier 1: All-staff RAI awareness (e-learning, 1h)
- Tier 2: Practitioner workshop — AI/Data/Product teams
- Tier 3: Expert certification — Risk, CISO, Legal
- Case-based learning from internal AI incidents
- Compliance tracking and completion reporting
Key Deliverable
Training Completion Report (all 3 tiers)
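Completion tracking across the three tiers reduces to a small aggregation. The record shape (tier label, completion flag) is an assumed export format from whatever learning-management system is in use:

```python
from collections import Counter

def completion_rates(records: list) -> dict:
    """Per-tier completion rate from (tier, completed) records."""
    totals = Counter()
    done = Counter()
    for tier, completed in records:
        totals[tier] += 1
        if completed:
            done[tier] += 1
    return {tier: done[tier] / totals[tier] for tier in totals}

records = [  # hypothetical LMS export
    ("tier1_awareness", True), ("tier1_awareness", True),
    ("tier1_awareness", False),
    ("tier2_practitioner", True), ("tier2_practitioner", False),
    ("tier3_expert", True),
]
print(completion_rates(records))  # tier1 at 2/3, tier2 at 1/2, tier3 at 1/1
```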
Maturity
M14 – M18
- RAI integrated into onboarding process
- Annual refresher programme (regulatory updates)
- Internal RAI community of practice
- Culture measurement (RAI maturity survey)
Key Deliverable
Embedded RAI Culture Programme
Why this order matters — axis interdependencies
01 → All
ISO 42001 first
Creates the formal structure, policies, roles and artefacts that all other axes depend on. Without it, the other axes lack governance anchoring.
02 feeds 03
NIST RMF operationalises
The AI inventory and risk scoring from Axis 2 feed directly into the vendor classification in Axis 3 — there is no vendor risk programme without knowing which systems depend on which suppliers.
03 unlocks 04
TPRM reveals gaps
Vendor assessments expose the specific risks employees need to understand. Training content becomes concrete and credible because it is grounded in real supplier findings.
04 sustains all
Training sustains everything
Governance documents and risk registers are only effective if the teams using them understand why they exist. Training converts policies into behaviours.
Success milestones by horizon
M6 · Foundation complete
100%
AI systems inventoried and risk-classified · AI Policy signed · Vendor map complete
M12 · Implementation
80%
Staff trained (Tier 1) · Top-tier vendors assessed · Controls deployed for high-risk systems
M18 · Maturity
ISO 42001 certification achieved · Continuous monitoring live · RAI embedded in culture
Ongoing · Steady State
∞
Annual audit cycle · Regulatory watch · Quarterly Board reporting on AI risk
55+ AI Frameworks & Regulations
Exhaustive reference list of AI frameworks and regulations, grouped by scope.
59 frameworks
Binding = legally enforceable · Voluntary = guidelines / best practices
🌐 International (8)
| Framework / Regulation | Year | Type | Status | Description |
|---|---|---|---|---|
| ↗ OECD AI Principles | 2019/2024 | Guidelines | Voluntary | Foundational AI principles adopted by 46+ countries covering transparency, accountability and human oversight. |
| ↗ UNESCO Recommendation on AI Ethics | 2021 | Recommendation | Voluntary | Global recommendation covering all aspects of ethical AI from privacy to sustainability and cultural diversity. |
| ↗ G7 Hiroshima AI Process | 2023–2025 | Code of Conduct | Voluntary | International code of conduct for advanced AI developers, covering safety, transparency and responsible deployment. |
| ↗ Council of Europe AI Convention | 2024 | Treaty | Binding | First legally binding international treaty on AI, requiring signatories to ensure AI respects human rights. |
| ↗ GPAI Code of Practice | 2025 | Code of Practice | Voluntary | Global Partnership on AI code of practice for general-purpose AI models aligned with the EU AI Act. |
| ↗ UN Resolution A/78/L.49 | 2024 | Resolution | Voluntary | UN General Assembly resolution on seizing opportunities of safe, secure and trustworthy AI for sustainable development. |
| ↗ Bletchley / Seoul / Paris AI Summits | 2023–2025 | Declaration | Voluntary | Series of international AI Safety Summits establishing shared principles for frontier AI governance. |
| ↗ Singapore Consensus on AI Safety | 2025 | Consensus | Voluntary | International consensus on priorities for AI safety research and evaluation methodologies. |
🇪🇺 European Union (6)
| Framework / Regulation | Year | Type | Status | Description |
|---|---|---|---|---|
| ↗ EU AI Act (Regulation 2024/1689) | 2024 | Regulation | Binding | Landmark risk-based regulation classifying AI systems into risk tiers with mandatory requirements and sanctions up to €35M or 7% global turnover. |
| ↗ EU AI Liability Directive (proposed) | 2022+ | Directive (proposed) | Proposed | Proposed directive establishing civil liability rules for damage caused by AI systems, adapting existing product liability rules. |
| ↗ HLEG Ethics Guidelines for Trustworthy AI | 2019 | Guidelines | Voluntary | Ethics guidelines for trustworthy AI defining 7 key requirements: human agency, robustness, privacy, transparency, fairness, wellbeing, accountability. |
| ↗ GPAI Code of Practice (Art. 56 EU AI Act) | 2025 | Code of Practice | Voluntary | Code of practice helping providers of general-purpose AI models demonstrate compliance with their binding obligations under the EU AI Act; adherence to the code itself is voluntary. |
| ↗ GDPR applied to AI (Art. 22) | 2018 | Regulation | Binding | Application of GDPR rights to automated decision-making and AI, including right to explanation and human review. |
| ↗ AI Continent Action Plan | 2025 | Strategy | Voluntary | EU strategic plan to make Europe a global leader in trustworthy AI, covering infrastructure, data and talent. |
🏛️ National (14)
| Framework / Regulation | Year | Type | Status | Description |
|---|---|---|---|---|
| ↗ NIST AI RMF 1.0 (USA) | 2023 | Framework | Voluntary | Voluntary framework with four core functions (Govern, Map, Measure, Manage) for managing AI risks throughout the lifecycle. |
| ↗ Colorado AI Act SB205 (USA) | 2024 | Law | Binding | First US state law requiring algorithmic impact assessments for high-risk AI systems used in consequential decisions. |
| ↗ California AI Transparency Act SB942 (USA) | 2024 | Law | Binding | Requires AI providers to make detection tools available and disclose AI-generated content. |
| ↗ China CAC AI Regulations (China) | 2022–2025 | Regulation | Binding | Series of binding regulations covering algorithmic recommendations, deep synthesis (deepfakes), and generative AI services. |
| ↗ UK AI White Paper (UK) | 2023 | Policy | Voluntary | Pro-innovation regulatory framework establishing five AI principles without creating new AI-specific legislation. |
| ↗ Japan AI Promotion Act (Japan) | 2025 | Law | Binding | Law promoting AI development while ensuring safety and transparency for AI systems in Japan. |
| ↗ Singapore Model AI Governance Framework | 2020/2023 | Framework | Voluntary | Practical guidance for private sector AI governance covering internal governance, risk management and stakeholder engagement. |
| ↗ Brazil AI Law 15.900/2025 (Brazil) | 2025 | Law | Binding | Comprehensive AI law establishing rights of AI users and obligations for AI systems providers in Brazil. |
| ↗ Australia Voluntary AI Safety Standard | 2024 | Standard | Voluntary | Eight guardrails providing practical guidance for Australian businesses deploying AI in high-risk settings. |
| ↗ India MeitY AI Guidelines (India) | 2023 | Guidelines | Voluntary | Advisory guidelines for responsible AI development in India covering data governance, transparency and accountability. |
| ↗ Saudi Arabia SDAIA Ethics Principles | 2023 | Principles | Voluntary | National AI ethics principles covering human-centricity, transparency, privacy and fairness for AI systems in Saudi Arabia. |
| ↗ South Korea AI Basic Act (Korea) | 2024 | Law | Binding | Foundational AI legislation establishing risk-based classification and governance requirements for AI in South Korea. |
| ↗ France 2030 AI Strategy (France) | 2023 | Strategy | Voluntary | National AI strategy investing €1.5B in AI research, talent and sovereignty, with CNIL guidance on GDPR-AI intersection. |
| ↗ UAE AIATC AI Guidelines (UAE) | 2024 | Guidelines | Voluntary | AI governance principles and guidelines from the UAE Artificial Intelligence and Advanced Technology Council. |
🍁 Canada (5)
| Framework / Regulation | Year | Type | Status | Description |
|---|---|---|---|---|
| ↗ Directive on Automated Decision-Making (Canada) | 2019/2023 | Directive | Binding | Mandatory federal directive requiring algorithmic impact assessments for automated decision systems used by the federal government. |
| ↗ Bill C-27 / AIDA (Canada) | 2022+ | Bill (not enacted) | Proposed | Proposed Artificial Intelligence and Data Act establishing rules for high-impact AI systems in Canada (not enacted). |
| ↗ Pan-Canadian AI Strategy (Canada) | 2017/2022 | Strategy | Voluntary | National strategy investing $443M to support AI research, talent and adoption across Canada through CIFAR. |
| ↗ Government of Canada AI Guidelines | 2023 | Guidelines | Voluntary | Principles and guidelines for the responsible use of AI across the Government of Canada. |
| ↗ Law 25 — Québec (Canada) | 2023 | Law | Binding | Binding Quebec privacy law requiring disclosure of automated decision-making, profiling transparency and data protection obligations. |
📐 Standards & Norms (8)
| Framework / Regulation | Year | Type | Status | Description |
|---|---|---|---|---|
| ↗ ISO/IEC 42001:2023 | 2023 | Standard | Voluntary | Certifiable AI Management System (AIMS) standard providing requirements for establishing, implementing and improving AI governance. |
| ↗ ISO/IEC 42005:2025 — AI Impact Assessment | 2025 | Standard | Voluntary | Standard for AI system impact assessment methodologies covering societal, ethical and environmental impacts. |
| ↗ ISO/IEC 42006:2025 — AI Auditing | 2025 | Standard | Voluntary | Requirements and guidance for bodies providing audit and certification of AI management systems against ISO/IEC 42001. |
| ↗ ISO/IEC 23894 — AI Risk Management | 2023 | Standard | Voluntary | Guidance for AI risk management processes integrated within organizational risk management frameworks. |
| ↗ IEEE 7000-2021 | 2021 | Standard | Voluntary | Standard model process for addressing ethical concerns during system design through value-based engineering. |
| ↗ IEEE CertifAIEd | 2022 | Certification | Voluntary | IEEE program for assessing and certifying AI systems against ethical principles including transparency and accountability. |
| ↗ NIST SP 1270 — Towards a Standard for Identifying and Managing Bias in AI | 2022 | Publication | Voluntary | NIST special publication providing guidance on identifying and managing bias in AI systems. |
| ↗ CEN/CENELEC AI Standards (EU harmonised) | In dev. | Standard | Voluntary | European harmonised standards under development to support EU AI Act compliance across risk categories. |
🏭 Sectoral (9)
| Framework / Regulation | Year | Type | Status | Description |
|---|---|---|---|---|
| ↗ MAS FEAT Principles (Singapore Finance) | 2018 | Principles | Voluntary | Monetary Authority of Singapore principles for fairness, ethics, accountability and transparency in financial AI. |
| ↗ MAS AI Risk Management Guidelines | 2025 | Guidelines | Voluntary | Guidelines for financial institutions on managing risks associated with AI model development and deployment. |
| ↗ OSFI Guideline E-23 (Canada Finance) | 2023 | Guideline | Binding | OSFI guideline on model risk management, extending governance expectations to AI/ML models used by federally regulated financial institutions. |
| ↗ FDA Software as Medical Device (USA Health) | 2021–2025 | Guidance | Binding | FDA framework for AI/ML-based software as medical devices covering premarket submissions and post-deployment monitoring. |
| ↗ WHO Ethics & Governance of AI for Health | 2021 | Guidance | Voluntary | WHO guidance on ethical design, development and deployment of AI technologies for health applications. |
| ↗ NYC Local Law 144 (USA HR) | 2023 | Law | Binding | New York City law requiring annual bias audits for automated employment decision tools used by employers. |
| ↗ CSA Singapore — Securing AI (Cybersecurity) | 2024 | Guidelines | Voluntary | Cybersecurity guidelines for AI systems covering adversarial attacks, model security and AI supply chain risks. |
| ↗ DoD AI Ethics Principles + RAI Strategy (USA Defence) | 2020–2022 | Strategy | Binding | US Department of Defense AI ethics principles and Responsible AI Strategy covering military AI applications. |
| ↗ UNESCO GenAI Guidance for Education | 2023 | Guidance | Voluntary | UNESCO guidance on generative AI use in education covering pedagogical, ethical and safety considerations. |
🏢 Industry / Big Tech (9)
| Framework / Regulation | Year | Type | Status | Description |
|---|---|---|---|---|
| ↗ Google AI Principles + Frontier Safety Framework | 2018/2023 | Principles | Voluntary | Google's foundational AI principles covering beneficial, safe, privacy-respecting and accountable AI development. |
| ↗ Microsoft Responsible AI Standard v2 | 2022 | Standard | Voluntary | Microsoft's internal standard requiring Responsible AI Impact Assessments (RAIA) for all AI products before release. |
| ↗ IBM AI Trust & Transparency | 2018+ | Framework | Voluntary | IBM's AI ethics framework and open-source tools for explainability, fairness and robustness testing. |
| ↗ Anthropic Responsible Scaling Policy (RSP) | 2023/2024 | Policy | Voluntary | Anthropic's policy defining AI Safety Levels (ASL) and required safety measures before deploying frontier models. |
| ↗ OpenAI Preparedness Framework | 2023 | Framework | Voluntary | OpenAI's framework for evaluating and mitigating catastrophic risks from frontier AI models across four risk categories. |
| ↗ Partnership on AI + ABOUT ML | 2016+ | Initiative | Voluntary | Multi-stakeholder initiative promoting responsible AI development and the ABOUT ML documentation standard. |
| ↗ Meta System Cards | 2022 | Framework | Voluntary | Meta's framework for documenting AI systems at the system level, extending model cards to full deployment context. |
| ↗ AWS Responsible AI Policy | 2023 | Policy | Voluntary | Amazon Web Services policy and tooling for responsible AI covering fairness, explainability and privacy in cloud AI services. |
| ↗ MLCommons AI Safety v1.0 Benchmark | 2024 | Benchmark | Voluntary | Open benchmark for evaluating safety of AI models across hazard categories including chemical, biological and violent risks. |