AI Governance Checklist: 100 Controls for Regulated Enterprise
This is the most comprehensive AI governance checklist we publish. 100 controls across 10 domains, mapped to HIPAA, SOC 2, FedRAMP, NIST AI RMF, and ISO 42001. Use it to assess your current governance posture, identify gaps, and build a remediation roadmap.
How to use: Score each control as Implemented (2), Partially Implemented (1), or Not Implemented (0). Maximum score: 200. Scores above 160 indicate strong governance. 120-160 is adequate with gaps. Below 120 indicates significant governance risk.
Domain 1: AI Strategy (10 Controls)
Maps to: NIST AI RMF GOVERN, ISO 42001 Clause 5
- AI strategy document approved by executive leadership with defined vision, objectives, and success metrics.
- AI steering committee established with cross-functional representation (IT, legal, compliance, business units, HR).
- AI budget allocated with line items for governance, not just development.
- AI use case prioritization framework with documented criteria for evaluating and approving new AI initiatives.
- AI vendor evaluation criteria including security, compliance, data handling, and exit strategy requirements.
- AI roadmap with quarterly milestones aligned to business objectives.
- Board-level AI reporting cadence established (quarterly minimum) covering strategy progress, risk, and ROI.
- AI competitive intelligence process monitoring industry peers and regulatory developments.
- AI strategy alignment with overall digital transformation and business strategy documented.
- AI maturity assessment conducted annually with documented improvement targets.
Domain 2: Risk Management (10 Controls)
Maps to: NIST AI RMF MAP, SOC 2 CC3, ISO 42001 Clause 6
- AI risk register maintained with identified risks, likelihood, impact, and mitigation plans for each AI system.
- AI risk appetite statement approved by the board defining acceptable risk levels for AI deployment.
- AI impact assessments required before deploying AI systems that affect customers, employees, or regulated processes.
- Third-party AI risk assessment process for evaluating AI vendors and their sub-processors.
- Shadow AI risk management with processes to detect, assess, and govern unauthorized AI tool usage.
- AI concentration risk assessed for over-reliance on single AI vendors or models.
- AI failure mode analysis documented for each production AI system with fallback procedures.
- AI insurance evaluated for coverage of AI-specific liabilities (errors, bias, IP infringement).
- Regulatory change monitoring process for tracking new AI regulations that affect your organization.
- AI risk reporting integrated into enterprise risk management framework and reported to risk committee.
Domain 3: Ethics and Responsible AI (10 Controls)
Maps to: NIST AI RMF GOVERN, EU AI Act, ISO 42001 Clause 5
- AI ethics principles documented and published (fairness, transparency, accountability, safety, privacy).
- Bias testing procedures required for all AI systems with documented testing methodology and acceptance criteria.
- Transparency requirements defining when and how to disclose AI use to customers and employees.
- Human oversight requirements for AI-assisted decisions affecting rights, opportunities, or safety.
- AI ethics review board or ethics consultation process for high-risk AI applications.
- Fairness metrics defined and measured for AI systems that affect people (hiring, lending, service delivery).
- Explainability requirements documented for each AI system based on its risk level and regulatory requirements.
- AI grievance process enabling individuals affected by AI decisions to request review and explanation.
- Responsible AI training required for all employees involved in AI development, deployment, or oversight.
- Environmental impact of AI compute resources assessed and reported as part of ESG commitments.
Domain 4: Data Governance for AI (10 Controls)
Maps to: HIPAA Security Rule, SOC 2 CC6, NIST AI RMF MAP, GDPR Art. 5
- Data classification for AI defining what data can be used with which AI models based on sensitivity.
- Data quality requirements for AI training and inference data with documented quality metrics.
- Data lineage tracking for AI training data documenting source, transformations, and provenance.
- Consent management for using personal data in AI systems, aligned with privacy regulations.
- Data minimization principles applied to AI prompts and training data (only necessary data used).
- Data retention policies for AI interaction logs, training data, and model artifacts.
- Cross-border data transfer controls for AI systems processing data across jurisdictions.
- PII/PHI detection in AI prompts with automated blocking or redaction for sensitive data types.
- Data access controls for AI training data and model outputs with role-based permissions.
- Data sharing agreements with AI vendors covering data use, retention, and deletion rights.
Domain 5: Model Management (10 Controls)
Maps to: NIST AI RMF MEASURE, ISO 42001 Clause 8, SOC 2 CC8
- Model inventory maintained listing all AI models in use with version, vendor, purpose, and risk classification.
- Model validation procedures required before production deployment with documented acceptance criteria.
- Model versioning with rollback capability for all production AI models.
- Model performance baselines established with drift detection thresholds.
- Model documentation (model cards) required for each production AI system covering capabilities, limitations, and intended use.
- Fine-tuning governance with approval process, data requirements, and validation for custom model training.
- Prompt engineering standards with reviewed and tested prompt templates for critical applications.
- Model comparison testing conducted when evaluating alternative models for existing use cases.
- Model retirement process with defined criteria for decommissioning AI models and migrating dependent systems.
- Multi-model routing policies defining which models serve which use cases based on capability and compliance requirements.
Domain 6: Security (10 Controls)
Maps to: SOC 2 CC6/CC7, HIPAA Technical Safeguards, FedRAMP, NIST 800-53
- AI-specific threat modeling conducted for each production AI system (prompt injection, data poisoning, model extraction).
- Input validation for AI systems preventing prompt injection, jailbreaking, and adversarial inputs.
- Output filtering preventing AI systems from generating harmful, confidential, or non-compliant content.
- API security for AI endpoints with authentication, rate limiting, and input sanitization.
- Encryption for AI data at rest and in transit, including prompts, responses, and model artifacts.
- Access controls for AI admin consoles, model management, and configuration changes.
- Penetration testing scope expanded to include AI systems and their unique attack surfaces.
- Supply chain security for AI dependencies (models, libraries, training data, APIs).
- AI-specific incident detection monitoring for unusual query patterns, data exfiltration attempts, and model abuse.
- AI system isolation ensuring production AI systems are segmented from development and testing environments.
Domain 7: Compliance (10 Controls)
Maps to: HIPAA, SOC 2, FedRAMP, GDPR, EU AI Act, State AI Laws
- Regulatory mapping document identifying all AI-relevant regulations for your organization by jurisdiction.
- AI-specific policies (acceptable use, data handling, disclosure) reviewed by legal counsel.
- Audit trail for all AI interactions meeting regulatory retention requirements.
- Compliance testing for AI systems conducted on a defined schedule (annual minimum).
- Regulatory reporting procedures for AI incidents that trigger notification requirements.
- AI vendor compliance verification ensuring all AI vendors meet your regulatory requirements (BAAs, DPAs, certifications).
- eDiscovery readiness for AI interactions that may be subject to legal hold or discovery.
- Cross-regulation harmonization ensuring AI governance satisfies overlapping requirements (e.g., HIPAA + SOC 2 + state privacy).
- Compliance evidence collection automated where possible for audit preparation.
- Regulatory change management process for updating governance when new AI regulations take effect.
Domain 8: Operations (10 Controls)
Maps to: NIST AI RMF MANAGE, SOC 2 CC7/CC8, ISO 42001 Clause 8
- AI deployment procedures with documented approval, testing, and rollout process.
- AI change management process for model updates, configuration changes, and prompt modifications.
- AI service level objectives (SLOs) defined for availability, latency, and accuracy of production AI systems.
- AI incident management procedures specific to AI failures (hallucination, bias, outage, data leak).
- AI business continuity plan for AI system outages including manual fallback procedures.
- AI capacity planning for compute, storage, and API rate limits.
- AI cost management with budget tracking, usage monitoring, and optimization processes.
- AI vendor relationship management with regular reviews of performance, roadmap, and contract terms.
- AI documentation maintained and current for all production systems (architecture, data flows, integrations).
- AI runbooks created for common operational tasks (scaling, failover, incident response, model updates).
Domain 9: Monitoring and Measurement (10 Controls)
Maps to: NIST AI RMF MEASURE, SOC 2 CC4, ISO 42001 Clause 9
- Model performance monitoring with automated drift detection and alerting.
- Bias monitoring with regular fairness metric evaluation on production data.
- Usage analytics tracking adoption, query volume, user satisfaction, and cost per interaction.
- Quality assurance sampling with human review of a percentage of AI outputs on a regular schedule.
- Feedback collection from AI users with structured input on accuracy, usefulness, and issues.
- ROI measurement with documented methodology for calculating AI business value.
- Security monitoring for AI-specific threats (prompt injection attempts, data exfiltration, abuse patterns).
- Compliance monitoring with automated checks for policy violations in AI interactions.
- Vendor SLA monitoring tracking AI platform availability, performance, and incident response against contracted SLAs.
- Governance effectiveness metrics measuring how well the governance framework itself is working (policy compliance rate, incident response time, audit findings).
Domain 10: People and Culture (10 Controls)
Maps to: NIST AI RMF GOVERN, ISO 42001 Clause 7, SOC 2 CC1
- AI literacy program providing baseline AI education to all employees.
- Role-specific AI training for developers, data scientists, business users, and executives.
- AI governance roles defined with clear responsibilities (AI risk owner, model owner, data steward).
- AI champion network with trained advocates in each department supporting adoption and governance.
- AI skills assessment identifying gaps and development needs across the organization.
- AI hiring and retention strategy for critical AI roles (data scientists, ML engineers, AI governance specialists).
- AI culture assessment measuring organizational readiness and attitudes toward AI adoption.
- AI communication plan keeping all stakeholders informed about AI initiatives, policies, and successes.
- AI vendor training ensuring teams working with AI vendors are trained on the specific platforms they use.
- AI governance accountability with governance metrics included in relevant role performance evaluations.
Regulatory Mapping Summary
| Domain | HIPAA | SOC 2 | FedRAMP | NIST AI RMF |
|---|---|---|---|---|
| 1. Strategy | Indirect | CC1 | PL family | GOVERN |
| 2. Risk | 164.308(a)(1) | CC3 | RA family | MAP |
| 3. Ethics | Indirect | CC1 | Indirect | GOVERN |
| 4. Data | 164.312 | CC6 | SC/SI families | MAP |
| 5. Model | Indirect | CC8 | CM family | MEASURE |
| 6. Security | 164.312 | CC6/CC7 | AC/SC families | MANAGE |
| 7. Compliance | All subparts | All TSC | All families | GOVERN |
| 8. Operations | 164.308 | CC7/CC8 | CP/IR families | MANAGE |
| 9. Monitoring | 164.312(b) | CC4 | AU/CA families | MEASURE |
| 10. People | 164.308(a)(5) | CC1 | AT/PS families | GOVERN |
How EPC Group Implements the 100 Controls
EPC Group's vCAIO program uses this 100-control framework as the foundation for every governance engagement. Our approach:
- Baseline assessment scoring your organization against all 100 controls with evidence-based validation.
- Gap analysis prioritized by regulatory risk, business impact, and implementation complexity.
- Phased implementation roadmap delivering high-priority controls in 30-60-90 day sprints.
- Pre-built templates for policies, procedures, and technical configurations that accelerate implementation by 60-70%.
- Continuous monitoring with quarterly re-assessments and governance effectiveness reporting.
- Audit preparation support with evidence collection and documentation for SOC 2, HIPAA, and FedRAMP auditors.
For Microsoft Copilot-specific governance, also see our 47-question Copilot readiness checklist and multi-LLM governance framework.
Frequently Asked Questions
Is this checklist required by any specific regulation?
No single regulation requires exactly these 100 controls. However, this checklist is mapped to controls from NIST AI RMF (AI 100-1), ISO 42001 (AI Management System), EU AI Act, HIPAA Security Rule, SOC 2 Trust Service Criteria, and FedRAMP. Organizations subject to these frameworks will find that implementing this checklist satisfies the majority of AI-specific requirements across all of them. The mapping column in each domain shows which frameworks each control supports.
How should we prioritize the 100 controls?
Start with the 30 controls across Risk (domain 2), Security (domain 6), and Compliance (domain 7) — these address the highest-likelihood, highest-impact risks. Then implement Data (domain 4) and Model (domain 5) controls to establish technical foundations. Strategy (domain 1), Ethics (domain 3), and People (domain 10) can run in parallel. Operations (domain 8) and Monitoring (domain 9) are implemented as you deploy AI into production. EPC Group's implementation typically phases these over 6-9 months.
How does this checklist map to NIST AI RMF?
The 100 controls map to all four NIST AI RMF functions: GOVERN (domains 1, 3, 10), MAP (domains 2, 4), MEASURE (domains 5, 8, 9), and MANAGE (domains 6, 7). Each control includes a NIST AI RMF cross-reference. For organizations specifically required to demonstrate NIST AI RMF compliance, this checklist provides an actionable implementation guide that translates the framework's principles into specific, measurable controls.
Do we need all 100 controls for a small AI deployment?
No. For a single-use-case deployment like Microsoft Copilot, focus on the controls in Risk, Data, Security, and Compliance domains (40 controls). The full 100 controls are designed for organizations with multiple AI models, custom AI applications, and regulated industry requirements. EPC Group's assessment scores your organization against all 100 controls but prioritizes implementation based on your specific risk profile and regulatory requirements.
How often should the AI governance checklist be reviewed?
Quarterly reviews for the full checklist, with monthly monitoring of the Monitoring domain (domain 9) controls. AI regulations, model capabilities, and organizational AI usage evolve rapidly — a governance framework that is not reviewed at least quarterly becomes outdated within 6 months. Trigger-based reviews should occur whenever you deploy a new AI model, enter a new regulated market, or experience an AI incident.
Get Your AI Governance Score
EPC Group assesses your organization against all 100 controls and builds a prioritized implementation roadmap. Call (888) 381-9725 or request a governance assessment below.
Request AI Governance Assessment