

AI Governance Framework for Healthcare: The Definitive HIPAA Compliance Guide

A HIPAA Compliance Guide for Responsible Medical AI Deployment

By Errin O'Connor | Chief AI Architect, EPC Group | February 22, 2026 | 15 min read

Tags: AI Governance · Healthcare · HIPAA · Clinical AI · Compliance

Healthcare organizations deploying AI must meet HIPAA technical safeguards, FDA SaMD regulations for clinical AI, and human-in-the-loop design requirements. This guide covers HIPAA requirements for AI systems, PHI risk assessment, clinical model validation, BAA requirements for AI vendors, and a HIPAA-compliant Azure AI architecture for healthcare deployments.

Key facts

  • HIPAA requires access controls, audit logging, transmission security, and integrity controls for any AI system touching PHI.
  • Clinical AI used for diagnosis or treatment decisions may qualify as Software as a Medical Device (SaMD) under FDA regulations.
  • Healthcare organizations must sign a Business Associate Agreement (BAA) with every AI vendor that processes PHI.
  • Microsoft Azure's HIPAA-eligible services include Azure OpenAI, Azure Machine Learning, and Azure AI Search.
  • Bias testing must disaggregate AI performance metrics by age, sex, race, ethnicity, and insurance type.
  • Human-in-the-loop design is required for high-risk clinical decisions — AI alone cannot make final patient care decisions.

HIPAA requirements for AI in healthcare

Any AI system that processes, transmits, or stores Protected Health Information (PHI) is covered by HIPAA's Technical Safeguards (45 CFR § 164.312). Four controls apply directly to AI systems:

  • Access Controls (§ 164.312(a)) — Role-based access to AI systems and their training data. Unique user identification. Automatic session termination for inactive users.
  • Audit Controls (§ 164.312(b)) — Comprehensive logging of every AI inference that involves PHI. Logs must capture: who initiated the query, what data was accessed, what output was produced, and whether the output influenced clinical care.
  • Transmission Security (§ 164.312(e)) — Encryption of PHI in transit between EHR systems, AI inference endpoints, and result delivery interfaces using TLS 1.2 or higher.
  • Integrity Controls (§ 164.312(c)) — Input validation and data pipeline checksums to verify PHI has not been altered or corrupted during AI processing.
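The audit-control requirement above can be made concrete with a small sketch. This is an illustrative Python example, not a prescribed implementation: the function and field names are hypothetical, and a production system would write these records to an immutable store such as a SIEM or Log Analytics workspace rather than printing them.

```python
import json
import datetime

def build_inference_audit_record(user_id, patient_id, query_summary,
                                 output_summary, influenced_care):
    """Assemble one audit-log entry for an AI inference involving PHI.

    Captures the four elements that § 164.312(b) audit controls call for:
    who initiated the query, what data was accessed, what output was
    produced, and whether the output influenced clinical care.
    """
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,                        # who initiated the query
        "patient_id": patient_id,                  # what PHI was accessed
        "query_summary": query_summary,            # what was asked
        "output_summary": output_summary,          # what the model produced
        "influenced_clinical_care": influenced_care,
    }

# Hypothetical example entry for a sepsis risk query.
record = build_inference_audit_record(
    user_id="dr.smith", patient_id="MRN-001",
    query_summary="sepsis risk score", output_summary="risk=0.82",
    influenced_care=True)
print(json.dumps(record, indent=2))
```

Keeping one structured record per inference — rather than free-text logs — is what makes the "did this output influence care?" question answerable during an audit.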

AI risk assessment for Protected Health Information

Data security risks

  • PHI leakage through model outputs — AI generating responses that contain patient identifiers.
  • Training data memorization — LLMs inadvertently memorizing PHI from training datasets.
  • Inference attacks — adversaries reconstructing PHI from model outputs or embeddings.
  • Unauthorized access to AI inference logs containing PHI.
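As a minimal illustration of mitigating the first risk — PHI leakage through model outputs — the sketch below masks obvious identifiers before a response leaves the inference boundary. The patterns shown are assumptions for illustration only; real de-identification requires a vetted service or validated NLP pipeline, not ad-hoc regexes.

```python
import re

# Illustrative patterns only — real PHI detection needs a validated
# de-identification service, not a hand-rolled regex list.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[- ]?\d{6,10}\b", re.IGNORECASE),
}

def redact_phi(text):
    """Mask obvious identifiers in model output before it is returned,
    as one layer of defense against PHI leakage through outputs."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

out = redact_phi("Patient MRN-00123456 (SSN 123-45-6789) is high risk.")
```

Output filtering of this kind complements, rather than replaces, the upstream controls: training on de-identified data and restricting what PHI reaches the prompt in the first place.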

Algorithmic risks

  • Biased predictions across demographic subgroups — models underperforming for minority populations.
  • Overconfident outputs — AI reporting high probability for conditions that are actually rare in the deployment population.
  • Distribution shift — model performance degrading as patient population or clinical practice patterns change.

Operational risks

  • Clinician over-reliance on AI recommendations without applying clinical judgment.
  • AI system downtime causing gaps in clinical decision support.
  • Inadequate clinician training on AI tool limitations and appropriate use.

Clinical model validation for healthcare AI

Clinical AI requires three validation phases before production deployment:

Phase 1: Retrospective validation

Test the model against a held-out dataset of historical cases with confirmed diagnoses. Measure sensitivity, specificity, positive and negative predictive values, and area under the ROC curve. The dataset must reflect the demographics and case mix of the target deployment population — not just the population the model was originally trained on.
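The retrospective metrics above reduce to counts from a confusion matrix. The toy sketch below computes them from binary labels and predictions; the data is fabricated for illustration, and in practice you would use a validated library and report confidence intervals.

```python
def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, and NPV from binary labels (1 =
    condition present) and binary model predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Held-out historical cases with confirmed diagnoses (toy data).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
m = diagnostic_metrics(y_true, y_pred)
```

Note that PPV and NPV depend on the prevalence in the evaluation set — which is exactly why the held-out data must match the deployment population's case mix.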

Phase 2: Prospective shadow mode

Deploy the model in shadow mode alongside the standard clinical workflow. The AI makes recommendations, but they do not influence care delivery. This phase identifies failure modes that retrospective testing misses, including data quality issues in live EHR feeds, latency impacts on clinical workflow, and edge cases specific to the local patient population.

Phase 3: Calibration verification

Verify that the model's stated confidence scores match actual outcomes. When the model outputs 70% probability for a diagnosis, approximately 70% of those patients should actually have that diagnosis. Poor calibration — even in models with high discrimination — leads to clinical overreaction or underreaction.
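A basic calibration check can be sketched as a reliability table: bin predictions by stated probability and compare the mean predicted probability in each bin to the observed outcome rate. This is an illustrative pure-Python version; production validation would typically also report a calibration curve and a statistic such as the Brier score.

```python
def calibration_table(probs, outcomes, n_bins=5):
    """Group predictions into equal-width probability bins and compare
    mean predicted probability to the observed outcome rate per bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into top bin
        bins[idx].append((p, y))
    rows = []
    for cases in bins:
        if not cases:
            continue
        mean_pred = sum(p for p, _ in cases) / len(cases)
        observed = sum(y for _, y in cases) / len(cases)
        rows.append((round(mean_pred, 2), round(observed, 2), len(cases)))
    return rows

# Well-calibrated toy example: 70% stated probability, 7 of 10 positive.
rows = calibration_table([0.7] * 10, [1] * 7 + [0] * 3)
```

A large gap between the two columns in any bin is the "poor calibration despite high discrimination" failure mode described above.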

Human-in-the-loop design for medical AI

High-risk clinical decisions require a clinician to review and act on AI output. AI alone cannot make final patient care decisions. HITL design has four components:

  • Escalation design — Define which AI outputs require mandatory clinician review before any action.
  • Override capabilities — Clinicians must be able to override AI recommendations with documented rationale.
  • Reviewer training — Clinicians need structured training on how the model works and where it fails.
  • Feedback loops — Clinician overrides feed back into model monitoring to detect systematic disagreements.
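The escalation-design component can be expressed as a simple policy gate. The tiers and threshold below are hypothetical placeholders — actual escalation rules must come from clinical governance, not from engineering defaults.

```python
def requires_clinician_review(risk_tier, confidence, review_threshold=0.9):
    """Return True when an AI output must be held for mandatory clinician
    review before any action is taken (escalation design).

    Policy sketch: every high-risk output escalates unconditionally;
    lower-risk outputs escalate only when model confidence falls below
    the governance-defined threshold.
    """
    if risk_tier == "high":
        return True
    return confidence < review_threshold
```

Encoding the policy as code makes it testable and auditable — the review trigger becomes a versioned artifact rather than tribal knowledge in the clinical workflow.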

BAA requirements for AI vendors

Every AI vendor that processes, stores, or transmits PHI on your behalf is a Business Associate. A Business Associate Agreement (BAA) must be in place before any PHI is shared with the vendor's AI system.

  • Microsoft signs a BAA covering Azure OpenAI, Azure Machine Learning, and Azure AI Search.
  • Verify that the BAA explicitly covers the AI service — not just the underlying cloud platform.
  • Check data residency terms — PHI must remain in HIPAA-eligible Azure regions.
  • Confirm that the vendor does not use your PHI to train their shared AI models.
  • Review the vendor's breach notification timeline — HIPAA requires notice within 60 days.

Azure AI in healthcare: HIPAA-compliant architecture

A HIPAA-compliant Azure AI architecture has four layers:

Data layer

  • Azure Data Lake Storage Gen2 with HIPAA-eligible encryption at rest.
  • De-identified PHI for model training in Azure Machine Learning.
  • Microsoft Purview (formerly Azure Purview) for data lineage and PHI classification.

AI/ML layer

  • Azure OpenAI Service with a signed BAA — deployed in HIPAA-eligible Azure regions.
  • Azure Machine Learning for custom model training on de-identified data.
  • Azure AI Search for RAG-based clinical knowledge retrieval.

Security layer

  • Microsoft Entra ID for role-based access to all AI services.
  • Azure Private Endpoints to keep PHI off public internet paths.
  • Microsoft Defender for Cloud for threat detection on AI workloads.

Monitoring layer

  • Azure Monitor and Log Analytics for AI inference audit logging.
  • Microsoft Purview audit for PHI access tracking across AI systems.
  • Custom model drift monitoring with automated retraining triggers.
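One common way to implement the drift-monitoring bullet above is the Population Stability Index (PSI) over the model's score distribution. The sketch below is illustrative; the 0.2 trigger is a widely used convention, not a regulatory requirement, and the bin fractions shown are fabricated.

```python
import math

def population_stability_index(expected, actual):
    """PSI between a baseline and a current score distribution, each given
    as per-bin fractions summing to 1. PSI > 0.2 is a common threshold
    for 'significant drift' that should trigger review or retraining."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at validation
current = [0.10, 0.20, 0.30, 0.40]    # distribution in production (drifted)
psi = population_stability_index(baseline, current)
```

In the Azure layers described above, a job like this would read binned score counts from Log Analytics and raise an alert through Azure Monitor when the threshold is crossed.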

Bias detection and mitigation in healthcare AI

Bias testing must be built into validation — not added after deployment.

  • Disaggregate performance metrics by age, sex, race, ethnicity, primary language, and insurance type.
  • Flag subgroups where performance falls below the overall population by more than 5–10% relative difference.
  • Test for training data bias — check that underrepresented populations have sufficient representation in training datasets.
  • Test for label bias — verify that labeling processes did not introduce systematic errors for specific subgroups.
  • Test for deployment bias — run shadow mode across diverse patient populations before full rollout.
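The disaggregation-and-flag steps above can be sketched as follows. The subgroup names, sensitivities, and 10% relative-gap threshold are hypothetical; the point is that the flagging rule is explicit and repeatable.

```python
def flag_underperforming_subgroups(overall_sensitivity, subgroup_sensitivity,
                                   max_relative_gap=0.10):
    """Flag subgroups whose sensitivity falls more than max_relative_gap
    (relative, e.g. 0.10 = 10%) below the overall population's."""
    flagged = []
    for group, value in subgroup_sensitivity.items():
        relative_gap = (overall_sensitivity - value) / overall_sensitivity
        if relative_gap > max_relative_gap:
            flagged.append(group)
    return flagged

# Fabricated per-subgroup sensitivities against an overall 0.90.
subgroups = {"age_65_plus": 0.88, "medicaid": 0.74, "non_english": 0.79}
flagged = flag_underperforming_subgroups(0.90, subgroups)
```

The same disaggregation should be run for specificity, PPV, and calibration — a subgroup can match overall sensitivity while being badly miscalibrated.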

FDA guidelines for clinical AI (SaMD)

Clinical AI that makes diagnostic or treatment recommendations may qualify as Software as a Medical Device (SaMD) under FDA regulation. The FDA's AI/ML regulatory framework covers four stages:

  • Pre-clinical validation — Retrospective dataset performance testing before any patient use.
  • Clinical validation study — Prospective study with real patients and defined primary endpoints.
  • FDA regulatory review — 510(k) clearance, De Novo, or PMA pathway depending on risk classification.
  • Post-market surveillance — Ongoing monitoring and real-world performance reporting to FDA.


Frequently Asked Questions: AI Governance in Healthcare

What is AI governance in healthcare?

AI governance in healthcare is the set of policies, procedures, technical controls, and oversight mechanisms that ensure artificial intelligence systems used in clinical and administrative settings comply with HIPAA regulations, protect patient data (PHI), produce fair and accurate outcomes, and maintain human oversight over clinical decisions. It encompasses model validation, bias detection, audit logging, vendor management through Business Associate Agreements, and alignment with FDA guidance on clinical decision support software.

Is AI in healthcare subject to HIPAA compliance?

Yes. Any AI system that accesses, processes, stores, or transmits protected health information (PHI) is subject to HIPAA Privacy, Security, and Breach Notification Rules. This includes clinical decision support tools, patient triage systems, diagnostic AI, predictive analytics platforms, and natural language processing systems that analyze clinical notes. Covered entities must ensure AI vendors sign Business Associate Agreements and that all AI workflows include encryption, access controls, audit trails, and minimum necessary data access.

How do you validate an AI model for clinical decision-making?

Clinical AI model validation requires a multi-layered approach: (1) retrospective validation against historical patient outcomes with known diagnoses, (2) prospective validation in a controlled clinical environment comparing AI recommendations to physician decisions, (3) subgroup analysis across demographics including age, sex, race, and comorbidity profiles to detect bias, (4) calibration testing to ensure predicted probabilities match observed outcomes, and (5) ongoing performance monitoring with automated drift detection. The validation protocol must be documented and reviewed by clinical leadership before deployment.

What is human-in-the-loop design for medical AI?

Human-in-the-loop (HITL) design ensures that no AI system makes autonomous clinical decisions without qualified human review. In practice, this means AI outputs are presented as recommendations or decision support to licensed clinicians who retain final authority. HITL systems include clear confidence scores, explainable reasoning, easy override mechanisms, escalation paths for edge cases, and mandatory clinician acknowledgment before AI-influenced actions are taken. HIPAA and FDA guidance both emphasize that AI should augment clinical judgment, not replace it.

Do healthcare organizations need Business Associate Agreements for AI vendors?

Yes. Under HIPAA, any AI vendor that creates, receives, maintains, or transmits PHI on behalf of a covered entity qualifies as a business associate and must sign a BAA before accessing any patient data. The BAA must specify how the vendor protects PHI, incident response and breach notification timelines (HIPAA allows up to 60 days, though many BAAs negotiate shorter contractual windows), data retention and destruction policies, subcontractor obligations, and audit rights. Organizations should also verify that AI vendors maintain SOC 2 Type II certification and conduct annual penetration testing.

How does Azure AI support HIPAA-compliant healthcare AI?

Microsoft Azure provides a HIPAA-compliant cloud platform with a signed BAA covering Azure AI services including Azure Machine Learning, Azure Cognitive Services, Azure OpenAI Service, and Azure Health Data Services. Key compliance features include encryption at rest (AES-256) and in transit (TLS 1.2+), Azure Private Link for network isolation, Microsoft Entra ID for role-based access control, Azure Monitor and Microsoft Sentinel for audit logging, and data residency controls. Azure also holds SOC 2 Type II, ISO 27001, HITRUST CSF, and FedRAMP High certifications.

What are the FDA guidelines for AI in clinical settings?

The FDA regulates AI/ML-based Software as a Medical Device (SaMD) under its Digital Health framework. Key requirements include: premarket review for AI that diagnoses, treats, or prevents disease; a Predetermined Change Control Plan for models that learn continuously; Good Machine Learning Practices (GMLP) covering data management, model training, and performance evaluation; real-world performance monitoring; and transparency requirements so clinicians understand how the AI reaches conclusions. Clinical decision support tools that meet four specific criteria under the 21st Century Cures Act may be exempt from FDA device regulation.

How do you detect and mitigate bias in healthcare AI models?

Healthcare AI bias detection requires analyzing model performance across protected demographic groups (race, ethnicity, sex, age, socioeconomic status, insurance type) using metrics like equalized odds, demographic parity, and calibration across subgroups. Mitigation strategies include diversifying training data to represent underserved populations, applying fairness constraints during model training, conducting regular bias audits with clinical and ethics committees, monitoring for performance drift that disproportionately affects specific populations, and maintaining a bias incident response plan with remediation timelines.

Related Resources

AI Governance Consulting Services

End-to-end AI governance framework development and implementation

Healthcare AI Risk Assessment

HIPAA-compliant LLM implementation and risk evaluation

Azure AI Enterprise Implementation

Technical architecture guide for Azure AI in enterprise environments

AI Governance Best Practices

Comprehensive AI governance frameworks for all industries

Build Your Healthcare AI Governance Framework

EPC Group has 29 years of experience implementing compliant technology solutions in healthcare. Our AI governance frameworks are deployed across hospital systems, health plans, and clinical research organizations nationwide. Schedule a consultation to assess your current AI landscape and build a governance roadmap that protects patients and your organization.

Schedule a Healthcare AI Governance Assessment

Free 30-minute consultation with our Chief AI Architect, Errin O'Connor