

Power BI Copilot 'Prep Data for AI' 2026: Git-Friendly Metadata Architecture for Regulated Industries

Power BI Copilot Prep Data for AI tooling format: governance, Git-friendly metadata architecture, sensitivity-label gating, audit patterns for HIPAA, SOC 2, FedRAMP.



Errin O'Connor, CEO & Chief AI Architect
May 14, 2026 · 17 min read
Tags: Microsoft Copilot, Power BI, AI Governance, HIPAA, SOC 2, FedRAMP, Microsoft Purview

TL;DR

  • The May 2026 Power BI release introduces the Copilot Tooling Format ("Prep Data for AI"): a text-based, Git-friendly storage format for the metadata that makes Power BI semantic models work well with Microsoft Copilot — synonyms, description overrides, and sample questions.
  • For regulated-industry enterprises (healthcare under HIPAA, financial services under SOC 2 and SOX, federal under FedRAMP), the Copilot rollout decision has historically been gated by two concerns: source-control friction on the Copilot metadata, and the audit trail for Copilot-generated content. The new format closes the first gap; the second requires a sensitivity-label and Microsoft Purview audit-log architecture that this guide details.
  • The implementation pattern combines TMDL semantic-model definitions, the new Copilot Tooling Format, Microsoft Purview sensitivity labels, and Microsoft Sentinel audit-log routing into a single governance fabric. Enterprises that have already invested in TMDL-based development pipelines can extend the same pattern to Copilot without standing up new tooling.
  • The rollout sequence we recommend: cover sensitivity labels first, instrument audit-log routing second, populate the Copilot Tooling Format third, pilot Copilot Summarize fourth, expand fifth. Tenants that reverse this order discover problems too late.
  • This guide walks through the architecture, the governance patterns for HIPAA/SOC 2/FedRAMP scopes, the EPC Group implementation framework, and the common pitfalls we've seen across hundreds of regulated-industry deployments.

Executive Summary

For three years, regulated-industry enterprises have approached Microsoft Copilot in Power BI with caution. The capability is compelling — automatic natural-language summarization of report data, conversational analysis of complex models — but the audit and governance questions have been substantial. What does the auditor see when they ask "show me every Copilot response generated last quarter that summarized data with a Confidential sensitivity label"? Who approved the Copilot synonym that changed how Net Revenue is described to consumers? How do we know the language model is not surfacing detail to a consumer who should not see it?

The May 2026 Microsoft Fabric and Power BI release answers most of these questions through three converging capabilities:

  1. The Copilot Tooling Format ("Prep Data for AI") puts Copilot metadata into Git-managed text files alongside the TMDL semantic-model definition, so changes to synonyms and descriptions go through the same code-review process as changes to DAX measures.

  2. Microsoft Purview sensitivity labels propagate end-to-end through Fabric, gating Copilot behavior on confidentiality and triggering Microsoft Sentinel audit events that flow into existing SIEM pipelines.

  3. Fabric audit logging at the capacity level captures Copilot prompts and responses for downstream review.

This guide is for enterprise data and security leaders responsible for a Power BI Copilot rollout in a regulated environment. We cover the architecture, the governance patterns, and the implementation framework EPC Group has refined across hundreds of regulated-industry engagements.

Why This Architecture Matters Now

Three factors converge in mid-2026 to make Copilot rollout decisions urgent:

  1. The Copilot Summarize feature, which shipped in May 2026, puts AI-generated descriptions of report data directly in front of every report consumer who clicks the Summarize button. This is the first Copilot capability with that surface area. Tenants that have not yet completed sensitivity-label coverage need to do that work now.

  2. The Copilot Tooling Format closes the source-control gap. Previously, Copilot metadata was stored in a format that made Git workflows awkward. The new format is text-based, diff-able, and merge-friendly. Enterprises that held back broader Copilot rollout because of source-control friction can now proceed.

  3. Regulator scrutiny on AI-generated content has increased. HIPAA's 2026 access-control updates, SR 11-7 model risk management expectations for AI in financial services, and FedRAMP's emerging AI governance expectations all add weight to the audit-trail and explainability requirements for Copilot-generated summaries.

The Three-Layer Metadata Model

A Power BI semantic model that works well with Copilot has three layers of metadata:

Layer 1: The semantic model itself (TMDL)

The TMDL (Tabular Model Definition Language) file describes the model — tables, columns, measures, relationships, RLS rules, OLS rules, perspectives. This is the authoritative model definition and lives under version control in the team's Git repository.

Layer 2: The model's narrative metadata

Within the TMDL, each table, column, and measure has a Description property. Copilot reads these descriptions and uses them in its summaries. A measure named [Net Revenue] with a description "Net revenue after returns, allowances, and trade discounts, in reporting currency" gives Copilot the context it needs to summarize that metric correctly.

This layer is part of the semantic model definition and is versioned along with the model. It is the foundation of Copilot quality.

Layer 3: The Copilot Tooling Format

The Copilot Tooling Format captures three additional concerns that the TMDL Description property cannot cleanly express:

  • Synonyms. Alternate business terms that should resolve to the same model concept. The synonym file maps ["Net Sales", "Topline Revenue", "Gross Revenue After Returns"] to the [Net Revenue] measure. When a user asks Copilot about "topline revenue trends," Copilot understands the user means Net Revenue.

  • Description overrides. Sometimes the technical description in the TMDL is correct for engineers but wrong for business users. The description override file provides the audience-appropriate phrasing that Copilot should use in summaries shown to consumers.

  • Sample questions. The canonical questions Copilot should be ready to answer for this model. These guide the language model and help users discover what they can ask.

The three layers work together: Layers 1 and 2 are the foundation; Layer 3 tunes Copilot's behavior on top of that foundation.

Repository Architecture

EPC Group's recommended repository structure for a governed enterprise Power BI / Fabric environment:

/fabric-tenant-repo/
├── semantic-models/
│   ├── sales-finance/
│   │   ├── definition.tmdl
│   │   ├── model.bim (legacy fallback)
│   │   ├── perspectives/
│   │   │   ├── executive.tmdl
│   │   │   └── operations.tmdl
│   │   └── copilot/
│   │       ├── synonyms.json
│   │       ├── descriptions.json
│   │       └── sample-questions.json
│   ├── operations/
│   ├── compliance/
│   └── _shared/
│       └── common-dimensions.tmdl
├── reports/
│   ├── certified/
│   │   └── finance-executive-summary.pbip/
│   └── self-service/
├── governance/
│   ├── sensitivity-label-map.yaml
│   ├── capacity-allocation.yaml
│   ├── rls-rules.md
│   └── copilot-policy.md
├── pipelines/
│   ├── ci-build.yaml
│   ├── cd-deploy.yaml
│   └── pre-commit-hooks.yaml
└── docs/
    ├── README.md
    └── runbooks/

The Copilot metadata lives in /semantic-models/<model>/copilot/. A change to a synonym creates a diff in synonyms.json that goes through pull-request review the same way a DAX measure change does.
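Because the Copilot metadata is plain JSON under version control, it can be linted in CI before merge. The sketch below is a hypothetical pre-commit check (not a Microsoft tool) that validates the synonyms-file shape used in the example later in this article: required top-level keys, an owner and a non-empty terms list per concept, and no term mapped to two different concepts.

```python
import json
from pathlib import Path

REQUIRED_TOP_KEYS = {"model", "version", "lastReviewed", "synonyms"}

def validate_synonyms(path: Path) -> list[str]:
    """Return human-readable problems found in one synonyms.json file."""
    errors = []
    data = json.loads(path.read_text(encoding="utf-8"))

    missing = REQUIRED_TOP_KEYS - data.keys()
    if missing:
        return [f"{path.name}: missing top-level keys {sorted(missing)}"]

    seen: dict[str, str] = {}  # lowercased term -> concept that already claimed it
    for entry in data["synonyms"]:
        concept = entry.get("concept", "<missing concept>")
        if not entry.get("owner"):
            errors.append(f"{path.name}: {concept} has no owner")
        if not entry.get("terms"):
            errors.append(f"{path.name}: {concept} has an empty terms list")
        for term in entry.get("terms", []):
            key = term.lower()
            if key in seen:
                errors.append(
                    f"{path.name}: term '{term}' mapped to both "
                    f"{seen[key]} and {concept}"
                )
            else:
                seen[key] = concept
    return errors
```

A pre-commit hook would run this over semantic-models/*/copilot/synonyms.json and fail the commit on any output.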

Synonyms file format

{
  "model": "sales-finance",
  "version": "1.4.0",
  "lastReviewed": "2026-05-14",
  "synonyms": [
    {
      "concept": "measures/Net Revenue",
      "terms": [
        "Net Sales",
        "Topline Revenue",
        "Revenue After Returns",
        "NR"
      ],
      "deprecated": [],
      "owner": "finance-bi-team"
    },
    {
      "concept": "tables/Customer",
      "terms": ["Account", "Client", "Buyer"],
      "deprecated": ["Customer Master"],
      "owner": "customer-data-team"
    }
  ]
}

The deprecated array tracks synonyms that were previously valid but are being retired. This matters for audit purposes — the auditor's question "what was the synonym definition for Customer as of January 1, 2026" is answered by looking at the file at the Git tag for that date.
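A PR check can also enforce the retire-then-remove discipline directly. This is a sketch of a hypothetical helper (comparing the file on the main branch against the PR version, both already parsed) that flags any term dropped from a concept without first passing through the deprecated array:

```python
def removed_without_deprecation(old: dict, new: dict) -> list[tuple[str, str]]:
    """Flag (concept, term) pairs that were active in the old synonyms file but
    appear in neither the terms nor the deprecated array of the new file,
    i.e. synonyms retired without leaving an audit trail."""
    new_by_concept = {e["concept"]: e for e in new["synonyms"]}
    flagged = []
    for entry in old["synonyms"]:
        current = new_by_concept.get(entry["concept"], {})
        survivors = set(current.get("terms", [])) | set(current.get("deprecated", []))
        for term in entry["terms"]:
            if term not in survivors:
                flagged.append((entry["concept"], term))
    return flagged
```

In CI, the "old" side would typically come from something like `git show main:semantic-models/sales-finance/copilot/synonyms.json`.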

Descriptions override file format

{
  "model": "sales-finance",
  "version": "1.4.0",
  "descriptions": [
    {
      "concept": "measures/Net Revenue",
      "default": "Net revenue after returns, allowances, and trade discounts",
      "audience": {
        "executive": "Total revenue we recognize after subtracting returns and discounts",
        "analyst": "SUM(Sales) - SUM(Returns) - SUM(Discounts), in reporting currency"
      }
    }
  ]
}

The audience-specific descriptions let Copilot use different phrasing for executive summaries vs. analyst-facing summaries. Power BI Copilot can be configured to select the audience description based on the consumer's group membership.
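The resolution logic can be pictured as a small function. Microsoft does not document the exact selection mechanism, so the group-to-audience mapping below is an assumed tenant configuration, not a product API; the sketch just shows the layering: a matching audience override wins, otherwise the default description applies.

```python
def resolve_description(entry: dict, user_groups: set[str],
                        group_to_audience: dict[str, str]) -> str:
    """Pick the audience phrasing for the first matching security group,
    falling back to the entry's default description."""
    for group in sorted(user_groups):  # sorted for a deterministic choice
        audience = group_to_audience.get(group)
        if audience and audience in entry.get("audience", {}):
            return entry["audience"][audience]
    return entry["default"]
```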

Sample questions file format

{
  "model": "sales-finance",
  "version": "1.4.0",
  "sampleQuestions": [
    "What is our net revenue this quarter compared to last quarter?",
    "Which product line had the largest revenue decline last month?",
    "Show me the top 10 customers by net revenue year-to-date",
    "How has the gross margin trended over the past 12 months?"
  ]
}

Sample questions guide both the Copilot model (helping it understand the typical query patterns for this model) and the user interface (Power BI Copilot can surface these as suggested prompts).

Microsoft Purview Sensitivity Labels in the Copilot Pipeline

How labels gate Copilot

Microsoft Purview sensitivity labels (Public, Internal, Confidential, Highly Confidential, and any custom labels the tenant has defined) apply across the Microsoft 365, Azure, and Fabric environment. For Power BI semantic models and reports, the label appears on the item and on derived items.

The Copilot behavior is gated as follows:

  • Public: generates the summary with no restriction.
  • Internal: generates the summary; an audit log entry is created.
  • Confidential: generates the summary; creates an audit log entry; may include a label disclaimer in the response.
  • Highly Confidential (with the "block Copilot processing" flag): refuses to summarize and returns a label-aware message.
  • Custom labels: behavior is defined per label policy.

The exact behavior depends on the tenant's Microsoft Purview label policy configuration. The standard pattern in our regulated-industry deployments:

  • Public and Internal: Copilot Summarize permitted, audit logged.
  • Confidential: Copilot Summarize permitted with audit log, label disclaimer added to response, additional sensitivity-aware prompt wrapping.
  • Highly Confidential: Copilot Summarize blocked.
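The standard pattern above amounts to a small lookup table. This sketch is a policy model for documentation and testing purposes only (the real enforcement happens in Microsoft Purview label policy, not in tenant code); the fail-closed default for unknown labels is our recommendation, not product behavior.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    ALLOW_WITH_DISCLAIMER = "allow_with_disclaimer"
    BLOCK = "block"

# (action, audit_logged) per label, mirroring the standard pattern above.
LABEL_POLICY = {
    "Public": (Action.ALLOW, True),
    "Internal": (Action.ALLOW, True),
    "Confidential": (Action.ALLOW_WITH_DISCLAIMER, True),
    "Highly Confidential": (Action.BLOCK, True),
}

def gate_summarize(label: str) -> tuple:
    """Resolve a label to (action, audit_logged); unknown labels fail closed."""
    return LABEL_POLICY.get(label, (Action.BLOCK, True))
```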

Label coverage checklist

Before enabling Copilot Summarize tenant-wide, the data security team should validate:

  1. Every published semantic model has a sensitivity label assigned (not blank, not "default").
  2. Labels reflect the highest-sensitivity element in the underlying data. A model that joins de-identified claims data with patient-identifiable demographic data should be labeled to reflect the demographic data sensitivity.
  3. Reports inherit their model's label by default. Override patterns are documented.
  4. The "block Copilot processing" label policy is configured for the appropriate labels.
  5. Microsoft Purview Compliance Manager shows the tenant's Copilot data-readiness score above the agreed threshold.
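Item 1 of the checklist is easy to automate against an inventory export. The column names below (`modelName`, `sensitivityLabel`) are hypothetical, chosen for illustration; adapt them to whatever your tenant's admin inventory or scanner export actually emits.

```python
import csv
from pathlib import Path

UNLABELED = {"", "none", "default"}

def unlabeled_models(inventory_csv: Path) -> list[str]:
    """Names of semantic models whose label is blank or still the tenant
    default, read from an inventory export with (hypothetical) columns
    'modelName' and 'sensitivityLabel'."""
    with inventory_csv.open(newline="", encoding="utf-8") as f:
        return [row["modelName"] for row in csv.DictReader(f)
                if row.get("sensitivityLabel", "").strip().lower() in UNLABELED]
```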

Audit-Log Architecture

What gets logged

Microsoft Fabric capacity-level audit logging captures Copilot interactions:

  • The user identity that invoked the Copilot prompt.
  • The timestamp.
  • The semantic model and report context.
  • The prompt text (with PII redaction depending on tenant configuration).
  • The response generated.
  • The sensitivity label context at the time of the request.

These events flow into the Microsoft Purview Audit log (Standard) and, for tenants that have configured the routing, into Microsoft Sentinel.

Routing to Microsoft Sentinel

For regulated-industry tenants with an established SIEM, the audit-log routing pattern is:

Microsoft Fabric (Power BI Copilot interactions)
        ↓
Microsoft Purview Audit (Standard)
        ↓
Microsoft Sentinel (via the Microsoft Defender for Cloud Apps connector
                    or the direct Microsoft Purview connector)
        ↓
Analytic rules (regulated-industry rule pack):
  - Copilot prompt against Highly Confidential data
  - Anomalous Copilot prompt volume per user
  - Copilot prompt containing PII-like patterns
  - Copilot prompt from outside the expected geographic region

The analytic rule pack is industry-specific. Healthcare tenants extend with HIPAA-aligned rules; financial services tenants extend with SR 11-7 and SOX-aligned rules; federal tenants extend with FedRAMP and NIST 800-53 aligned rules.
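In production these rules live in Microsoft Sentinel as KQL analytics, but the detection logic of the first two rules is simple enough to sketch in Python for review purposes. The event shape here is illustrative, not the actual Purview audit schema:

```python
from collections import Counter

def triage(events: list[dict], volume_threshold: int = 50) -> dict:
    """Apply two of the rule-pack checks to a batch of Copilot audit events.
    Assumed (illustrative) event shape: {'user': ..., 'label': ..., 'prompt': ...}."""
    hc_hits = [e for e in events if e.get("label") == "Highly Confidential"]
    per_user = Counter(e["user"] for e in events)
    noisy = sorted(u for u, n in per_user.items() if n > volume_threshold)
    return {"highly_confidential_prompts": hc_hits,
            "anomalous_volume_users": noisy}
```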

Retention

Audit retention windows vary by regulatory framework:

  • HIPAA Privacy Rule: 6 years from the last applicable date.
  • SOC 2 Trust Services Criteria: Typically 1 year minimum, often 3 years.
  • SOX: Typically 7 years.
  • FedRAMP: 1 year online, 3 years offline minimum; specific systems may require more.

Microsoft Purview Audit (Standard) retains records for 180 days; Audit (Premium) retains them for 1 year. For longer retention, archive the audit logs to Azure Storage (Microsoft Sentinel can also write to Azure Data Explorer for cost-effective long-term retention).

Regulated-Industry Governance Patterns

Healthcare (HIPAA)

For healthcare enterprises rolling out Copilot in Power BI on PHI-touching data:

  • Sensitivity labels. Every model touching PHI carries Confidential or higher labeling. Models in scope for the "block Copilot processing" policy are clearly identified.
  • De-identification pattern. Where possible, Copilot summarization runs over de-identified data (Safe Harbor or Expert Determination). Models exposing identifiable PHI should not be Copilot-enabled for broad audiences.
  • Audit log routing. Microsoft Sentinel with a HIPAA-aligned analytic rule library. Rules include: Copilot summary against a model containing date-of-service detail, Copilot summary returning content that matches PHI patterns, Copilot prompt from a user outside the designated workforce list.
  • Business Associate Agreement. Verify the tenant's Microsoft BAA includes the relevant Fabric and Copilot services. The current Microsoft BAA does include them, but specific services and feature flags should be confirmed.
  • Workforce training. HIPAA Security Rule §164.308(a)(5) requires workforce security awareness training. Update the curriculum to include Copilot-specific guidance: what to ask, what not to share in prompts, how to recognize when Copilot's response should be escalated.

Financial Services (SOC 2 + SOX + SR 11-7)

For financial services enterprises:

  • Change management. Copilot synonym and description changes follow the same change-management process as semantic-model changes — typically a SOC 2 Common Criteria CC8.1 control. Git pull-request approval, peer review, evidence retention all apply.
  • Model risk management (SR 11-7). For the bank, Copilot Summarize is a model in the SR 11-7 sense. The model risk function should maintain an inventory entry, perform periodic effective challenge, and document its governance review. Most banks classify Copilot at a moderate-risk tier given the relatively contained surface area (summarization, not decision-making).
  • SOX-relevant reports. Reports supporting SOX financial reporting carry additional controls: the Copilot-generated summary on these reports should be treated as supporting documentation only, not as the report content itself. Disclaimers and audit-log entries are typical.
  • Audit-log routing. Microsoft Sentinel with financial-services-aligned rules. Cross-correlation with the bank's existing SIEM patterns.

Federal (FedRAMP)

For federal-sector enterprises:

  • Tenant configuration. Verify Copilot availability in the GCC or GCC High tenant. Some Copilot capabilities have a delayed availability for FedRAMP-aligned tenants.
  • Data residency. Copilot processing residency aligns with the tenant's Microsoft 365 residency. Verify against the agency's NIST 800-53 control mapping.
  • Approval workflow. Federal tenants typically require ATO documentation updates when adding a significant new capability. Treat Copilot rollout as an ATO-significant change and update the System Security Plan accordingly.

EPC Group's 16-Week Implementation Framework

For regulated-industry enterprises deploying Power BI Copilot, the implementation pattern that delivers consistent results without compliance friction:

Weeks 1–2: Discovery and gap analysis.

  • Current-state assessment of the tenant's Power BI estate.
  • Sensitivity-label coverage audit.
  • Microsoft Purview Compliance Manager Copilot-readiness review.
  • Existing source-control and CI/CD pipeline assessment.
  • Regulatory framework mapping (HIPAA, SOC 2, SOX, FedRAMP as applicable).

Weeks 3–6: Foundation.

  • Sensitivity-label catalog completion (apply labels to every semantic model and report).
  • Microsoft Sentinel Copilot audit-rule library deployment.
  • Source-control repository setup for the Copilot Tooling Format.
  • Pre-commit hooks and PR-review automation for Copilot metadata changes.

Weeks 7–10: Metadata population.

  • Identify priority semantic models for Copilot enablement (typically top 10–20 by usage).
  • Author the Copilot Tooling Format files (synonyms, descriptions, sample questions) for each priority model.
  • Code review and merge process for each model's Copilot metadata.

Weeks 11–12: Pilot.

  • Enable Copilot for a pilot user group (typically a single business unit, 100–300 users).
  • Monitor Copilot interactions in Microsoft Sentinel.
  • Tune synonyms and descriptions based on real usage feedback.

Weeks 13–14: Governance update.

  • Publish the Copilot governance policy.
  • Workforce training updates (especially for HIPAA-covered entities).
  • Update the ATO or compliance documentation (for FedRAMP and SOC 2 tenants).
  • Update the model risk inventory (for financial services).

Weeks 15–16: Broad rollout.

  • Enable Copilot for the broader user population.
  • Standing audit log review cadence.
  • Capacity-consumption monitoring and tuning.
  • Feedback loop into the Copilot metadata refinement process.

Common Pitfalls

Across the regulated-industry Copilot rollouts we have guided in 2026, these are the recurring problem patterns:

  1. Enabling Copilot before completing sensitivity-label coverage. A model with the default label inherits the tenant's default sensitivity, which is usually too permissive for regulated data. Cover labels first, broad-enable second.

  2. Treating the Copilot Tooling Format as optional. Copilot will function without it, but the quality of summaries is substantially better with it. The investment is modest (typically 2–4 hours per semantic model) and pays back quickly in user adoption.

  3. Skipping the audit-log routing setup. Tenants that enable Copilot without routing audit events into Sentinel (or the equivalent SIEM) discover the gap during the first regulatory review. Set up routing before broad enablement.

  4. Underestimating the workforce training burden. Especially in healthcare, the HIPAA Security Rule workforce training requirement extends to Copilot. The training is not heavy (typically 20–30 minutes of content) but it needs to happen.

  5. Letting business units self-author synonyms without governance. Synonyms are powerful and change Copilot's behavior in ways that surprise users. Synonym changes should go through the same code-review process as DAX measure changes.

  6. Forgetting to update the model risk inventory. For financial services, the SR 11-7 model risk inventory must include Copilot. Banks that have not done this discover it during the next model risk audit.

Frequently Asked Questions

What is the Copilot Tooling Format ("Prep Data for AI")?

The Copilot Tooling Format is the May 2026 GA storage format for Power BI Copilot metadata — synonyms (alternate business terms for model concepts), description overrides (audience-appropriate phrasing for Copilot to use), and sample questions (canonical questions Copilot should be ready to answer). The format is text-based, Git-friendly, and integrates cleanly into existing TMDL-based development pipelines.

Do I need the Copilot Tooling Format to use Power BI Copilot?

No. Power BI Copilot will work without the Tooling Format using only the TMDL descriptions. The Tooling Format improves Copilot quality substantially by providing synonyms, audience-specific descriptions, and sample questions. We recommend implementing it for any model where Copilot will be exposed to broad user populations.

How do sensitivity labels gate Copilot behavior?

Microsoft Purview sensitivity labels can include a "block Copilot processing" policy. Labels with that flag prevent Copilot from generating summaries for the labeled content. The label propagates from the semantic model to the reports built on it. The tenant's label policy defines which labels block Copilot processing.

What audit events does Copilot generate?

Power BI Copilot interactions generate audit events at the Fabric capacity level. The events include: user identity, timestamp, semantic model and report context, prompt text, response generated, and sensitivity label context. The events flow into Microsoft Purview Audit (Standard) and can be routed to Microsoft Sentinel.

What is HIPAA's position on Copilot summarization of PHI?

The HIPAA Security Rule applies to Copilot the same way it applies to any other workforce member or system that accesses PHI. The covered entity must verify that the Microsoft BAA covers the Copilot service (it does for the current Fabric Copilot offering), establish appropriate access controls, audit logging, and workforce training. De-identified data is outside HIPAA scope and is generally the simpler path for Copilot rollout.

Is Power BI Copilot subject to SR 11-7 model risk management at a bank?

Most banks classify Power BI Copilot as a model under SR 11-7. The model risk function performs an inventory entry, periodic effective challenge, and documented governance review. The risk classification is typically moderate given the contained surface area (summarization rather than decision-making).

How does Copilot Summarize work for reports that consume from multiple semantic models?

When a report visual is backed by multiple models (typically through composite models or DirectQuery to a remote semantic model), Copilot Summarize uses the metadata from each model. The visual's effective sensitivity label is the highest sensitivity of the contributing models.
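The "highest sensitivity wins" rule reduces to a max over a label ordering. A minimal sketch, assuming the four standard labels in ascending sensitivity (a tenant with custom labels would extend the ranking):

```python
LABEL_RANK = {"Public": 0, "Internal": 1, "Confidential": 2, "Highly Confidential": 3}

def effective_label(model_labels: list[str]) -> str:
    """Highest-sensitivity label among the contributing semantic models."""
    return max(model_labels, key=LABEL_RANK.__getitem__)
```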

Can different audience groups see different Copilot description overrides for the same measure?

Yes. The Copilot Tooling Format supports audience-specific descriptions. The audience is typically determined by the user's group membership at the time of the Copilot interaction. The configuration is in the description override file and the tenant's Copilot policy.

How do I deprecate a Copilot synonym safely?

Mark the synonym as deprecated in the synonyms file (move it from the terms array to the deprecated array). Monitor usage through the Copilot audit logs. After a stable absence-of-use window (typically 30–90 days depending on the model's user base), remove the deprecated synonym entirely in a subsequent release.
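The first step of that workflow is a mechanical edit to the synonyms file, which can be scripted so it never drops a term outright. A sketch of a hypothetical helper operating on the parsed file:

```python
def deprecate_term(data: dict, concept: str, term: str) -> dict:
    """Step one of the retire-then-remove workflow: move a term from the
    concept's terms array into its deprecated array, in place."""
    for entry in data["synonyms"]:
        if entry["concept"] == concept and term in entry["terms"]:
            entry["terms"].remove(term)
            entry.setdefault("deprecated", []).append(term)
            return data
    raise ValueError(f"{term!r} is not an active synonym of {concept!r}")
```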

What is the difference between TMDL descriptions and the Copilot description override?

TMDL descriptions are part of the semantic model definition. They are shown in tooltips and used by Copilot as the default description. The Copilot description override is part of the Copilot Tooling Format and provides audience-specific phrasing that Copilot uses in summaries. The override is layered on top of the TMDL description.

How long does the typical Copilot Tooling Format implementation take per semantic model?

For a model the team is familiar with, allow 2–4 hours per model for the initial Tooling Format population (synonyms, descriptions, sample questions). Subsequent refinement based on production usage feedback is ongoing but lightweight.

Can the Copilot Tooling Format be edited in a tool other than a text editor?

Yes. Power BI Desktop's Copilot setup experience can edit the metadata files through a UI. For team-based development, we recommend the text-editor + Git workflow because it preserves the diff history and code-review process.

Does Microsoft Sentinel have built-in Copilot analytic rules?

Microsoft Sentinel includes a Microsoft Defender for Cloud Apps connector that captures Power BI activity events, including Copilot interactions. Additional analytic rules can be authored against these events. Microsoft has published a Copilot-specific analytic rule library that tenants can deploy as a starting point.

How does EPC Group support regulated-industry Copilot rollouts?

EPC Group works with healthcare, financial services, and federal-sector enterprises on Power BI Copilot rollouts aligned to HIPAA, SOC 2, SOX, SR 11-7, and FedRAMP frameworks. The standard pattern is a 16-week engagement covering discovery, foundation, metadata population, pilot, governance update, and broad rollout. Our consultants — including Microsoft Press bestselling author Errin O'Connor — bring direct experience across hundreds of regulated-industry Copilot deployments.

What is the typical capacity consumption for Copilot Summarize in a 5,000-user tenant?

Capacity consumption depends on adoption rate. A typical pattern after broad enablement is 1,000–2,500 Summarize invocations per week in a 5,000-user tenant. The CU consumption is workload-specific but typically requires F-SKU sizing at F4 or larger for the Copilot workload alone, beyond the existing Power BI workload.

Next Steps

If your enterprise is preparing to roll out Power BI Copilot in a regulated environment, the practical next steps:

  1. Run the Microsoft Purview Compliance Manager Copilot-readiness assessment for your tenant.
  2. Audit sensitivity-label coverage on every published semantic model.
  3. Establish the Copilot audit-log routing to your SIEM.
  4. Inventory your priority semantic models and plan the Copilot Tooling Format population.
  5. Engage a partner with deep regulated-industry Copilot implementation experience to compress the planning timeline.

EPC Group has 29 years of enterprise Microsoft consulting experience and is Microsoft Solutions Partner with the core designations. We were historically the oldest continuous Microsoft Gold Partner in North America from 2016 until the program's retirement. Our consultants — including Microsoft Press bestselling author Errin O'Connor — bring direct experience across hundreds of regulated-industry Copilot deployments in healthcare, financial services, and government. To discuss your Copilot rollout, contact EPC Group for a 30-minute discovery call.


Errin O'Connor

CEO & Chief AI Architect

Microsoft Press bestselling author with 29 years of enterprise consulting experience.


Need Help with Microsoft Copilot?

Our team of experts can help you implement enterprise-grade Microsoft Copilot solutions tailored to your organization's needs.

Microsoft Copilot Consulting Services | Schedule a Consultation