EPC Group - Enterprise Microsoft AI, SharePoint, Power BI, and Azure Consulting

Power BI Semantic Model Governance: One Definition of Revenue

By Errin O'Connor, Chief AI Architect & CEO of EPC Group | Updated April 2026

Finance says revenue is $142 million. Sales says it is $156 million. Operations says it is $138 million. All three are pulling from Power BI. All three are technically correct — based on their definition of "revenue." This is the semantic model governance problem, and it undermines every enterprise BI deployment that does not address it explicitly.

When Everyone Defines Revenue Differently

The scenario is painfully common. A CFO presents quarterly revenue to the board using a Power BI dashboard. A board member asks a question, pulls up their own analytics tool, and sees a different number. Trust evaporates. Not because the data is wrong, but because the definition of "revenue" varies across systems, datasets, and departments.

Finance defines revenue as net revenue after returns and allowances. Sales defines revenue as gross bookings including pending deals. Operations defines revenue as shipped and invoiced orders. Each department built their own Power BI semantic model with their own DAX measures. Each model pulls from a different combination of source tables. The numbers are internally consistent within each model but irreconcilable across them.

This is not a data quality problem. It is a semantic governance problem. The data is accurate — the definitions are inconsistent. And without a governed semantic layer, every Power BI report built on top of these models perpetuates the inconsistency.

EPC Group has seen this pattern in every Fortune 500 Power BI engagement we have conducted. The larger the organization, the more definitions of "revenue" exist. We have audited environments with 14 different revenue measures across 8 datasets — none of them documented, none of them certified, and several of them feeding board-level dashboards.

What Is a Semantic Model and Why It Is the Foundation of Trust

A Power BI semantic model (Microsoft renamed "datasets" to "semantic models" in 2023) is the data layer that defines tables, relationships, measures, hierarchies, and business logic. It sits between raw data sources and reports. When you create a DAX measure like Total Revenue = SUM(Sales[Amount]), that measure lives in the semantic model.

The semantic model is where business meaning gets encoded. Raw data has columns and values. The semantic model adds context: this column is "revenue," this relationship connects orders to customers, this hierarchy rolls up regions to countries to global. Reports consume this semantic layer — they do not query raw data directly.

This architecture means that whoever controls the semantic model controls the truth. If the Finance semantic model defines Revenue = SUM(Invoice[NetAmount]) - SUM(Returns[Amount]) and every Finance report uses that model, then every Finance report agrees on revenue. The problem arises when Sales builds a separate model with Revenue = CALCULATE(SUM(Opportunities[Amount]), Opportunities[Stage] = "Closed Won"). Now you have two truths.

The Certified Semantic Model Framework

Power BI's endorsement system provides two levels: Promoted and Certified. Promoted is self-service — any dataset owner can promote their model. Certified is governed — only designated authorities can certify a model, and certification implies that the model has passed quality, accuracy, and security validation.

EPC Group's certification framework requires semantic models to pass five criteria before earning the Certified badge:

  1. Business logic validation. Every measure and calculated column has been reviewed by the domain owner (Finance for financial measures, Sales for pipeline measures) and documented with a plain-English definition, formula, and source lineage.
  2. Data quality checks. Automated tests validate row counts, null rates, referential integrity, and value ranges after every refresh. A single test failure blocks certification until resolved.
  3. Refresh reliability. The model has maintained a 99%+ refresh success rate for the past 30 days. Intermittent failures indicate source instability that disqualifies certification.
  4. Security configuration. Row-level security and object-level security are correctly configured and tested per the enterprise RLS guide. No certification without verified security.
  5. Documentation. The model includes a description, owner name, refresh schedule, source systems, and a data dictionary for every table and measure. If it is not documented, it is not certified.

Certification authority should be restricted. EPC Group recommends 3–5 designated certifiers per business domain — typically the data steward plus 1–2 senior analysts. Broad certification authority dilutes the badge's meaning. If everyone can certify, certification means nothing.

Building a Centralized Measure Library

The most powerful technique for preventing metric drift is centralizing measure definitions in a single semantic model that serves as the organization's measure library. This model contains all canonical business measures — Revenue, Gross Margin, Customer Count, Churn Rate, NPS — with approved DAX formulas and documentation.

Reports connect to this certified model via live connection. They consume the measures as-is, without the ability to modify or override them. If a report needs a custom calculation not in the library, the request goes through the Center of Excellence (CoE), which evaluates whether it should be added to the certified model or handled as a local extension via a composite model.

The measure library approach delivers a critical benefit: when the business changes a definition (e.g., revenue now excludes a specific product line due to a divestiture), you update the measure in one place. Every connected report instantly reflects the new definition. Without a measure library, you update the measure in 47 separate datasets and hope you did not miss any.

Example Measure Library Entries

Measure Name | DAX Formula | Owner | Last Validated
Net Revenue | SUM(Invoice[NetAmount]) - SUM(Returns[Amount]) | Finance | 2026-04-01
Gross Margin % | DIVIDE([Net Revenue] - [COGS], [Net Revenue]) | Finance | 2026-04-01
Active Customers | DISTINCTCOUNT(Orders[CustomerID]) | Sales | 2026-03-15
Employee Headcount | COUNTROWS(FILTER(Employees, Employees[Status] = "Active")) | HR | 2026-04-01
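To feed governance automation, the same library can be maintained as a machine-readable catalog. A minimal sketch, assuming a simple JSON layout of our own design rather than any Power BI export format:

```python
import json

# Illustrative catalog schema; the field names are this sketch's own
# convention, not a Power BI export format.
CERTIFIED_MEASURES = [
    {"name": "Net Revenue",
     "dax": "SUM(Invoice[NetAmount]) - SUM(Returns[Amount])",
     "owner": "Finance", "last_validated": "2026-04-01"},
    {"name": "Gross Margin %",
     "dax": "DIVIDE([Net Revenue] - [COGS], [Net Revenue])",
     "owner": "Finance", "last_validated": "2026-04-01"},
]

REQUIRED_FIELDS = {"name", "dax", "owner", "last_validated"}

def validate_catalog(measures):
    """Reject entries missing any documentation field
    ('if it is not documented, it is not certified')."""
    return [m.get("name", "<unnamed>") for m in measures
            if not REQUIRED_FIELDS <= m.keys()]

def export_catalog(measures, path):
    """Write the catalog to JSON for downstream drift-detection tooling."""
    with open(path, "w") as f:
        json.dump(measures, f, indent=2)
```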

Composite Models: Extending Without Breaking

A strict "only use certified models" policy breaks down when departments need data that does not exist in the certified model. The Sales team needs to combine certified revenue data with their local territory mapping spreadsheet. HR needs to join certified headcount data with a benefits enrollment file from a vendor portal.

Composite models solve this by allowing a report to combine a DirectQuery connection to the certified semantic model with local import data. The certified measures remain authoritative and untouchable — they are queried live from the source model. The local data extends the model without modifying it.

The key governance rule for composite models: local data can add context to certified measures, but it should never redefine them. If a composite model contains a local measure called "Revenue" that overrides the certified definition, the governance framework has failed. Automated scanning should flag any local measures that share names with certified measures.
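The name-collision rule can be enforced in the scanning step. A minimal sketch, assuming local and certified measure names have already been extracted from the models; the function name and the case-insensitive matching rule are this sketch's own conventions:

```python
def flag_shadowed_measures(local_measures, certified_names):
    """Flag local measures whose names collide with certified measures
    (case-insensitive), i.e. local redefinitions of governed metrics."""
    certified = {name.lower() for name in certified_names}
    return sorted(name for name in local_measures if name.lower() in certified)
```

Flagged measures go to CoE review; the team either renames the local measure or adopts the certified definition.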

When integrated with Microsoft Copilot, composite models enable natural language queries that ground their answers in certified measures while incorporating department-specific context. Copilot inherits the semantic model's measure definitions, which means certified measures produce trustworthy AI-generated insights while uncertified local measures carry appropriate caveats.

Detecting and Preventing Metric Drift

Metric drift is the gradual divergence of measure definitions across an organization. It starts innocently: an analyst copies a measure from the certified model and tweaks it slightly for a specific use case. Six months later, 15 datasets contain 15 slightly different versions of "Revenue."

Prevention requires automated detection. EPC Group deploys a governance automation solution that:

  • Scans all semantic models weekly using the Power BI REST API and Tabular Object Model (TOM) to extract every measure definition.
  • Compares local measures against the certified library using fuzzy name matching and DAX expression similarity analysis.
  • Flags potential duplicates for CoE review — "Dataset X contains a measure called 'Total Revenue' that differs from certified measure 'Net Revenue' by excluding returns."
  • Generates a drift score per dataset — the percentage of measures that deviate from certified definitions. High-drift datasets are prioritized for remediation.
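The comparison and scoring steps above can be sketched with stdlib fuzzy matching. The similarity threshold and the drift-score definition are this sketch's own conventions, and a production scanner would compare parsed DAX tokens rather than raw text:

```python
from difflib import SequenceMatcher

def dax_similarity(expr_a, expr_b):
    """Rough DAX expression similarity via character-level matching.
    A real implementation would compare parsed DAX tokens."""
    return SequenceMatcher(None, expr_a.lower(), expr_b.lower()).ratio()

def drift_score(dataset_measures, certified_measures, threshold=0.6):
    """Percentage of a dataset's measures that deviate from certified
    definitions: near-duplicates (similar but not identical DAX) count
    as drift, exact matches do not.

    dataset_measures / certified_measures: {name: dax_expression}
    """
    if not dataset_measures:
        return 0.0
    drifted = 0
    for name, dax in dataset_measures.items():
        cert_dax = certified_measures.get(name)
        if cert_dax is not None and dax.strip() == cert_dax.strip():
            continue  # exact match with the certified definition: no drift
        # Near-match against any certified measure suggests a tweaked copy.
        if any(dax_similarity(dax, c) >= threshold for c in certified_measures.values()):
            drifted += 1
    return 100.0 * drifted / len(dataset_measures)
```

For example, a local "Total Revenue" defined as SUM(Invoice[NetAmount]) scores as drift against the certified Net Revenue because it is a near-copy that silently drops the returns deduction.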

This is not about punishing analysts who create local measures. It is about visibility. The CoE cannot govern what it cannot see. Automated drift detection turns invisible inconsistencies into actionable remediation items.

Implementation Roadmap: From Chaos to One Source of Truth

Semantic model governance is not a one-week project. It requires organizational alignment, technical implementation, and cultural change. EPC Group's standard roadmap:

Phase | Duration | Activities | Deliverables
Discovery | 2–3 weeks | Inventory all semantic models, extract all measures, map data lineage | Complete model inventory, measure catalog, lineage map
Definition | 2–3 weeks | Workshop with domain owners to agree on canonical definitions | Approved measure library, data dictionary
Build | 3–4 weeks | Create certified semantic models, configure endorsement, set up monitoring | Certified models published, drift detection deployed
Migration | 3–4 weeks | Migrate reports to certified models, retire duplicate datasets | Reports migrated, deprecated models archived

The hardest phase is Definition. Getting Finance, Sales, and Operations to agree on a single definition of "revenue" is not a technical challenge — it is a political one. EPC Group facilitates these workshops with a structured methodology that focuses on the business question each metric answers, not the calculation itself. Once stakeholders agree on what "revenue" means in business terms, the DAX formula follows naturally.

Frequently Asked Questions

What is a Power BI semantic model and why does it matter for governance?

A semantic model (formerly called a dataset) is the data layer that sits between raw data sources and Power BI reports. It defines tables, relationships, measures, and business logic. Governance matters because when multiple semantic models define 'revenue' differently — one includes returns, another excludes them — every downstream report inherits that inconsistency. A governed semantic model is the single source of truth: one definition of revenue, certified by the business, used by every report.

What is the difference between 'Promoted' and 'Certified' endorsement in Power BI?

Promoted means a dataset owner recommends their model for broader use — it is a self-service endorsement. Certified means a designated governance authority (the CoE or data steward) has validated the model's data quality, business logic, refresh reliability, and security configuration. Certified datasets display a gold badge in the Power BI service. EPC Group recommends restricting certification authority to 3–5 people per domain to maintain credibility.

How do you prevent metric drift when multiple teams build Power BI reports?

Metric drift occurs when different teams create their own measures with slightly different definitions. Prevention requires three controls: (1) certified semantic models with centrally managed measure libraries that teams cannot modify, (2) live-connected reports that inherit measures from the certified model rather than defining their own, and (3) automated scanning that flags reports containing locally defined measures that duplicate certified ones.

What are composite models in Power BI and when should you use them?

Composite models allow a report to combine data from a certified semantic model (via DirectQuery connection) with local data (via import). This lets teams extend the certified model with department-specific data without modifying the source. Use composite models when a team needs 80% of the certified model plus 20% of their own data — for example, adding local budget targets alongside certified actuals. The certified measures remain unchanged and authoritative.

How long does it take to implement semantic model governance across an enterprise?

EPC Group's standard engagement runs 10–14 weeks for organizations with 50–200 datasets. Weeks 1–3: inventory and audit all existing semantic models. Weeks 4–6: identify canonical models per domain and define measure libraries. Weeks 7–10: migrate reports to certified models and configure live connections. Weeks 11–14: implement monitoring, certification workflows, and CoE training. The timeline extends for organizations with 500+ datasets or significant data quality issues.

Need One Source of Truth for Your Power BI Metrics?

EPC Group's Semantic Model Governance engagement delivers a certified measure library, migration roadmap, and automated drift detection in 10–14 weeks. We have unified metric definitions for Fortune 500 organizations across finance, healthcare, and manufacturing. Call (888) 381-9725 or schedule an assessment.

Schedule a Semantic Model Governance Assessment