EPC Group - Enterprise Microsoft AI, SharePoint, Power BI, and Azure Consulting


Power BI Semantic Model

Enterprise guide to semantic model design, calculation groups, field parameters, composite models, Direct Lake, governance, and shared model architecture.

What Is a Semantic Model in Power BI?

A Power BI semantic model is the centralized data layer that defines tables, relationships, DAX measures, hierarchies, calculation groups, and security rules for reporting. Previously called a "dataset," Microsoft renamed it in 2023 to reflect its role as a meaning-based layer between raw data and business insights. The semantic model is the single source of truth — one well-designed model serves dozens or hundreds of reports with consistent metrics, governed access, and optimized performance.

The semantic model is the most important artifact in your entire Power BI deployment. Every report, every dashboard, every metric — all of it depends on the quality of the underlying semantic model. A poorly designed model produces slow reports, inconsistent numbers, and ungovernable analytics sprawl. A well-designed model produces fast, trustworthy, enterprise-grade business intelligence.

EPC Group has designed and optimized Power BI semantic models for Fortune 500 organizations across healthcare, financial services, and government. This guide covers every aspect of enterprise semantic model design — from the fundamental rename from "dataset" to advanced features like calculation groups, field parameters, composite models, and Direct Lake in Microsoft Fabric.

Whether you are designing a new semantic model from scratch, migrating from legacy datasets, or optimizing an existing model for Fabric, this guide provides the enterprise methodology EPC Group applies to every engagement.

Semantic Model vs Dataset vs Datamart

Microsoft's terminology has evolved, and the distinctions matter for architecture decisions. Understanding what each component does — and does not do — prevents costly design mistakes.

Component | What It Is | Engine | Best For
Semantic Model (Dataset) | In-memory analytical model with tables, relationships, DAX measures, RLS, and calculation groups | Vertipaq (columnar compression) | Primary analytics layer — reports, dashboards, metrics
Datamart | Self-service relational database with SQL endpoint, no-code ETL, and auto-generated semantic model | Azure SQL (relational) | Teams needing SQL access to curated data subsets
Lakehouse (Fabric) | Delta Lake storage with SQL analytics endpoint and auto-created semantic model | Spark + SQL + Direct Lake | Enterprise data platform with unified storage in OneLake
Warehouse (Fabric) | Full T-SQL data warehouse with cross-database queries and semantic model | Distributed SQL engine | Complex transformations requiring full T-SQL support

Key Takeaway: Every Fabric item (Lakehouse, Warehouse, Datamart) auto-generates a default semantic model. But default models lack optimized relationships, measures, and governance. EPC Group always creates a purpose-built semantic model on top of Fabric storage — treating the auto-generated model as a starting point, not a finished product.

Six Enterprise Semantic Model Design Principles

These foundational principles guide every semantic model EPC Group builds. Violating any one creates performance, governance, or trust problems.

Star Schema Foundation

Every semantic model starts with a star schema — fact tables with numeric measures surrounded by dimension tables with descriptive attributes. The Vertipaq engine is optimized specifically for this pattern.

Single Source of Truth

One certified semantic model per business domain. All reports connect to the shared model via live connection. No duplicate models, no conflicting numbers.

Calculation Groups

Replace hundreds of duplicate measures with reusable calculation items. Time intelligence, currency conversion, and scenario analysis as modular, maintainable groups.

Optimal Storage Modes

Choose Import, DirectQuery, composite, or Direct Lake based on data volume, freshness requirements, and capacity constraints. No one-size-fits-all.

Governance & Certification

Endorsement levels (Promoted, Certified), workspace permissions, lineage tracking, and a formal certification process with documented quality criteria.

Shared Model Architecture

Publish certified models to dedicated workspaces. Report authors connect via live connection. Centralized security, single refresh schedule, consistent metrics.

Relationships and Cardinality in Semantic Models

Relationships define how filters propagate through your semantic model. Every misconfigured relationship produces either incorrect numbers or degraded performance — often both. EPC Group validates every relationship as part of our semantic model audit.

Cardinality Rules

  • One-to-many (1:*) is the default and preferred cardinality — dimension tables (one side) filter fact tables (many side)
  • Many-to-one (*:1) is the reverse perspective of 1:* — functionally identical, direction depends on which table you start from
  • One-to-one (1:1) typically indicates tables that should be merged — two tables with the same grain and the same key should be a single table
  • Many-to-many (*:*) should be avoided. For multi-valued dimensions (e.g., a product with multiple categories), model a bridge table with two 1:* relationships instead of a direct *:* relationship

Cross-Filter Direction

Single-direction cross-filtering (dimension filters fact) is the correct default. Bidirectional filtering should be enabled only when a specific visual requirement demands it — and even then, prefer CROSSFILTER() in DAX over model-level bidirectional settings. Bidirectional relationships create ambiguous filter paths, degrade Vertipaq query optimization, and produce unexpected results in visuals with multiple fact tables.
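As a sketch of the measure-level alternative, assuming a 'Sales' fact and a 'Customer' dimension joined on CustomerKey with a single-direction relationship (table and column names are illustrative), CROSSFILTER() enables bidirectional filtering for one calculation without changing the model:

```dax
-- Hypothetical model: 'Sales'[CustomerKey] -> 'Customer'[CustomerKey], single direction.
-- Bidirectional filtering is enabled for this one measure only,
-- leaving the model-level relationship untouched.
Customers With Sales =
CALCULATE (
    DISTINCTCOUNT ( 'Customer'[CustomerKey] ),
    CROSSFILTER ( 'Sales'[CustomerKey], 'Customer'[CustomerKey], BOTH )
)
```

This keeps the ambiguity risk scoped to a single measure instead of every query against the model.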

Role-Playing Dimensions

When a single dimension table connects to a fact table through multiple columns (e.g., Date dimension connecting to OrderDate, ShipDate, and DueDate), use one active relationship and USERELATIONSHIP() in DAX for the others. Do not duplicate the dimension table — duplicates waste memory and create maintenance burden. A single Date dimension with three relationships (one active, two inactive) is the correct pattern.
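A minimal sketch of the pattern, assuming a 'Date' dimension with an active relationship on OrderDate and inactive relationships on ShipDate and DueDate (table, column, and measure names are illustrative):

```dax
-- Active relationship 'Sales'[OrderDate] -> 'Date'[Date] is used by default
Sales by Order Date = [Sales Amount]

-- Inactive relationship activated for this measure only
Sales by Ship Date =
CALCULATE (
    [Sales Amount],
    USERELATIONSHIP ( 'Sales'[ShipDate], 'Date'[Date] )
)
```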

Calculation Groups: Eliminate Measure Sprawl

Calculation groups are the most impactful feature for enterprise semantic model maintainability. They replace hundreds of duplicate measures with a single reusable group of calculation items that modify any measure dynamically.

How Calculation Groups Work

A calculation group is a special table with one column (the calculation items) that applies DAX transformations to any measure selected in a visual. When a user places "Sales Amount" as a measure and "Time Intelligence" as a slicer, selecting "YTD" applies TOTALYTD() to Sales Amount. Selecting "PY" applies SAMEPERIODLASTYEAR(). The measure itself is unmodified — the calculation group wraps it dynamically.
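The calculation items themselves are short DAX expressions built around SELECTEDMEASURE(). A sketch of three common Time Intelligence items, assuming a marked 'Date' dimension (names illustrative):

```dax
-- Calculation group: Time Intelligence
-- Item: YTD
CALCULATE ( SELECTEDMEASURE (), DATESYTD ( 'Date'[Date] ) )

-- Item: PY
CALCULATE ( SELECTEDMEASURE (), SAMEPERIODLASTYEAR ( 'Date'[Date] ) )

-- Item: YOY%
VAR Prior =
    CALCULATE ( SELECTEDMEASURE (), SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
RETURN
    DIVIDE ( SELECTEDMEASURE () - Prior, Prior )
```

Each item applies to whatever measure the user has placed in the visual, which is how 8 items can stand in for 8 variants of every base measure.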

Common Calculation Group Patterns

Time Intelligence

Items: YTD, QTD, MTD, PY, PY YTD, YOY, YOY%, Rolling 12 Months

Replaces 8 variants per measure — 30 base measures x 8 = 240 measures reduced to 30 + 1 group

Currency Conversion

Items: USD, EUR, GBP, JPY, Local Currency

Replaces 5 variants per measure — eliminates currency-specific measures entirely

Scenario Analysis

Items: Actual, Budget, Forecast, Variance, Variance %

Replaces 5 variants per measure — enables dynamic actual-vs-budget on any KPI

Display Formatting

Items: Per Unit, Per 1000, Percentage of Total, Running Total, Moving Average

Replaces 5 variants per measure — report authors apply formatting without new measures

Enterprise Impact: EPC Group implemented calculation groups for a Fortune 500 financial services client that had 1,200+ measures. After consolidation: 180 base measures + 4 calculation groups. Model maintenance time dropped from 40 hours/month to 8 hours/month. Measure consistency issues (different YTD formulas across departments) were eliminated entirely.

Field Parameters: Dynamic Report Experiences

Field parameters let report consumers switch which measures or columns appear in visuals without editing the report. This transforms static, single-purpose pages into dynamic, multi-purpose dashboards — reducing report page count by 40-60% while improving user experience.

Use Cases for Field Parameters

  • Executive dashboards where users toggle between Revenue, Profit, Margin, and Units Sold on the same visual
  • Comparison charts where users select which dimension to analyze by — Region, Product, Department, or Customer Segment
  • Mobile reports where screen space is limited and every visual must serve multiple purposes
  • Self-service scenarios where business users need flexibility without editing the underlying report
  • Drill-through pages that adapt their columns based on the source visual context

Limitation: Field parameters create a disconnected table using NAMEOF() — they do not support dynamic security (RLS does not apply to field parameter selections). Test thoroughly before deploying in compliance-sensitive environments. EPC Group validates field parameter behavior against RLS in every deployment.
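Under the hood, Power BI Desktop generates a disconnected parameter table like the following sketch (measure names are illustrative):

```dax
-- Auto-generated field parameter table: display name, NAMEOF reference, sort order
Metric = {
    ( "Revenue",    NAMEOF ( [Revenue] ),    0 ),
    ( "Profit",     NAMEOF ( [Profit] ),     1 ),
    ( "Units Sold", NAMEOF ( [Units Sold] ), 2 )
}
```

A slicer on the first column drives which referenced measure the visual renders.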

Composite Models and Direct Lake in Fabric

Storage mode selection is one of the highest-impact decisions in semantic model design. The right choice depends on data volume, freshness requirements, capacity budget, and the Fabric migration timeline.

Storage Mode | Performance | Data Freshness | Best For
Import | Fastest (in-memory Vertipaq) | Scheduled refresh (minutes to hours) | Datasets under 1GB, dashboards needing maximum speed
DirectQuery | Depends on source (slower) | Real-time (every query hits source) | Real-time requirements, source handles query load
Composite | Fast dimensions + live facts | Hybrid (cached dimensions, live facts) | Datasets 1-100GB, balance speed and freshness
Direct Lake (Fabric) | Near-Import speed | Near-real-time (reads OneLake Parquet) | Fabric environments, eliminating refresh pipelines

Direct Lake: The Fabric Game-Changer

Direct Lake eliminates the biggest pain point in enterprise Power BI: data refresh. Instead of importing data into the Vertipaq engine (which requires scheduled refresh pipelines, timeout management, and capacity allocation for refresh processing), Direct Lake reads Delta/Parquet files directly from OneLake. The data is already there — the semantic model simply reads it.

For a client with a 50GB dataset that previously required a 45-minute import refresh twice daily, EPC Group migrated the semantic model to Direct Lake. The result: zero refresh time (data updates as soon as Lakehouse pipelines write new Parquet files), query performance within 10% of Import mode, and complete elimination of refresh timeout failures.

Direct Lake does require Fabric capacity (F64 or higher for production workloads) and data stored in Delta format. EPC Group builds the Lakehouse-to-Direct Lake pipeline as a standard part of every Fabric deployment.

Semantic Model Performance Optimization

Performance optimization is not an afterthought — it is a design discipline. Every decision from schema to DAX affects query speed, refresh time, and capacity cost.

Remove Unused Columns

Every column in the semantic model consumes memory — even columns referenced by zero visuals. Audit with DAX Studio VertiPaq Analyzer and remove all unused columns.

Impact: 20-40% memory reduction

Use Integer Surrogate Keys

Replace text keys with integer keys for all relationships. Integer comparisons are 3-5x faster than text comparisons and compress more efficiently in Vertipaq.

Impact: 30-50% faster joins

Implement Incremental Refresh

For datasets over 1GB, configure incremental refresh to process only new and changed partitions. Requires a DateTime column and query folding support.

Impact: 80-98% refresh time reduction

Optimize DAX Measures

Use CALCULATE with explicit filters, avoid FILTER() on large tables, prefer SUMMARIZECOLUMNS, minimize iterator functions on high-cardinality columns.

Impact: 40-70% query speed improvement
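A sketch of the FILTER() guidance, assuming a 'Sales' fact table with a Region column (illustrative; in a star schema the attribute would usually live on a dimension):

```dax
-- Slower: FILTER() iterates the entire fact table row by row
West Sales (slow) =
CALCULATE ( [Sales Amount], FILTER ( 'Sales', 'Sales'[Region] = "West" ) )

-- Faster: a column predicate filters only the Region column's values,
-- letting the storage engine do the work
West Sales =
CALCULATE ( [Sales Amount], 'Sales'[Region] = "West" )
```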

Leverage Aggregation Tables

Create pre-aggregated summary tables for executive dashboards. Power BI automatically routes queries to the smallest table that satisfies the visual.

Impact: 70-90% dashboard load improvement

Enable Query Folding

Ensure Power Query transformations fold to the source database. Unfolded steps force Power BI to download raw data and transform it locally, which is dramatically slower.

Impact: 50-80% refresh improvement

Semantic Model Governance: Endorsement, Certification, and Lineage

Without governance, semantic model sprawl is inevitable. Organizations end up with dozens of competing models producing different numbers for the same metrics. EPC Group implements a three-tier governance framework for every enterprise deployment.

Three-Tier Endorsement Framework

  • Exploratory (no endorsement): Personal workspace models for ad-hoc analysis. Not shared, not governed, not trusted for decisions. Any user can create.
  • Promoted: Team-level models that have been reviewed by the data team. Published to shared workspaces. Useful but not yet certified against organizational standards.
  • Certified: Enterprise-grade models that meet all quality, accuracy, security, and documentation standards. Only certifiers designated by the Power BI admin can certify. The gold standard.

Certification Criteria

  • Data accuracy validated against source systems — reconciliation within 0.01% tolerance for financial metrics
  • Star schema design with documented relationships, grain definitions, and column descriptions
  • Row-level security implemented and tested for all applicable security scopes
  • Performance benchmarks met — average visual render under 3 seconds, refresh under SLA
  • DAX measures documented with business definitions, formulas, and owners
  • Lineage documented — source systems, transformation logic, refresh schedule, and downstream reports cataloged
  • Incremental refresh configured for datasets over 1GB with partition management documented
  • Owner assigned with clear escalation path for data quality issues
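Row-level security in the model is expressed as a DAX table filter on each security role. A minimal sketch, assuming a 'Region' dimension that carries a manager email column (table and column names are illustrative):

```dax
-- Role "Regional Manager": table filter applied to 'Region'.
-- Each user sees only rows where their login matches the manager email.
'Region'[ManagerEmail] = USERPRINCIPALNAME ()
```

Because the filter lives on the dimension, it propagates through the 1:* relationship to every fact table the dimension filters.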

Shared Semantic Model Architecture

The shared semantic model pattern is the single most impactful governance decision for enterprise Power BI. Instead of every report author creating their own data model (leading to 50+ models with conflicting numbers), a centralized team builds and certifies shared models that all report authors connect to.

Architecture Pattern

  • Dedicated model workspace — certified semantic models live in a dedicated workspace (e.g., "Finance - Certified Models") with restricted Build permissions
  • Live connection — report authors connect to the shared model via Live connection, consuming zero additional capacity (no data duplication)
  • DirectQuery to dataset — when report authors need to extend the model with local tables, they use DirectQuery to Power BI dataset (now semantic model)
  • Separation of model and reports — the model workspace contains only models; report workspaces contain only reports. This enables independent deployment cycles
  • Centralized RLS — row-level security defined once in the shared model applies to every connected report automatically
  • Single refresh schedule — one refresh schedule per model instead of 50 separate refreshes for 50 duplicate models

Enterprise Impact: EPC Group deployed shared semantic models for a healthcare system with 12 hospitals. Before: 340 separate datasets, 15 different revenue calculation formulas, 8 conflicting patient volume numbers. After: 7 certified shared models, one revenue formula, one patient volume definition. Report count stayed at 200+ but model count dropped by 95%. Monthly refresh capacity cost dropped by $4,200.

EPC Group Semantic Model Methodology

Our 4-phase approach takes any organization from ungoverned model sprawl to a certified, optimized, shared semantic model architecture.

Phase 1: Discovery & Audit (Week 1)

Inventory all existing datasets, their refresh schedules, RLS configurations, measure definitions, and downstream reports. Identify duplicate models, conflicting measures, and performance bottlenecks using DAX Studio and ALM Toolkit.

Deliverable: Semantic model landscape audit with consolidation roadmap

Phase 2: Model Design (Week 2)

Design the target star schema with fact/dimension tables, relationships, calculation groups, field parameters, and storage modes. Define the shared model workspace architecture, endorsement criteria, and governance processes.

Deliverable: Semantic model design document with governance framework

Phase 3: Build & Migrate (Weeks 3-5)

Build the certified semantic models in Tabular Editor. Implement calculation groups, RLS, incremental refresh, and composite/Direct Lake storage. Migrate existing reports to connect to shared models.

Deliverable: Certified semantic models with migrated reports

Phase 4: Validate & Govern (Week 6)

Performance test every connected report. Validate data accuracy against source systems. Certify models through the endorsement framework. Train model owners on maintenance and governance workflows.

Deliverable: Certified production models with trained governance team

Frequently Asked Questions

What is a semantic model in Power BI?

A semantic model in Power BI is the unified data layer that defines tables, relationships, measures, hierarchies, and business logic for reporting. Previously called a "dataset," Microsoft renamed it to "semantic model" in late 2023 to better reflect its purpose — it provides a semantic (meaning-based) layer between raw data sources and report visuals. The semantic model contains the star schema design, DAX measures, row-level security rules, calculation groups, and field parameters that transform raw data into business-ready analytics. EPC Group designs semantic models as the single source of truth for enterprise analytics — one well-governed model that serves dozens or hundreds of reports.

What is the difference between a semantic model, dataset, and datamart in Power BI?

A semantic model and dataset are the same thing — Microsoft renamed "dataset" to "semantic model" in 2023. Both refer to the data model containing tables, relationships, and DAX logic published to the Power BI Service. A datamart is a self-service, fully managed relational database within Power BI Premium that allows analysts to create SQL-queryable data stores without Azure SQL or Synapse. Key differences: semantic models use Vertipaq in-memory compression and DAX; datamarts use a SQL endpoint with T-SQL queries. Semantic models are the primary analytics layer for reports. Datamarts are useful when teams need SQL access to a subset of enterprise data. EPC Group recommends semantic models as the primary analytics layer and datamarts only for teams that require direct SQL access.

How do calculation groups work in Power BI semantic models?

Calculation groups are reusable sets of DAX calculation items that modify how measures behave — eliminating the need to create dozens of duplicate measures for common transformations like time intelligence. For example, instead of creating "Sales YTD," "Profit YTD," "Revenue YTD," and "Cost YTD" separately, you create one calculation group called "Time Intelligence" with items like YTD, QTD, MTD, PY, PY YTD, and YOY%. Every measure in the model can then be combined with any calculation item. This reduces a 200-measure model to 30 base measures plus a single calculation group. EPC Group implements calculation groups in every enterprise semantic model — they reduce measure count by 60-80% and dramatically simplify maintenance.

What are field parameters in Power BI and when should you use them?

Field parameters allow report consumers to dynamically switch which columns or measures appear in visuals without editing the report. Users select from a dropdown (e.g., "Revenue," "Profit," "Units Sold") and the visual updates to show the selected metric. Under the hood, field parameters create a disconnected table with NAMEOF() references to measures or columns. Use cases: executive dashboards where users toggle between KPIs, comparison visuals where users pick which dimensions to slice by, and mobile reports where screen space is limited. EPC Group uses field parameters in 90%+ of executive dashboards — they reduce the number of report pages by 40-60% while giving users more flexibility.

What is Direct Lake mode in Microsoft Fabric and how does it affect semantic models?

Direct Lake is a new storage mode in Microsoft Fabric that reads Parquet files directly from OneLake into the Vertipaq engine — combining the performance of Import mode with the freshness of DirectQuery. Unlike Import (which copies data during refresh) or DirectQuery (which queries the source live), Direct Lake reads columnar Parquet files on demand without a data copy step. Benefits: near-instant "refresh" because data is already in OneLake, query performance close to Import mode, no separate ETL pipeline to maintain. Limitations: requires Fabric capacity (F64 or higher for production), data must be in Delta/Parquet format in OneLake, and some modeling features (such as calculated columns) are not supported and cause queries to fall back to DirectQuery. EPC Group is migrating enterprise clients from Import and DirectQuery to Direct Lake as part of Fabric adoption — typical refresh times drop from 30-60 minutes to under 30 seconds.

How do you govern and certify semantic models in Power BI?

Power BI provides two levels of semantic model endorsement: Promoted (any dataset owner can promote their model as recommended) and Certified (only designated certifiers approved by the Power BI admin can certify a model as meeting organizational quality standards). Governance best practices: 1) Establish clear certification criteria — data accuracy validation, performance benchmarks, RLS implementation, documentation requirements, 2) Create a semantic model registry that tracks all certified models, their owners, refresh schedules, and downstream reports, 3) Use workspace-level permissions to control who can build reports against certified models, 4) Enable lineage view to track data flow from source through semantic model to reports. EPC Group establishes governance frameworks that typically include a 5-step certification process with automated validation checks.

What are shared semantic models and why are they important for enterprise Power BI?

Shared semantic models (also called shared datasets) allow multiple reports across different workspaces to connect to a single published semantic model — creating a true single source of truth. Without shared models, every report author creates their own data model, leading to conflicting numbers, duplicated refresh schedules, and ungovernable sprawl. With shared models, one team maintains the certified semantic model, and report authors use "Live connection" or "DirectQuery to Power BI dataset" to build reports against it. Benefits: consistent metrics across the organization, reduced capacity consumption (one model instead of 50 copies), centralized security through model-level RLS, and simplified governance. EPC Group implements shared semantic model architectures for every enterprise client — a typical deployment has 5-10 certified models serving 200+ reports.

How do composite models improve Power BI semantic model performance?

Composite models allow a single semantic model to combine multiple storage modes: Import (fast, cached), DirectQuery (real-time, source-queried), and Dual (both). The enterprise strategy: Import small dimension tables (products, customers, dates — fast filtering), keep large fact tables in DirectQuery (transactions, events — always current, no refresh needed), and use aggregation tables in Import mode for high-level visuals while DirectQuery serves detail-level drill-through. Composite models also support DirectQuery to Power BI datasets — enabling you to extend a certified shared semantic model with additional local tables without duplicating the entire model. EPC Group designs composite model architectures for datasets between 1-100GB, balancing performance, freshness, and capacity consumption.

What are the best practices for Power BI semantic model performance optimization?

Enterprise semantic model performance optimization covers five areas: 1) Schema design — star schema with integer keys, minimal columns, proper cardinality, 2) DAX efficiency — use CALCULATE with explicit filters, avoid FILTER() on large tables, prefer SUMMARIZECOLUMNS over ADDCOLUMNS/SUMMARIZE, 3) Storage modes — use composite models with Import dimensions and DirectQuery facts, or migrate to Direct Lake in Fabric, 4) Refresh optimization — implement incremental refresh with partition management, use query folding to push transformations to source, 5) Capacity management — right-size Premium/Fabric capacity, monitor with Capacity Metrics app, set max memory per dataset limits. EPC Group performance audits typically improve semantic model query times by 50-80% and reduce refresh duration by 60-90%.

Related Resources

Power BI Consulting Services

Enterprise Power BI implementation, optimization, and managed services from EPC Group.

Read more

Power BI Data Modeling Best Practices

Deep technical guide to star schema design, relationships, calculated columns vs measures, and performance optimization.

Read more

Enterprise Analytics Solutions

Full-stack Microsoft analytics: Fabric, Power BI, Azure AI, and enterprise operating models.

Read more

Get Your Semantic Model Optimized

Schedule a free semantic model assessment with EPC Group. We will audit your current models, identify consolidation opportunities, and deliver a shared semantic model architecture that eliminates conflicting metrics, reduces capacity costs, and improves query performance by 50-80%.

Get Semantic Model Assessment (888) 381-9725