Getting Started with Microsoft Fabric: Enterprise Guide
By Errin O'Connor, Chief AI Architect at EPC Group | Published April 2026 | Updated April 15, 2026
Your first 30 days with Microsoft Fabric will determine whether your organization sees it as transformative or just another platform to manage. Here is how to get it right.
Before Day 1: Capacity Sizing for Trial
Microsoft offers a 60-day Fabric trial capacity that any Power BI admin can activate. The trial provides an F64-equivalent capacity — enough for serious evaluation but not enough for production load testing. Here is how to approach capacity sizing for your first real deployment:
| F-SKU | Monthly Cost | Best For |
|---|---|---|
| F2 | ~$262 | Individual developer sandbox |
| F4 | ~$524 | Small team development/testing |
| F8 | ~$1,048 | Proof-of-concept with real data |
| F64 | ~$8,400 | Production — mid-size enterprise |
| F128 | ~$16,800 | Production — large enterprise |
| F256 | ~$33,600 | Production — heavy concurrent workloads |
EPC Group recommendation: Start with the free trial (F64-equivalent) for evaluation. When moving to production, begin with F64 and monitor with the Fabric Capacity Metrics App for 30 days. Scale up or down based on actual utilization data. F-SKUs support pause and resume (unlike the older P-SKUs), so you can pause non-production capacities overnight and on weekends and cut costs by roughly 60%.
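The arithmetic behind that savings figure is worth checking against your own schedule. The sketch below models a pay-as-you-go capacity paused outside business hours; the 12-hour weekday window and the list prices are illustrative assumptions, not quoted Azure rates.

```python
# Rough cost model for pausing a pay-as-you-go F-SKU capacity outside
# business hours. The schedule and rates are illustrative assumptions,
# not quoted Azure prices.

HOURS_PER_WEEK = 24 * 7  # 168

def weekly_savings_pct(active_hours_per_weekday: float = 12,
                       weekend_active_hours: float = 0) -> float:
    """Percent of a 24/7 bill saved by pausing outside the active window."""
    active = 5 * active_hours_per_weekday + weekend_active_hours
    return round(100 * (1 - active / HOURS_PER_WEEK), 1)

def monthly_cost(full_monthly_rate: float, savings_pct: float) -> float:
    """Approximate monthly bill after applying the pause schedule."""
    return round(full_monthly_rate * (1 - savings_pct / 100), 2)

if __name__ == "__main__":
    pct = weekly_savings_pct()      # pause nights and weekends
    print(f"savings: {pct}%")       # ~64% with a 12h weekday window
    print(monthly_cost(8400, pct))  # F64 list price from the table above
```

A 12-hour weekday window leaves the capacity running 60 of 168 hours, which is where the "roughly 60%" figure comes from; tighten or widen the window to match your own team's hours.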
Week 1: Choose Your First Project
The first Fabric project should be high-visibility, low-risk, and deliver value within 2 weeks. Here are the three project types EPC Group recommends:
Option A: Direct Lake Conversion (Best for Power BI-heavy orgs)
Take your largest Import-mode Power BI dataset — the one that takes 45 minutes to refresh every morning and occasionally fails. Move its source data into a Fabric Lakehouse using a Dataflow Gen2 or Pipeline. Rebuild the semantic model to use Direct Lake connectivity.
Result: Near-instant report loading with no scheduled refresh. The report looks identical to users, but it is always current and never fails due to refresh timeouts. This is the single most compelling Fabric demo for business stakeholders.
Option B: Dataflow Gen2 Replacing Manual ETL (Best for data teams)
Identify a process where someone downloads data from a source system, transforms it in Excel, and uploads it to a SharePoint list or Power BI dataset. Replace that manual process with a Dataflow Gen2 that connects to the source, transforms the data visually, and lands it in a Lakehouse table.
Result: Hours of manual work eliminated. Data freshness improves from daily/weekly to every 15 minutes. The data team sees Fabric as a productivity multiplier, not just another platform.
Option C: Real-Time Dashboard (Best for IT/Operations)
Connect Azure Event Hubs, IoT Hub, or a custom Kafka stream to a Fabric KQL Database using Eventstream. Build a Real-Time Dashboard showing live metrics — server health, application errors, transaction volumes, or IoT sensor readings.
Result: A live operational dashboard built in hours, not weeks. IT leadership sees immediate value. This is particularly compelling for organizations that have been trying to build real-time dashboards with Power BI streaming datasets and hitting limitations.
Week 2: Lakehouse vs Warehouse — Your First Storage Decision
Both Lakehouse and Warehouse store data in OneLake. Both support SQL queries. The choice comes down to your team's skills and your data's structure:
| Factor | Lakehouse | Warehouse |
|---|---|---|
| Query languages | Spark (Python, Scala, R) + SQL | T-SQL only |
| Schema approach | Schema-on-read (flexible) | Schema-on-write (structured) |
| Data types | Structured + semi-structured | Structured only |
| Best for teams with | Python/data engineering skills | SQL Server/T-SQL skills |
| ML/data science support | Native (Spark notebooks) | Limited (SQL only) |
| Direct Lake support | Yes | Yes |
EPC Group recommendation: Start with Lakehouse unless your entire team is SQL-only. The Lakehouse includes a SQL analytics endpoint that lets T-SQL users query it while data engineers use Spark — you get both interfaces. You can always add a Warehouse later for specific structured workloads.
Week 3: Connecting Existing Power BI
Your existing Power BI reports do not need to be rebuilt. Moving a Power BI workspace to a Fabric capacity is a configuration change, not a migration. Here is the process:
- Assign workspace to Fabric capacity: In the Power BI admin portal, assign your workspace to the Fabric F-SKU capacity. All reports, datasets, and dataflows continue to work.
- Identify Direct Lake candidates: Look for Import-mode datasets larger than 1GB with scheduled refreshes. These benefit most from Direct Lake conversion.
- Create a Lakehouse for source data: Use a Pipeline or Dataflow Gen2 to land the source data that currently feeds your Power BI Import datasets into Lakehouse Delta tables.
- Rebuild semantic models for Direct Lake: Create new semantic models that point to the Lakehouse tables using Direct Lake mode. Reconnect existing reports to the new semantic models.
- Deprecate old datasets: Once reports are running on Direct Lake, decommission the old Import datasets and their refresh schedules.
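Step 1 above can be scripted when you have many workspaces to move. This is a hedged sketch using the Power BI REST API's AssignToCapacity call; the GUIDs are placeholders, and a real call needs an Azure AD bearer token with admin rights on the workspace. Verify the endpoint against current Microsoft documentation before automating.

```python
# Sketch: assign a Power BI workspace to a Fabric F-SKU capacity via the
# Power BI REST API (AssignToCapacity). GUIDs and token are placeholders.
import json
import urllib.request

API_BASE = "https://api.powerbi.com/v1.0/myorg"

def build_assign_request(workspace_id: str, capacity_id: str,
                         token: str) -> urllib.request.Request:
    """Build (but do not send) the AssignToCapacity POST request."""
    url = f"{API_BASE}/groups/{workspace_id}/AssignToCapacity"
    body = json.dumps({"capacityId": capacity_id}).encode()
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})

if __name__ == "__main__":
    req = build_assign_request("<workspace-guid>", "<capacity-guid>", "<token>")
    # urllib.request.urlopen(req)  # uncomment to actually send the call
    print(req.full_url)
```

Looping this over a list of workspace GUIDs turns the portal clicks into a repeatable script, which matters once you are assigning dozens of workspaces.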
EPC Group typically converts 10-20 Power BI datasets to Direct Lake in the first month of a Fabric engagement. The performance improvement is immediately visible to report consumers.
Week 4: When to Keep Synapse
Not everything should move to Fabric on day one. Keep Azure Synapse if any of these conditions apply:
- Dedicated SQL Pool with complex T-SQL: Synapse Dedicated SQL Pools support stored procedures, materialized views, and result set caching, features where Fabric Warehouse is still maturing. If your data warehouse relies on 500+ stored procedures, the migration effort is significant.
- Custom Spark configurations: Synapse Spark pools allow custom library installations, cluster sizing, and autoscale configurations that Fabric Spark does not fully support yet. If your data science team needs specific Spark configurations, keep Synapse.
- Synapse Link integrations: Synapse Link for Cosmos DB and Synapse Link for Dataverse provide near-real-time replication. Fabric supports Mirroring as an alternative, but it is newer and may not cover all your Synapse Link scenarios.
- Compliance requirements locked to specific Azure regions: Fabric capacities are available in most Azure regions, but if your compliance framework requires data residency in a region where Fabric is not yet available, Synapse is the safer choice.
The hybrid approach works: keep Synapse for complex workloads, use Fabric for new workloads and Power BI. OneLake shortcuts can connect to Azure Data Lake Storage that Synapse writes to, letting Fabric read Synapse output without data duplication.
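A OneLake shortcut like the one described can also be created programmatically. The sketch below builds a create-shortcut request against the Fabric REST API; the endpoint shape, body fields, and IDs reflect our understanding of the API at the time of writing and should be verified against current Microsoft documentation before use.

```python
# Hedged sketch: create an ADLS Gen2 shortcut in a Lakehouse via the
# Fabric REST API, so Fabric can read Synapse output in place. Endpoint
# and body schema are assumptions to verify against current Microsoft
# documentation; all IDs are placeholders.
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_shortcut_request(workspace_id: str, lakehouse_id: str, name: str,
                           adls_url: str, subpath: str, connection_id: str,
                           token: str) -> urllib.request.Request:
    """Build (but do not send) a create-shortcut POST under /Tables."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/items/{lakehouse_id}/shortcuts"
    body = json.dumps({
        "path": "Tables",  # expose the shortcut as a Lakehouse table
        "name": name,
        "target": {"adlsGen2": {"url": adls_url,
                                "subpath": subpath,
                                "connectionId": connection_id}},
    }).encode()
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})
```

Because the shortcut only references the ADLS data, Synapse keeps writing to the same storage account and Fabric reads it with no copy job and no duplicated bytes.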
Common Mistakes in the First 30 Days
- Starting with F2/F4 and judging performance: F2 has 2 Capacity Units. F64 has 64. Performance on F2 is not representative of production experience. Always evaluate on at least F64 or the free trial capacity.
- Migrating everything at once: Fabric is not a big-bang migration. Start with one workload (usually Power BI with Direct Lake), prove value, and expand. Trying to move ADF + Synapse + Power BI + ML simultaneously creates risk with no quick wins.
- Ignoring OneLake governance: OneLake is a single, shared data lake for your entire Fabric tenant. Without workspace-level permissions and sensitivity labels, data engineers in one workspace can see data in another. Set up governance policies from day one.
- Not monitoring capacity utilization: Install the Fabric Capacity Metrics App immediately. If your capacity is consistently above 80% utilization, you need to scale up before users experience throttling.
- Skipping the Center of Excellence: Fabric touches data engineering, data science, BI, and IT infrastructure. Without a cross-functional governance group (Center of Excellence), workspace sprawl and ungoverned data proliferation will happen within weeks.
- Forgetting about cost management: Fabric capacities bill 24/7 unless paused. Set up auto-pause for development capacities. Use workload management settings to prevent a single runaway Spark job from consuming the entire capacity.
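The 80% utilization rule above is easy to automate once you export samples from the Capacity Metrics App. This minimal sketch flags a capacity for scale-up; the threshold and the definition of "consistently" (here, at least half the samples) are assumptions you should tune to your own alerting policy.

```python
# Minimal sketch of the 80% utilization rule: given capacity-utilization
# samples (e.g. exported from the Fabric Capacity Metrics App), decide
# whether to scale up. Threshold and "consistently" are assumptions.

def should_scale_up(samples: list[float],
                    threshold: float = 80.0,
                    min_fraction: float = 0.5) -> bool:
    """True when at least min_fraction of samples exceed threshold."""
    if not samples:
        return False
    hot = sum(1 for s in samples if s > threshold)
    return hot / len(samples) >= min_fraction
```

Wiring this check into a scheduled notebook or alerting job means throttling shows up in a report before users feel it in their dashboards.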
The 30-Day Implementation Timeline
Days 1-5: Foundation
Activate trial or provision F64. Assign Power BI workspaces. Install Capacity Metrics App. Establish workspace naming conventions. Set OneLake permissions.
Days 6-10: First Lakehouse
Create your first Lakehouse. Ingest data from one source system via Pipeline or Dataflow Gen2. Validate data quality. Build a SQL analytics endpoint view.
Days 11-15: Direct Lake
Build a Direct Lake semantic model on the Lakehouse. Connect existing Power BI reports. Benchmark performance against Import mode. Share with stakeholders.
Days 16-20: Second Workload
Add a second workload: Dataflow Gen2 for a manual ETL process, or a Notebook for data science exploration. Begin training the team on the new workload.
Days 21-25: Governance
Deploy sensitivity labels on Lakehouse tables. Set up Purview integration. Define workspace access policies. Create a Fabric governance runbook.
Days 26-30: Decision
Review Capacity Metrics. Calculate TCO comparison vs current stack. Present findings to leadership. Decide: expand to production, adjust capacity, or stay on current stack.
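The days 26-30 TCO comparison is mostly a matter of lining up monthly invoices. A toy model of that calculation, with all line items as illustrative inputs rather than benchmarks:

```python
# Toy TCO comparison for the days 26-30 decision: Fabric capacity cost vs
# the current stack's line items. All figures are illustrative inputs;
# substitute your own invoices.

def annual_tco(monthly_items: dict[str, float]) -> float:
    """Sum monthly line items and annualize."""
    return 12 * sum(monthly_items.values())

# Hypothetical current stack (monthly USD)
current = {"synapse_dedicated_pool": 6000, "adf_pipelines": 1200,
           "power_bi_premium": 5000}
# Hypothetical Fabric replacement (monthly USD)
fabric = {"f64_capacity": 8400, "onelake_storage": 300}

if __name__ == "__main__":
    delta = annual_tco(current) - annual_tco(fabric)
    print(f"annual difference: ${delta:,.0f}")
```

Keeping the comparison as named line items, rather than two totals on a slide, lets leadership see exactly which current costs the Fabric capacity absorbs and which remain.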
Frequently Asked Questions
What is the minimum Fabric capacity for enterprise workloads?
F64 (~$8,400/month) is the minimum production capacity for enterprise workloads. F2 ($262/month) and F4 ($524/month) are suitable for development, testing, and proof-of-concept only — they will throttle under production load. F64 provides enough Capacity Units (CUs) to run concurrent Power BI reports, data pipelines, and notebook workloads for a mid-size team. For large enterprises with 500+ active analytics users, F128 or F256 is typical. Use the Fabric Capacity Metrics App to monitor utilization and right-size after 30 days of production usage.
Should I start with a Lakehouse or Warehouse in Fabric?
Start with a Lakehouse if your team includes data engineers comfortable with Python/PySpark and you need to process semi-structured data (JSON, CSV, Parquet). Start with a Warehouse if your team is SQL-first and your data is already structured in relational databases. Both store data in OneLake and both support SQL queries. The Lakehouse also supports Spark notebooks, making it more flexible for data science workloads. If unsure, start with Lakehouse — it has a SQL analytics endpoint that lets SQL users query it while data engineers use Spark. You can always add a Warehouse later.
Can I connect my existing Power BI reports to Fabric without rebuilding them?
Yes. When you move a Power BI workspace to a Fabric capacity (F-SKU), all existing reports, datasets, and dataflows continue to work unchanged. There is no rebuild required. If you want to take advantage of Direct Lake mode, you will need to create a Lakehouse or Warehouse in Fabric, land your data there, and then rebuild the semantic model (dataset) to use Direct Lake connectivity instead of Import or DirectQuery. The reports themselves do not change — only the underlying dataset connection changes.
When should I keep Azure Synapse instead of moving to Fabric?
Keep Synapse if: 1) You have a mature Synapse Dedicated SQL Pool with complex T-SQL stored procedures, views, and security policies that would require significant refactoring. 2) Your team relies on Synapse Spark pools with custom cluster configurations not yet supported in Fabric. 3) You have Synapse Link for Cosmos DB or Dataverse integrations in production. 4) Your organization is not ready to adopt capacity-based billing (Fabric) vs. resource-based billing (Synapse). Microsoft has committed to long-term Synapse support. Migration to Fabric is optional, not mandatory.
What are the best quick wins to demonstrate Fabric value in the first 30 days?
Three quick wins EPC Group recommends: 1) Direct Lake on your largest Power BI dataset — if you have a 5GB+ Import dataset that takes 30+ minutes to refresh, move the source data to a Lakehouse and switch to Direct Lake. Report performance stays fast and you eliminate the refresh schedule entirely. 2) Dataflow Gen2 replacing a manual ETL process — identify a team that exports data to Excel, transforms it, and uploads to Power BI. Replace that with a Dataflow Gen2 that lands data directly in OneLake. 3) Real-Time Dashboard for IT operations — connect an Event Hub or Log Analytics workspace to a KQL Database and build a real-time dashboard in 2 hours. These three wins demonstrate Fabric's value to business users, data teams, and IT leadership simultaneously.
Get a Fabric Quick Start Engagement
EPC Group's 4-week Fabric Quick Start gets your enterprise from zero to production-ready: capacity provisioning, first Lakehouse, Direct Lake conversion, governance setup, and team training. Fixed scope, fixed price.
Call (888) 381-9725 or schedule a consultation below.
Schedule a Fabric Quick Start