Fabric Migration Guide for Legacy BI Teams
By Errin O'Connor | April 2026
Your organization has years invested in a BI stack that works — SQL Server, SSIS, SSAS, SSRS, maybe Synapse, maybe Cognos or BusinessObjects. Now Microsoft is telling you that Fabric is the future. This guide is the migration playbook your BI team needs: what to assess, what to move first, what to leave alone, and how to avoid the mistakes that derail BI modernization projects.
Is Your Organization Ready for Fabric?
Not every organization should migrate to Fabric today. Before committing budget, assess readiness across five dimensions:
| Dimension | Ready | Not Ready |
|---|---|---|
| Data Platform | Already on Azure (Synapse, ADLS, ADF) or Power BI Premium | Entirely on-premises with no Azure footprint |
| Team Skills | SQL + Power BI proficiency; some Python/PySpark exposure | Team knows only legacy tools (Cognos, SSRS) with no modern BI skills |
| Data Volume | Growing data volumes straining current platform performance | Small, stable data volumes well-served by current tools |
| Governance | Need unified governance across data lake, warehouse, and BI | Single-purpose BI with no data lake or multi-source complexity |
| Budget | Can invest $150K+ in migration plus Fabric capacity licensing | No migration budget; need zero-cost transition |
If you score "Not Ready" on three or more dimensions, focus on foundational modernization first — move to Azure, upskill the team, establish governance — before targeting Fabric.
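The "three or more Not Ready" rule above is simple enough to encode. The sketch below is illustrative only — dimension names and return labels are invented for this example, not part of any Fabric tooling:

```python
# Hypothetical readiness tally implementing the "three or more Not Ready" rule.
DIMENSIONS = ["data_platform", "team_skills", "data_volume", "governance", "budget"]

def fabric_readiness(assessment: dict) -> str:
    """assessment maps each dimension to True (Ready) or False (Not Ready)."""
    not_ready = sum(1 for d in DIMENSIONS if not assessment.get(d, False))
    if not_ready >= 3:
        return "modernize-first"      # fix foundations before targeting Fabric
    return "proceed-to-fabric"

# Three dimensions Not Ready -> focus on foundational modernization first.
print(fabric_readiness({"data_platform": True, "team_skills": True,
                        "data_volume": False, "governance": False,
                        "budget": False}))  # prints "modernize-first"
```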
Data Estate Mapping: Know What You Have
Before migrating anything, document your current data estate comprehensively:
- Data sources: Every database, file share, API, SaaS connector, and manual data feed that produces data for your BI environment. Include source system owners and refresh frequencies.
- ETL/ELT pipelines: SSIS packages, Azure Data Factory pipelines, custom scripts, stored procedures, or manual processes that move and transform data. Document dependencies and scheduling.
- Data warehouse: SQL Server databases, Azure SQL, Synapse dedicated pools, or third-party warehouses. Catalog schemas, tables, views, stored procedures, and their consumers.
- Semantic models: SSAS cubes (multidimensional or tabular), Power BI datasets, Cognos Framework Manager models, or BusinessObjects universes. These are your business logic layer and the hardest to migrate.
- Reports and dashboards: SSRS reports, Power BI reports, Cognos reports, or other BI output. Document active usage — reports nobody uses should not be migrated.
- Security model: How access is controlled at each layer — database roles, SSAS roles, Power BI workspace roles, row-level security, and integration with Active Directory / Entra ID.
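A lightweight way to capture this inventory is a flat record per estate item, exported to CSV so it can be sorted and filtered during planning. The schema and sample rows below are a sketch — adapt the fields to your estate:

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class EstateItem:
    layer: str              # "source", "pipeline", "warehouse", "model", "report", "security"
    name: str
    owner: str
    refresh_frequency: str
    active_users_90d: int   # usage signal: zero active users -> candidate for retirement
    migrate: str            # planned disposition: "move", "shortcut", "leave", "retire"

items = [
    EstateItem("report", "Monthly Sales (SSRS)", "finance", "monthly", 0, "retire"),
    EstateItem("model", "Sales Cube (SSAS MD)", "bi-team", "daily", 140, "move"),
]

# Write the inventory so each layer owner can review and correct their rows.
with open("data_estate.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(EstateItem)])
    writer.writeheader()
    writer.writerows(asdict(i) for i in items)
```

The `active_users_90d` column operationalizes the rule above: reports nobody uses should not be migrated.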
OneLake Strategy: What Moves, What Stays
OneLake is Fabric's unified storage layer — analogous to OneDrive for data. Every Fabric workspace gets a OneLake location. Your strategy for OneLake should follow this decision framework:
- Move to OneLake: High-value analytical data that is actively queried, data that benefits from Direct Lake mode in Power BI (dramatically faster queries), data that needs unified governance through Microsoft Purview, and data that feeds multiple downstream consumers.
- Shortcut (do not move): Data in existing Azure Data Lake Storage that is infrequently accessed, data in Amazon S3 or Google Cloud that you do not own or control, compliance-constrained data that must remain in a specific geography or storage account, and archival data kept for regulatory retention.
- Leave entirely: Operational databases (OLTP) that are not part of your analytical workload, data already served well by existing pipelines with no performance or governance issues, and data that will be decommissioned within 12 months.
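The move/shortcut/leave framework above can be sketched as a decision function. The flags and return labels are illustrative, not a Fabric API:

```python
def onelake_strategy(actively_queried: bool, owned: bool, geo_restricted: bool,
                     decommission_within_12mo: bool, analytical: bool) -> str:
    """Sketch of the OneLake move / shortcut / leave decision framework."""
    if not analytical or decommission_within_12mo:
        return "leave"          # OLTP or soon-to-retire data stays where it is
    if geo_restricted or not owned:
        return "shortcut"       # virtualize data you can't (or shouldn't) copy
    if actively_queried:
        return "move"           # high-value data: Direct Lake + unified governance
    return "shortcut"           # cold analytical data: point at it, don't copy it
```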
Semantic Model Migration Strategy
Semantic models are the business logic layer — the measures, hierarchies, relationships, and calculations that turn raw data into business meaning. This is the most complex part of Fabric migration:
From SSAS Tabular to Fabric
SSAS tabular models migrate most cleanly to Fabric. Power BI semantic models in Fabric are effectively SSAS tabular models hosted in the cloud. Migration steps:
- Export the tabular model as a .bim file from SSAS.
- Import into a Power BI Desktop file or use XMLA endpoint to deploy directly to a Fabric workspace.
- Update data source connections from on-premises SQL to Fabric Lakehouse/Warehouse.
- Validate DAX measures, relationships, and hierarchies.
- Enable Direct Lake mode if the data source is a Fabric Lakehouse (significant performance improvement over Import mode).
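Before and after deploying the model, it helps to diff an inventory of tables, measures, and relationships so nothing is silently dropped. A compatibility-level-1200+ .bim file is JSON, so a small script can extract that inventory; the helper below is a sketch and assumes the standard tabular model JSON layout:

```python
import json

def bim_inventory(bim_path: str) -> dict:
    """Summarize tables, measures, and relationship count from a .bim file
    (tabular model JSON, compatibility level 1200+). Useful as a pre/post-
    migration validation checklist."""
    with open(bim_path) as f:
        model = json.load(f)["model"]
    tables = model.get("tables", [])
    return {
        "tables": [t["name"] for t in tables],
        "measures": {t["name"]: [m["name"] for m in t.get("measures", [])]
                     for t in tables if t.get("measures")},
        "relationships": len(model.get("relationships", [])),
    }
```

Run it against the exported .bim, then against the model scripted back out of the Fabric workspace, and compare the two summaries.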
From SSAS Multidimensional (OLAP Cubes) to Fabric
This is the hardest migration path. SSAS multidimensional cubes use MDX, not DAX; they support features (writeback, parent-child hierarchies with unary operators, many-to-many dimensions) that do not have direct equivalents in Fabric semantic models. Options:
- Rebuild in DAX: The recommended long-term approach. Rebuild the cube as a Power BI semantic model with DAX measures. Requires significant effort but produces a modern, maintainable model.
- Run in parallel: Keep SSAS multidimensional running for complex cubes while migrating simpler models to Fabric. Phase out cubes as DAX equivalents are validated.
- Use Azure Analysis Services: As a transitional step, move SSAS multidimensional to Azure Analysis Services (which supports both MDX and DAX) while planning the full Fabric migration.
From Non-Microsoft BI to Fabric
Migrating from Cognos, BusinessObjects, Tableau, or Qlik requires rebuilding semantic models from scratch in Power BI/Fabric. There is no automated conversion. The approach:
- Document business logic from existing tool (calculations, filters, hierarchies, security).
- Build the Fabric data pipeline first — get the data into Lakehouse/Warehouse.
- Rebuild the semantic model in Power BI, validating calculations against the legacy system.
- Run both systems in parallel for 30-60 days with matched outputs before decommissioning.
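The parallel-run validation in the last step can be partly automated: export key measure values from both systems on the same schedule and diff them. The sketch below assumes each system exports a CSV with `measure`, `period`, and `value` columns (an illustrative layout, not a tool-specific format):

```python
import csv

def compare_runs(legacy_csv: str, fabric_csv: str, tolerance: float = 0.005) -> list:
    """Compare measure values exported from both systems, keyed by
    (measure, period). Returns rows whose relative difference exceeds the
    tolerance, plus any keys missing from the Fabric export."""
    def load(path):
        with open(path, newline="") as f:
            return {(r["measure"], r["period"]): float(r["value"])
                    for r in csv.DictReader(f)}
    legacy, fabric = load(legacy_csv), load(fabric_csv)
    mismatches = []
    for key, old in legacy.items():
        new = fabric.get(key)
        if new is None:
            mismatches.append((key, old, None))           # missing in Fabric
        elif abs(new - old) > tolerance * max(abs(old), 1e-9):
            mismatches.append((key, old, new))            # values diverged
    return mismatches
```

An empty result over the full parallel-run window is the evidence you want before decommissioning the legacy system.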
FinOps: Controlling Fabric Costs
Fabric uses a capacity-based pricing model (CU — Capacity Units). Without FinOps discipline, costs can spiral:
- Right-size capacity: Start with F64 (~$9,000/month) for production analytics; F64 is also the smallest SKU that lets report consumers view content without individual Power BI Pro licenses. Scale up only when monitoring shows sustained capacity pressure. Fabric supports capacity scaling (up and down) via API or portal.
- Use smoothing and bursting: Fabric's CU model allows short bursts above purchased capacity, smoothed over time. Optimize batch workloads to run during off-peak hours when burst capacity is available.
- Monitor CU consumption: Use the Fabric Capacity Metrics app (built-in) to track which workspaces, workloads, and users consume the most CUs. Identify and optimize expensive queries.
- Separate dev/test from production: Use lower-tier capacity (F2, F4) for development and testing. Reserve production capacity for production workloads.
- Pause unused capacity: Fabric capacity can be paused, which stops compute charges (OneLake storage is billed separately and continues while paused). Set up automation to pause dev/test capacity outside business hours — this alone can cut non-production costs by 60-70%.
- Compare to current costs: Build a TCO comparison: sum all current costs (Synapse, ADF, ADLS, Power BI Premium, SSAS, SSIS server licensing) and compare to projected Fabric capacity costs. Most organizations save 10-25% in steady state.
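Pause/resume automation runs through the Azure Resource Manager API for Fabric capacities. The helper below only builds the action URL; POST it with an Azure AD bearer token from your scheduler of choice. The resource provider path and the `api-version` value are assumptions — verify both against the current ARM reference for `Microsoft.Fabric/capacities` before relying on them:

```python
MGMT = "https://management.azure.com"
# ASSUMPTION: api-version for Microsoft.Fabric/capacities; confirm in the ARM docs.
API_VERSION = "2023-11-01"

def capacity_action_url(subscription_id: str, resource_group: str,
                        capacity_name: str, action: str) -> str:
    """Build the ARM URL to 'suspend' (pause) or 'resume' a Fabric capacity.
    POST this URL with an Authorization: Bearer <token> header."""
    assert action in ("suspend", "resume")
    return (f"{MGMT}/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Fabric/capacities/{capacity_name}"
            f"/{action}?api-version={API_VERSION}")
```

Wire the `suspend` call to an evening schedule and `resume` to a morning one for dev/test capacities.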
Security Architecture in Fabric
Fabric security operates at multiple layers, and getting it right requires deliberate design:
- Workspace security: Fabric workspaces use roles (Admin, Member, Contributor, Viewer). Map these to your organizational structure — typically one workspace per department or domain.
- Item-level security: Individual items (lakehouses, warehouses, semantic models, reports) can have granular permissions independent of workspace roles.
- Row-Level Security (RLS): Semantic models support RLS via DAX filters. Use RLS to restrict data visibility by department, region, or business unit.
- OneLake security: Data in OneLake inherits the security model of the workspace and item. Shortcuts inherit the security of the source system — ensure source permissions are appropriate.
- Purview integration: Fabric integrates with Microsoft Purview for sensitivity labeling, data classification, and lineage tracking. Enable this from day one.
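For reference, an RLS role in a tabular model's JSON definition pairs a model permission with a DAX filter expression per table. The structure below follows the .bim role schema as I understand it; the role name, filter, and security group are purely illustrative:

```python
# Hypothetical RLS role in tabular-model JSON (.bim) form. The DAX filter
# restricts the Sales table to the West region for members of one AD group.
rls_role = {
    "name": "WestRegionSales",
    "modelPermission": "read",
    "tablePermissions": [
        {"name": "Sales", "filterExpression": '[Region] = "West"'}
    ],
    "members": [
        {"memberName": "sec-group-west-sales@contoso.com"}
    ],
}
```

Assigning security groups (rather than individual users) as role members keeps the model definition stable as people join and leave teams.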
Phased Migration Approach
Do not attempt a big-bang Fabric migration. Phase the work by risk and value:
- Phase 1 (Weeks 1-4): Assessment and Architecture — Data estate mapping, readiness assessment, target architecture design, capacity sizing, FinOps baseline.
- Phase 2 (Weeks 5-8): Foundation — Fabric capacity provisioning, workspace structure, security model, Purview integration, CI/CD pipeline setup.
- Phase 3 (Weeks 9-16): Pilot Domain — Migrate one business domain end-to-end: data pipeline, lakehouse/warehouse, semantic model, reports. Validate with business users.
- Phase 4 (Weeks 17-28): Expand — Migrate remaining domains using patterns established in Phase 3. Parallel run with legacy systems.
- Phase 5 (Weeks 29-36): Optimize and Decommission — Performance tuning, FinOps optimization, user training, legacy system decommissioning.
Frequently Asked Questions
What legacy BI platforms does Microsoft Fabric replace?
Fabric consolidates capabilities that previously required multiple products: Azure Synapse Analytics (data warehousing and Spark), Azure Data Factory (ETL/ELT), Azure Data Lake Storage (data lake), Power BI Premium (analytics and reporting), and third-party tools for data quality and governance. For organizations on legacy stacks like IBM Cognos, SAP BusinessObjects, Oracle OBIEE, or Tableau Server + Snowflake, Fabric provides a unified alternative. However, 'replace' is a strong word — Fabric excels at the Microsoft-native end-to-end experience, but specific legacy tools may still be needed for edge cases like complex OLAP cubes or proprietary connectors.
Do we need to migrate all our data to OneLake?
No. Fabric supports OneLake Shortcuts, which create virtual pointers to data in existing locations — Azure Data Lake Storage, Amazon S3, Google Cloud Storage, or on-premises via gateway. Shortcuts let you query external data through Fabric without physically moving it. The recommended approach: migrate high-value, frequently-accessed data to OneLake for performance and governance benefits; use shortcuts for cold storage, compliance-constrained data, or data you don't own.
How do we handle existing Power BI reports during Fabric migration?
Existing Power BI reports and semantic models (formerly datasets) continue to work in Fabric without modification. The migration path is progressive: (1) Assign existing Power BI workspaces to Fabric capacity. (2) Existing reports work immediately. (3) Gradually migrate data sources from Azure SQL/Synapse to Fabric Lakehouse or Warehouse. (4) Update semantic model connections to point to Fabric data sources. (5) Rebuild only the reports that need Fabric-specific features (Direct Lake mode, OneLake integration). There is no forced cutover.
What does Fabric migration cost compared to keeping the legacy stack?
Fabric F64 capacity (the minimum for production workloads) starts at approximately $9,000/month. For a mid-size enterprise replacing Synapse + Data Factory + Power BI Premium P1, the Fabric equivalent typically costs 10-20% less in licensing when you account for eliminated Azure service costs. However, the migration itself costs $150,000-$500,000 depending on data volume, complexity, and custom code remediation. The TCO break-even point is usually 12-18 months post-migration. The non-financial benefit — a unified platform instead of five separate services — reduces operational complexity significantly.
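The break-even arithmetic is straightforward: one-time migration cost divided by monthly savings. The figures below are illustrative placeholders consistent with the ranges above, not a quote:

```python
def breakeven_months(current_monthly: float, fabric_monthly: float,
                     migration_cost: float) -> float:
    """Months until cumulative licensing savings cover the one-time
    migration cost. Returns infinity if Fabric is not cheaper monthly."""
    monthly_savings = current_monthly - fabric_monthly
    if monthly_savings <= 0:
        return float("inf")
    return migration_cost / monthly_savings

# Illustrative only: $19,000/mo legacy stack, $9,000/mo Fabric F64,
# $150,000 migration cost -> break-even at 15 months.
print(breakeven_months(19_000, 9_000, 150_000))  # prints 15.0
```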
What skills does our existing BI team need to learn for Fabric?
The good news: if your team knows SQL, Power BI, and basic Azure concepts, they have 60-70% of what they need. The gaps: (1) Lakehouse architecture — understanding medallion patterns (bronze/silver/gold), Parquet/Delta formats, and when to use Lakehouse vs. Warehouse. (2) PySpark or Spark SQL for notebook-based data engineering (not required for all roles, but important for at least 2-3 team members). (3) OneLake governance — shortcuts, security boundaries, and capacity management. (4) Dataflows Gen2 — the Power Query-based ETL engine that replaces Azure Data Factory for many use cases. Budget 4-8 weeks of structured training for the core team.
Plan Your Fabric Migration
EPC Group runs Fabric Readiness Assessments and end-to-end migrations for enterprise BI teams — from legacy stacks to production Fabric environments with governance, security, and FinOps built in. Call (888) 381-9725 or schedule an assessment.
Request a Fabric Readiness Assessment