Connecting AI to Power BI: 5 Approaches Beyond Microsoft Copilot
By Errin O'Connor — April 2026
Microsoft Copilot for Power BI is the obvious starting point for AI-powered analytics — natural language questions, auto-generated visuals, and narrative summaries built into the Power BI interface. But Copilot is one model, one vendor, one set of capabilities. Enterprise organizations need a broader AI toolkit: large language models for narrative generation, custom ML models for domain-specific predictions, computer vision for document extraction, and multi-model orchestration that routes the right task to the right AI. This guide covers five production-proven approaches to connecting AI with Power BI that go beyond what Copilot offers today.
Approach 1: Azure OpenAI Service — LLM-Powered Annotations and Anomaly Explanation
Azure OpenAI gives you GPT-4o, GPT-4.1, and other frontier models within your Azure tenant, with enterprise security controls, private endpoints, and data residency guarantees. The primary use case for Power BI integration is automated insight generation — having the LLM explain why a KPI changed, what drove an anomaly, or what the data pattern suggests.
Architecture pattern: A Fabric notebook or Azure Function runs on a schedule (hourly, daily, or triggered by a data refresh). It queries the Power BI semantic model via XMLA endpoint or reads from the Lakehouse, constructs a prompt with the relevant data context (e.g., “Revenue dropped 12% week-over-week. Here are the top 10 contributing factors by dimension...”), sends it to Azure OpenAI, and writes the LLM's narrative response to a table. The Power BI report displays this narrative in a text card or custom visual that updates with each data refresh.
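The scheduled step above could be sketched as follows. The endpoint, deployment name, and prompt format are illustrative assumptions, not a prescribed implementation; the SDK is imported lazily inside the function so the prompt builder can be exercised on its own.

```python
# Sketch of the annotation step: build a data-context prompt, then ask
# Azure OpenAI to explain the KPI change. Resource names are placeholders.

def build_prompt(kpi: str, change_pct: float, drivers: list[dict]) -> str:
    """Assemble the data context the LLM needs to explain a KPI change."""
    factor_lines = "\n".join(
        f"- {d['dimension']} = {d['member']}: {d['contribution_pct']:+.1f} pts"
        for d in drivers
    )
    return (
        f"{kpi} changed {change_pct:+.1f}% week-over-week.\n"
        f"Top contributing factors by dimension:\n{factor_lines}\n"
        "Explain the likely drivers in three sentences for a business audience."
    )

def generate_annotation(prompt: str, deployment: str = "gpt-4o") -> str:
    """Call Azure OpenAI; the SDK is imported lazily so build_prompt()
    can be unit-tested without the openai package installed."""
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
        api_key="YOUR-KEY",           # prefer Managed Identity in production
        api_version="2024-06-01",
    )
    resp = client.chat.completions.create(
        model=deployment,  # your Azure OpenAI deployment name
        messages=[
            {"role": "system",
             "content": "You write concise dashboard annotations for executives."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.2,
    )
    return resp.choices[0].message.content
```

The narrative returned by `generate_annotation` is what gets written to the table the report's text card reads from.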
When to use: Real-time or near-real-time annotations inside dashboards. Anomaly explanation. Automated data storytelling for operational dashboards where users need context, not just numbers.
Limitations: Azure OpenAI context windows (128K tokens for GPT-4o) may not be sufficient for very large datasets without summarization. Cost at high inference volume can be significant — EPC Group recommends caching responses and only re-generating when the underlying data changes meaningfully.
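One way to implement the "only re-generate when the data changes meaningfully" recommendation is to key the cached narrative on a hash of the *rounded* data context, so immaterial fluctuations reuse the existing response. A minimal sketch (the rounding granularity is a tunable assumption):

```python
import hashlib
import json

def narrative_cache_key(kpi_rows: list[dict], round_to: int = 1) -> str:
    """Hash the data context after rounding floats, so tiny fluctuations
    (e.g. -12.04% vs -12.01%) map to the same key and reuse the cached
    LLM narrative instead of paying for a fresh inference call."""
    rounded = [
        {k: (round(v, round_to) if isinstance(v, float) else v)
         for k, v in row.items()}
        for row in kpi_rows
    ]
    payload = json.dumps(rounded, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()
```

Before calling Azure OpenAI, look the key up in your cache table; only a changed key triggers a new generation.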
Approach 2: Anthropic Claude via API — Long-Context Narrative Generation
Where Azure OpenAI excels at short-form, structured outputs, Anthropic's Claude models shine in long-context analysis and narrative writing. Claude can process up to 200K+ tokens in a single prompt — meaning you can feed it an entire quarter's worth of KPI data across every business unit, region, and product line and get back a coherent executive summary that rivals what a senior analyst would write.
Architecture pattern: A Python script (running as an Azure Function, Fabric notebook, or scheduled job) extracts data from the Power BI semantic model, formats it as structured CSV or JSON, sends it to the Claude API with a detailed system prompt defining the report format and tone, and writes the resulting narrative to Azure Blob Storage or a database table. Power BI displays the narrative, or it's delivered as a PDF attachment via Power Automate.
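A condensed sketch of that pipeline: the extracted rows are serialized as CSV text and sent in a single prompt, leaning on Claude's long context window. The model name and system prompt are illustrative; check the current model lineup for your deployment.

```python
import csv
import io

def kpis_to_csv(rows: list[dict]) -> str:
    """Serialize extracted semantic-model rows as CSV text for the prompt;
    the long context window lets a full quarter go in un-chunked."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def generate_qbr(csv_text: str) -> str:
    """Call the Claude API; the SDK is imported lazily so kpis_to_csv()
    can be tested without the anthropic package installed."""
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-5",   # illustrative; pick the current model
        max_tokens=4000,
        system=("You are a senior analyst. Write a quarterly business review: "
                "executive summary, KPI highlights, risks, recommendations."),
        messages=[{"role": "user", "content": f"Quarterly KPI data:\n{csv_text}"}],
    )
    return msg.content[0].text
```

The returned narrative is then written to Blob Storage or a database table, and Power Automate handles PDF delivery.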
Use cases EPC Group has deployed:
- Quarterly Business Reviews (QBRs) — Claude generates 10–15 page narrative reports summarizing financial performance, operational KPIs, and strategic recommendations from Power BI data. Delivered to the board as polished PDFs.
- Healthcare compliance narratives — for AI governance in healthcare, Claude summarizes patient outcome dashboards into narratives that compliance officers review, with citations back to the source data.
- Multi-property hotel performance summaries — weekly narratives for hotel GMs comparing their property to portfolio averages, generated from the same Power BI semantic model that feeds the interactive dashboards.
Governance consideration: Claude is accessed via API — data leaves your Azure tenant and goes to Anthropic's infrastructure. For HIPAA or FedRAMP workloads, EPC Group sanitizes PII before sending data to the Claude API and keeps all identifiable data within Azure. For non-regulated workloads, Anthropic's data retention policies (no training on API data) provide adequate protection.
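The sanitization step might look like the sketch below. These three regex patterns are a starting point, not a complete PII catalog; regulated deployments typically layer a dedicated service (e.g. Microsoft Purview or Presidio) on top of rules like these.

```python
import re

# Replace identifiable values with typed placeholders BEFORE any data
# leaves the Azure tenant for an external API. Patterns are illustrative.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Swap each PII match for a typed placeholder so the narrative
    structure survives while identifiers never reach the external API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```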
Approach 3: Python Visuals with Embedded ML Models
Power BI supports Python and R visuals that execute scripts when the visual renders. This opens the door to embedding machine learning models directly in the report — anomaly detection, clustering, forecasting, or classification displayed as matplotlib, seaborn, or plotly charts.
Architecture pattern: A Python visual receives a filtered subset of data from the Power BI semantic model (whatever is in scope based on slicers and filters). The script loads a pre-trained model (pickled scikit-learn model, ONNX model, or a simple statistical algorithm), runs inference on the data, and renders the result as a chart. The visual updates dynamically as users interact with filters.
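As a self-contained stand-in for that pattern, the sketch below uses a z-score rule in place of a pickled model (so no model file is assumed); the commented lines show where a real visual would plug in the `dataset` DataFrame that the Power BI service passes to Python visuals.

```python
import statistics

def flag_outliers(values: list[float], z_threshold: float = 3.0) -> list[bool]:
    """Per-point anomaly flag; stands in for the pickled model's
    predict() call described in the architecture pattern above."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return [False] * len(values)
    return [abs(v - mean) / sd > z_threshold for v in values]

# Inside a Power BI Python visual, the filtered rows arrive as a pandas
# DataFrame named `dataset`; a real visual would do roughly:
#   flags = flag_outliers(dataset["amount"].tolist())
#   colors = ["red" if f else "steelblue" for f in flags]
#   plt.scatter(dataset["order_date"], dataset["amount"], c=colors)
#   plt.show()
```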
Production examples:
- Anomaly detection visual — Isolation Forest model flags outlier transactions in financial dashboards. Red dots on a scatter plot indicate anomalies, with tooltips showing the anomaly score.
- Customer segmentation — K-means clustering visual that segments customers by purchase behavior, displayed as a 2D scatter plot with PCA-reduced dimensions. Clicking a cluster filters the rest of the dashboard to that segment.
- Time-series decomposition — statsmodels seasonal decomposition showing trend, seasonal, and residual components of a KPI, helping analysts understand whether a change is structural or cyclical.
Limitations: Python visuals in the Power BI service run in a sandboxed environment with no network access, a 5-minute timeout, and limited library availability. They re-execute on every interaction, which can slow reports. For production, EPC Group uses Python visuals for display and runs the actual ML inference in a Fabric notebook or Azure ML pipeline that writes results to a table — the visual just reads the pre-computed output.
Approach 4: Microsoft Fabric Data Science Notebooks
Microsoft Fabric unifies data engineering, data science, and BI in a single platform. Fabric Data Science notebooks (Spark-based, supporting PySpark, Python, R, and Scala) can train and deploy ML models that write predictions directly to the Lakehouse — and Power BI reads from the same Lakehouse via DirectLake mode.
Architecture pattern: A Fabric notebook reads training data from the Lakehouse, trains a model (using MLflow for experiment tracking), registers the model in the Fabric ML model registry, and schedules a scoring pipeline that writes predictions to a Lakehouse Delta table. The Power BI semantic model includes this table via DirectLake, so predictions appear in dashboards with near-zero latency and no data duplication.
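A scoring step in that pipeline might look like the sketch below (table and model names are illustrative; the churn example from the use cases is assumed). MLflow is available by default in Fabric Data Science notebooks.

```python
def model_uri(name: str, version: int) -> str:
    """MLflow registry URI for a registered Fabric ML model."""
    return f"models:/{name}/{version}"

def score_to_lakehouse(spark, src_table: str, dst_table: str, uri: str) -> None:
    """Nightly scoring: read features from the Lakehouse, score with the
    registered model, and write predictions back as a Delta table that the
    semantic model reads via DirectLake. Names are illustrative."""
    import mlflow  # preinstalled in Fabric Data Science notebooks

    model = mlflow.pyfunc.load_model(uri)
    features = spark.read.table(src_table).toPandas()
    features["churn_probability"] = model.predict(features)
    (spark.createDataFrame(features)
          .write.mode("overwrite")
          .saveAsTable(dst_table))  # Delta format by default in Fabric
```

Because the destination is a Lakehouse Delta table, no refresh of an imported dataset is needed: the next DirectLake query picks up the new scores.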
Use cases:
- Demand forecasting — LightGBM or Prophet model trained on historical sales data, external signals (weather, events, economic indicators), and calendar features. Predictions written to Lakehouse and displayed alongside actuals in the Power BI sales dashboard.
- Customer churn prediction — classification model scoring active customers daily. Churn probability appears as a column in the customer dimension, enabling Power BI slicers like “show me high-risk customers with > $100K annual spend.”
- Supply chain risk scoring — model that scores suppliers based on lead time variability, quality defect rates, financial health indicators, and geopolitical risk factors. Risk scores surface in Power BI procurement dashboards.
Why this is EPC Group's preferred approach: Fabric notebooks keep everything within the Microsoft security boundary, use the same capacity as Power BI (no separate Azure ML billing), support MLflow for model versioning and governance, and write directly to Lakehouse tables that Power BI reads via DirectLake — the tightest integration available today.
Approach 5: Custom AI Visuals (Power BI Custom Visuals SDK)
The Power BI Custom Visuals SDK allows developers to build TypeScript/D3.js visuals that can call external APIs — including AI endpoints. This enables embedding AI directly into the visual interaction layer, where users interact with AI outputs as native Power BI elements.
Architecture pattern: A custom visual built with the pbiviz SDK receives data from the semantic model, calls an Azure Function (which proxies to Azure OpenAI, Claude, or a custom ML endpoint), and renders the AI response alongside the data visualization. The Azure Function handles authentication, rate limiting, and response caching.
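The caching half of that Azure Function proxy can be reduced to a TTL cache keyed on endpoint plus request payload, so repeated visual renders with the same filter context never re-hit the AI endpoint. A minimal sketch (HTTP-trigger wiring and auth omitted; class and names are illustrative):

```python
import hashlib
import json
import time

class AIProxyCache:
    """TTL cache used inside the Azure Function proxy so repeated visual
    renders don't re-invoke the AI endpoint. Framework wiring omitted."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, str]] = {}

    def _key(self, endpoint: str, payload: dict) -> str:
        raw = endpoint + json.dumps(payload, sort_keys=True)
        return hashlib.sha256(raw.encode()).hexdigest()

    def get_or_call(self, endpoint: str, payload: dict, call_fn) -> str:
        key = self._key(endpoint, payload)
        hit = self._store.get(key)
        if hit and time.monotonic() - hit[0] < self.ttl:
            return hit[1]                    # cache hit: no AI call made
        result = call_fn(endpoint, payload)  # cache miss: proxy the request
        self._store[key] = (time.monotonic(), result)
        return result
```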
Examples EPC Group has built:
- AI-annotated chart visual — a line chart that automatically displays LLM-generated annotations at significant inflection points. Hover over a data point and see “Revenue increased 23% due to Black Friday promotions and new product launch in the Southeast region.”
- Natural language query panel — a custom visual that provides a chat-like interface within the Power BI report, routing questions to Azure OpenAI with the current filter context as grounding data. More flexible than the built-in Copilot Q&A because it can target custom models and include business-specific instructions.
- Predictive tooltip visual — hover over a customer or product and see a predictive overlay: “This customer has a 73% probability of churning in the next 90 days based on declining order frequency and support ticket volume.”
Limitations: Custom visuals require TypeScript development expertise and must pass organizational governance review before deployment. They also need certification if distributed via AppSource. For most organizations, EPC Group recommends approaches 1–4 first and reserves custom visuals for high-value use cases that justify the development investment.
Building a Multi-Model AI Strategy for Power BI
The common mistake is treating AI integration as a single-model problem — deploying Copilot and calling it done. Enterprise organizations benefit from a multi-model approach where different AI capabilities serve different analytics needs:
| Use Case | Recommended Model/Approach | Integration Point |
|---|---|---|
| Self-service Q&A | Microsoft Copilot for Power BI | Native in Power BI |
| Anomaly explanation | Azure OpenAI (GPT-4o) | Fabric notebook or Azure Function |
| Executive narratives | Anthropic Claude | Scheduled Python job via API |
| Demand forecasting | LightGBM / Prophet in Fabric | Fabric Data Science notebook |
| Customer segmentation | scikit-learn in Fabric | Fabric Data Science notebook |
| Document extraction | Azure AI Document Intelligence | Azure Function pipeline |
| In-visual AI interaction | Custom Power BI visual + Azure OpenAI | Custom Visual SDK |
The orchestration layer — deciding which model handles which request — can be as simple as purpose-built pipelines (each use case has its own pipeline) or as sophisticated as a semantic router that classifies the intent and routes to the appropriate model. EPC Group starts with purpose-built pipelines and adds orchestration only when the number of AI touchpoints justifies the complexity.
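At its simplest, the routing decision is a classifier over the request text. The sketch below uses keyword matching as a stand-in for an LLM- or embedding-based semantic router; the pipeline names are illustrative and map loosely to the table above.

```python
# Keyword routing as a stand-in for a semantic router: each entry maps an
# intent to (pipeline name, trigger keywords). Names are illustrative.
ROUTES = {
    "forecast":  ("demand_forecast_pipeline",  ("forecast", "predict", "demand")),
    "narrative": ("claude_narrative_pipeline", ("summary", "narrative", "qbr")),
    "anomaly":   ("openai_annotation_pipeline", ("anomaly", "spike", "drop", "why")),
}

def route(request_text: str, default: str = "copilot_qna") -> str:
    """Return the pipeline that should handle the request; anything
    unmatched falls through to the self-service Q&A layer."""
    text = request_text.lower()
    for _, (pipeline, keywords) in ROUTES.items():
        if any(k in text for k in keywords):
            return pipeline
    return default
```

Swapping the keyword check for an intent-classification call is what turns this from a purpose-built dispatcher into a true semantic router.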
Frequently Asked Questions
Can I use Azure OpenAI directly inside Power BI reports?
Not as a native visual — but you can call Azure OpenAI endpoints from Power BI in three ways: (1) Python or R visuals that invoke the Azure OpenAI SDK at render time (Power BI Desktop only — the service sandbox blocks network access from script visuals), (2) Power Automate flows triggered by Power BI data alerts that send prompts to Azure OpenAI and write results back to a Dataverse or SQL table displayed in the report, or (3) Fabric Data Science notebooks that score data and store AI-generated outputs in a Lakehouse table consumed by the semantic model. EPC Group recommends option 3 for production workloads because it decouples the AI inference from report rendering, avoids timeout issues, and allows you to cache and audit AI outputs.
How is Anthropic Claude different from Azure OpenAI for BI narratives?
Claude excels at long-context analysis — it can process 200K+ tokens of tabular data in a single prompt, making it ideal for generating executive narratives that summarize an entire quarter's performance across dozens of KPIs without chunking. Azure OpenAI (GPT-4o) has strong instruction-following and function-calling capabilities that work well for structured extraction and short-form insights. EPC Group uses Claude for quarterly business reviews and board-ready summaries, and Azure OpenAI for real-time, shorter-form annotations within dashboards.
Are Python visuals in Power BI suitable for production ML models?
Python visuals are useful for prototyping and displaying ML outputs but have significant limitations in production: they run in a sandboxed Python environment on the Power BI service, have a 5-minute timeout, cannot access the network (no API calls), and re-execute on every interaction. For production ML, EPC Group deploys models as Azure ML managed endpoints or Fabric ML models and writes predictions to a table that Power BI consumes via Import or DirectQuery — the visual displays results rather than running inference.
What is the best multi-model AI strategy for enterprise Power BI?
EPC Group recommends a tiered approach: Microsoft Copilot for Power BI as the self-service Q&A layer (natural language to visual), Azure OpenAI for structured in-dashboard annotations and anomaly explanations, Claude for long-form narrative generation and executive summaries, and custom ML models (scikit-learn, LightGBM, or PyTorch) deployed as Fabric ML models or Azure ML endpoints for domain-specific predictions like demand forecasting or churn scoring. Each model serves a different use case — the key is routing the right task to the right model.
How do you ensure AI governance when connecting multiple AI models to Power BI?
EPC Group implements AI governance at three layers: (1) Data governance — semantic model access controls, RLS, and sensitivity labels ensure AI models only access authorized data. (2) Model governance — all AI endpoints are registered in a model catalog with version tracking, input/output logging, and performance monitoring. (3) Output governance — AI-generated content in dashboards is labeled as AI-generated, includes confidence scores where applicable, and is subject to human review workflows for high-stakes decisions. This framework aligns with the NIST AI Risk Management Framework and EU AI Act requirements.
Connect AI to Your Power BI Environment
EPC Group designs and deploys multi-model AI integrations for Power BI — from Azure OpenAI annotations and Claude narrative generation to Fabric ML models and custom AI visuals. We help enterprise organizations move beyond basic Copilot into production AI that drives real business decisions. Call (888) 381-9725 or request a consultation to discuss your AI + BI architecture.
Request an AI + BI Architecture Consultation