
BYOAI in the Enterprise: Managing 67 Unsanctioned AI Tools

By Errin O'Connor | Published April 15, 2026 | Updated April 15, 2026

Your employees are already using AI. The question is not whether shadow AI exists in your organization — it does. The question is whether you have visibility into it, governance over it, and a strategy for turning chaos into a competitive advantage. The average enterprise has 67 unsanctioned AI tools in active use. Here is how to find them, assess them, and govern them.

The Scale of Shadow AI in 2026

In Q1 2026, EPC Group conducted shadow AI audits across 12 enterprise clients ranging from 3,000 to 45,000 employees. The findings were consistent and alarming:

  • Average of 67 unique AI tools detected per enterprise, up from 31 in our 2024 audits.
  • 23% of detected tools had received sensitive corporate data including source code, financial projections, customer lists, and internal strategy documents.
  • 41% of employees reported using at least one AI tool not provided or approved by IT.
  • 78% of those employees were unaware their usage violated any company policy — because no AI-specific policy existed.
  • On average, 12 of the 67 tools had terms of service allowing user inputs to be used for model training, meaning corporate data was feeding models that competitors can also query.

This is not a technology problem. It is a governance vacuum. Employees are using AI because it makes them more productive. They are using unsanctioned tools because the sanctioned alternatives are either nonexistent, too restrictive, or too slow to deploy. The solution is not to block AI — it is to govern it.

Shadow AI Inventory Methodology

The first step in any AI governance initiative is knowing what you are governing. Our five-layer discovery methodology identifies shadow AI tools that single-technique approaches miss.

Layer 1: Network Traffic Analysis

Deploy Microsoft Defender for Cloud Apps or your existing CASB to analyze outbound traffic against a curated list of 200+ known AI service domains. This catches browser-based AI usage (ChatGPT, Claude.ai, Gemini, Perplexity) and API traffic from developer tools. Update the domain list monthly — new AI services launch weekly.
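
If you need to approximate this outside a CASB, the same matching logic can run directly against proxy or firewall exports. A minimal Python sketch, assuming a CSV log with a dest_host column; the domain list here is illustrative, not the full 200+ entries:

```python
# Minimal sketch: match outbound proxy-log destinations against a
# curated AI-domain list. Domains and log format are illustrative.
import csv

AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "perplexity.ai", "api.openai.com",
}

def is_ai_destination(host: str) -> bool:
    """True if host matches an AI domain or any of its subdomains."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in AI_DOMAINS)

def scan_proxy_log(path: str) -> dict[str, int]:
    """Count requests per AI destination in a CSV proxy log with
    columns: timestamp,user,dest_host,bytes_out."""
    hits: dict[str, int] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if is_ai_destination(row["dest_host"]):
                hits[row["dest_host"]] = hits.get(row["dest_host"], 0) + 1
    return hits

if __name__ == "__main__":
    for host, count in sorted(scan_proxy_log("proxy.csv").items()):
        print(f"{host}: {count} requests")
```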

Layer 2: Endpoint Detection

Scan managed endpoints for installed AI applications, browser extensions, and IDE plugins. Development-focused AI tools that process source code include the GitHub Copilot and Tabnine extensions for VS Code, and Cursor, a standalone AI-first editor. Browser extensions for grammar (Grammarly), writing (Jasper), and general productivity often send page content to external AI services.
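
A sketch of the endpoint layer, scanning a user's VS Code extensions folder for known AI extension IDs. The ID list is an illustrative assumption to extend for your environment; in practice you would push this check through your endpoint management tooling:

```python
# Minimal sketch: flag AI-related VS Code extensions on an endpoint.
# Extension ID prefixes are illustrative; extend for your environment.
from pathlib import Path

AI_EXTENSION_PREFIXES = (
    "github.copilot",          # GitHub Copilot / Copilot Chat
    "tabnine.tabnine-vscode",  # Tabnine
    "continue.continue",       # Continue (open-source AI assistant)
)

def find_ai_extensions(ext_dir: Path = Path.home() / ".vscode" / "extensions"):
    """Yield installed extension folders matching known AI IDs.
    Folders are named <publisher>.<name>-<version>."""
    if not ext_dir.is_dir():
        return
    for entry in ext_dir.iterdir():
        name = entry.name.lower()
        if any(name.startswith(p) for p in AI_EXTENSION_PREFIXES):
            yield entry.name

if __name__ == "__main__":
    for ext in find_ai_extensions():
        print("AI extension found:", ext)
```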

Layer 3: OAuth Consent Audit

Review Entra ID application consent grants for AI-related OAuth applications. Employees who sign into AI services with their corporate Microsoft account create consent records. This layer catches tools that employees access with SSO, which network analysis might classify as "Microsoft authentication traffic" rather than AI usage.
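
A hedged sketch of the consent audit against Microsoft Graph, assuming an access token with directory read permission has already been acquired elsewhere; the keyword list is an assumption to tune for your tenant:

```python
# Minimal sketch: flag Entra ID service principals whose names suggest
# AI tools, then list the delegated permission grants users consented to.
# Assumes a Graph token with Directory.Read.All or similar.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
AI_KEYWORDS = ("chatgpt", "openai", "claude", "anthropic", "perplexity", "gemini")

def get_all(url: str, token: str) -> list[dict]:
    """Follow @odata.nextLink paging and return all items."""
    items, headers = [], {"Authorization": f"Bearer {token}"}
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        items.extend(data.get("value", []))
        url = data.get("@odata.nextLink")
    return items

def audit_ai_consents(token: str) -> None:
    sps = get_all(f"{GRAPH}/servicePrincipals?$select=id,displayName", token)
    for sp in sps:
        name = (sp.get("displayName") or "").lower()
        if any(k in name for k in AI_KEYWORDS):
            grants = get_all(
                f"{GRAPH}/servicePrincipals/{sp['id']}/oauth2PermissionGrants", token
            )
            for g in grants:
                print(sp["displayName"], "| scope:", g.get("scope"),
                      "| consentType:", g.get("consentType"))
```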

Layer 4: Purview Data Classification

Configure Purview DLP policies to detect sensitive information types in outbound web traffic to AI service domains. This layer does not just find the tools — it identifies which tools have received sensitive data, which is the actual risk you need to quantify.
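
Purview sensitive information types are configured in the compliance portal rather than in code, but a small standalone sketch illustrates the kind of pattern matching they perform. The two patterns below are simplified stand-ins, not Purview's actual definitions:

```python
# Illustrative only: the kind of pattern matching Purview's built-in
# sensitive information types perform, applied to outbound payload text.
# Real Purview DLP policies are configured in the portal, not in code.
import re

SENSITIVE_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card (simple)": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify(text: str) -> list[str]:
    """Return the names of sensitive patterns found in text."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]

print(classify("card 4111 1111 1111 1111 sent to chat"))  # ['Credit card (simple)']
```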

Layer 5: Employee Survey

Technical detection has blind spots: personal devices, mobile apps, home network usage. An anonymous survey asking employees which AI tools they use (with explicit amnesty for honest responses) typically reveals 15-20% more tools than technical methods alone. Frame it as research, not enforcement.

Risk Assessment Framework for Shadow AI Tools

Once inventoried, each tool needs a risk score. Our framework evaluates seven dimensions, each scored 1-5, producing a composite risk score of 7-35.

| Dimension | Score 1 (Low Risk) | Score 5 (High Risk) |
| --- | --- | --- |
| Data training policy | No input data used for training | All inputs used for model training |
| Compliance certifications | SOC 2 Type II, ISO 27001, HIPAA BAA | No certifications or audits |
| Enterprise agreement | Enterprise tier with DPA available | Consumer-only, no enterprise terms |
| Data residency | Configurable, compliant regions | Unknown or non-compliant processing |
| Access controls | SSO, SCIM, role-based access | Shared passwords, no SSO |
| Audit logging | Full audit trail, SIEM integration | No logging or visibility |
| Data sensitivity exposed | Public information only | PHI, PII, financial, trade secrets |

Tools scoring 7-14: Approve pathway. Tools scoring 15-24: Monitor pathway. Tools scoring 25-35: Block pathway.
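
A minimal sketch of this scoring model in code, using the seven dimensions from the table and the pathway thresholds just stated:

```python
# Minimal sketch of the seven-dimension risk model described above.
# Dimension names mirror the table; each is scored 1 (low) to 5 (high).
from dataclasses import dataclass

@dataclass
class AIToolRisk:
    data_training_policy: int
    compliance_certifications: int
    enterprise_agreement: int
    data_residency: int
    access_controls: int
    audit_logging: int
    data_sensitivity_exposed: int

    def composite(self) -> int:
        scores = vars(self).values()
        assert all(1 <= s <= 5 for s in scores), "each dimension is scored 1-5"
        return sum(scores)  # 7 (lowest risk) to 35 (highest)

    def pathway(self) -> str:
        total = self.composite()
        if total <= 14:
            return "Approve"
        if total <= 24:
            return "Monitor"
        return "Block"

tool = AIToolRisk(5, 4, 5, 3, 4, 4, 3)   # a hypothetical consumer AI app
print(tool.composite(), tool.pathway())  # 28 Block
```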

The Three Governance Tiers: Block, Monitor, Approve

Binary "block everything" or "allow everything" approaches both fail. The effective model has three tiers with clear criteria, remediation paths, and review cycles. Our Virtual Chief AI Officer service manages this tiering on an ongoing basis.

Tier 1: Block

Tools with unacceptable risk profiles are blocked at both the network and endpoint levels. Criteria: data used for training with no opt-out, no enterprise agreement available, no compliance certifications, and/or terms of service that claim ownership of outputs. Currently blocked in most of our client environments: consumer-tier ChatGPT (not the Enterprise tier), unvetted browser AI extensions, AI tools operated from sanctioned jurisdictions, and any tool scoring 25+ in the risk assessment.

Tier 2: Monitor

Tools with moderate risk profiles that have an enterprise upgrade path. These are allowed but monitored: traffic is logged, sensitive data detection is active, and usage is reported monthly. Users see a notification that their usage is monitored and are directed toward approved alternatives. The goal is to either upgrade these tools to Tier 3 (Approve) or migrate users to sanctioned alternatives within 90 days.

Tier 3: Approve

Tools that pass security review, have enterprise agreements, provide adequate compliance certifications, and offer the controls the organization requires. These are provisioned through IT with SSO integration, included in the AI acceptable use policy, and supported by internal training. A typical approved list currently includes Microsoft 365 Copilot, ChatGPT Enterprise, Claude for Enterprise, GitHub Copilot Business, and Perplexity Enterprise.

AI Acceptable Use Policy: The 10 Non-Negotiable Clauses

Every enterprise needs an AI-specific acceptable use policy. Generic IT policies do not cover AI-specific risks. These 10 clauses form the foundation of every policy we draft:

  1. Approved tool list. Named list of sanctioned AI tools with permitted use cases for each.
  2. Prohibited data types. Explicit list of data categories that may never be input to any AI tool: PII, PHI, financial records, trade secrets, source code (unless in approved coding tools), legal documents under privilege.
  3. Redaction requirements. Before using any AI tool with business data, employees must redact or anonymize sensitive information. Provide specific redaction procedures and tools.
  4. Output verification mandate. AI-generated content must be reviewed by a qualified human before use in any business context. No AI output is "approved by default."
  5. Intellectual property guidelines. Clarify ownership of AI-generated content, restrictions on using AI for patent-related work, and copyright considerations.
  6. Incident reporting procedures. If an employee accidentally pastes sensitive data into an unapproved AI tool, there must be a clear reporting process — without punitive consequences for honest reporting.
  7. Customer-facing AI disclosure. Rules for when and how to disclose AI usage to customers, especially in regulated industries.
  8. Personal AI usage boundaries. Clarify whether personal AI subscriptions may be used for work tasks on personal devices.
  9. AI procurement process. Any new AI tool acquisition must go through security and compliance review before procurement. No exceptions for "free tier" tools.
  10. Quarterly policy review. The AI landscape changes too fast for annual policy reviews. Commit to quarterly updates with stakeholder input.

Incident Response for AI Data Leakage

When sensitive data is exposed to an unsanctioned AI tool, you need an incident response playbook that addresses AI-specific considerations. Our AI Readiness Assessment includes an incident response readiness evaluation.

  1. Identify and contain. Determine which AI tool received data, what data was sent, and who sent it. Block further access to the tool. If the tool has an API, check whether the data can be deleted via API request (see the sketch after this list).
  2. Assess regulatory impact. If PHI was sent to a non-BAA AI tool, it may constitute a HIPAA breach requiring notification. If EU personal data was sent to a non-GDPR-compliant service, notification to the relevant supervisory data protection authority may be required. Engage legal immediately.
  3. Contact the AI vendor. Request data deletion confirmation. Request confirmation that the data was not used for model training. Document the vendor's response for your compliance record.
  4. Assess training data risk. If the AI tool uses input data for training (check the ToS), the sensitive data may persist in the model indefinitely. This significantly elevates the incident severity and may require disclosure.
  5. Remediate and update controls. Block the tool (if not already blocked), update DLP policies to detect the specific data pattern, communicate the incident (anonymized) to the organization as a learning opportunity, and update the acceptable use policy if gaps exist.
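
Vendor deletion APIs vary widely, and many tools offer none. The sketch below is hypothetical: the route, auth scheme, and payload are invented solely to show what the deletion request in step 1, and the evidence capture that feeds step 3, might look like for a vendor that documents such an API:

```python
# Hypothetical sketch for incident step 1: if the vendor exposes a
# data-deletion endpoint, request deletion and keep the response as
# evidence. The URL, route, and auth scheme here are invented;
# consult the actual vendor's API documentation.
import datetime
import json
import requests

def request_deletion(base_url: str, api_key: str, conversation_id: str) -> dict:
    resp = requests.delete(
        f"{base_url}/v1/conversations/{conversation_id}",  # hypothetical route
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    evidence = {
        "requested_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "status_code": resp.status_code,
        "body": resp.text,
    }
    # Preserve the vendor's response for the compliance record (step 3).
    with open(f"deletion-evidence-{conversation_id}.json", "w") as f:
        json.dump(evidence, f, indent=2)
    return evidence
```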

Replacing Shadow AI with Sanctioned Alternatives

Blocking shadow AI without providing alternatives guarantees failure. For every blocked tool, offer a sanctioned alternative that serves the same need:

  • Employees using consumer ChatGPT → Deploy Microsoft 365 Copilot or ChatGPT Enterprise
  • Developers using free Copilot extensions → Deploy GitHub Copilot Business with organizational policy controls
  • Marketing teams using Jasper/Copy.ai → Provide Copilot with marketing-specific prompts and templates
  • Research teams using consumer Perplexity → Deploy Perplexity Enterprise with centralized billing
  • Legal teams using Claude free tier → Deploy Claude Enterprise with BAA and data retention controls

The sanctioned alternative must be at least as capable, at least as easy to access, and available within 2 weeks of blocking the shadow tool. Anything less drives employees back underground.

Frequently Asked Questions

What is BYOAI and why is it a risk for enterprises?

BYOAI (Bring Your Own AI) is the enterprise phenomenon where employees adopt AI tools — ChatGPT, Claude, Gemini, Perplexity, Midjourney, and dozens of specialized AI apps — without IT knowledge, security review, or governance oversight. It is shadow IT accelerated: while a shadow SaaS tool might store a few documents, a shadow AI tool receives the full context of whatever the employee pastes into it. Our audits find an average of 67 unsanctioned AI tools per enterprise, with 23% of those having received sensitive corporate data.

How do you discover which AI tools employees are using?

Discovery requires a multi-layer approach: (1) network traffic analysis via Defender for Cloud Apps or your CASB to detect traffic to known AI service domains, (2) endpoint monitoring for installed AI desktop apps and browser extensions, (3) Purview data classification scanning for sensitive data appearing in outbound web traffic to AI services, (4) OAuth consent audit in Entra ID for AI apps employees have authorized, and (5) anonymous employee survey to surface tools that evade technical detection. The combination typically reveals 3-5x more tools than IT is aware of.

Should enterprises block all unsanctioned AI tools?

No. Blanket blocking drives usage underground — employees use personal devices, mobile hotspots, and consumer accounts to access the same tools without any visibility. The effective strategy is governance tiers: Block tools with unacceptable risk profiles (no enterprise agreements, data used for training, no SOC 2), Monitor tools with moderate risk (enterprise tier available but not yet procured), and Approve tools that meet security, compliance, and data handling requirements. This reduces shadow usage by 70-80% in our experience.

What should an AI acceptable use policy cover?

An effective AI acceptable use policy covers: (1) approved AI tools and their permitted use cases, (2) prohibited data types for AI input (PII, PHI, financial data, trade secrets, source code), (3) required redaction procedures before using AI tools, (4) output verification requirements (employees cannot use AI output without review), (5) intellectual property guidelines for AI-generated content, (6) incident reporting procedures for accidental data exposure, and (7) consequences for policy violations. The policy should be reviewed quarterly as the AI landscape evolves.

How does Microsoft Purview help detect shadow AI data leakage?

Purview provides three capabilities for shadow AI detection: (1) Insider Risk Management detects patterns of sensitive data being copied to web browsers or AI applications, (2) Data Loss Prevention policies can be configured to block or warn when sensitive information types (SSNs, credit cards, health records) are pasted into known AI service URLs, and (3) the new AI Hub provides centralized visibility into Copilot interactions and can be extended with custom connectors for third-party AI monitoring. Combined, these create a detection mesh that catches most data leakage vectors.

Get Your Shadow AI Under Control

EPC Group runs Shadow AI Discovery Audits in 2-3 weeks. We identify every AI tool in your environment, assess risk, build governance tiers, draft your acceptable use policy, and deploy sanctioned alternatives. Call (888) 381-9725 or schedule below.

Schedule a Shadow AI Discovery Audit