Shadow AI: The Hidden Risk of Unapproved AI Tools—and How to Detect Them

by Martin Škoda Posted on May 06, 2026

As beneficial as that shiny new artificial intelligence tool might look, it’s always best to get buy-in from your organization’s decision makers first. When employees bypass their organization’s established policies and controls to adopt new tools, they introduce an emerging security risk: Shadow AI.

Shadow AI isn’t a new type of software that cybercriminals have started developing. It actually refers to the use of unsanctioned AI tools—chatbots, browser extensions, cloud services or AI-powered applications—by employees for business purposes without oversight.

Some of these practices include:

  • Copying and pasting internal data into public AI chatbots
  • Using AI browser plugins that connect to unknown cloud services
  • Installing desktop AI assistants that silently upload data
  • Integrating unapproved AI APIs into internal workflows

While the intent is often to work more efficiently, the consequences can be serious.

Why It’s a Problem

Shadow AI introduces multiple risks:

  • Data leaks: Sensitive information—such as source code, customer data or internal documents—can be inadvertently shared with third-party AI services.
  • Compliance violations: Uploading regulated data to unvetted platforms may breach GDPR, NIS2 or internal governance policies.
  • Malware and supply chain threats: Some AI tools, especially browser extensions, may contain malicious code or request excessive permissions.
  • Loss of visibility: IT teams can’t protect what they can’t see. Shadow AI creates blind spots in network traffic and security posture.

Real-World Example: Samsung’s ChatGPT Incident

In April 2023, Samsung experienced three separate data leaks within 20 days after allowing employees to use ChatGPT. Engineers uploaded proprietary source code and internal meeting transcripts to the chatbot, unintentionally exposing sensitive information. The company responded by banning generative AI tools across its workforce—highlighting how quickly Shadow AI can become a liability.

A broader real-world example comes from the healthcare sector between 2024 and 2025:

  • Incidents: Hospitals and health systems globally have reported instances of well-meaning clinicians feeding patient information into AI assistants (e.g. to summarize medical notes or draft letters). For example, in early 2025 some healthcare organizations discovered employees inputting patient records into ChatGPT, not realizing this contravened privacy laws.
  • Risk: Regulatory and privacy breaches – Public AI tools are not HIPAA-compliant, and any Protected Health Information (PHI) uploaded could violate patient confidentiality and data protection laws.
  • Response: Multiple providers responded by issuing bans or strict guidance on using external AI with patient data. Organizations ramped up training to ensure staff understood that uploading PHI to unsanctioned apps is prohibited.

How Flowmon ADS Helps

The Progress Flowmon Anomaly Detection System (ADS) provides a powerful way to detect Shadow AI activity at the network level—without relying on endpoint agents or user behavior monitoring.

Here's how:

  • Network-level visibility: The Flowmon solution monitors all traffic across your network, including encrypted connections. While it cannot inspect encrypted content, it analyzes metadata and behavioral patterns to detect unusual or unauthorized communications. This is essential for identifying traffic to AI services that may not be officially approved.
  • Behavioral detection: Using machine learning and heuristics, Flowmon identifies deviations from normal behavior—such as new domains, unexpected data transfers or connections to known AI endpoints.
  • Custom blacklists and threat intelligence: Security teams can define and update lists of disallowed AI services. The Flowmon solution alerts you when these are accessed, providing full context: who connected, when and how much data was transferred.
  • Forensic insight: The Flowmon solution stores historical flow data, enabling retrospective analysis of Shadow AI usage. This supports incident response, user education and compliance reporting.
  • Automated response: The Flowmon solution can integrate with firewalls or SIEM systems to trigger real-time actions—such as blocking suspicious destinations or opening incident tickets.
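The blocklist-with-context approach described above can be sketched in a few lines. This is an illustrative example only, not Flowmon’s actual API or detection logic: it assumes simplified flow records (user, destination domain, bytes sent, timestamp) and a hand-maintained set of disallowed AI domains, which are all hypothetical names chosen for the sketch.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical flow record; real flow exporters (NetFlow/IPFIX) carry
# similar metadata: who connected, where, when and how much data moved.
@dataclass
class FlowRecord:
    user: str
    dest_domain: str
    bytes_sent: int
    timestamp: datetime

# Illustrative, hand-maintained list of unsanctioned AI services.
AI_BLOCKLIST = {"chat.openai.com", "api.openai.com", "claude.ai"}

def detect_shadow_ai(flows, blocklist=AI_BLOCKLIST, min_bytes=0):
    """Flag flows to blocklisted AI endpoints, keeping the context an
    analyst needs: who connected, when and how much data was sent."""
    alerts = []
    for f in flows:
        if f.dest_domain in blocklist and f.bytes_sent >= min_bytes:
            alerts.append({
                "user": f.user,
                "domain": f.dest_domain,
                "bytes_sent": f.bytes_sent,
                "time": f.timestamp.isoformat(),
            })
    return alerts
```

In practice a network detection system enriches this with behavioral baselining (new domains, unusual transfer volumes) rather than relying on a static list alone, but the metadata-matching core is the same idea.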

Balancing Innovation and Security

AI tools are here to stay—and banning them outright isn’t a sustainable strategy. Instead, organizations need to strike a balance: enabling innovation while maintaining control. That starts with clear policies, enforced through visibility.

Flowmon ADS gives you that visibility. It acts as a radar for your network, helping you detect Shadow AI before it becomes a security incident. With Flowmon network detection and response capabilities, you can embrace AI confidently—knowing your data, users and compliance posture are protected.

