In 2023, shadow IT meant a developer spinning up an unauthorized cloud instance or a sales team using a personal Dropbox for customer files. Security teams had frameworks for finding it, policies for managing it, and tools for bringing it under control.
Shadow AI in 2026 is a different problem at a different scale.
A new Darktrace study finds that 76% of organizations now cite shadow AI as a definite or probable problem — a 15-point jump from 61% in 2025. More strikingly, 1 in 8 companies reports AI breaches now linked to agentic systems: AI agents that don’t just answer questions or generate content, but autonomously take actions across enterprise systems, connected devices, and operational infrastructure.
An AI agent that books meeting rooms through your building management system, adjusts HVAC schedules based on occupancy data, monitors physical access logs, and escalates anomalies to your security team is performing functions that, a year ago, required a human at every step. That same AI agent, if compromised, manipulated through prompt injection, or simply misconfigured, has the same access — and no human reviewing its decisions in real time.
What Shadow AI Actually Looks Like in an Enterprise
The term “shadow AI” encompasses a range of deployments that share one characteristic: they’re operating without security oversight, governance controls, or organizational awareness.
Line of business AI tools deployed without IT involvement. A procurement team integrating an AI negotiation assistant with their ERP system. A facilities team using an AI-powered energy optimization platform connected to building controls. An HR department deploying an AI screening tool with access to HRIS data. Each of these represents an AI system with access to enterprise data and systems that security teams likely don’t know exists.
AI agents operating on IoT and smart building infrastructure. This is the category with the most direct relevance to enterprise IoT security. AI platforms that interface with building management systems, access control, smart lighting, HVAC, and security cameras are being deployed by facilities teams, smart building vendors, and individual building managers — often without security review. When an AI system has write access to your building’s physical controls, it is part of your security perimeter whether or not it’s on your asset inventory.
Developer-deployed AI in internal tooling. Engineering teams using AI coding assistants with access to source code repositories, CI/CD systems, and internal APIs. The AI has effective access to your most sensitive intellectual property and the infrastructure that deploys your products.
Personal AI tool use on corporate devices. Employees using consumer AI services — ChatGPT, Claude, Gemini — to process work documents, draft internal communications, or analyze data. The organizational data submitted to these tools leaves the enterprise perimeter entirely.
Vendor-embedded AI in enterprise software. Many enterprise software vendors have embedded AI capabilities into their platforms — often enabled by default — that employees begin using without explicit organizational decision. The AI capabilities in these tools may have data access and action capabilities that were not contemplated when the original software was procured.
Why Agentic AI Creates a New IoT Security Problem
The distinction between generative AI (which produces outputs) and agentic AI (which takes actions) is the line that separates a productivity tool from an attack surface.
When an AI agent:
- Connects to your smart building’s BMS to optimize energy use
- Interacts with your access control system to manage visitor credentials
- Monitors your security camera feeds and dispatches alerts
- Controls your conference room booking and AV systems
- Manages IoT sensor data from environmental monitoring devices
…it is operating as an autonomous actor in your physical environment. And unlike human employees who make decisions, agentic AI systems make decisions at machine speed, at scale, without fatigue, and often without any human in the loop to notice when something is wrong.
Three specific failure modes create IoT security risk:
Prompt injection through IoT data. An AI agent that reads sensor data, security logs, or smart device event feeds can be manipulated by an attacker who controls the data source. If your occupancy sensors are compromised, and an AI agent reads occupancy data to make access control decisions, malicious sensor data can cause the AI to make decisions the attacker wants. The attacker manipulates the IoT device, which manipulates the AI, which manipulates physical systems.
Overprivileged AI access to device control. AI agents are often granted access scopes that are far broader than their actual function requires. An AI energy optimization agent that needs read access to HVAC sensors but has been granted write access to all building systems is a significant risk if the agent is compromised or manipulated.
No audit trail for AI-initiated actions. When a human employee changes an access control policy or adjusts a building setpoint, there is typically a record of who made the change, when, and in what system. When an AI agent makes the same change, the audit trail may record only “system change” without clearly attributing the action to a specific AI invocation, the user who initiated it, or the reasoning behind it. This makes incident investigation extremely difficult.
The Numbers Behind the Risk
The Darktrace State of AI Cybersecurity 2026 report, which surveyed over 1,500 security professionals globally, reveals the scale of the problem:
92% of security professionals are concerned about the use of AI agents across the workforce and their impact on security — up significantly from prior years.
76% of organizations cite shadow AI as a definite or probable security problem — a 15-point increase year-over-year.
1 in 8 companies reports AI-related breaches linked specifically to agentic systems — autonomous AI that takes actions rather than just generating content.
3 in 4 organizations lack adequate governance frameworks for the AI systems operating in their environment.
More than half of security teams report discovering AI systems operating in their environment that they were unaware of — the shadow AI problem in concrete terms.
The year-over-year acceleration is as significant as the absolute numbers. Shadow AI went from a problem affecting 61% of organizations to 76% in a single year. Agentic AI adoption is growing faster than security governance. The gap between AI deployment velocity and AI security governance is widening.
The Smart Office Attack Surface
For enterprise security teams responsible for physical locations — offices, campuses, manufacturing facilities, retail environments — the AI-IoT intersection creates a specific attack surface that deserves focused attention.
Building Management Systems (BMS) with AI integration. Modern smart buildings increasingly connect their BMS — which controls HVAC, lighting, access control, elevators, and fire systems — to AI optimization platforms. The AI has write access to physical building controls. Compromise or manipulation of the AI creates potential to manipulate physical building systems.
AI-powered physical security systems. Computer vision systems that analyze camera feeds, detect anomalies, and dispatch alerts are increasingly AI-driven. An adversary who can manipulate the AI’s perception — through adversarial inputs, training data poisoning, or prompt injection in any natural language component — can blind or mislead the physical security system.
Smart meeting rooms and AV systems. AI assistants integrated into conference room systems have access to meeting content, attendee information, and in some cases audio and video feeds. The sensitive business information discussed in meetings is processed by AI systems that may not be under security team oversight.
Visitor management and access control. AI-enhanced visitor management systems that verify identities, manage access credentials, and integrate with HR systems for employee departures represent a meaningful security function being performed by AI — often deployed by facilities teams without security review.
Environmental and occupancy monitoring. AI systems that read occupancy sensors, monitor air quality, and adjust environmental systems based on real-time data are embedded throughout smart buildings. The sensor data these systems process can be manipulated, and the actions they take in response can have physical consequences.
What Security Leaders Can Do About Shadow AI
The shadow AI problem does not have an easy technical fix — you cannot simply deploy a tool that finds and blocks all unauthorized AI. The problem is fundamentally about governance: establishing clear organizational policies, creating discovery mechanisms, and building accountability structures.
1. Establish an AI system registry. Every AI system operating in your environment — including vendor-embedded AI in existing platforms — should be inventoried with the same discipline as traditional IT assets. Who deployed it? What data does it access? What actions can it take? Who is responsible for its security? This registry does not exist at most organizations. Building it is the prerequisite for everything else.
2. Define AI access scope requirements. For AI systems with access to IoT devices, building systems, or operational infrastructure, establish minimum necessary access principles. An AI agent should have access to exactly the data and systems it needs to perform its function — not broad access granted for convenience.
3. Require audit logging for all AI-initiated actions. Any action taken by an AI agent in an enterprise system — a building setpoint change, an access credential modification, a configuration update — should generate a log entry that clearly attributes the action to the AI system, the time, and the triggering user or event. This log must be retained and reviewed.
4. Implement AI governance policy before deploying agentic systems. If your organization is evaluating or deploying agentic AI — systems that take autonomous actions — establish governance requirements before deployment, not after. Who can approve agentic AI deployments? What data access requires security review? What actions require human approval before execution?
5. Test AI systems against manipulation. For AI systems with access to sensitive data or control over physical systems, conduct adversarial testing that specifically examines prompt injection susceptibility, behavior with manipulated input data, and response to edge cases. This is not standard QA — it requires security expertise specific to AI system behavior.
6. Include AI systems in your incident response planning. When you are planning for security incidents, include scenarios where an AI system is the attack vector — either compromised directly, manipulated through prompt injection, or behaving unexpectedly due to data manipulation upstream. What is your process for detecting AI-initiated anomalous actions? For isolating an AI system that may be compromised? For auditing its historical actions post-incident?
The Governance Gap Is the Security Gap
The fundamental problem with shadow AI is not technical — it’s organizational. AI systems are being deployed faster than governance frameworks are being developed. The pace of adoption has outrun the pace of oversight.
The Darktrace data suggests that most organizations are aware of this gap — 92% of security professionals are concerned about AI agent security. The challenge is that awareness has not translated into governance at the same speed that AI deployment has happened.
For security leaders, the practical implication is this: the AI systems operating in your environment today are, on the whole, less governed than the IT systems you’ve been managing for the past decade. They have access to more data, more systems, and more physical infrastructure. And they’re making decisions and taking actions at a scale and speed that human oversight cannot match without purpose-built governance frameworks.
The window for getting ahead of this is narrowing. Every month that shadow AI deployment outpaces governance is a month of expanding unmanaged attack surface. The organizations that establish AI governance frameworks now will be better positioned than those who wait for an incident to force the conversation.
Data in this article draws on Darktrace’s State of AI Cybersecurity 2026 report (April 2026), IBM threat intelligence, and Trend Micro’s AI-fication of Cyberthreats research. Shadow AI statistics reflect survey data from security professionals across industries and geographies.



