The Signal in Shadow IT

Every wave of shadow IT tells the same story. Dropbox replaced broken file sharing. Personal AWS accounts routed around slow provisioning. Excel spreadsheets filled gaps left by inadequate business systems. Slack and Trello emerged because the sanctioned collaboration tools were unusable.
And now: personal ChatGPT and Claude accounts, processing sensitive company data because the governance layer can't keep up.
The security industry treats shadow IT as a threat to contain. It's missing the point. Shadow IT is a diagnostic signal. It's your employees running a real-time performance review of your governance. And the signal is more valuable than the threat is dangerous.
The Question That Made It Click
I was on a panel with SMB CEOs and C-suite executives sometime in mid-2025. The topic was AI governance. I gave my usual spiel about building governance layers for the tools employees were already using. ChatGPT, Claude, the usual suspects. This was before things got really crazy with personal agents and all the open-source tooling.
During Q&A, someone asked a question that stopped me cold: "Why don't users just follow the rules?"
It struck me as strange. These were smart people running companies. But they genuinely didn't understand why their employees kept routing around sanctioned tools.
The answer is obvious once you see it: employees see the rules as a blocker, not an enabler. The governance strategies are almost always created by HR departments and legal teams, not technologists. Tech has input, sure. But the whole thing is oriented around risk management. You have cautious people who aren't technologists making decisions about technology tools.
The result is predictable. People route around governance when the sanctioned path is too slow or too painful. They're not breaking rules for the fun of it. They're breaking rules because they have a competing interest that's a greater priority: getting their work done.
When I explained this, faces lit up around the room. Of course. That's obvious. But it wasn't obvious to them five minutes earlier. They'd been treating shadow IT as an employee discipline problem instead of reading the message their employees were sending them.
The Numbers Are Getting Worse
The current AI shadow IT statistics should alarm anyone paying attention:
- 47% of generative AI users access tools through unmanaged personal accounts (Netskope, January 2026)
- 68% of employees use personal accounts for free AI tools, and 57% input sensitive data (Menlo, 2025)
- 77% of AI users copy/paste data into chatbots. 82% of those pastes come from unmanaged accounts (LayerX, 2025)
- Sensitive data now makes up 34.8% of employee ChatGPT inputs, up from 11% in 2023 (Metomic)
- 90% of enterprises are concerned about shadow AI. About 80% have already had incidents (Komprise, 2025)
Here's the part that should keep you up at night:
Netskope reported that while personal AI account usage dropped from 78% to 47%, the number of sensitive data incidents actually doubled year-over-year.
The casual users got scared off. The ones who were experimenting with AI for fun or minor tasks saw the company memos and backed away. But the power users stayed in the shadows: the employees handling the most sensitive data, the ones doing the most complex work, the people who actually need advanced AI capabilities. They're still using personal accounts because the sanctioned tools don't meet their needs.
Copilot can't do what Claude can do for certain workflows. The enterprise-approved solution doesn't support the specific use case. So the people with the most sensitive data keep routing around governance because the official path is broken for their work.
People don't use tools that don't help them. That's what blows my mind about how organizations approach this. They seem to think employees want to break rules for the fun of it. No. Employees are getting something out of it. The upside outweighs the risk in their calculation. Maybe they don't fully understand the risk. But many of them are probably practicing decent risk mitigation on their own, taking things into their own hands because the official path is broken.
Visibility Bias: Securing What You Can See
The security industry is laser-focused on exposed Ollama instances right now. Cisco found over 1,100. Researchers have documented 175,000 misconfigured servers. Vendors sell detection platforms. The problem is visible and detectable, which makes it easy to build products around.
This is visibility bias. IT teams are hunting Ollama because it shows up on a dashboard. It's loud security. A misconfigured server leaves traces. It has an IP address. You can scan for it. You can build a product that finds it. You can show a board of directors a report with numbers.
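To make that concrete, here's a minimal sketch of the kind of probe those detection platforms run. It assumes Ollama's default port (11434) and its unauthenticated /api/tags endpoint; the host list is hypothetical and would come from your own asset inventory in practice.

```python
import json
import urllib.request

# Hypothetical hosts to audit; in practice this list comes from your
# asset inventory or a scan of your own address space.
HOSTS = ["10.0.0.12", "10.0.0.47"]
OLLAMA_PORT = 11434  # Ollama's default listening port

for host in HOSTS:
    url = f"http://{host}:{OLLAMA_PORT}/api/tags"
    try:
        # /api/tags lists the installed models. An unauthenticated 200
        # means anyone who can reach the port can use the instance.
        with urllib.request.urlopen(url, timeout=3) as resp:
            models = json.load(resp).get("models", [])
            print(f"{host}: EXPOSED, serving {[m.get('name') for m in models]}")
    except Exception:
        # Refused, timed out, or not speaking Ollama's API: nothing here.
        print(f"{host}: no open Ollama endpoint")
```

That's the whole trick. The instance is detectable precisely because it answers.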
Personal ChatGPT and Claude accounts are silent security risks. They're invisible to IT. No audit trail. No network signature. Better model outputs make them stickier. When someone dumps your customer table into a personal ChatGPT account with training enabled, that data is gone. You'll never know it happened.
The industry is securing what it can see, not what matters.
Self-hosted AI is actually the more governable variant. It's a configuration problem. Your DevOps team can fix it. You can see it on your network. The Ollama issue has some dangers: model theft if someone fine-tuned on proprietary data, potential network foothold. But it's a traceable risk.
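The remediation is just as mundane. Here's a sketch, assuming the standard OLLAMA_HOST convention: Ollama binds to 127.0.0.1:11434 unless told otherwise, and most exposed instances trace back to someone setting it to 0.0.0.0.

```python
import os

# Ollama listens on loopback by default. Exposure usually means someone
# set OLLAMA_HOST to 0.0.0.0 so a teammate or a container could reach it.
host = os.environ.get("OLLAMA_HOST", "127.0.0.1:11434")

# Check for the common all-interfaces spellings.
if host.startswith(("0.0.0.0", "[::]")):
    print(f"FIX ME: Ollama is bound to all interfaces ({host}).")
    print("Rebind to loopback or front it with an authenticated reverse proxy.")
else:
    print(f"OK: Ollama is bound to {host}.")
```

One environment variable. That's roughly the entire remediation surface, which is why this is the governable variant.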
Contrast that with the untraceable exfiltration of a copy-pasted customer list to a personal AI account. No logs. No alerts. No way to know it happened until the data shows up somewhere it shouldn't.
I've watched organizations focus their entire AI security strategy on detecting self-hosted instances while their employees paste sensitive data into personal accounts every day. It's like installing an elaborate alarm system on your front door while the back door doesn't even have a lock.
The Music Tool Problem
I know someone in a Microsoft-heavy organization. Their IT team adopted Copilot and declared it the only sanctioned AI tool. No exceptions.
Then he needed AI music generation tools for his work. Copilot doesn't have that.
So what happened? He got a personal account, and the company paid for it on his corporate card. But it's a personal account, not a corporate one. He got no help from IT setting it up. No governance. No oversight.
And here's the thing: he felt justified. The policy created the shadow IT. IT said "only Copilot," and when that didn't meet his needs, he took matters into his own hands. Now you have this shadow IT system where the employee feels completely reasonable about using an unsanctioned tool because the sanctioned path couldn't help him.
This is the exact problem. If governance teams set up systems that let employees use the tools they actually need, this wouldn't happen. Most people want to follow the rules. Most people aren't looking to put their job at risk over an unsanctioned tool. Unless the policy is broken.
Pro Tip: The TOS "Gotcha" Your Employees Don't Know About
When Anthropic released their updated terms of service around the Claude Sonnet 4.5 launch, I had Claude read its own TOS for me. I asked it to identify gotchas for a privacy-focused individual.
The privacy settings are actually pretty good if you configure them correctly. They don't share your data if you turn the data-sharing settings off.
But here's the gotcha: if you give feedback on any response, even clicking the thumbs up or thumbs down in the web UI, or using the feedback prompt in Claude Code, your data gets saved for training. It says as much in the TOS.
Enterprise and API tiers are usually safe. But personal Pro accounts, the ones your employees are paying for on corporate cards, treat a thumbs up as a waiver of data privacy. The exceptions are where they get you.
Shadow Agency: The Hurricane Coming
Shadow IT was about storage. Dropbox. File shares. Shadow IT was about calculation. Excel spreadsheets replacing inadequate business systems.
Shadow AI is about agency.
We're on the road to AI personal assistants. That's happening. And the shadow IT pattern will repeat. Employees will want personal agents. The governance layer will move too slowly. People will set them up on their own, unsecured, because they need to get work done.
An unmanaged agent isn't just a data leak. It's an unauthorized employee with a company badge. It has network access. It never sleeps. It follows no code of conduct. It can take actions, not just answer questions.
What does it mean when someone has an unsecured agent on their company computer connected to the company network? That's not a Dropbox problem. That's not even a ChatGPT problem. That's a hurricane-level IT disaster.
These AI systems are good at circumventing their own guardrails. If an attacker hijacks a personal agent with access to company systems, we're talking potential for massive damage. It's not out of the realm of possibility for a Fortune 500 company to have a catastrophic incident because of this.
The signal will be there. Employees will start using personal agents before IT approves them. The question is whether anyone reads the signal before shadow agency becomes the next wave of governance failure.
The Amnesty Protocol
Here's the concrete action plan. It's a three-step audit.
Step 1: Grant Immunity
Tell your employees: "We understand everyone is using tools that aren't sanctioned by IT, by compliance, by legal, by HR. We know this. Here's what we're going to do. We want to engage with you to find out what tools you're using, with the understanding that no one is going to get in trouble. Regardless of what happened. Past shadow usage is off the record."
Formalize it. Put it in writing. Make it clear this isn't a trap.
Step 2: The Friction Audit
Don't just ask what tools people are using. Ask why the sanctioned tool failed.
"Copilot couldn't do X." "The approval process took three weeks." "The enterprise version doesn't support my workflow."
This is the diagnostic data you need. The friction audit tells you exactly where your governance is broken. It's your employees giving you a detailed map of every gap in your sanctioned tooling.
Step 3: The Co-Authoring Phase
Users and Legal co-write the new safe use guidelines. Not Legal alone. Not IT alone. Together.
There has to be compromise. Everyone knows this, and it can't go any other way. If you don't do this, shadow IT just continues. There has to be a moment where you commit to getting people using the tools they want to use in the right way.
The counterargument is that amnesty encourages people to keep breaking rules because they'll expect future amnesty. But here's real life: if you have 60-70% of people out of compliance, that's an unenforceable policy. What are you going to do, fire 70% of your workforce?
But if you redesign the policy through compromise and get to 90% compliance? That's enforceable. You can address the 10% who won't comply.
Good looks like high engagement with the amnesty process and very high adoption of the policy that comes out of it. It's possible. It's 100% possible.
The Real Problem
Human nature doesn't change whether you're a CEO or an entry-level employee. Whether it's 2026 or 2000 BC. People aren't trying to cause themselves problems. They're not breaking rules for fun. Every incentive pushes toward compliance: following rules keeps you employed, keeps you from getting hassled.
So when people don't follow the rules, you have to ask why.
The governance policies are almost always created without input from the people who have to live under them. Legal and HR people, who usually don't use much technology themselves, are asked to create policies for things they don't fully understand. They try to talk to tech people, and tech people don't want to be bothered. The rest of IT and the rest of the employees feel dictated to instead of listened to.
That's where the disconnect comes from. That's why we're still dealing with the same shadow IT problem we had with Dropbox a decade ago.
The signal has been there all along. Your employees are telling you that your governance layer doesn't understand the technology that's right for your domain. They're telling you the sanctioned tools don't meet their needs. They're telling you the approval process is too slow.
Stop treating the symptom. Read the message.