The AI Security Illusion: Are We Pushing Water Uphill?
TL;DR: Corporations are worried about employees pasting sensitive data into LLMs. But are those controls too little, too late? And are they costing the one thing companies need most right now: learning velocity?
In a world of billions of LLM interactions per day, is trying to wall off internal knowledge delaying the inevitable? And worse, are you losing a competitive edge by doing so?
I’m not suggesting we upload P&Ls or pipeline innovations into ChatGPT [1].
What I am suggesting is that trying to prevent every employee from using public LLMs is starting to feel a bit like pushing water uphill.
The top AI platforms today (ChatGPT, DeepSeek, Gemini, Claude, Perplexity) generate billions of queries per month. And while enterprise IT teams are scrambling to block AI tools for fear of data leakage [2], we might need to confront a harder truth:
The real risk isn’t that someone might paste confidential information into an LLM.
It’s that your smartest competitor is already using LLMs aggressively and learning faster than you are.
In other words, your organization may be driving horse-and-buggies while autonomous EVs fly past on the highway. Do you know why?
Let’s explore.
Side notes:
Yes, technically you could load every 10K and annual report into something like NotebookLM and run competitive intelligence prompts all day... but that’s a different post. The point here is that so much “internal” information is already publicly accessible, and we may be underestimating how fast others are synthesizing it.
We’re talking about public-facing LLMs like ChatGPT—not secure, enterprise-grade instances that many organizations are now adopting internally or via APIs.
The Corporate Paradox: Protect or Progress?
If it’s not obvious by now, take a look around and you’ll quickly realize that most information security policies were built for a world that predates large-scale generative AI. We locked down external drives, we restricted mobile apps, and we limited access to websites.
And for good reason. Security matters.
The paradox: In trying to protect information, we may be stalling the one thing companies need most right now: learning velocity.
The fear of leakage is understandable. But the cost of inaction is rarely discussed.
While one company worries about what not to enter into ChatGPT...
Another company is training its talent to ask better questions and derive more insightful answers in minutes.
Only one of those organizations is learning how to move faster, smarter, and with confidence.
Research from Gartner (2024) estimates that “by 2026, enterprises that successfully adopt generative AI will outpace peers by 30% in productivity gains.” But a separate Gartner survey shows that over 40% of enterprises still block GenAI tools altogether, citing IP risk and regulatory uncertainty.
Similarly, Accenture’s 2024 Technology Vision advocates for a shift from "data protection" to "AI literacy." Their findings show that companies investing in secure GenAI enablement (e.g., internal sandboxed tools with real use cases) report 3x faster cycle times in strategy and ops.
This doesn’t mean we should abandon caution, but it does suggest that fear-based policies create a drag on capability building.
Contrarian POV: The Compliance Warning
To be fair, not everyone agrees. Forrester warns that “shadow AI” (i.e., the ungoverned use of AI by employees) will be a top security risk in 2025. Their guidance: until internal LLMs mature, companies must "over-index on governance and enforce strict boundaries."
And Harvard Business Review (March 2024) published a cautionary piece titled “Generative AI is Not a Free Lunch”, noting that even well-intentioned prompts can inadvertently expose patterns, customer data, or strategic plans that sophisticated scraping tools can learn from, even when that information is never directly retrievable.
The takeaway: your POV isn’t wrong, but your comfort with risk may not be universally shared.
The Horse-and-Buggy Syndrome
Remember when companies tried to ban email for fear it would expose sensitive conversations? Or when early Bring Your Own Device (BYOD) policies forbade smartphones?
Those efforts didn’t just fail. They slowed the rate of organizational adaptation.
I fear the same thing is happening now with generative AI.
We're not protecting the future—we're preserving the past.
Ask yourself:
How many emails get forwarded outside your organization each week?
How many contractors, vendors, or ex-employees still have access to your tools?
How many screenshots, summaries, or working docs end up shared on personal devices?
Now compare that to the number of people trained to use LLMs securely, with discipline and oversight.
In most companies, that number is zero.
So what exactly are we protecting against?
We Don’t Need Panic. We Need a New Playbook.
This isn’t an argument for reckless openness. It’s a call for strategic modernization.
If you’re leading a team, department, or enterprise, here’s a simple framework to evaluate your AI posture. Think of three lanes: full lockdown (block the tools and hope nothing leaks), ungoverned use (let everyone experiment with no guardrails), and applied critical training in between.
The middle lane, applied critical training, is where maturity lies.
We need to move past the compliance vs. chaos binary and into a secure enablement mindset, where employees are not just allowed to use AI, but trained to use it responsibly.
Final Thoughts & Reflections
In sum, this isn’t about the tools. As always, it’s about the talent. LLMs are unlikely to replace your workforce entirely (you’re already seeing failed experiments). But your competitors who use them well? They might.
If the first wave of GenAI fear was about hallucinations, security, and control, the second wave must be about enablement, literacy, and velocity. Because the future doesn’t belong to those who hide from the tools. It belongs to those who master them securely, strategically, and unapologetically.
Reflection Questions for Leaders:
What’s your current posture on LLM usage? Does it reflect fear or foresight?
Have you created a safe space for employees to learn how to use AI securely?
Where are you underestimating the opportunity cost of your AI restrictions?
The cat isn’t just out of the bag. It’s been out, learned how to code, and just launched a startup that’s going after your customer base.
Simple, not easy.