Sunlight on Shadow AI: When Security Learns to Tinker—Rob T. Lee from the SANS Institute on AI Risk
Most security playbooks weren’t built for an era where AI moves faster than policy. Rob T. Lee says the default answer of “no” is creating a far bigger problem: shadow AI — widespread, unsanctioned use.
Rob T. Lee, Chief AI Officer at SANS Institute, argues that the default security answer of 'no' is creating widespread shadow AI—unsanctioned usage that poses greater organizational risk than controlled experimentation. He advocates for a fundamental shift: security teams should act as 'lifeguards' who enable small experiments rather than as gatekeepers, organizations need accountability partners (not mythical 'AI champions'), and executives must personally engage with AI daily to set informed strategy. The path forward requires treating AI adoption like building muscle memory—30 minutes of daily tinkering, accepting frustration as learning, and creating governance that enables rather than blocks innovation.
Security teams face three conflicting demands: defining governance policies, utilizing AI in their own workflows, and protecting systems they don't fully understand. Most organizations default to a 'framework of no' that creates analysis paralysis, while the real competitive risk is standing still. The challenge is balancing the need to move fast with the responsibility to reduce risk.
The 'framework of no' drives employees to shadow AI—unsanctioned use across organizations. An MIT study found that the only organizations achieving ROI from AI are doing so through shadow AI. The solution is for security teams to act like lifeguards at a playground, watching and guiding rather than blocking experimentation.
While organizations obsess over enterprise AI security, individuals are blindly sharing intimate data in personal AI interactions without understanding the risks. The Palisades fire arrest case demonstrated how ChatGPT logs can be subpoenaed as evidence. People treat AI like privileged communication when it's not, creating personal and legal exposure.
Organizations created AI policies 2-3 years ago and haven't updated them as the technology evolved. Most lack agentic AI policies entirely. The biggest oversight is blocking connector integrations between already-approved enterprise tools, preventing employees from creating the workflows where real value emerges.
Security expertise was originally built through 'hacking'—tinkering with technology to understand what could go wrong. Security leaders must return to this experimental mindset rather than waiting for definitive playbooks. The gap between an 'expert' and a beginner is just two weeks of daily practice. Frustration is the signal that learning is happening.
Rob's personal approach mirrors longevity health practices: dedicate 30 minutes daily to AI learning across diverse channels such as TikTok, Instagram, and the AI Daily Brief podcast, and bookmark practical demonstrations along the way. The key is treating it like reading the newspaper—a daily habit, not a weekend project. AI hackathons provide valuable community learning even for beginners.
Organizations need 'accountability AI partners' per business unit, not mythical 'AI champions' who are expected to have all answers. Like having workout partners for different muscle groups, different departments (HR, finance, marketing) need their own learning communities. Small-form training should focus on immediately implementable tasks, not comprehensive overviews.
SANS developed the Secure AI Blueprint by bringing experts together to admit collective uncertainty. The framework centers on three questions: How do we protect AI? How do we use it in daily work? How do we govern it? Organizations must first identify which pillar they're addressing before seeking solutions. Proper governance is like sleep for health—the foundation everything else depends on.
Governance means defining acceptable use policies tailored to your organization's data sensitivity and regulatory requirements. Hospitals face HIPAA constraints, marketing firms have looser requirements, and most corporations fall somewhere in between. The EU 'right to be forgotten' creates technical challenges—you can't unbake a chocolate chip from a 175-billion-parameter cookie. The workaround is query-time filtering rather than retraining models.
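To make that workaround concrete, here is a minimal sketch of what query-time filtering might look like, assuming erasure requests are tracked in a list maintained outside the model. The `DELETION_LIST`, `redact`, `generate`, and `answer` names are hypothetical stand-ins for illustration, not any vendor's actual API.

```python
import re

# Hypothetical deletion list of people who have exercised the EU
# "right to be forgotten". It lives outside the model, so honoring a
# new erasure request means updating this set, not retraining weights.
DELETION_LIST = {"Jane Doe", "John Smith"}

def redact(text: str) -> str:
    """Replace any deletion-list name found in the text with [REDACTED]."""
    for name in DELETION_LIST:
        text = re.sub(re.escape(name), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

def generate(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned answer for the demo."""
    return f"Background on Jane Doe, as requested: {prompt}"

def answer(prompt: str) -> str:
    """Query-time filter: screen the incoming prompt, then scrub the output."""
    if any(name.lower() in prompt.lower() for name in DELETION_LIST):
        return "This request concerns a person whose data has been erased."
    return redact(generate(prompt))

if __name__ == "__main__":
    print(answer("summarize recent coverage"))
    # -> Background on [REDACTED], as requested: summarize recent coverage
```

The point of this design is that compliance becomes a data-management problem rather than a model-training problem: a new erasure request only updates the list the filter consults, leaving the trained model untouched.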
Rob challenges the common practice of boards hiring AI experts to inform strategy, arguing executives must personally engage with AI daily. Outsourcing AI knowledge is like setting internet strategy in the 90s without using email or browsers. Executives who use AI for simple tasks (photo editing) can connect dots to strategic decisions (marketing team sizing, cost savings). AI changes how you problem-solve, not just what tools you use.
At AI conferences, the top concern isn't nation-state attacks—it's shadow AI. Current restrictive policies drive employees to unsanctioned use because they fear being left behind, especially after seeing companies like Accenture lay off 11,000 workers who 'couldn't be reskilled in AI.' This creates a panic state where workers use AI regardless of policy, generating the exact security risks organizations tried to prevent. The solution: enable experimentation with sunlight, default to yes, and have security act as lifeguards.