The gap between a promising AI pilot and enterprise-wide, scalable impact remains a critical divide where momentum and resources often stall. Today's guest is Deborah Golden, U.S. Chief Innovation Officer at Deloitte.
Deborah Golden, Deloitte's U.S. Chief Innovation Officer, reveals why most AI initiatives stall despite adequate technology and infrastructure. The core issue: organizations try to force probabilistic AI systems into deterministic business structures built for predictability. Success requires fundamental shifts in funding models (portfolio vs. project-based), team architecture (cross-functional with shared accountability), cultural norms (rewarding intelligent failure), and executive leadership roles (shield, translator, enabler). Companies must move from asking 'how can AI optimize X' to 'what business would we be in if we could predict customer needs before they know them.'
Golden identifies two critical roadblocks causing AI project failures: the 'corporate immune system' that rejects novelty, and a 'crisis of imagination' among leaders. 86% of executives report outdated technology infrastructure, yet the deeper issue is trying to run adaptive AI on rigid systems designed for stability and conformity, not probabilistic outcomes.
Organizations that overcome AI stalls fundamentally re-architect three systems: funding models, team structures, and data governance. The shift from project-based to portfolio-based funding enables staged investment with VC-like discipline, while cross-functional 'fused teams' with shared accountability move decision-making to the edges and dramatically increase speed.
The quality of strategic questions determines the quality of AI strategy. Instead of asking 'how can AI optimize our call center,' leaders should ask 'what business would we be in if we could predict every customer's needs before they knew them.' This shift in narrative automatically generates more far-reaching answers requiring different ROI thinking.
Organizations must distinguish between two types of failure: intelligent failure (desirable, happens at the edge of knowledge through well-designed experiments) and sloppy failure (preventable, from negligence or cutting corners). Success requires metrics that coexist with unpredictability, rapid blameless postmortems, and psychological safety that separates outcomes from people.
Sandboxes fail when organizations don't define their specific purpose upfront. Golden distinguishes between sandboxes for general learning and training versus those for testing specific R&D hypotheses. Successful sandboxes require controlled environments with clear hypotheses, expected outcomes, and defined variables - whether testing cybersecurity, training super users, or de-risking business transformation.
AI-era leadership requires orchestrating ecosystems rather than commanding from the top. Leaders must play three critical roles: Shield (protecting nascent initiatives from bureaucracy), Translator (bridging technical and business worlds), and Enabler (actively dismantling barriers). Success depends on knowing when to oscillate between these roles based on circumstances.
Golden's final advice: AI isn't fundamentally a technology challenge but a catalyst forcing overdue conversations about disruption. Leaders across industries are anxious about not knowing where to begin, but that's acceptable - AI is built for unpredictability in an unpredictable world. The opportunity is to use AI as an impetus to build faster, smarter, more adaptive organizations rather than simply deploying another technology.
Driving the Systemic Change for AI – with Deborah Golden of Deloitte