The $700 Billion AI Productivity Problem No One's Talking About
Russ Fradin sold his first company for $300M. He’s back in the arena with Larridin, helping companies measure just how successful their AI actually is. In this episode, Russ sits down with a16z Gener...
Russ Fradin, founder of Larridin and former Comscore executive, discusses the critical measurement gap in enterprise AI adoption. Companies are spending $700B on AI in 2024, yet most lack the basic infrastructure to determine whether their investments are working. Drawing parallels to the digital advertising revolution, Fradin explains how measurement infrastructure, not just better technology, will determine which AI companies survive, and reveals that 85% of enterprises believe they have only 18 months to become AI leaders or fall behind permanently.
Fradin draws direct parallels between today's AI adoption challenges and the 1990s digital advertising revolution. Just as companies poured money into online ads without knowing if they worked, enterprises are now spending billions on AI with no measurement infrastructure. The industry didn't take off because ads got better—it took off because companies like Comscore built boring infrastructure to prove ads worked.
The fundamental shift underway is software budgets replacing labor budgets at unprecedented scale. Companies with $10B labor budgets and $1B software budgets will shift toward $8B of labor and $1B+ of software spending. This creates a new problem: CFOs who never optimized a $1B software budget now need to scrutinize $1B+ of AI spending with the same rigor they apply to labor costs.
Larridin's approach tackles three critical questions: What AI tools exist in your company? Are people actually using them? Are users more productive? The first layer reveals shadow AI usage, the second drives safe adoption through employee engagement tools, and the third combines behavioral data with productivity surveys to measure actual impact—something no traditional survey can accomplish alone.
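As a rough illustration of the first two layers, the sketch below derives a shadow-AI list and per-tool adoption rates from made-up telemetry. The tool names, log sources, and field names are assumptions for illustration only, not Larridin's actual product or schema.

```python
# Hypothetical sketch of the discovery and adoption layers.
# All data below is invented; real telemetry would come from SSO, expense, or network logs.
from collections import defaultdict

# Layer 1 - discovery: which AI tools are observed in the company at all
observed_tools = {"ChatGPT", "Cursor", "Copilot", "UnsanctionedSummarizer"}
approved_tools = {"ChatGPT", "Copilot"}
shadow_ai = observed_tools - approved_tools  # in use, but never sanctioned

# Layer 2 - adoption: how many employees actively use each observed tool
usage_events = [  # (employee_id, tool)
    ("e1", "ChatGPT"), ("e1", "Cursor"), ("e2", "ChatGPT"), ("e3", "Copilot"),
]
active_users = defaultdict(set)
for employee, tool in usage_events:
    active_users[tool].add(employee)

headcount = 50  # assumed company size
adoption_rate = {tool: len(users) / headcount for tool, users in active_users.items()}

print("Shadow AI tools:", shadow_ai)
print("Adoption rates:", adoption_rate)
```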
Individual employees want to work less and earn more, while companies want maximum output. This creates a paradox where productive AI users might hide their efficiency gains rather than take on more work. The solution requires measuring productivity at aggregate levels, comparing heavy users versus light users, and understanding that individual-level productivity is 'unknowable'—only group-level trends matter for enterprise decisions.
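A minimal sketch of that aggregate-only approach, with entirely invented numbers: employees are bucketed into heavy and light users by observed usage, and only the cohort-level gap in survey-reported productivity is compared, never any individual's score.

```python
# Illustrative sketch (not Larridin's method): compare cohorts, never individuals.
from statistics import mean

employees = [
    # (weekly_ai_interactions, self_reported_productivity_score on a 1-10 scale)
    (120, 8), (95, 7), (80, 9), (10, 6), (5, 5), (0, 6), (2, 7), (60, 8),
]

THRESHOLD = 50  # arbitrary cut between "heavy" and "light" users
heavy = [score for usage, score in employees if usage >= THRESHOLD]
light = [score for usage, score in employees if usage < THRESHOLD]

# Only the group-level trend is reported; individual rows never leave the aggregate.
print(f"heavy users (n={len(heavy)}): mean score {mean(heavy):.1f}")
print(f"light users (n={len(light)}): mean score {mean(light):.1f}")
print(f"aggregate gap: {mean(heavy) - mean(light):+.1f}")
```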
Larridin's survey of 350 IT heads revealed critical insights: enterprises are spending $700B on AI, 70% of leaders believe they're wasting money, and 85% believe they have only 18 months to become AI leaders or fall behind permanently. The result is a perfect storm of massive budget growth, anxiety about effectiveness, and employee confusion about what they're allowed to do.
Larridin's Nexus product creates safe wrappers around AI models, using custom-trained LLaMA models to block illegal or prohibited queries. An example from a European bank illustrates the absurdity of current training: a single 28-year-old building a 30-slide deck for a global call, instead of systematic enablement. Employees need assurance that they won't look dumb and won't get fired, which is especially critical in regulated industries facing EU AI compliance requirements.
Developer-heavy companies can measure AI productivity through spend on tools like Cursor—tracking dollars spent per engineer creates natural leaderboards. One founder discovered his best engineer wasn't spending anything on Cursor, revealing a measurement gap. Cursor 'has taken mediocre engineers and made them good, but it's taking amazing engineers and made them gods,' illustrating how measurement must account for baseline skill levels.
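As a toy illustration of the spend-per-engineer idea, the sketch below builds a leaderboard from a hypothetical billing export. The file layout, column names, and dollar amounts are invented; and, as the anecdote above shows, a $0 row says nothing about baseline skill.

```python
# Hypothetical sketch: a spend-per-engineer "leaderboard" from a billing export.
# The CSV columns are assumptions; a real export from any tool will differ.
import csv
from collections import defaultdict
from io import StringIO

billing_csv = StringIO("""engineer,month,usd_spent
alice,2025-01,142.50
bob,2025-01,18.00
carol,2025-01,0.00
alice,2025-02,171.25
bob,2025-02,22.75
""")

spend = defaultdict(float)
for row in csv.DictReader(billing_csv):
    spend[row["engineer"]] += float(row["usd_spent"])

# Sort descending to form a crude leaderboard. Note carol's $0: spend alone
# says nothing about baseline skill, which is exactly the measurement gap above.
for engineer, total in sorted(spend.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{engineer:<8} ${total:8.2f}")
```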
When a measure becomes a target, it ceases to be a good measure (Goodhart's law), and that is the core challenge in AI productivity measurement. Companies struggle to define outputs: emails sent, contracts drafted, and code written all fail as targets. The solution requires comparing heavy versus light users without making that comparison itself a target, and accepting that "compared to what?" is the fundamental question with no perfect answer.
Fradin argues mass unemployment from AI is unlikely due to competitive dynamics: companies that fire employees to boost margins will be destroyed by competitors who keep employees and do more. PE firms have optimized underperforming companies through layoffs for 40 years, yet employment increased. The 'your margin is my opportunity' principle means AI productivity gains will drive growth, not shrinkage. CEOs want to run bigger companies, not smaller ones.
AI poses a unique challenge: for the first time, highly educated workers face displacement, but they are also the best equipped to adapt. Economist Ed Glaeser notes that hypereducated people can "rejigger themselves," unlike the low-skill workers hit by previous disruptions. The bigger issue is product marketing: "AI can do anything" doesn't sell. Comscore learned this lesson: "we know everything" failed, but "we can tell you Visa vs. Mastercard market share in Japan" succeeded. AI needs specific use cases, not horizontal platforms.