This year-end live show features rapid-fire conversations with nine AI experts covering 2025's developments and 2026 predictions. Key themes include the widening gap between AI capabilities and public perception, breakthrough progress in sample-efficient learning benchmarks, the rise of AI companions and their ethical implications, and fundamental advances in continual learning architectures. The discussions reveal both remarkable progress in AI capabilities and sobering challenges around alignment, safety, and responsible deployment.
Zvi Mowshowitz analyzes the persistent gap between those who understand AI's transformative potential and those in denial, a gap driven by cognitive and economic incentives. He discusses the OpenAI-Anthropic-Google race, maintains his 60-70% P(doom), and explains why cognitive disempowerment scenarios remain his primary concern despite recent alignment progress.
Greg discusses the ARC-AGI benchmark's focus on sample-efficient learning: tasks that are easy for humans but hard for AI. The conversation covers the 390x cost-efficiency improvement in one year, novel approaches like the 7M-parameter no-pretraining model, and the upcoming ARC-AGI 3 with 150 video game environments that will measure action efficiency against human baselines.
Eugenia Kuyda distinguishes between fan fiction chatbots (Character AI) and true companions (Replika), advocating for 'human flourishing' metrics over engagement maximization. She warns against giving AI products to children, noting we haven't yet proven safety even for adults, and discusses her new venture Wabi as a 'YouTube for apps' enabling personalized software creation.
Ali Behrouz presents nested learning as a fundamental paradigm shift enabling true continual learning through hierarchical memory levels with different update frequencies. Unlike context length extensions (Titans, Atlas), nested learning creates persistent memory through compression and abstraction, mimicking how humans form lasting memories while adapting to new contexts.
Logan Kilpatrick discusses Gemini 3 Flash as Google's production-grade model that outperforms previous Pro models while being faster and cheaper. He emphasizes the shift to 'vibe coding', where developers describe apps in natural language, and highlights Flash's ubiquitous availability across Google properties as key to developer adoption.
AI 2025 → 2026 Live Show | Part 1