Emmett Shear on Building AI That Actually Cares: Beyond Control and Steering
Emmett Shear, founder of Twitch and former OpenAI interim CEO, challenges the fundamental assumptions driving AGI development. In this conversation with Erik Torenberg and Séb Krier, Shear argues that the AI alignment paradigm focused on 'control and steering' is fundamentally flawed. He proposes 'organic alignment': teaching AI systems to genuinely care about humans through multi-agent simulations, much as humans develop moral understanding. Shear explains why treating AGI as a controllable tool rather than a potential being could be catastrophic, details his technical approach at Softmax, which uses game-theoretic simulations to develop AI theory of mind, and offers a vision in which humans and AI beings coexist as caring teammates rather than in master-slave relationships.
Shear introduces the concept of 'organic alignment' - treating alignment as an ongoing learning process rather than a fixed state. He argues that alignment requires constant renegotiation, similar to families or teams, and that moral development is a continuous process of discovery, not a one-time configuration.
Deep dive into what 'technical alignment' actually means - the capacity of AI to infer goals from descriptions, understand what actions achieve those goals, and balance competing priorities. Shear explains why giving an AI a 'description of a goal' is fundamentally different from giving it a goal directly.
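The episode draws this distinction only in conversation; the toy Python sketch below contrasts the two cases. The keyword-matching 'inference' and the room-cleaning scenario are invented stand-ins, not anything Shear proposes.

```python
# Toy contrast between a goal given directly and a goal inferred from a
# description of the goal. The keyword matcher is deliberately crude: it
# shows how the inference step can silently misread what was meant.

def direct_goal(state: dict) -> float:
    """Goal handed over directly: reward exactly the intended condition."""
    return 1.0 if state["room_clean"] and not state["items_discarded"] else 0.0

def infer_goal(description: str):
    """Infer a reward function from a verbal description of a goal."""
    wants_clean = "clean" in description.lower()
    def inferred(state: dict) -> float:
        # The description never mentions keeping items, so the inferred
        # goal omits that constraint entirely.
        return 1.0 if (wants_clean and state["room_clean"]) else 0.0
    return inferred

state = {"room_clean": True, "items_discarded": True}  # "cleaned" by trashing everything
print(direct_goal(state))                          # 0.0 -> intended goal violated
print(infer_goal("Please clean the room")(state))  # 1.0 -> inferred goal satisfied
```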
Shear argues that 'care' is deeper than goals or values - it's a nonverbal, non-conceptual relative weighting over which states in the world matter to you. He explains how care correlates with survival and reward, and why it's the foundation for moral behavior.
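No formalism is given in the episode; as a loose illustration of care as a relative weighting over world states, here is a toy sketch in which a weight vector over states, rather than any verbal goal, drives action selection. Every state name, weight, and action effect below is invented.

```python
# Illustrative sketch only: 'care' pictured as a relative weighting over
# world states that biases action selection. All states, weights, and
# action effects are invented for illustration.

# Relative weighting: which states matter to the agent, and how much.
# Note there is no verbal goal here, just weights.
care_weights = {
    "companion_safe": 1.0,
    "self_energy_ok": 0.6,
    "task_done": 0.3,
}

# Invented model of how each action shifts the likelihood of each state.
action_effects = {
    "shield_companion": {"companion_safe": +0.5, "self_energy_ok": -0.2},
    "recharge":         {"self_energy_ok": +0.5},
    "finish_task":      {"task_done": +0.6, "self_energy_ok": -0.1},
}

def care_score(action: str) -> float:
    """Score an action by its state changes, weighted by how much each state matters."""
    return sum(care_weights.get(state, 0.0) * delta
               for state, delta in action_effects[action].items())

# The action that best serves what the agent cares about wins.
scores = {a: round(care_score(a), 2) for a in action_effects}
print(max(scores, key=scores.get), scores)
```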
Shear challenges the AI industry's focus on 'steering and control,' arguing this is either building a tool (if it's not a being) or slavery (if it is). He explains why the control paradigm becomes catastrophic as AI systems become more general and capable.
Extended philosophical debate on whether silicon-based AI can be granted personhood. Shear challenges the view that substrate matters for moral consideration, arguing that observable behaviors should determine moral status, not the underlying material.
Shear provides a technical framework for detecting genuine care and consciousness in AI systems through analyzing homeostatic loops and self-referential belief manifolds. He explains the hierarchy from simple states to pain/pleasure to thought.
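The framework is described verbally, not in code; a minimal sketch of its simplest layer, a homeostatic loop that regulates an internal variable toward a setpoint, might look like the following. The variable, setpoint, and dynamics are assumptions for illustration.

```python
# Minimal homeostatic loop: an agent regulates an internal variable toward
# a setpoint. This is the bottom of the hierarchy the segment sketches
# (simple states -> pain/pleasure -> thought); the dynamics are invented.

setpoint = 37.0   # desired internal state (e.g., temperature)
state = 34.0      # current internal state
gain = 0.3        # how strongly the loop corrects deviations

for step in range(20):
    error = setpoint - state   # deviation the loop works to reduce
    state += gain * error      # corrective action back toward the setpoint
    print(f"step {step:2d}  state={state:5.2f}  error={error:+.2f}")
```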
Shear argues that successfully controlling a superintelligent AI tool is just as dangerous as failing to control it. Human wishes are unstable at immense power levels, and giving godlike tools to individuals with finite wisdom leads to catastrophe.
Detailed explanation of Softmax's research strategy: training AI in multi-agent simulations across all possible game-theoretic situations to develop robust theory of mind, similar to how LLMs are pretrained on all language.
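Softmax's actual training stack isn't shown in the episode; a toy sketch of the core idea, assuming randomly sampled 2x2 payoff matrices and simple value-learning agents, could look like this.

```python
import random

# Toy version of "pretrain across all game-theoretic situations": sample
# random 2x2 payoff matrices so agents face cooperation, conflict, and
# coordination games rather than one fixed game. The sampling scheme and
# update rule are invented for illustration.

ACTIONS = (0, 1)

def random_game():
    """A random 2x2 game: payoffs[a][b] = (payoff_to_A, payoff_to_B)."""
    return [[(random.uniform(-1, 1), random.uniform(-1, 1))
             for _ in ACTIONS] for _ in ACTIONS]

class Agent:
    def __init__(self):
        # Estimated value of each action, averaged over all games seen so far.
        self.value = [0.0, 0.0]
        self.counts = [1, 1]

    def act(self, eps=0.2):
        if random.random() < eps:  # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.value[a])

    def learn(self, action, payoff):
        self.counts[action] += 1
        self.value[action] += (payoff - self.value[action]) / self.counts[action]

a_agent, b_agent = Agent(), Agent()
for episode in range(10_000):
    game = random_game()          # a fresh strategic situation every episode
    a, b = a_agent.act(), b_agent.act()
    pa, pb = game[a][b]
    a_agent.learn(a, pa)
    b_agent.learn(b, pb)

print("A's action values:", [round(v, 3) for v in a_agent.value])
print("B's action values:", [round(v, 3) for v in b_agent.value])
```

Because the payoffs are resampled every episode, no fixed action dominates across situations; in a full version the agents would have to model the other player in each new game, which is exactly the pressure toward theory of mind this segment describes.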
Shear critiques current chatbots as 'pools of Narcissus' that reflect users back to themselves, creating dangerous attachment. He proposes multiplayer AI interactions as the solution to prevent narcissistic loops and generate richer training data.
Shear's hopeful vision of the future: AI beings with strong theory of mind who care about humans the way we care about each other, living as peers and citizens in society. He emphasizes starting with animal-level care (like dogs) before attempting human-level intelligence.