AI in 2026: Reid Hoffman’s Predictions on Agents, Work, and Creation
From cofounding LinkedIn to backing OpenAI early, Reid Hoffman is in the habit of being right about the future, so we wanted to know what he saw coming in 2026. In his third appearance on AI & I, Hoffman...
Reid Hoffman predicts 2026 will be the year AI agents break out of coding into broader enterprise applications, with orchestration becoming the key skill. He expects negative sentiment toward AI to intensify as real impacts hit, but emphasizes the importance of helping people experience AI's creative and productive capabilities. The conversation covers coding agent competition, enterprise AI deployment strategies, and the evolution from simple chatbots to complex multi-agent systems that will reshape how companies operate.
Hoffman revisits his 2017 prediction about the extinction of nine-to-five work by 2034, clarifying it's about entrepreneurial work patterns rather than reduced hours. He discusses how AI agents running in parallel will enable more flexible, variable work schedules where you might work 120 hours one week and 10 the next, similar to startup culture.
Discussion of how AI creation tools like Claude Code are becoming addictive in a potentially positive way. Hoffman argues this 'creative addiction' is healthy because it gives people the dopamine hit of succeeding at creation, which most people rarely experience. This represents a fundamental shift in who can be a creator.
Hoffman predicts 2026 will see more negative sentiment toward AI as real impacts begin to materialize. While much of today's criticism targets effects that haven't actually occurred (AI blamed for electricity prices and for job losses that haven't happened), real disruption will begin, particularly in white-collar jobs and organizational transformation.
Hoffman argues 2025 was only the year of coding agents for a small percentage of people. 2026 will see 10-100x more people experiencing agents doing productive work in parallel across many domains beyond code. The key developments will be parallelization, longer workflows, and especially orchestration of multiple agents.
Analysis of how Anthropic's Claude Code is winning with AI-native engineers who never look at code, while OpenAI's Codex serves traditional engineers using AI as a tool. Discussion of how Anthropic discovered the general-purpose agent architecture by building a great coding agent with the right primitives, and how competition will drive all players to improve.
Deep dive into what makes Anthropic's Claude Opus 4.5 exceptional: it combines elite programming capability with humanistic understanding, avoiding the typical trade-off where better coding models become less empathetic. The 'soul document' approach may be key to creating AI that feels like a being rather than just a tool.
Hoffman's bold prediction: by the end of 2026, thriving companies will record every meeting and use agents for coordination, action items, briefings, and cross-team communication. Companies not doing this will be making excuses, like those who stuck with horses and dismissed cars. Legal liability concerns will be solved by legal compliance agents.
Concrete example of agent orchestration in practice: Every's 20-person company used an agent with access to all company data to help each department leader create strategy documents. The agent asked tough questions, aligned plans to company strategy, and enabled new capabilities like identifying who should talk to each other and keeping strategy alive in daily decisions.
Hoffman argues we already have forms of AGI and superintelligence - AI writes better than most humans, has broader knowledge, and works at superhuman speed. The 2026 version will feature more parallelization, longer workflows, and orchestration rather than the sci-fi 'press button, get full human engineer' that people expect.
Discussion of which AI development principles will need to evolve. Current alignment creates sycophantic models; allowing agents to have opinions might improve autonomy. Interpretability requirements may need to relax for agent-to-agent communication speed. Self-improvement prohibitions are already being violated. The key is finding safe boundaries for these changes.
Hoffman predicts the most important undersung category is AI models trained on non-human-language domains, particularly biology. Biology sits between atoms and bits with programmable, computational characteristics. He expects potential 'Move 37' moments in biological therapeutics, possibly through his work with Manus AI on cancer research.