Nathan provides a personal update on his son's cancer treatment, highlighting how frontier AI models (Claude, Gemini, GPT) are helping navigate complex medical decisions with remarkable accuracy. He analyzes whether Claude Opus 4.5 represents AGI-level coding, shares insights from building three family apps during hospital stays, and evaluates the AI landscape including Chinese models, chip controls, and live player analysis of Google DeepMind, OpenAI, Anthropic, and Meta.
Nathan reports his son Ernie is in remission after aggressive chemotherapy, with minimal residual disease testing showing fewer than 1 in a million cancer cells detected. He details how AI models helped identify advanced testing options and continue to provide oncologist-level medical guidance throughout treatment.
Nathan evaluates whether Claude Opus 4.5 represents AGI-level coding by sharing his experience building three family apps during hospital stays. While acknowledging clear improvements, he questions whether the step change justifies the AGI designation, noting the model still struggles with certain debugging scenarios.
Nathan distinguishes the question of whether AI technology is real (it definitely is) from the question of whether all the loans will be repaid (uncertain). He analyzes the financial engineering around data centers, CoreWeave-style companies, and venture valuations like LM Arena's $1.7B, suggesting bubble dynamics at the VC level.
Through testing Chinese models (DeepSeek, Kimi, Qwen, GLM) on document reading tasks, Nathan found them 'nowhere close' to frontier US models despite competitive benchmark scores. He attributes this to limited customer feedback loops and smaller teams, suggesting chip controls are having measurable impact.
Nathan critiques the sudden reversal on H200 chip sales to China, arguing the US gave away bargaining leverage without getting concessions. He supports Peter Wildeford's 'rent but don't sell' position as a more defensible middle ground that maintains cooperation potential.
Nathan ranks Google DeepMind as the top live player due to its unmatched combination of a profitable core business ($100B revenue, $1B/week profit), proprietary TPU chips, the deepest research bench across all AI domains, and billions of users for distribution. Gemini 3 shows they've learned to avoid being 'too vanilla.'
Nathan analyzes OpenAI's aggressive financial strategy as deliberately creating 'too big to fail' dynamics through balance sheet commingling and massive debt obligations. While GPT-5.2 Pro remains competitive, OpenAI no longer has a clear technical lead and appears to be betting on a government backstop if the buildout falters.
Nathan identifies Anthropic as having the current best overall model (Claude Opus 4.5), best safety research, and exceptional talent retention. Their focus on real-world usefulness over benchmarks, combined with interpretability work and responsible scaling policies, positions them as the quality leader.
Nathan evaluates Meta's open-source Llama strategy as highly effective for commoditizing AI infrastructure while maintaining optionality. Their massive user base, advertising revenue, and willingness to give away models creates unique competitive position, though they lag in frontier capabilities.
AMA Part 1: Is Claude Code AGI? Are we in a bubble? Plus Live Player Analysis