BUILT 2 SCALE | AI NEWS | Episode 31 - November 28, 2025

Every week, Matty and Scotty cut through the noise to bring you the AI developments that actually matter: the moves reshaping markets, the strat...
Google's Gemini 3 release marks a strategic shift in AI competition, from pure compute scaling to ecosystem dominance and vertical integration. The episode explores the $2 trillion added to Google's market cap, driven in part by proprietary TPU chips that cut its NVIDIA dependency, and asks whether the age of scaling laws is ending. Key discussions include the viability of AI coding platforms, distributed global teams versus Silicon Valley concentration, and the emerging race for cost-per-token efficiency over raw model capability.
Analysis of Google's Gemini 3 launch, which added $2 trillion to Google's market cap through tight integration with Google's ecosystem (Docs, YouTube, Android) and proprietary TPU chips. The model reaches parity with OpenAI's and Anthropic's frontier models while offering stronger ecosystem lock-in and cost advantages through vertical integration.
Deep dive into Google's decade-long TPU development paying off as the company proves world-class models can be trained without NVIDIA GPUs. Discussion covers whether this threatens NVIDIA's dominance or whether there is room for both architectures, with analysis of NVIDIA's GPU rental-back strategy and secondary compute markets.
Gemini 3's coding capabilities threaten the billion-dollar valuations of AI coding platforms like Lovable and Replit. Discussion of how frontier models building superior UIs directly challenge wrapper companies, with strategies for survival through vertical specialization.
Breakthrough capabilities in 3D geometry, spatial reasoning, and real-world context. Gemini 3 can convert Google Street View images into 3D models, generate floor plans from photos, and export to professional CAD formats, demonstrating the advantages of training on YouTube video data.
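To make the spatial-reasoning claim concrete, here is a minimal sketch of prompting a multimodal Gemini model with a photo and asking for a floor-plan estimate, using Google's google-generativeai Python SDK. The model id, file name, and prompt are illustrative placeholders (actual Gemini 3 identifiers may differ), and nothing here reproduces the exact demos discussed in the episode.

```python
# Illustrative sketch: ask a multimodal Gemini model to reason spatially
# about a single photo. Model id and file name are placeholders, not
# figures or identifiers taken from the episode.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")          # reader supplies a key
model = genai.GenerativeModel("gemini-1.5-pro")  # swap in a Gemini 3 id

room_photo = Image.open("living_room.jpg")       # hypothetical input image
response = model.generate_content([
    "Estimate this room's dimensions and describe a labeled floor plan "
    "with approximate measurements in meters.",
    room_photo,
])
print(response.text)
```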
Discussion of whether users need multiple AI subscriptions or if an aggregator layer makes sense. Covers Andrej Karpathy's 'council of models' approach where multiple models debate and refine answers, and the opportunity for a Flight Centre-style platform layer above frontier models.
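A minimal sketch of the "council of models" pattern described here: several models answer independently, then a chair model critiques the drafts and synthesizes a final answer. This assumes an OpenAI-compatible chat API; the model names, prompts, and helper functions are illustrative stand-ins, not Karpathy's actual implementation.

```python
# Minimal "council of models" sketch: independent drafts, then a chair
# model compares and refines them. Model ids and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # or point base_url at any OpenAI-compatible gateway

COUNCIL = ["gpt-4o", "o3-mini", "gpt-4o-mini"]  # stand-ins for mixed vendors
CHAIR = "gpt-4o"

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def council_answer(question: str) -> str:
    # 1. Collect independent first-pass answers from each council member.
    drafts = [ask(m, question) for m in COUNCIL]
    # 2. Ask the chair to critique the drafts and produce a final answer.
    review = "\n\n".join(f"Answer {i + 1}:\n{d}" for i, d in enumerate(drafts))
    return ask(
        CHAIR,
        f"Question: {question}\n\nCandidate answers:\n{review}\n\n"
        "Critique the candidates, then give one best final answer.",
    )

print(council_answer("Do TPUs reduce Google's dependency on NVIDIA?"))
```

A Flight Centre-style platform layer of the kind discussed would wrap exactly this routing and aggregation behind a single subscription.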
Analysis of Anthropic's focused strategy on becoming the best coding model as a path to AGI. Claude Opus 4.5 maintains benchmark leadership in coding while unexpectedly excelling at creative writing, suggesting coding and language mastery are closely aligned.
Leading AI researchers (Ilya Sutskever, Andrej Karpathy, Yann LeCun, Demis Hassabis) agree that simply throwing more compute and data at LLMs has diminishing returns. The next breakthroughs require fundamental research into new architectures, not just scaling existing ones.
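For a sense of why researchers describe the returns as diminishing: compute-optimal scaling fits of the Chinchilla kind (Hoffmann et al., 2022) model loss as a power law in parameter count $N$ and training tokens $D$,

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

where the fitted exponents $\alpha$ and $\beta$ are well below 1, so each doubling of parameters or data shaves off a smaller slice of loss than the last while the irreducible term $E$ never moves. The functional form is from the published literature, not a claim made in the episode.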
Framework for understanding AI's next phase: reducing energy per token, gathering real-world spatial/IoT data beyond 2D screens, and deploying physical intelligence through robotics. Current training data is limited to screen-based 2D content; real breakthroughs require 3D spatial understanding.
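A back-of-envelope way to think about "energy per token", with every number below a hypothetical assumption for the sketch rather than a figure from the episode:

```python
# Illustrative arithmetic: energy (and electricity cost) per token.
# All inputs are assumed placeholder values, not episode figures.
gpu_power_w = 700              # accelerator board power draw (W), assumed
tokens_per_second = 10_000     # serving throughput per accelerator, assumed
electricity_usd_per_kwh = 0.10 # assumed electricity price

joules_per_token = gpu_power_w / tokens_per_second          # W = J/s
kwh_per_million_tokens = joules_per_token * 1e6 / 3.6e6     # 1 kWh = 3.6e6 J
usd_per_million_tokens = kwh_per_million_tokens * electricity_usd_per_kwh

print(f"{joules_per_token:.3f} J/token")
print(f"{kwh_per_million_tokens:.4f} kWh per 1M tokens")
print(f"${usd_per_million_tokens:.4f} electricity per 1M tokens")
```

Cutting joules per token, whether through better silicon such as TPUs or better serving software, directly cuts the marginal cost of intelligence, which is the cost-per-token race the episode describes.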
With 900 million monthly users but an uncertain business model, OpenAI must lean into consumer applications, personalization through memory, and integrations like payments and shopping. The bet is that LLMs can penetrate every aspect of daily life, but that requires significant product development.
Marc Andreessen claims 100% of interesting tech companies are in Silicon Valley. Analysis of capital constraints in Australia, talent concentration in SF for AI engineering, and whether distributed teams can compete. Discussion of Austin vs SF vs global team strategies.
Vision for AI-native companies built around best-in-world individual contributors running agent teams rather than human teams. Contrasts this with traditional office culture and explores whether AR/VR can solve distributed-team challenges, or whether async-first and in-person are the only viable models.