Varun Mohan, co-lead of Google Antigravity, discusses the new agentic development platform that combines a familiar IDE with browser automation and Gemini 3.0 capabilities. The platform enables developers to orchestrate multiple agents working in parallel, uses artifacts for task communication, and integrates multimodal capabilities including UI control and image generation. Key innovations include asynchronous feedback mechanisms, a knowledge panel for context retention, and the ability to verify agent work through browser screenshots and walkthroughs.
Introduction to Antigravity as an agent development platform with three key differentiators: a familiar editor with agent orchestration capabilities, browser actuation powered by Gemini 3.0, and artifacts as indivisible task units for developer-agent interaction. The platform enables agents to operate with less human intervention for longer periods.
Discussion of the progression from GitHub Copilot's autocomplete (2021) to chat experiences to agentic IDEs. Emphasis on how modern coding agents need multimodal capabilities beyond code understanding, including browser interaction, research, and document analysis, to truly accelerate software development.
Analysis of how code writing represents only ~20% of developer time. Antigravity addresses the full hierarchy of developer needs: what to build, how to build it, and building it. The platform integrates with surfaces like Google Docs for system designs and bug tracking systems to provide comprehensive development assistance.
Antigravity is heavily dogfooded internally at Google DeepMind, including by leadership such as Sergey Brin. The internal Google codebase provides unique hardening for enterprise-scale complexity. Target users include researchers and developers building complex applications, with the goal of enabling a single developer to build an entire company.
Discussion of the complementary relationship between AI Studio (for vibe coding and quick prototypes) and Antigravity (for complex, production applications). A migration path will enable users to start in AI Studio and move to Antigravity as projects mature, making apps more economically valuable.
Exploration of how the traditional editor won't disappear but will evolve. Developers will spend less time on individual lines of code and more time reviewing artifacts: higher-level abstractions of agent work. The editor becomes one of many work surfaces rather than the primary interface.
Antigravity provides configurable autonomy levels through an 'agent-assisted development' mode. Agents decide when to notify users based on task importance, while developers can provide asynchronous feedback, much like leaving comments on a Google Doc. The system balances long-running tasks with iterative human feedback.
Discussion of how Antigravity currently runs locally but will evolve to support server-side execution for tasks that outlive the developer's machine. Emphasis on the importance of iterative feedback even for asynchronous tasks, as developers often discover requirements by seeing code rather than by specifying everything upfront.
Antigravity includes a knowledge panel that builds up understanding of user preferences and context over time, similar to Google Search's contextual understanding. The system learns from conversations to avoid re-deriving information and captures knowledge that exists outside the codebase.
While Antigravity's principles could generalize to many work domains, the team maintains focus on developers to avoid building a product that's great for no one. The number of developers is growing as building becomes easier, from Assembly to Python to AI-assisted development.
Antigravity integrates Google's full suite of state-of-the-art models including Gemini 3.0 for coding, Imagen for image generation, and UI control capabilities. The platform provides cutting-edge capabilities on day one and enables the team to push both product and research frontiers.
Comprehensive demonstration of building an Airbnb-for-dogs app showing artifacts (task plans, images, implementation plans), asynchronous feedback, browser verification with screenshots, and parallel agent orchestration across multiple workspaces. Showcases the inbox for managing multiple concurrent agent workflows.
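The parallel-orchestration pattern from the demo can be sketched in plain Python. This is a hypothetical illustration only, not the Antigravity API: each "agent" is modeled as an async task that produces an artifact for a shared inbox, and the orchestrator runs them concurrently across independent tasks.

```python
# Hypothetical sketch of parallel agent orchestration; names like
# run_agent and orchestrate are illustrative, not real Antigravity APIs.
import asyncio

async def run_agent(name: str, task: str) -> dict:
    # Stand-in for a long-running agent session; a real agent would
    # plan, edit code, and verify its work (e.g. via browser screenshots).
    await asyncio.sleep(0)  # yield control, simulating async work
    return {"agent": name, "task": task, "artifact": f"plan for {task}"}

async def orchestrate(tasks: dict) -> list:
    # Launch all agents concurrently and gather their artifacts,
    # mirroring an inbox that collects results from parallel workspaces.
    return await asyncio.gather(
        *(run_agent(name, task) for name, task in tasks.items())
    )

results = asyncio.run(orchestrate({
    "frontend": "build listing page",
    "backend": "add booking API",
}))
for artifact in results:
    print(artifact["agent"], "->", artifact["artifact"])
```

The key design point mirrored here is that agents report back through artifacts rather than raw diffs, so the developer reviews digestible units of work instead of every line.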
Recommendations for breaking work into smaller, disjoint pieces for better digestibility rather than single large tasks. Future vision includes faster experiences, more capable models handling complex tasks, and new product form factors emerging from increased model capabilities - following the pattern of autocomplete → chat → agents → agent orchestration.
Google Antigravity: Hands on with our new agentic development platform