You’re mid-feature, waiting on an AI agent to finish refactoring your data layer before you can start writing tests. It finishes. You prompt again. You wait again. This is the ceiling of synchronous AI coding — and it’s exactly what Google Antigravity was built to break through.
This Google Antigravity tutorial isn’t a feature overview. It’s a practical walkthrough of the features that change how you work: Manager View for orchestrating multiple agents, the Artifacts review loop for supervising AI output without reading raw diffs, and the Browser Agent for UI testing without leaving the IDE. If you’re already using Claude Code or Cursor, this will slot into your thinking naturally — and challenge some of it.
What Makes Google Antigravity Different From Cursor and Claude Code
Antigravity is built on a modified VS Code fork — which sounds unremarkable until you realize it uses Open VSX rather than the Microsoft Marketplace. That’s a meaningful distinction we’ll come back to.
What’s new here is the mental model. Claude Code and Cursor are fundamentally synchronous tools: you prompt, the agent acts, you review, repeat. Antigravity introduces asynchronous orchestration. You can dispatch five agents to five separate workspaces, step away, and come back to a board of completed tasks waiting for your review — like a standup where the agents report to you, not the other way around.
The benchmark numbers back up the ambition. Antigravity scored 76.2% on SWE-bench Verified and 54.2% on Terminal-Bench 2.0 — performance that puts it ahead of most tools in its class. It reached 6% developer adoption within two months of its November 2025 launch, one of the fastest growth rates for any AI dev tool on record.
This isn’t just VS Code with a chatbot. The architecture is designed around agent autonomy with human oversight — and that distinction underpins everything else in this post.
Installation and First-Time Setup: Importing Your VS Code or Cursor Settings
Download Antigravity from antigravity.google and run the installer. On first launch, you’ll be prompted to import settings from VS Code or Cursor — do this. It pulls your keybindings, themes, and most editor preferences across cleanly.
The one friction point: Open VSX, not the Microsoft Marketplace. Extensions published only to the Microsoft Marketplace won’t appear in Antigravity’s extension search. Before committing to Antigravity for a project, audit your must-have extensions. Most popular ones (ESLint, Prettier, GitLens) have Open VSX counterparts, but niche or proprietary extensions may not.
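That audit can be scripted. The sketch below assumes your extension IDs come from `code --list-extensions` (one `publisher.name` per line) and uses the Open VSX registry's public metadata endpoint, `https://open-vsx.org/api/{namespace}/{name}`, where a 404 means no published counterpart exists:

```python
"""Audit VS Code extension IDs against the Open VSX registry.

Assumes IDs in `publisher.name` form, as printed by `code --list-extensions`.
Open VSX serves extension metadata at /api/{namespace}/{name}; a 404 there
means the extension has no published Open VSX counterpart.
"""
import urllib.error
import urllib.request


def openvsx_url(ext_id: str) -> str:
    """Build the Open VSX metadata URL for a `publisher.name` extension ID."""
    namespace, _, name = ext_id.partition(".")
    return f"https://open-vsx.org/api/{namespace}/{name}"


def available_on_openvsx(ext_id: str) -> bool:
    """Return True if the extension resolves on Open VSX (HTTP 200)."""
    try:
        with urllib.request.urlopen(openvsx_url(ext_id), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

Pipe your real extension list through `available_on_openvsx` before migrating; anything that comes back missing is a candidate for finding a replacement or keeping a VS Code install around.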
On the pricing side: the free tier gives you access to Gemini 3.1 Pro and Claude Opus 4.6. Pro is $20/month and Ultra is $249.99/month. For most developers evaluating the tool, the free tier is enough to form a real opinion.
Editor View vs. Manager View — Which Mode to Use and When
Antigravity ships with two distinct interaction modes, and conflating them will frustrate you.
Editor View is synchronous, collaborative, and context-rich. The agent sees your current file, your cursor position, your terminal output. Think of it like pair programming — you’re both looking at the same screen. Use it for:
– In-file refactors where you want tight feedback loops
– Debugging sessions where the agent needs live context
– Quick one-off tasks in a single file or module
Manager View is asynchronous and orchestration-focused. You’re not coding alongside the agent — you’re assigning work across workspaces and reviewing results. Use it for:
– Any task that can run independently of other active work
– Multi-project workflows where different repos are involved
– Situations where you want to batch several large tasks and review them together
The temptation is to default to Editor View because it feels familiar. Resist it for anything that’ll take more than a few minutes. Manager View is where the real productivity multiplier lives.
Setting Up Manager View: Workspaces, the Inbox, and the Parallel Agent Board
Open Manager View from the top navigation bar. The layout has four zones worth knowing:
Workspaces are project containers. Each workspace maps to a codebase — a Git repo, a folder, whatever your project unit is. The key rule: one agent per workspace. Running two agents against the same workspace invites conflicts.
If you have a monorepo, split it into logical workspaces (e.g., /frontend, /api, /infra) before dispatching.
The Inbox is where agent messages land when they need your input. Approval requests, clarifying questions, task completion confirmations — all here. It’s your queue, not a chat window. You process it on your schedule.
The Parallel Agent Status Board is the command center. Each active agent appears as a card with its current task, status (running / waiting / complete), and a link to its most recent Artifact. You can see at a glance which agents are blocked, which are running, and what’s ready for review.
The Playground is an ephemeral sandbox workspace — no persistent state, no Git history. Use it for exploratory prompts or to test a new approach before assigning work to a real workspace.
Manager View supports dispatching up to five agents simultaneously across different workspaces. In practice, three is a comfortable working number when you’re learning the tool — enough to feel the parallelism without overwhelming your review queue.
Choosing the Right Model for Each Task Type
Antigravity lets you set a model per workspace, per task, or at the account level. The choice matters more than most tutorials acknowledge. Before diving into model selection, it’s worth understanding how multi-model stacks behave in production — the tradeoffs surface in ways you don’t anticipate from benchmarks alone.
Here’s a working framework for Antigravity specifically:
Gemini 3.1 Pro — Best for architecture tasks, planning, and high-level design. Strong at reasoning about system structure, generating implementation plans as Artifacts, and handling long-context codebases. This is your default for any task that starts with “figure out how to…”
Claude Opus 4.6 — Best for nuanced backend logic, legacy code comprehension, and tasks where instruction-following precision matters. If you’re working in a codebase with decades of accumulated context and implicit conventions, Opus handles the ambiguity better. It’s also notably good at code review-style critique.
GPT-OSS 120B — Best for cost-sensitive workflows or teams with a policy preference for open-weight models. Performance is competitive on well-defined tasks. Where it lags is open-ended problem-solving with insufficient context.
A practical default: use Gemini 3.1 Pro as your Manager View workhorse, switch to Claude Opus 4.6 for any workspace touching sensitive or complex backend logic, and reserve GPT-OSS 120B for high-volume, well-scoped tasks like test generation.
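If you'd rather codify that default than remember it, a few lines make the routing explicit. The task categories here are this post's heuristic, not an Antigravity setting — adjust them to your own codebase:

```python
# Encode the model-routing framework above as a simple lookup.
# Categories are this article's heuristic, not anything Antigravity enforces.
ROUTING = {
    "architecture": "Gemini 3.1 Pro",  # planning, high-level design
    "backend": "Claude Opus 4.6",      # nuanced or legacy backend logic
    "bulk": "GPT-OSS 120B",            # high-volume, well-scoped tasks
}


def pick_model(task_type: str) -> str:
    """Suggest a model for a task type, defaulting to the workhorse."""
    return ROUTING.get(task_type, "Gemini 3.1 Pro")
```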
The Artifacts Review Workflow: Supervise AI Work Without Reading Raw Diffs
Artifacts are the most underexplained feature in Antigravity’s documentation — and they’re the feature that actually replaces a chunk of your PR review process.
When an agent completes a task, it doesn’t just commit code. It produces an Artifact: a structured summary of what it did. Artifacts can contain:
– Task lists with checked-off items
– Implementation plans with rationale
– Screenshots of UI changes
– Browser Agent recordings
– Test results with pass/fail breakdowns
You review the Artifact, not the diff. For most tasks, this is faster and surfaces intent better than staring at a raw git diff.
The key interaction: leave inline comments on the Artifact like you would in Google Docs. The agent reads those comments and adjusts its work without you needing to re-prompt from scratch or restart the task. If you write “the auth logic here doesn’t account for expired tokens,” the agent will address that specific point. This is the review loop that replaces back-and-forth in a chat window.
Antigravity offers four review policies:
– Always Review — every Artifact waits for your approval before the agent proceeds
– Agent-Assisted Development — the agent flags uncertain items for review, auto-proceeds on confident ones
– Auto-Proceed — minimal interruptions, agent continues autonomously
– Custom — configure per workspace
Start with Agent-Assisted Development. It gives you oversight without turning every workspace into a queue of approval dialogs. Loosen to Auto-Proceed only after you’ve validated the agent’s judgment on a few complete tasks in a given workspace.
Using the Browser Agent to Test Your App Without Leaving the IDE
The Browser Agent is one of those features that sounds gimmicky until you use it once.
Spin up a Browser Agent from the Manager View sidebar. Point it at your localhost URL. The agent opens a headless browser, navigates your app, and interacts with it — filling forms, clicking buttons, scrolling through components — while recording the session. When it’s done, it produces a Walkthrough Artifact: a recorded video of the entire session with the agent’s observations annotated inline.
You watch a two-minute video instead of running the app yourself. The agent flags layout issues, broken interactions, console errors, and accessibility problems it encountered during the session.
The full loop looks like this:
1. Prompt the Browser Agent with a test scenario (“Verify the checkout flow works end-to-end”)
2. Agent navigates your localhost, records the session
3. Review the Walkthrough Artifact — watch the recording, read the annotations
4. Leave comments on anything that needs fixing
5. Agent addresses the comments, optionally re-runs verification
For UI-heavy work, this cuts the manual QA loop dramatically — especially when a separate agent is generating the UI changes you’re verifying.
Running a Real Parallel Workflow: Three Agents, Three Tasks, Zero Conflicts
You’re shipping a new checkout feature that touches the data layer, needs unit tests, and includes a UI component.
Traditional approach: do these sequentially. The agent finishes the data layer refactor, you review, then you prompt for tests, wait, review, and finally prompt for the UI component.
With Antigravity Manager View:
- Workspace A / Agent 1: Refactor the data layer. Use Gemini 3.1 Pro. Prompt it with the full scope of the refactor and ask it to produce an implementation plan Artifact before writing any code.
- Workspace B / Agent 2: Write unit tests for the current (pre-refactor) data layer interface. This works in parallel because tests against the stable interface don’t conflict with Agent 1’s refactor. Use Claude Opus 4.6 here.
- Workspace C / Agent 3: Build the new checkout UI component. Stub out the data dependencies so it doesn’t block on Agent 1. Use Gemini 3.1 Pro.
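The Agent 2 trick — testing against the stable interface — is worth seeing concretely. Everything in this sketch is hypothetical: `CartRepository` and its methods stand in for whatever your real data-layer contract is. Because the tests target the interface rather than the implementation, Agent 1 can rewrite the internals freely without breaking them:

```python
# Hypothetical data-layer contract: tests written against this interface
# stay valid while another agent refactors the implementation behind it.
from typing import Protocol


class CartRepository(Protocol):
    def add_item(self, sku: str, qty: int) -> None: ...
    def total_items(self) -> int: ...


class InMemoryCartRepository:
    """A stand-in implementation; the refactor can replace this freely."""

    def __init__(self) -> None:
        self._items: dict[str, int] = {}

    def add_item(self, sku: str, qty: int) -> None:
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self) -> int:
        return sum(self._items.values())


def test_add_item_accumulates() -> None:
    # Only interface methods are exercised — no internal state is touched.
    repo: CartRepository = InMemoryCartRepository()
    repo.add_item("sku-1", 2)
    repo.add_item("sku-1", 1)
    assert repo.total_items() == 3
```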
Structuring workspaces to avoid conflict is the skill here. When agents touch overlapping files, they'll step on each other. Parallel work succeeds when you scope each workspace to a distinct concern — and where two tasks must share a repo, Git worktrees give each agent its own working tree with no shared file state.
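The worktree setup is standard Git, nothing Antigravity-specific. A sketch for the three tasks above, with hypothetical branch and directory names — the demo builds a scratch repo first so it's safe to run anywhere; in real use you'd run only the `git worktree` lines from your repo root:

```shell
# Scratch repo for the demo; skip this part in a real project.
cd "$(mktemp -d)"
git init -q checkout && cd checkout
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial commit"

# One worktree + branch per agent: separate directories, zero shared files.
git worktree add -b agent/data-layer ../checkout-data-layer
git worktree add -b agent/tests      ../checkout-tests
git worktree add -b agent/ui         ../checkout-ui

# Each directory is a full checkout; point one Antigravity workspace at each.
git worktree list
```

When a task merges, `git worktree remove <path>` cleans up its tree.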
When all three Artifacts land in your Inbox, you review them together, cross-check the interface contracts, and merge. What was a sequential multi-hour workflow compresses into parallel execution plus one focused review session.
Limitations to Know Before You Commit
Antigravity is genuinely impressive, but three limitations are worth surfacing before you adopt it.
No MCP support (yet). Model Context Protocol lets tools like Claude Code and Cursor connect to external data sources — databases, internal wikis, API schemas — at inference time. Antigravity doesn’t support MCP as of April 2026. If your workflow depends on MCP integrations, this is a hard blocker.
Open VSX extension gaps. Already mentioned above, but worth reiterating: audit your extensions before migrating a production workflow. The gap is narrowing, but it’s real.
Terminal Command Auto Execution policies. Antigravity can execute terminal commands autonomously — which is powerful and occasionally alarming. In Settings, configure your Allow List (commands the agent can run without asking) and Deny List (commands it must never run). The default policy requires approval for most commands. Don’t set it to full auto-execute until you understand the scope of what agents are running. A quick audit of your Inbox after the first few tasks will show you which approvals come up repeatedly — those are the candidates to add to your Allow List.
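For a sense of what a sane starting policy looks like, here is an illustrative sketch. The key names below are invented for this example — they are not Antigravity's actual settings schema, so configure the real lists through the Settings UI — but the shape of the policy is the point:

```jsonc
// Illustrative only — hypothetical keys, not Antigravity's real schema.
{
  "terminal.autoExecution.allowList": [
    "git status", "git diff", "npm test", "npm run lint"
  ],
  "terminal.autoExecution.denyList": [
    "rm -rf *", "git push --force", "sudo *"
  ]
}
```

The pattern to aim for: read-only and test commands on the Allow List; anything destructive, history-rewriting, or privilege-escalating on the Deny List.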
Where Antigravity Fits in a Stack That Already Includes Claude Code or Cursor
The most common question from developers evaluating Antigravity: do I replace Claude Code with this, or run both?
Run both. They serve different parts of the workflow.
Antigravity excels at big-picture orchestration: dispatching multiple agents across workspaces, reviewing Artifacts, using the Browser Agent for UI validation. It’s your async layer.
Claude Code excels at granular execution: CLI-native tasks, GitHub PR integration, CI/CD pipelines, and anything where you need tight terminal control and real-time output. If you’ve built workflows around Claude Code’s agentic terminal capabilities, those don’t go away — they become the execution layer beneath Antigravity’s orchestration.
A practical split: use Antigravity Manager View to plan and dispatch feature work across workspaces. When an agent’s output needs to go through your CI pipeline or trigger a GitHub PR, hand that off to Claude Code. The two tools are complementary, not competing.
Cursor sits somewhere in the middle — stronger than Antigravity on IDE polish and marketplace extensions, weaker on async agent orchestration. With Cursor crossing $2 billion in ARR and roughly 25% market share among generative AI software buyers, it’s not going anywhere. Antigravity targets a different use case: developers who want to move work out of the synchronous loop entirely.
Start with One Parallel Workflow This Week
The mental shift from synchronous to asynchronous AI coding is harder than the setup. Antigravity installs in minutes. Trusting three agents to run in parallel while you do something else takes practice.
Start small: pick one upcoming task you’d normally handle sequentially and split it into two parallel workspaces. Watch how the Artifacts come back. Leave one comment on each. See how the agent responds.
That single experiment will teach you more about Google Antigravity’s actual value — and its real limits — than any tutorial can. Once the pattern clicks, you won’t want to go back to waiting.