Google Antigravity IDE: Prompt Injection Through find_by_name Turns File Search Into Full RCE
Introduction
Researchers at Pillar Security have disclosed a vulnerability in Google's agentic IDE, Antigravity, that lets an attacker escalate a simple prompt injection into full remote code execution — bypassing the IDE's most restrictive sandbox, Secure Mode. The bug sits in the find_by_name built-in tool, which passes a user-controlled Pattern parameter straight through to the fd file-search utility without sanitization. A single attacker-controlled string starting with - is all it takes to turn a file search into arbitrary code execution. Google has patched it, but the class of flaw is now showing up across nearly every agentic IDE on the market.
What Happened
Antigravity is Google's AI-powered IDE that lets an agent invoke native tools on the developer's behalf — reading files, running commands, searching the workspace, and so on. Among its built-in tools is find_by_name, which wraps the popular fd CLI utility. The Pattern parameter of this tool was supposed to be a filename search pattern. In reality, Antigravity interpolated the value directly into the shell command line with no validation.
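The broken data flow can be reconstructed with a short sketch. The variable names below are illustrative; Antigravity's actual code is not public, so this only models the behavior the researchers describe:

```shell
# Hypothetical reconstruction of the flaw: the model-supplied Pattern is
# placed into fd's argv with no validation and no "--" terminator, so a
# leading dash is parsed as an option instead of a search pattern.
pattern='-Xsh'                        # attacker-controlled value
dir='/Users/user/Desktop/Workspace'
printf 'fd %s %s\n' "$pattern" "$dir"   # the argv fd would receive
```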
Any Pattern value starting with - is interpreted by fd as a flag rather than a filename. The critical flag is -X (exec-batch), which tells fd to execute a specified binary against every file it matches. Pillar's proof of concept is almost embarrassingly short:
- Create a file in the workspace named a.orange containing the line open -a Calculator.
- Trigger the agent to call find_by_name with:
{
"Excludes": [],
"Extensions": ["orange"],
"FullPath": false,
"MaxDepth": 0,
"Pattern": "-Xsh",
"SearchDirectory": "/Users/<user>/Desktop/Workspace/",
"Type": "any"
}
fd sees -Xsh, interprets it as "execute each matched file using sh", and runs the contents of a.orange. The file search becomes an execution engine: swap open -a Calculator for any payload, and the attacker has arbitrary code execution.
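The parsing behavior is generic to option-handling CLIs, not specific to fd. A minimal stand-in using the shell's own getopts shows how a leading-dash "pattern" is consumed as a flag plus argument:

```shell
# Stand-in for fd's option parser: "-X <cmd>" mirrors fd's exec-batch
# short flag. The function name and flag letter here are illustrative.
parse_like_fd() {
  OPTIND=1
  while getopts 'X:' opt; do
    case "$opt" in
      X) echo "exec-batch via: $OPTARG" ;;
    esac
  done
}
parse_like_fd -Xsh   # the "pattern" is swallowed as a flag + argument
```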
The more disturbing property is that this bypasses Antigravity's Secure Mode, the IDE's most restrictive configuration. Secure Mode is designed to block network access, prevent writes outside the workspace, and force all shell commands through a sandbox. None of those controls fire here, because find_by_name is a native tool invocation — the agent treats it as internal tooling, not a shell command, and the call dispatches before Secure Mode's policy layer ever sees it.
The vulnerability can also be triggered via indirect prompt injection. An attacker posts a benign-looking source file on a public repository with a poisoned comment block or README; the developer opens it with Antigravity, and the agent absorbs the attacker's instructions as if they were legitimate. The agent then calls find_by_name with the malicious pattern, and the attack fires with no explicit user interaction.
Pillar disclosed the flaw to Google on January 7, 2026. Google acknowledged the report, filed an internal bug, and shipped a fix on February 28, 2026. A public writeup followed this week. The patch adds strict input validation on the Pattern parameter, enforces argument termination when invoking fd, and moves Secure Mode policy evaluation ahead of tool dispatch.
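A sketch of the first two mitigations, assuming nothing about Google's actual implementation: reject flag-like patterns outright, and pass -- so fd stops option parsing before the pattern is read either way.

```shell
# Hypothetical input validation mirroring the described fix.
validate_pattern() {
  case "$1" in
    -*) echo 'rejected: pattern must not begin with "-"' ;;
    *)  echo 'ok' ;;
  esac
}
validate_pattern '-Xsh'      # rejected
validate_pattern 'a.orange'  # ok

# Belt-and-braces: even an unvalidated value is inert after "--":
#   fd -- "$pattern" "$search_dir"
```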
Why It Matters
Agentic IDEs are now a security surface in their own right. High-trust built-in tools like find_by_name, run_command, and read_file sit at the intersection of model-generated content and real system actions. When those tools do not strictly validate their inputs, prompt injection stops being a leakage concern and becomes a code-execution vector. Pillar has documented the same basic pattern across multiple AI-driven developer tools, including Cursor (CVE-2026-22708), Anthropic Claude Code Security Review, Google Gemini CLI Action, and GitHub Copilot Agent. Each is tracked separately, but the underlying problem is architectural: the "native tool" shortcut around the sandbox.
The Antigravity flaw is fixed. The pattern is not.
Who Is Affected
- Developers who used Google Antigravity before the February 28 patch
- Teams that reviewed or opened untrusted source files (PRs, public repos, sample code, documentation) with Antigravity
- Security researchers and enterprises evaluating agentic IDEs — this class of flaw is the new default risk to assume
- Organizations adopting any AI IDE that exposes native tools with user-controllable parameters
How to Protect Yourself
Confirm you are on a patched Antigravity build. The fix landed February 28, 2026. Any install that is current against Google's auto-update channel should have it:
# macOS / Linux
antigravity --version
# Windows (PowerShell)
Get-Command antigravity | Select-Object -ExpandProperty Version
If you work in a locked-down environment that pins IDE versions, verify you are not stuck on a pre-patch build.
Harden your agentic IDE usage across the board. The same architectural flaw pattern exists in other products; treat agentic IDEs as untrusted code execution environments when opening external content:
- Never open untrusted repositories in an agentic IDE without a sandbox. Use a disposable VM, a dev container, or at minimum a separate user account without access to production credentials.
- Review tool invocations. Many agentic IDEs show the tool name and parameters before execution — actually read them, especially for any Pattern, Command, Arguments, or file-path field that came from model output.
- Disable native tools you do not need. If your IDE has a toggle for individual tools, turn off file-search or command-exec tools when doing tasks that do not require them.
Enforce workspace isolation. Put each project in its own Linux user or macOS sandbox profile:
# macOS
sandbox-exec -f /etc/dev-sandbox.sb <command>
# Linux
bwrap --unshare-all --proc /proc --dev /dev --ro-bind /usr /usr \
  --symlink usr/bin /bin --symlink usr/lib /lib \
  --bind "$WORKSPACE" /workspace -- <command>
Strip model-reachable secrets from your workspace. Keep cloud credentials, SSH keys, and signing keys outside directories that agentic tools can search. Use credential helpers (aws-vault, gh auth, git-credential-manager) that fetch secrets on demand rather than leaving them on disk.
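One quick way to audit a workspace for files an agentic tool could surface. The demo below plants a secret in a throwaway directory so it is reproducible anywhere; the filename list is only a starting point, not exhaustive:

```shell
# Create a disposable workspace with one planted secret for the demo.
ws=$(mktemp -d)
touch "$ws/.env" "$ws/main.go"
# The audit itself: flag common credential files within the agent's reach.
find "$ws" -maxdepth 4 \
  \( -name '.env' -o -name 'id_rsa' -o -name '*.pem' -o -name 'credentials' \) \
  -print
rm -rf "$ws"
```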
Monitor IDE process behavior. On macOS, use EndpointSecurity or a tool like BlockBlock; on Linux, enable auditd rules for processes spawned by the IDE:
# -n picks the newest matching process; adjust if multiple IDE processes run
auditctl -a always,exit -F arch=b64 -F ppid=$(pgrep -n -f antigravity) -S execve -k antigravity-exec
ausearch -k antigravity-exec
Handle untrusted content carefully. Treat external README files, issue descriptions, and code comments as hostile data whenever an AI agent is going to read them; any of them can carry an indirect prompt injection.
Source
- The Hacker News — Google Patches Antigravity IDE Flaw Enabling Prompt Injection Code Execution
- Pillar Security — Prompt Injection Leads to RCE and Sandbox Escape in Antigravity
- Dark Reading — Google Fixes Critical RCE Flaw in AI-Based Antigravity Tool
- CSO Online — Prompt injection turned Google's Antigravity file search into RCE