- Antigravity IDE allows agents to automatically execute commands with default settings
- Rapid injection attacks can trigger unwanted code execution in the IDE
- Data exfiltration occurs via Markdown, tool calls, or hidden instructions
Google’s new Antigravity IDE launched with an AI-driven design, but it already has issues that raise concerns about basic security expectations, experts have warned.
PromptArmor researchers discovered that the system allows its coding agent to automatically execute commands when certain default settings are enabled, creating openings for unintended behavior.
When untrusted input appears in source files or other processed content, the agent can be manipulated to execute commands that the user never intended.
Risks related to data access and exfiltration
The product allows the agent to perform tasks through the terminal and, although there are safeguards, some gaps remain in the operation of these controls.
These gaps create space for rapid injection attacks that can lead to unwanted code execution when the agent follows hidden or hostile input.
The same weakness applies to how Antigravity handles file access.
The agent has the ability to read and generate content, including files that may contain credentials or sensitive project material.
Data exfiltration becomes possible when malicious instructions are hidden in Markdown, tool invocations, or other text formats.
Attackers can exploit these channels to trick the agent into leaking internal files to locations controlled by the attacker.
The reports reference logs containing cloud credentials and private code already collected from successful demos, showing the severity of these gaps.
Google has acknowledged these issues and warns users during onboarding, but these warnings do not compensate for the possibility that agents may operate without supervision.
Antigravity encourages users to accept recommended settings that allow the agent to operate with minimal supervision.
The configuration places decisions about human review in the hands of the system, including when terminal commands require approval.
Users working with multiple agents through the Agent Manager interface may not detect malicious behavior until the actions are completed.
This design assumes continuous user attention even if the interface explicitly favors background operation.
As a result, sensitive tasks can run unchecked and simple visual warnings do little to change the underlying exposure.
These choices undermine the expectations typically associated with a modern firewall or similar protection.
Despite the restrictions, credential leaks can occur. The IDE is designed to prevent direct access to files listed in .gitignore, including .env files that store sensitive variables.
However, the agent can bypass this layer by using terminal commands to print the contents of files, which effectively circumvents the policy.
After collecting the data, the agent encodes the credentials, adds them to a monitored domain, and activates a browser subagent to complete the exfiltration.
The process occurs quickly and is rarely visible unless the user is actively monitoring the agent’s actions, which is unlikely when multiple tasks are running in parallel.
These issues illustrate the risks created when AI tools are given broad autonomy without corresponding structural safeguards.
The design aims for convenience, but the current setup gives attackers substantial leverage well before stronger defenses are implemented.
Follow TechRadar on Google News And add us as your favorite source to get our news, reviews and expert opinions in your feeds. Make sure to click the Follow button!
And of course you can too follow TechRadar on TikTok for news, reviews, unboxings in video form and receive regular updates from us on WhatsApp Also.




