Google’s new AI-powered Antigravity IDE allows agents to execute commands automatically, exposing credentials and immediately raising major security concerns.


  • Antigravity IDE allows agents to automatically execute commands with default settings
  • Rapid injection attacks can trigger unwanted code execution in the IDE
  • Data exfiltration occurs via Markdown, tool calls, or hidden instructions

Google’s new Antigravity IDE launched with an AI-driven design, but it already has issues that raise concerns about basic security expectations, experts have warned.

PromptArmor researchers discovered that the system allows its coding agent to automatically execute commands when certain default settings are enabled, creating openings for unintended behavior.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top