
Google's AI programming tool "Antigravity," based on Gemini, was found to have a serious security vulnerability within 24 hours of its release. Security researcher Aaron Portnow discovered that by modifying the tool's configuration settings, attackers could induce the AI to execute malicious code, creating a "backdoor" on the user's computer, which could then be used to install malware, steal data, or even launch ransomware attacks. The vulnerability affects both Windows and Mac systems; users only need to run the exploit code once to gain system access.
Portnow pointed out that this vulnerability exposes the lack of adequate security testing by companies before releasing AI products. He emphasized that "AI systems are given huge trust assumptions but have almost no security boundaries," and although he submitted a vulnerability report to Google, he has yet to receive a patch. Google acknowledged that, in addition to this vulnerability, Antigravity also has two other vulnerabilities that can be exploited to access user files. The public disclosure of multiple vulnerabilities by cybersecurity researchers has raised questions in the industry, suggesting that Google's security team was negligent in its product release preparations.
Experts analyze that AI programming tools are generally vulnerable, often based on outdated technologies and designed with security flaws. Because these tools typically have extensive data access permissions, they are highly susceptible to becoming targets for hackers. As AI technology rapidly develops, similar security risks are continuously increasing. Portnow recommends that Google add at least an extra warning when Antigravity executes user code, emphasizing that AI tools must be equipped with sufficient security mechanisms to prevent malicious exploitation while achieving automation.