- Experts warn that a single calendar entrance can silently divert your smart home to your attention
- Researchers have proven that AI can be hacked to control smart houses using only words
- Saying “thank you” triggered Gemini to turn on the lights and boil the water automatically
The promise of houses integrated into AI has long included convenience, automation and efficiency, however, a new study by researchers from the University of Tel Aviv revealed a more disturbing reality.
In what can be the first known example of the real world of an AI rapid injection attack, the team has manipulated an intelligent house powered by Gemini using nothing more than an entrance to the Google compromise calendar.
The attack exploited the integration of Gemini with the entire Google ecosystem, in particular its ability to access calendar events, interpret invites in natural language and control connected intelligent devices.
From sabotage planning: exploit access to daily AI
Gemini, although limited in autonomy, has enough “agentic capacities” to execute orders on Smart Home Systems.
This connectivity has become a responsibility when the researchers inserted malicious instructions in a calendar meeting, masked as a regular event.
When the user then asked Gemini to summarize his schedule, he inadvertently triggered the hidden instructions.
The integrated command included instructions so that gemini act as an agent of Google Home, lying sleeping until a current sentence as “thank you” or “safe” is tapped by the user.
At this stage, Gemini has activated intelligent devices such as lights, shutters and even a boiler, whose user had not authorized at that time.
These delayed triggers were particularly effective in bypassing existing defenses and confusing the source of actions.
This method, nicknamed “promptware”, raises serious concerns about how AI interfaces interpret the entry of users and external data.
Researchers argue that such rapid injection attacks represent a growing class of threats that mix social engineering with automation.
They have shown that this technique could go far beyond the control devices.
It could also be used to delete appointments, send spam or open malicious websites, steps that could directly lead to identity flight or malicious infection.
The research team was coordinated with Google to disclose vulnerability, and in response, the company has accelerated the deployment of new protections against rapid injection attacks, in particular an additional examination of calendar events and additional confirmations for sensitive actions.
However, questions remain on how these fixes are evolving, especially since Gemini and other AI systems take more control over data and personal devices.
Unfortunately, traditional security consequences and protection of firewalls are not designed for this type of attack vector.
To stay safe, users should limit AI tools and assistants such as Gemini, in particular calendars and intelligent house controls.
Also avoid store sensitive or complex instructions in calendar events and do not allow AI to act on them without supervision.
Pay attention to the unusual behavior of smart devices and disconnect access if something seems to be off.
Via cable