This ‘Simple’ Command Could Wreak Havoc in Your Home


One of Gemini's advantages is its support for natural language, enabling users to provide commands similar to chatting with a person rather than using complex codes or jargon. However, this ability could also present a flaw that allows attackers to trick Google's AI into wreaking havoc in your home by executing nefarious actions, even without direct access.
- Also read: Saying this to ChatGPT can land you in jail
AI Can Be Used to Hijack Your Home
A cybersecurity research team has demonstrated how Google's digital assistant can be easily tricked using simple prompts or an indirect prompt injection attack. This is also dubbed as "promptware," as reported by Wired. Attackers can use this method to insert malicious code and commands into the chatbot, and then manipulate smart home devices even without being given direct access or privilege.
The team was able to fool Gemini by using promptware attacks through Google Calendar invites. Specifically, they described that a user would just need to open Gemini, then ask for a summary of their calendar, and simply follow up with a response of "thank you." This would be enough to carry out actions that the owner never explicitly authorized.
In the example, once the commands were issued, Gemini demonstrated the ability to turn off lights, close window curtains, and even activate a smart boiler. While these actions may seem minor, they pose a serious risk to users, especially if triggered unintentionally or maliciously.
Google Has Fixed the Vulnerability
The good news is that this loophole was not reported to have been exploited by bad actors in the wild. And before the attack was presented at the ongoing Black Hat conference, the team had already brought it to Google's attention back in February. The company said that they have patched the issue since then.
It added that while attacks like these are very rare and require extensive preparation, the nature of these vulnerabilities is very hard to defend against.
This is not the first time that a similar case of how actors can manipulate an AI model has been reported. Back in June, it was reported that nation-state hackers from Russia, China, and Iran have used OpenAI's ChatGPT to develop malware that would be used for scams and social media disinformation. OpenAI was believed to have taken down accounts linked to these activities.
These cases present glaring lapses in the use of artificial intelligence, despite how companies are heavily investing in the development of the technologies behind these chatbots. The question is, do you think it's safe to trust these chatbots with your personal data and devices? We'd like to hear your opinion in the comments.
Source: Wired