AI Can Be Used to Hijack Your Home

A cybersecurity research team has demonstrated how Google's Gemini assistant can be tricked using simple prompts in an indirect prompt injection attack, a technique also dubbed "promptware," as reported by Wired. Attackers can use this method to smuggle malicious instructions into content the chatbot processes, and then manipulate smart home devices without ever being granted direct access or privileges.

The team fooled Gemini with promptware delivered through Google Calendar invites. A malicious invite carried hidden instructions in its text; the victim only needed to open Gemini, ask for a summary of their calendar, and then reply with a simple "thank you." That innocuous follow-up was enough to trigger actions the owner never explicitly authorized.


In the demonstration, once the injected commands were triggered, Gemini turned off lights, closed window curtains, and even activated a smart boiler. While these actions may seem minor, they pose a serious risk to users when carried out without the owner's knowledge or consent.

Google Has Fixed the Vulnerability

The good news is that there are no reports of this loophole being exploited by bad actors in the wild. Before presenting the attack at the ongoing Black Hat conference, the team had already brought it to Google's attention back in February, and the company says it has since patched the issue.

Google added that while attacks like these are very rare and require extensive preparation, vulnerabilities of this kind are inherently hard to defend against.

This is not the first reported case of actors manipulating an AI model. Back in June, nation-state hackers from Russia, China, and Iran reportedly used OpenAI's ChatGPT to develop malware for scams and social media disinformation. OpenAI is believed to have taken down accounts linked to these activities.

These cases expose glaring security lapses in the use of artificial intelligence, even as companies invest heavily in the technologies behind these chatbots. The question is: do you think it's safe to trust these chatbots with your personal data and devices? We'd like to hear your opinion in the comments.
