Major companies continue to advance artificial intelligence, which is improving our lives in many ways. However, with the rapid expansion of AI, malicious actors are also exploiting the technology for their own gain. Some are attempting to build their own AI systems by copying existing models, and those copies can then be turned into tools for harmful attacks. A similar case is currently affecting Google.
Attackers Flood Gemini with Thousands of AI Prompts
The latest threat activity report from Google’s Threat Intelligence Group (GTIG) reveals that the company’s Gemini model has been targeted by distillation attacks. In these attacks, perpetrators attempt to clone the chatbot or extract sensitive technological information from it.
In one campaign, Google reported that Gemini was bombarded with 100,000 prompts, far exceeding normal usage limits and overloading the system. The company noted that similar attacks have recurred, following the same distillation pattern but originating from different groups.
According to Google, mercenary groups from Russia, China, Iran, and other countries may be behind the attacks, alongside researchers from rival AI companies.
In a statement to NBC, GTIG’s chief analyst, John Hultquist, described this as intellectual property theft. Companies like Google and OpenAI invest billions in infrastructure to develop their AI models, while successful distillation campaigns allow attackers to benefit without making the same investments.
A similar incident occurred when OpenAI accused its Chinese rival, DeepSeek, of using the same method by training its models on ChatGPT outputs.
What Is an AI Distillation Attack?
Distillation attacks exploit the public availability of AI systems online. Attackers repeatedly query the model in a process known as model extraction, aiming to learn its inner workings and gather critical data.
The stolen information can then be used to build competing AI models or chatbots in other languages. More dangerously, it can be repurposed to develop advanced tools such as malware and spyware that infiltrate devices.
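To make the mechanics described above more concrete, here is a minimal, purely illustrative sketch of what a model-extraction query loop looks like in Python. The endpoint URL, API key, and response fields are all assumptions for illustration, not a real Gemini API; the point is simply that each prompt/response pair harvested from the public interface becomes a training example for a copycat "student" model.

import json
import requests  # assumes the requests library is installed

# Hypothetical endpoint and key, used only to illustrate the pattern.
API_URL = "https://example.com/v1/chat"
API_KEY = "YOUR_KEY"

def query_model(prompt: str) -> str:
    """Send one prompt to the target model and return its text reply."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

# Model extraction boils down to a very large query loop: the attacker
# sends tens of thousands of prompts and records every answer.
prompts = [f"Explain topic {i} in two sentences." for i in range(100_000)]
dataset = []
for prompt in prompts:
    dataset.append({"prompt": prompt, "response": query_model(prompt)})

# The harvested pairs are stored and later used to fine-tune an imitation model.
with open("extracted_pairs.jsonl", "w") as f:
    for pair in dataset:
        f.write(json.dumps(pair) + "\n")

This is also why the unusual volume stood out in Google’s report: legitimate users rarely send anywhere near 100,000 prompts, so the sheer scale of the query loop is itself a signal of an extraction attempt.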
This mirrors last year’s case of malicious apps on Apple’s App Store, which posed as legitimate ChatGPT alternatives. These apps were later found to contain malicious code and were hosted on servers controlled by malicious actors. Apple removed the dangerous apps only weeks after they bypassed security screening, but not before they had been downloaded thousands of times.
How do you use AI or chatbots in your daily life? Do you feel they enhance your experiences, or do the risks outweigh the benefits? We’d love to hear your perspective.