Hackers have found a way to turn AI chatbots against you, sneaking malicious commands into search results and chatbot answers. Here’s what researchers uncovered and how you can keep your accounts and money secure.
The arrival of AI has arguably reshaped people’s lives. But beyond answering quick questions and solving technical problems, search and AI chatbots are now being exploited by hackers to compromise your device and steal your money and passwords. New security research uncovered the scheme, and it is all the more alarming because it relies on methods most users are unaware of.
Instead of relying on traditional tactics, attackers are using AI results to trick users into executing commands that install malicious programs on their device. Once that happens, the malware can begin its nefarious work, such as spying on the user and recording sensitive data like passwords. In the worst case, it can siphon your hard-earned money from your bank accounts.
The attack, which targets Apple Mac devices, was detailed by the security research group Huntress (via Engadget). The researchers explained that a user might search Google for a guide to free up storage on a Mac, using a keyword phrase like “Clear disk space on macOS.” Among the top results was a step-by-step guide presented by ChatGPT or Grok, and clicking on it opened a chatbot dialogue.
Google Search results showing links to ChatGPT and Grok with compromised commands. Image source: Huntress
In that dialogue, the guide instructed the user to open the Terminal and enter a series of commands. Unbeknownst to the user, one of those commands was malicious: if entered, it triggered a multi-stage infection, culminating in the installation of malware called AMOS on the system.
No Signs That You’re Hacked
Even worse, the commands appear to run normally, giving no sign that anything malicious is happening, and the attack slips past macOS’s built-in threat filters.
That makes it harder to spot than the usual route of opening a DMG file or an installer, or clicking a suspicious link, where macOS might flag the malware during execution.
Hackers plant executable malicious commands in ChatGPT and Grok results. Once executed, the commands can infiltrate a Mac and install malware. Image source: Huntress
Once the malware is on the system, it can carry out further actions without the user’s knowledge, such as stealing sensitive information that attackers can use to access accounts and financial apps.
Huntress reported the issue to Google when publishing its findings. However, the malicious links and chatbot results remained online for several hours before being taken down.
Fortunately, there are no confirmed reports of large numbers of users being affected. Still, since the exploit has existed for some time, some users may be compromised without realizing it.
What Users Can Do to Stay Safe
Apple will need to issue a security patch to remove the threat. In the meantime, users who suspect compromise should run a security scan to identify and remove malware. If deeper infiltration is suspected, a clean reinstall of macOS is the safest option.
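For readers comfortable with the Terminal, the sketch below is one way to do a quick manual check alongside such a scan. It is a minimal, purely illustrative Python example, not Huntress’s tooling and not a replacement for proper security software: it searches common shell history files for the “download a script and pipe it straight into a shell” pattern that paste-and-run attacks of this kind often rely on. The history file names and the regular expression are assumptions made for the example.

```python
import re
from pathlib import Path

# Hypothetical triage sketch: look through shell history for commands that
# download a script and pipe it straight into a shell. The file names and
# this regex are assumptions for illustration, not Huntress's method.
SUSPICIOUS = re.compile(
    r"curl[^|\n]*\|\s*(ba|z)?sh"          # e.g. curl ... | bash / sh / zsh
    r"|base64\s+(-d|--decode)[^|\n]*\|\s*(ba|z)?sh"  # decoded payload piped to a shell
)

def scan_history(home: Path = Path.home()) -> list[str]:
    hits = []
    for name in (".zsh_history", ".bash_history"):  # default shells on recent macOS
        history = home / name
        if not history.exists():
            continue
        for line in history.read_text(errors="ignore").splitlines():
            if SUSPICIOUS.search(line):
                hits.append(f"{name}: {line.strip()}")
    return hits

if __name__ == "__main__":
    findings = scan_history()
    if findings:
        print("Possible paste-and-run commands found – review them carefully:")
        print("\n".join(findings))
    else:
        print("No obvious curl-pipe-to-shell commands in shell history.")
```

A match here does not confirm an infection; plenty of legitimate installation guides use the same one-liner pattern, so treat any hit as a prompt to look closer rather than proof of compromise.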
Basic safeguards are also essential, such as changing passwords and enabling two-factor authentication.
How do you trust chatbots? What limitations do you set to keep your device and accounts safe when using these AI tools? Share your thoughts in the comments.