According to a new Bitkom study, 42% of respondents have received false or fictitious answers from an AI chatbot. Let that figure sink in! Despite this, only 57 percent reviewed the generated information before using it. The findings even contradict one another: 64 percent rated the answers as "unsatisfactory," yet 73 percent still classified them as "helpful." In practice, this means AI tools are a convenient shortcut for many, but one that is often used without a safety net.

We are experiencing a tectonic shift in information behavior where convenience threatens to override critical scrutiny. Instead of working their way through lists of web links, more and more people prefer the direct answer formulated by an AI. This development is largely driven by the desire for efficiency and simplicity, but this act also fundamentally changes how we access information and evaluate its credibility.

The key usage statistics from the Bitkom study provide impressive evidence of this trend. As many as 50% of all internet users in Germany use AI chats such as ChatGPT occasionally instead of relying on a traditional search engine. This behavior is particularly pronounced in the younger generation: among 16 to 29-year-olds, as many as two-thirds use this new method of searching for information. Dr. Bernhard Rohleder, Managing Director of Bitkom, explained the motivation behind this:

“Many people prefer to use the compact answer from the AI chat instead of clicking through search results themselves and looking for clues to their question on websites.”

50% of the Results are Qualitatively Problematic

The age group breakdown is particularly revealing. Younger people are significantly more likely to use AI chats as an entry point to the internet. For them, chat input is increasingly replacing the classic search field, which suddenly feels unnecessarily complicated. Search engines provide links, while AI provides ready-made sentences, and that seems to be the decisive factor.

The problem with this? Large language models appear confident, even when they are wrong. Incorrect facts, unclear sources, outdated information, or completely invented details are still commonplace. Even newer models with higher accuracy produce errors that would have been noticed immediately in a traditional search.

An investigation by the European Broadcasting Union (EBU) underlined this danger with frightening precision. In a test of the free versions of the "big four" chatbots, almost half of all responses turned out to be flawed: over 17% were incorrect in key respects, and a further 31% contained significant inaccuracies, for example in context or source references.

Vicious Circle of Disinformation

The Bitkom study thus reveals less a trend towards AI search as a replacement than a shift in responsibility. Many users expect the machine to be right, even though they should know that this is not always the case. Convenience beats diligence.

Experts strongly warn against equating these tools with traditional search engines, whose primary goal is to refer to existing sources. Computer science professor Katharina Zweig from the Rhineland-Palatinate Technical University summarized the most important rule of conduct unequivocally: “The first rule is: don’t use it as a search engine.”

The reasons for this high susceptibility to errors lie in the technology itself. The main causes of misinformation include unreliable sources, the mixing of facts and opinions, and the hallucinations that remain common in AI models.

These consequences threaten our common foundation of knowledge. A Princeton University study found that up to five percent of new, English-language Wikipedia entries already contain AI-generated material. This creates a vicious cycle of disinformation, in which AIs are trained on erroneous content generated by other AIs.

Conclusion: Skepticism is the New Media Literacy

The shift of internet search to AI chatbots is a trend that is changing our information landscape for good. Our convenience should not prevent us from recognizing the real and growing danger of disinformation. The studies show that blind faith in technology is not only naïve but dangerous.

In the age of AI, skepticism is not cynicism — it is a prerequisite for survival in the digital world. Critically checking facts and sources is no longer an option, but an essential skill to protect not only ourselves, but also the integrity of our shared sources of knowledge.

Either way, AI-based search will continue to grow; Google has even been building the technology directly into its search platforms for some time. All the more reason for users to remain aware of the risks, especially when it comes to sensitive topics.

For you, this means AI can save you a lot of time, but it must not be the final word. If in doubt, check the answer you receive, no matter how convincing it sounds.

And how do you handle it? Do you already use AI chat as a replacement for traditional searches, or do you prefer to stick with Google, DuckDuckGo, and the like?