AI: 200 Experts, Nobel Prize Winners, and Politicians Sound the Alarm!
Over 200 leading voices from science, politics, and society are calling on the United Nations to define clear red lines for the development of artificial intelligence. The appeal comes from Nobel Prize winners, former heads of state and government, and pioneers of AI research, and it is already considered the most important call for international regulation to date.
It is not the first prominent initiative focused on the long-term control of artificial intelligence. In March 2023, for instance, the open letter "Pause Giant AI Experiments" called for a halt to the training of powerful AI systems for at least six months. The latest initiative, by contrast, seeks a global consensus on how to deal with AI.
The signatories include Turing Award winners Geoffrey Hinton and Yoshua Bengio, historian and bestselling author Yuval Noah Harari, former Colombian President Juan Manuel Santos, and several Nobel Peace Prize laureates. Their goal? Internationally binding rules that categorically exclude certain applications of AI by the end of 2026, a tight timeframe that underlines the urgency.
Red Lines for AI: These Boundaries Are Sorely Needed!
The appeal calls for red lines against the technology's most dangerous scenarios: self-replicating systems, autonomous weapons, the use of AI in nuclear command structures, and the mass dissemination of manipulative disinformation. "Without such limits, we run the risk of AI turning from a useful technology into an existential threat," the appeal states.
The timing is deliberate: this week, heads of state and government will discuss the most pressing issues in world politics at the UN General Assembly in New York. The program also includes the launch of the "Global Dialogue on AI Governance" on 25 September, an informal meeting on key aspects of inclusive and accountable AI governance, according to the United Nations website.
With this initiative, the experts want to increase the pressure to place AI safety at the top of the agenda. Whether the major world powers, above all the USA under Donald Trump, China, and Russia, are prepared to enter into binding agreements remains questionable, however. Many countries are pursuing their own interests, particularly the military use of AI.
Nevertheless, the appeal is remarkable: never before have so many prominent figures from such different fields agreed so broadly. Just as important: unlike earlier petitions, which, as mentioned above, called for a pause in the development of particularly powerful AI models, this initiative is not tied to a time limit. Instead, it aims for permanent bans on specific high-risk applications.
The debate matters for all of humanity. After all, AI no longer just powers chatbots and digital assistants; it can also influence democracy, security, and global stability. The red lines for certain applications are intended to keep the technology from getting out of control before the consequences become irreversible.
How do you feel about this? Can you imagine the world's countries actually agreeing on a sensible compromise on artificial intelligence?