This is not the first prominent initiative focused on long-term control of artificial intelligence. In March 2023, for instance, the “Pause Giant AI Experiments” open letter called for a halt of at least six months to the training of the most powerful AI systems. The latest initiative, by contrast, aims at a global consensus on how to deal with AI.

The signatories include Turing Award winners Geoffrey Hinton and Yoshua Bengio, historian and bestselling author Yuval Noah Harari, former Colombian President Juan Manuel Santos, and several Nobel Peace Prize laureates. Their goal? Internationally binding rules that categorically exclude certain applications of AI by the end of 2026, a tight timeframe that underlines the urgency.

Red Lines for AI: These Boundaries are Sorely Needed!

The appeal calls for red lines against the technology’s most dangerous scenarios: self-replicating systems, autonomous weapons, the use of AI in nuclear command structures, and the mass dissemination of manipulative disinformation. “Without such limits, we run the risk of AI turning from a useful technology into an existential threat,” the appeal states.

Annalena Baerbock is the President of the 80th session of the United Nations General Assembly. / © UN Photo/Loey Felipe

The timing is deliberate: this week, heads of state and government will discuss the most pressing issues in world politics at the UN General Assembly in New York. The program also includes the launch of the “Global Dialogue on AI Governance” on 25 September, an informal meeting on inclusive and accountable AI governance, according to the United Nations website.

With this initiative, the experts want to increase the pressure to place AI safety at the top of the agenda. Whether the major world powers, above all the USA under Donald Trump, China, and Russia, are prepared to enter into binding agreements remains questionable, however. Many countries are pursuing their own interests, particularly when it comes to military uses of AI.

An Initiative Unlike Any Other

Nevertheless, the appeal is remarkable: never before have so many prominent figures from such different fields agreed so broadly. Another important point: unlike previous petitions, which, as mentioned above, called for a temporary pause in the development of particularly powerful AI models, this initiative is not tied to a time limit. Instead, it aims for permanent bans on specific high-risk applications.

The debate is of utmost importance for mankind. After all, AI no longer just powers chatbots or digital assistants; it can also influence democracy, security, and global stability. The red lines for certain applications are intended to prevent the technology from getting out of control before irreversible consequences occur.

How do you feel about this? Can you imagine the countries of the world actually agreeing on a sensible compromise on artificial intelligence?
