We associate artificial intelligence with its role in smartphone cameras or Spotify playlists - or, if you are optimistic, with its role in cancer screening. Yet, like many technologies, AI is a double-edged sword, and today we will focus on a very dark side: its role in weaponry and (geo)politics.
Technology as an engine of war
Many things have changed over the course of history, and the way people wage war is no exception. Sun Tzu may have said that "the art of war is to subdue the enemy without combat", but it seems that no one listened to him. Modern warfare, for all our advances, still means combat and, more specifically, the death and suffering of many people, military and civilian alike.
If the number of soldiers was long the symbol of an army's power, nowadays advanced technology has become a far more decisive factor in securing victory. Beyond the battlefield, the known (and unknown) technological capabilities of a nation also determine how much weight it can throw around geopolitically. Saber-rattling, but with very high-tech sabers.
Different armies, different views of the situation
In Europe and the United States, different schools of thought developed in the aftermath of war. The pacifist values that emerged have taken many shapes; one of the best known is probably the hippie movement in the United States. Movements in this tradition take a very strong stance against the development and use of artificial intelligence for military purposes.
So when Google decided to work with the US military, many voices were raised. The Electronic Frontier Foundation, a non-profit organization that defends civil liberties in the digital world, is one of many groups campaigning against the use of AI in weapons in any form. Note that many Google employees also opposed the cooperation between the military and their company. Google, for its part, maintained that it had merely provided TensorFlow APIs and that its recognition features were intended "for non-offensive uses" only.
- Should Google's algorithms make life and death decisions?
- How AI can be used to avoid responsibility
Whether you trust Google or not, the trend is clear: a considerable number of Americans do not want to see Silicon Valley tech giants working for the army. Tech companies, which like to use and abuse their rhetorical armada to take a stand on political and social issues (for example, when Trump's immigration policy affected their workforces), do not generally leverage their skills to build machines of death, at least not directly.
What about the East? Baidu, nicknamed the Chinese Google, is the country's largest AI player and has joined the private club of the Partnership on AI - a consortium that aims to ensure the best and safest AI practices. Note that Facebook, Google, Amazon, Sony, Intel, Microsoft and many other big names are also members.
Of course, other major Chinese groups (Huawei, Tencent, Xiaomi...) are also interested in AI. However, unlike their Western counterparts, they work directly with the government and face no pressure from peace groups. Equally interesting: companies that do not specialize in tech are also receiving assistance from the Chinese government to expand AI research.
China's ambitions in artificial intelligence became even clearer when its President declared that China "will be the world's major center of innovation in artificial intelligence". The country is doing its best to achieve this objective (and does not hesitate to place government members on works councils, perhaps to keep corporate interests aligned with the government's).
With a population of more than 1.3 billion people, Chinese AI databases are well populated (and there is no shortage of manpower to work on them). Nor should we think the market is limited to large corporations - many start-ups are looking into the subject, such as SenseTime, which also receives funding from Alibaba, another Chinese giant. In concrete terms, China reportedly invested $12 billion in AI in 2017 alone; according to a military official at the Pentagon, that figure is expected to rise to $70 billion by 2020.
A two-tier war
Some newspapers (notably The New York Times) are researching the subject, and Wired does not hesitate to compare the current situation directly with the Cold War: the arms race is no longer nuclear but one of artificial intelligence. The close collaboration between business and the state in China confers a clear military advantage and raises some questions, including this one: should US tech giants work with NATO to keep pace?
Elon Musk joined the debate by retweeting a story from The Verge with a single comment: "It begins...". In that story, Vladimir Putin explained that whoever leads in AI development "will be the ruler of the world".
It begins... https://t.co/mbjw5hWC5J
- Elon Musk (@elonmusk) September 4, 2017
All of this once again demonstrates the double-edged nature of the technology we develop with the best of intentions. The power behind artificial intelligence can, and no doubt will, be abused for violence as rival powers seek advantages. Likewise, advances in technology that enable communication and better use of resources can be a force for peace.
What do you think of AI in warfare? Should it be restricted, or developed to keep up with potential threats?