Artificial intelligence developed by Google's DeepMind just scored a comprehensive victory against two professional StarCraft II players, winning 10 matches in a row. The victory marks a first for AI technology in the game, building on earlier wins against the world's best players at the Chinese strategy board game, Go.
Team Liquid’s Grzegorz “MaNa” Komincz and Dario “TLO” Wünsch were the pros on the receiving end of the schooling dished out by the AI, named AlphaStar. Each lost a separate series of five games before Komincz was able to score a single victory over the DeepMind team.
The series of matches was streamed on YouTube and Amazon-owned Twitch. It may seem like a frivolous exercise, but overcoming human players at video games is a huge challenge for AI development teams, forcing them to find creative ways of advancing the technology in order to succeed. Defeating human players, especially those ranked amongst the best in the world, is significantly harder in real-time games like StarCraft II compared to board games like chess or Go, which are turn-based.
What is so impressive about this victory for AI is how it was achieved. Computers can easily defeat human players simply by acting faster, but the actions-per-minute (APM) stats show that this was not the case here. AlphaStar was not acting more quickly than the human players - DeepMind's team described AlphaStar's APM as "significantly lower than the professional players" - it was simply playing with a smarter strategy.
Another reason it was so tough for the humans was that the DeepMind team used a different 'agent' for each match. Essentially, the AI fielded individual competitors, each trained separately, for each match. This made them very unpredictable opponents with varied strategies. The DeepMind team said that each individual 'agent' had immersed itself in 200 human years' worth of StarCraft II matches before facing the professional players - really, the humans didn't stand a chance!
The single victory that Komincz scored over the AI came in an exhibition match using a different interface. AlphaStar had trained on that version for only seven days and hadn't faced any professional players until Komincz, who said: "I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn't have expected."
What do you think about Google's DeepMind victory over professional gamers?