Professional StarCraft II Player Loses Tournament to DeepMind's AI AlphaStar


Team Liquid’s professional player MaNa has lost a series of StarCraft II matches to the AI computer player AlphaStar. MaNa did not manage to win any of the five games against the artificial intelligence, but did beat the AI in a live match.

Blizzard and DeepMind worked closely together for the tournament. The game developer provided a special version of StarCraft II that has the same rules as the regular game but is geared toward AI research. Because of those changes, the games could not be viewed live.

Blizzard said it found it interesting that its strategy game is being used as a benchmark for AI research. DeepMind focuses on the game because of its complexity. The company previously developed algorithms for mastering chess, Go and Atari games, but the amount of information to be processed, the size of the playing field and the number of real-time operations required are much greater in StarCraft. For these reasons, the AI world has been interested in real-time strategy games since 2003.

Team Liquid’s Grzegorz ‘MaNa’ Komincz proved willing to take up the gauntlet against the AI. He ranks number 13 in the StarCraft II World Championship Series and specializes in playing the Protoss race. The tournament therefore consisted of Protoss-versus-Protoss matches.

DeepMind ensured that the artificial intelligence could not perform superhuman actions, for example with regard to the speed of its actions. The company got help from Dario ‘TLO’ Wunsch of Team Liquid. He is a Zerg player, but trained against DeepMind’s AlphaStar with the Protoss race. In the end, AlphaStar averaged fewer actions per minute than professional players, and, the company claims, its delay between observing and acting was on average longer than that of human players. The idea was that the quality of AlphaStar’s decisions, rather than raw speed, should be decisive.

In the first match of the battle against MaNa, AlphaStar started aggressively: it did not hesitate to send its Stalkers up the ramp toward MaNa’s base. Only in the third game did the AI use a wall-off to block its own ramp. It seemed as if it had learned from MaNa, but in fact both TLO and MaNa played each match against a different agent, or algorithm.

Internally, DeepMind runs a StarCraft league of its own, in which it lets different algorithms play against one another; these improve over successive iterations thanks to reinforcement learning. In the end, the company picks the five best to field as agents.
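The selection step of such a league can be sketched as a round-robin tournament over a population of agents. Everything below is a toy illustration: the `skill` field and the probabilistic `play_match` function stand in for full StarCraft II games, and the reinforcement-learning updates that actually improve the agents between iterations are omitted.

```python
import itertools
import random

def play_match(agent_a, agent_b):
    """Stand-in for a full game: the higher-skilled agent wins with a
    probability tilted by the skill gap. Purely illustrative."""
    gap = agent_a["skill"] - agent_b["skill"]
    return agent_a if random.random() < 0.5 + 0.1 * gap else agent_b

random.seed(0)
# A population of agents; "skill" stands in for whatever each
# training iteration has learned.
league = [{"name": f"agent_{i}", "skill": random.uniform(-2, 2), "wins": 0}
          for i in range(12)]

# Round-robin: every agent plays every other agent once.
for a, b in itertools.combinations(league, 2):
    play_match(a, b)["wins"] += 1

# Field the five strongest performers as match agents.
best_five = sorted(league, key=lambda ag: ag["wins"], reverse=True)[:5]
```

In DeepMind's actual setup the league presumably serves training as much as selection, with agents continuing to learn from these internal games.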

What stood out in each of the five games was that AlphaStar regularly played as if it were a human player, but it also made decisions and used strategies that pro players would not. For example, the AI had a penchant for making a lot of workers at an early stage, up to 24, whereas pro players already consider 18 to be a lot. AlphaStar excelled in particular at micromanagement, which sometimes involved superhuman actions. For example, it managed to defeat a group of MaNa’s Immortals with a large number of Stalkers, even though such a group would normally be no match for the Immortals.

MaNa found the battle instructive, but also strange: “I’ve never played StarCraft II matches like this in my life.” TLO likewise stated that he was unable to get a grip on his opponent and was constantly in the dark, because AlphaStar did not adhere to conventional playing styles. DeepMind considered that a welcome outcome, as it may push pro players to think about other strategies and ways of playing.

Oriol Vinyals of DeepMind explains that AlphaStar focuses on certain parts of the map, observing only the regions where it wants to perform actions. Those raw observations are fed to long short-term memory (LSTM) units, which produce neural-network activations. This can be regarded as the brain of AlphaStar: on the basis of these activations it decides what to do, which actions to perform, which buildings and units to make, and where each action should be carried out. Finally comes the outcome prediction, the AI’s own assessment of whether it is winning or not. Based on this, it decides, for example, whether to attack or retreat. AlphaStar was not given any hard rules for this; the AI learned it on its own.
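The pipeline Vinyals describes, raw observations into an LSTM, then an action head and an outcome-prediction head, can be sketched in miniature. The layer sizes and the randomly initialised weights below are placeholders, not AlphaStar's real architecture; a trained network would have learned parameters in place of the random ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

OBS, HID, ACTIONS = 16, 32, 8  # toy sizes, not AlphaStar's dimensions

# Randomly initialised weights stand in for trained parameters.
W = rng.standard_normal((4 * HID, OBS + HID)) * 0.1   # LSTM weights
b = np.zeros(4 * HID)
W_policy = rng.standard_normal((ACTIONS, HID)) * 0.1  # action head
W_value = rng.standard_normal(HID) * 0.1              # outcome-prediction head

def lstm_step(obs, h, c):
    """One LSTM step over a raw observation vector."""
    z = W @ np.concatenate([obs, h]) + b
    i, f, g, o = np.split(z, 4)                       # gate pre-activations
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)      # new cell state
    h = sigmoid(o) * np.tanh(c)                       # new activations
    return h, c

def act(obs, h, c):
    h, c = lstm_step(obs, h, c)
    logits = W_policy @ h                 # which action to take
    probs = np.exp(logits) / np.exp(logits).sum()
    win_estimate = sigmoid(W_value @ h)   # outcome prediction: winning or not?
    return probs, win_estimate, h, c

# Run a few frames of toy observations through the network.
h, c = np.zeros(HID), np.zeros(HID)
for step in range(3):
    probs, win_estimate, h, c = act(rng.standard_normal(OBS), h, c)
```

The recurrent state `(h, c)` is what lets the network remember earlier observations, which matters in a partially observed game like StarCraft II; the real system adds many components (spatial action arguments, attention over units) that this sketch leaves out.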

Parts of the matches can be viewed, with commentary, on DeepMind’s YouTube channel. The live match that MaNa played against AlphaStar can also be seen there. The agent in that match had the limitation that it had to view the course of the game through a camera, much like a human player. Again the AI started very aggressively, but it made some micro-errors, and MaNa was gradually able to beat it, although the AI did not give in and no ‘gg’ followed.
