It is only months since AlphaGo's resounding victories over the human grandmasters of the complex strategy game Go were heralded as a sensation. Yet they are already starting to pale in light of the latest reports from Google's AI subsidiary DeepMind. Whereas AlphaGo still relied largely on accumulated knowledge drawn from human play, its successor AlphaZero masters Go through self-learning alone: given nothing but the rules, it trains by playing games against itself. Within just a few hours it can not only knock the reigning machine Go champions off their pedestals with ease, but also perform the same feat in both chess and Shogi.
The astonishing thing about AlphaZero's equal prowess in these three very different strategy games is that it stems from a neural-network approach originally developed specifically for Go. Before it was put to the test, AI experts had not rated the chances of broader success very highly, yet AlphaZero beat its machine opponents in chess and Shogi as well. The paper "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm", recently posted on arXiv by David Silver's team at DeepMind, documents this spectacular achievement. What fascinates chess aficionados most is that AlphaZero rediscovered the most popular opening sequences used by human players entirely on its own, only to quickly discard some of them again as its play improved.
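The core idea of learning a game purely from its rules and games against oneself can be illustrated on a much smaller scale. The sketch below is not DeepMind's method (AlphaZero combines deep neural networks with Monte Carlo tree search); it is a minimal, assumed tabular self-play learner for the toy game of Nim, where two players alternately take one or two stones and whoever takes the last stone wins. All names and parameters here are illustrative choices, not anything from the paper.

```python
import random

# Toy self-play learner for Nim (take 1 or 2 stones; taking the last stone
# wins). One shared value table plays both sides, so the program learns
# solely from games against itself -- the idea behind AlphaZero's training,
# stripped of neural networks and tree search.
def train(n_stones=7, episodes=20000, eps=0.1, alpha=0.5, seed=0):
    rng = random.Random(seed)
    # value[(stones_left, move)] estimates the chance that `move` wins here
    value = {}
    for _ in range(episodes):
        stones, history = n_stones, []
        while stones > 0:
            moves = [m for m in (1, 2) if m <= stones]
            if rng.random() < eps:          # occasional exploration
                move = rng.choice(moves)
            else:                           # otherwise play greedily
                move = max(moves, key=lambda m: value.get((stones, m), 0.5))
            history.append((stones, move))
            stones -= move
        # The player who made the last move won; walking backwards through
        # the game, the outcome alternates between the two players.
        reward = 1.0
        for state_move in reversed(history):
            old = value.get(state_move, 0.5)
            value[state_move] = old + alpha * (reward - old)
            reward = 1.0 - reward
    return value

values = train()
# From 7 stones, taking 1 leaves 6 (a multiple of 3, a lost position for
# the opponent), so self-play should come to prefer move 1.
best = max((1, 2), key=lambda m: values[(7, m)])
print(best)
```

After a few thousand self-played games the table settles on the mathematically correct move without ever being shown a human game, which mirrors, in miniature, how AlphaZero rediscovered and then pruned human opening theory.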