AlphaGo's victory over international Go champion Lee Sedol is a historic milestone for artificial intelligence (AI). Here are our three key takeaways.
1. Machine Learning Accelerates Go AI by Ten Years
Chess AI surpassed the best human players long ago. Until 2015, Go AI had reached only strong amateur performance.[1] Based on ARK’s analysis of past AI performance improvements, Go AI was expected to take another ten years to achieve performance comparable to chess AI. As shown in the chart below, AlphaGo accelerated that schedule by more than ten years, thanks to a combination of Monte Carlo tree search, a heuristic search algorithm for decision processes, and machine learning.[2]
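To make the search component concrete, here is a minimal sketch of the selection rule at the heart of Monte Carlo tree search, the UCB1 formula, which balances a move's observed win rate against an exploration bonus. The move names and statistics below are invented for illustration; AlphaGo's actual variant also weights the exploration term with a prior from its policy network.

```python
import math

def ucb1(wins, visits, parent_visits, c=1.4):
    """Win rate plus an exploration bonus; untried moves are searched first."""
    if visits == 0:
        return float("inf")
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Hypothetical win/visit counts for three candidate moves at the current node
stats = {"move_a": (60, 100), "move_b": (30, 40), "move_c": (0, 0)}
parent_visits = sum(visits for _, visits in stats.values())

best = max(stats, key=lambda m: ucb1(*stats[m], parent_visits))
print(best)  # the search descends into this move and simulates a playout
```

The search repeats this select-simulate-update loop many times per move; machine learning enters by replacing random playouts and uniform priors with learned policy and value estimates.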
The chart above examines the performance of chess and Go programs when playing against professional human players. It shows that chess programs improved at an exponential rate throughout the eighties and nineties: in 1997, AI’s loss rate against professional chess players was 0.04%; by 2006, it was down to 0.0004%. Go programs also have improved, but we think that because of the game’s greater complexity they have trailed chess AI by about twenty years. The last two orange dots on the chart show the impact of Google DeepMind’s AlphaGo, which defeated European Go champion Fan Hui in 2015 and world champion Lee Sedol in 2016. A fundamentally new algorithm, AlphaGo obliterated the prior trend, improving performance by more than two orders of magnitude in a single year, the equivalent of ten years of progress using prior techniques.
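As a back-of-the-envelope check on these rates, the snippet below uses the chess loss-rate figures quoted above and assumes, for illustration, that Go programs had been improving along a broadly similar exponential trend.

```python
import math

# Chess AI loss rates vs. professional players, from the chart discussion
loss_1997, loss_2006 = 0.04 / 100, 0.0004 / 100
annual_factor = (loss_1997 / loss_2006) ** (1 / 9)        # ~1.67x better per year
years_for_100x = math.log(100) / math.log(annual_factor)  # ~9-10 years

print(f"chess AI improved roughly {annual_factor:.2f}x per year")
print(f"a two-orders-of-magnitude jump ~= {years_for_100x:.0f} years at that rate")
```

At that pace, a jump of two orders of magnitude in a single year compresses roughly a decade of incremental progress.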
We think AlphaGo’s breakthrough performance against Lee Sedol happened because of machine learning, which uses data instead of human effort to tune algorithms. AlphaGo was trained on 28 million unique board positions from 160,000 expert-level games.[3] The algorithm was then refined through reinforcement learning, playing more than a million games against variations of itself. This vast repository of simulated experience allowed AlphaGo to leapfrog existing AI systems that used simpler and shallower forms of learning.
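As a rough illustration of this two-stage recipe (supervised learning from expert moves, then reinforcement learning from self-play), here is a minimal sketch. The tiny linear softmax "policy", the toy feature vectors, and the random "expert" labels and win/loss signal are hypothetical stand-ins, not AlphaGo's deep convolutional networks or its actual training data.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_MOVES = 8, 3           # toy board features and candidate moves
W = np.zeros((N_FEATURES, N_MOVES))  # linear softmax policy parameters

def policy(board):
    """Return move probabilities for a feature vector describing the board."""
    logits = board @ W
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Stage 1: supervised learning from "expert" (board, move) pairs
expert_boards = rng.normal(size=(1000, N_FEATURES))
expert_moves = rng.integers(0, N_MOVES, size=1000)       # placeholder labels
for board, move in zip(expert_boards, expert_moves):
    p = policy(board)
    grad = -np.outer(board, np.eye(N_MOVES)[move] - p)   # cross-entropy gradient
    W -= 0.01 * grad

# Stage 2: reinforcement learning from self-play outcomes
for game in range(1000):
    board = rng.normal(size=N_FEATURES)                  # stand-in game position
    p = policy(board)
    move = rng.choice(N_MOVES, p=p)                      # sample a move
    reward = 1.0 if rng.random() < 0.5 else -1.0         # placeholder win/loss signal
    # REINFORCE: nudge the policy toward moves that led to wins
    grad = np.outer(board, np.eye(N_MOVES)[move] - p) * reward
    W += 0.001 * grad
```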
2. GPU + Cloud Is the New Supercomputer
Superhuman AI used to require exotic chips and supercomputers. In 1997, IBM [IBM] defeated grandmaster Garry Kasparov using Deep Blue, a supercomputer powered by custom processors designed specifically for playing chess. In contrast, DeepMind’s AlphaGo ran on commodity servers powered by ordinary CPUs and GPUs (graphics processing units).[4]
GPUs have become the de facto processor for accelerating machine learning. Originally designed for computer games, their vast parallel processing power makes them a perfect fit for the math-heavy operations involved in machine learning.[5] For AlphaGo, 50 GPUs were used to train the “policy” portion of the network, a process that took three weeks; without GPUs, training likely would have taken nine months.[6]
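The rough arithmetic implied by those figures, treating nine months as about 39 weeks:

```python
weeks_with_gpus = 3
weeks_without_gpus = 9 * 52 / 12             # nine months expressed in weeks (~39)
speedup = weeks_without_gpus / weeks_with_gpus
print(f"GPUs cut wall-clock training time by roughly {speedup:.0f}x")  # ~13x
```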
The distributed version of AlphaGo that defeated Lee Sedol used 1,920 CPUs and 280 GPUs, a powerful cluster comparable in performance to one of the world’s Top 500 supercomputers.[7] Google’s cloud infrastructure provisioned the system on demand, allowing DeepMind researchers to concentrate on building smarter software rather than building beefier hardware. We think that the ubiquity of GPUs and cloud infrastructure has democratized supercomputing, allowing researchers to shortcut AI development.
3. AlphaGo Is Smart, But It Is Not General AI
Machine learning is remarkably adaptable. It allows researchers to make breakthroughs in image classification, voice recognition, language translation, game playing, algorithmic trading, and many other applications. That said, ARK believes that machine learning programs like AlphaGo do not come close to “general” artificial intelligence.
General intelligence requires, among other things, an understanding of language and the ability to form memories. According to a DeepMind research paper in Nature, AlphaGo has no concept of language or memory: the program considers each move carefully but it has no memory of its last move. Without memory, AlphaGo cannot remember past mistakes or change tactics after losing a game against a particular player, a severe handicap compared to humans.
We think that an important next step in machine learning will be to incorporate the concept of memory, something that researchers at both Facebook [FB] and DeepMind [GOOG] are actively pursuing. Typical neural nets, such as those used for image recognition, have no mechanism to reference specific past experience. Recurrent neural networks provide a temporary memory system, allowing the network to “look back” a step or two; Baidu [BIDU] has used recurrent neural networks for speech recognition with great success. For longer-term memory, neural nets can be augmented with dedicated memory banks. For example, Facebook’s implementation, called “Memory Networks”, can read a synopsis of Lord of the Rings and answer simple questions about the storyline, as shown below.
Moreover, when the Memory Network is trained on both a general world-knowledge set and a scenario-based data set, it can answer a much broader range of questions, referencing knowledge not included in the initial summary, as shown below.
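To make the "look back a step or two" idea from the recurrent-network discussion concrete, here is a minimal, illustrative sketch: the hidden state carries information from earlier inputs forward to later ones, which a standard feed-forward net cannot do. The layer sizes and random weights are arbitrary; this is not Facebook's Memory Network or DeepMind's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
INPUT_SIZE, HIDDEN_SIZE = 4, 8
W_in = rng.normal(scale=0.1, size=(HIDDEN_SIZE, INPUT_SIZE))
W_rec = rng.normal(scale=0.1, size=(HIDDEN_SIZE, HIDDEN_SIZE))

def rnn_step(hidden, x):
    """One step of a simple (Elman-style) recurrent cell."""
    return np.tanh(W_rec @ hidden + W_in @ x)

sequence = rng.normal(size=(5, INPUT_SIZE))  # five time steps of input
hidden = np.zeros(HIDDEN_SIZE)
for x in sequence:
    hidden = rnn_step(hidden, x)             # earlier inputs persist in `hidden`

# A feed-forward net, by contrast, sees only the current input and has no
# state linking one step to the next.
print(hidden.shape)  # (8,)
```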
In our opinion, AlphaGo is another example of machine learning triumphing over domain-specific algorithms. Modern GPUs and cloud infrastructure have made supercomputing ubiquitous, allowing researchers to focus on software development rather than hardware. Despite these breakthroughs, smarter AI will require more than just larger data sets and faster processors. The next big leap will be neural nets with memory, leading to software that can read, understand, remember, and, perhaps one day, show some common sense.