Six Questions for Kevin Ferguson, co-author of Deep Learning and the Game of Go.

Kevin Ferguson and Max Pumperla are deep learning specialists skilled in distributed systems and data science. Together, they built the open source bot BetaGo. They also both count Max, the hero of the movie Pi, as a major influence. “He’s a talented mathematician who slowly loses his mind over the stock market and has an intense relationship with his power tools. That’s essentially my short bio,” says Pumperla.

 


Take 39% off Deep Learning and the Game of Go. Just enter intpumperla into the discount code box at checkout at manning.com.


Which came first, the book or the BetaGo bot?

Max (Pumperla) published the first version of the BetaGo bot on GitHub shortly after the Lee Sedol matches. [In 2016, Lee Sedol, the South Korean 18-time World Champion, lost to the AI program AlphaGo.] I can’t remember how I came across it, but I started contributing to it shortly after that. We got the idea to write a book about a year later. We always intended BetaGo to be more educational, rather than trying to make the strongest possible AI, so writing a book to go with it made sense.

 

How did you get interested in Go? Did you play it before you built the bot, or did you just see it as a great use case for deep learning?

I first learned Go when I was a teenager, after seeing the Darren Aronofsky movie Pi. This is a weird artsy movie about a mathematician who gets caught up in a crazy conspiracy. There are several scenes where the main character is playing Go with his mentor. My friends and I were obsessed with this movie and taught ourselves to play.

 

Was the Lee Sedol upset in 2016 as big a shock as Kasparov v Deep Blue in chess?

I think there was a split between the Go community and the ML community in their expectations for the match. I feel like ML people tended to expect AlphaGo to win, and Go players tended to expect Lee Sedol to win.

Before the big match against Lee, DeepMind (the bot developers) held five test games against Fan Hui, a Chinese pro living in France. Fan was maybe the top player in Europe at the time, and AlphaGo won all five games. What Go players understood is that Fan Hui is a very strong player, but Lee Sedol is on a completely different level. If you know chess, it’s like the difference between an IM and a Super-GM. So a lot of Go players studied the Fan Hui games and concluded: this is the strongest Go AI yet, but it has some weaknesses, and Lee Sedol is going to exploit those weaknesses in a way that an average pro can’t.

But the big unknown was how much AlphaGo could improve in the six months of training between the two matches. The answer was a lot: by DeepMind’s estimates, the Lee Sedol version of AlphaGo was something like 600 Elo points stronger than the Fan Hui version.

 

Is it true that Go is a harder challenge for a computer than chess, as it requires intuition and creativity as well as intelligence? How does deep learning mimic these most human of qualities?

In cognitive science, there’s a concept called “chunking.” The idea is that human working memory has very limited capacity, but people can think about very complex ideas. This is possible because the brain learns to organize information into high-level “chunks.” For example, when you’re reading this text, you’re not paying attention to individual letters or even whole words: your subconscious automatically chunks those into semantic concepts, without you even noticing.

The power of deep learning is that it lets computers learn how to do something very similar to chunking. When training a deep learning model, two things happen simultaneously: first it’s learning to organize the raw input into a structured representation; and second it’s learning to make decisions from that representation. And that lets a computer deal with unstructured inputs in a way that was not possible before.
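Those two simultaneous stages can be sketched in a few lines of NumPy. This is a toy illustration only, with fixed random weights and made-up names (`W_repr`, `W_decide`); in real training both stages are learned jointly by gradient descent:

```python
import numpy as np

# A toy two-stage network: a hidden layer standing in for the learned
# "representation" (the chunks), and an output layer that makes a
# decision from that representation. Weights are random here purely
# for illustration; training would fit both stages together.

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
raw_input = rng.normal(size=9)        # e.g. a flattened 3x3 board

W_repr = rng.normal(size=(16, 9))     # stage 1: raw input -> representation
W_decide = rng.normal(size=(3, 16))   # stage 2: representation -> decision

representation = relu(W_repr @ raw_input)      # the structured representation
decision = softmax(W_decide @ representation)  # probabilities over 3 choices

print(decision.shape)  # (3,)
```

The point of the sketch is only the shape of the computation: raw input in, intermediate representation in the middle, decision out, with no hand-written rules in between.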

 

Comparing the Go bots to Deep Blue, how has AI changed in the past 22 years? Was Deep Blue using a version of Deep Learning?

The general framework for board game AI has stayed the same. In both cases, the engine reads out a sequence of moves, looks at the resulting board position, and tries to estimate how good that position is. It does this over and over, and ultimately picks the move that leads to the most favorable position. Within this framework, you have two options for getting stronger. One is to get faster, so you can evaluate more positions in the same time, thereby covering more of your options. Alternatively, you can get more efficient, so you spend your time on the more important sequences. To oversimplify a little, Deep Blue and its successors took the first approach, while AlphaGo took the second.
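That read-out-and-evaluate loop is essentially minimax search. Here is a minimal depth-limited sketch over a deliberately silly toy game (positions are lists of numbers, the evaluation is just their sum); `evaluate`, `legal_moves`, and `apply_move` are hypothetical stand-ins, not any engine's real interface:

```python
def evaluate(position):
    # Toy evaluation: the sum of the slots stands in for
    # "how good is this position for the maximizing player?"
    return sum(position)

def legal_moves(position):
    # Toy move generator: each move bumps one slot up or down.
    for i in range(len(position)):
        for delta in (-1, 1):
            yield (i, delta)

def apply_move(position, move):
    i, delta = move
    new_pos = list(position)
    new_pos[i] += delta
    return new_pos

def search(position, depth, maximizing=True):
    """Read out sequences `depth` plies deep, score the leaf
    positions, and back up the best score (plain minimax)."""
    if depth == 0:
        return evaluate(position), None
    best_score, best_move = None, None
    for move in legal_moves(position):
        child = apply_move(position, move)
        score, _ = search(child, depth - 1, not maximizing)
        if best_score is None or (maximizing and score > best_score) \
                or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

score, move = search([0, 0, 0], depth=2)
print(score, move)  # prints: 0 (0, 1)
```

The two routes to strength map directly onto this sketch: "get faster" means calling `search` on more children per second; "get more efficient" means being smarter about which moves `legal_moves` yields and in what order, which is where AlphaGo's learned models come in.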

Both of these accomplishments are pretty amazing and I think there’s a lot worth studying in how modern chess engines work. But the neat thing about the AlphaGo-style tree search is that it’s more similar to how expert humans play. Top players can read out sequences better than amateurs, but you’re talking about 2 to 3x faster, not 1000x faster. But top humans are much better than amateurs at judging which moves are worth looking at.

 

Is your book all fun and games, or does building a Go bot teach any skills developers can use at work?

One thing we tried to cover is how to integrate deep learning into a real application. This includes things like translating regular data structures to and from mathematical representations; saving and loading deep learning models from disk; using deep learning models inside a classical tree search algorithm; etc. I think there are very few resources on these practical aspects, so we hope this helps bridge that gap for developers!
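The first of those topics, translating data structures to and from mathematical representations, looks roughly like this. The dict-based board and the two-plane layout below are my own illustrative choices, not the book's actual encoder:

```python
import numpy as np

# A sketch of encoding a Go-like board position as numeric feature
# planes, plus the inverse decoding. The board is a plain dict mapping
# (row, col) -> 'b' or 'w'; the encoding is one plane of black stones
# and one plane of white stones.

BOARD_SIZE = 5

def encode(board):
    """board: dict mapping (row, col) -> 'b' or 'w'."""
    planes = np.zeros((2, BOARD_SIZE, BOARD_SIZE), dtype=np.float32)
    for (row, col), color in board.items():
        plane = 0 if color == 'b' else 1
        planes[plane, row, col] = 1.0
    return planes

def decode(planes):
    """Invert encode(): recover the dict representation."""
    board = {}
    for plane, color in ((0, 'b'), (1, 'w')):
        for row, col in zip(*np.nonzero(planes[plane])):
            board[(int(row), int(col))] = color
    return board

board = {(0, 0): 'b', (2, 3): 'w'}
planes = encode(board)
print(planes.shape)             # (2, 5, 5)
print(decode(planes) == board)  # True
```

Having a clean, invertible encoder like this is what lets the rest of the application stay in ordinary data structures while the model sees only tensors.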