AlphaGo’s unusual moves prove its AI prowess, experts say
AlphaGo is seen as higher on the scale in AI than Deep Blue
By John Ribeiro
Playing against a top Go player, Google DeepMind’s AlphaGo artificial-intelligence program has puzzled commentators with moves that are often described as “beautiful,” but do not fit into the usual human style of play.
Artificial-intelligence experts think these moves reflect a key AI strength of AlphaGo, its ability to learn from its experience. Such moves cannot be produced by just incorporating human knowledge, said Doina Precup, associate professor in the School of Computer Science at McGill University in Quebec, in an email interview.
“AlphaGo represents not only a machine that thinks, but one that can learn and strategize,” agreed Howard Yu, professor of strategic management and innovation at IMD business school.
AlphaGo won three consecutive games against Lee Se-dol last week in Seoul, securing the match and US$1 million in prize money, which Google plans to donate to charities. The program, however, lost the fourth game on Sunday after making a mistake. Lee has warned that the program has some weaknesses.
The program started as a research project about two years ago to test whether a neural network using deep learning can understand and play Go, said David Silver, one of the key researchers on the AlphaGo project. Google acquired British AI company DeepMind in 2014.
The AI program uses its ‘policy network,’ a model of play by human experts in different situations, to suggest probable human moves as a guide, but may make a move of its own when its ‘value’ neural network evaluates the possible moves at greater depth.
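The interplay between the two networks can be sketched in miniature. This is a toy illustration with invented move names and probabilities, not DeepMind’s actual architecture: a policy model proposes the moves a human would consider, and a value model decides among them.

```python
# Toy sketch: a policy model scores moves by how likely a human expert is to
# play them; a value model estimates the chance of winning after each move.
# The final choice follows the value estimate, not the human prior.
policy_prior = {"A": 0.6, "B": 0.3, "C": 0.1}       # invented probabilities
value_estimate = {"A": 0.52, "B": 0.58, "C": 0.49}  # invented win rates

# Keep only moves a human would plausibly consider (pruning the search)...
plausible = [m for m, p in policy_prior.items() if p >= 0.1]
# ...then pick the one the value model rates highest.
best = max(plausible, key=value_estimate.get)

print(best)  # "B": not the most human-like move, but the best-valued one
```

The point of the sketch is that the chosen move “B” is not the one humans play most often, which is how such a system can produce moves commentators find unfamiliar.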
Unlike humans, AlphaGo aims to maximize its probability of winning rather than its margin of victory, which helps explain some of its moves, said DeepMind CEO Demis Hassabis.
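The difference between the two objectives can be shown with a toy comparison; the candidate moves and numbers below are invented for illustration, not taken from any real game.

```python
# Hypothetical illustration: two candidate moves, each with an estimated
# win probability and an expected territory margin (in points).
# A margin-maximizing player picks the biggest expected lead; a
# win-probability player prefers the safest path to victory.
candidates = {
    "aggressive invasion": (0.70, 15.0),  # (win probability, point margin)
    "solid defense":       (0.90, 1.5),
}

by_margin = max(candidates, key=lambda m: candidates[m][1])
by_win_prob = max(candidates, key=lambda m: candidates[m][0])

print(by_margin)    # aggressive invasion
print(by_win_prob)  # solid defense
```

A win-probability player will happily give up points to reduce risk, which is why some of AlphaGo’s moves look slack to human eyes while still being sound.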
Go players take turns placing black or white pieces, called “stones,” on a 19-by-19 grid, aiming to capture the opponent’s stones by surrounding them and to enclose more empty space as territory.
It was expected to take many more years for AI systems to beat top human players at Go, which is seen as more complex than other popular strategy games such as chess and has a far higher “branching factor,” or average number of possible moves per turn, Precup said.
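To see why the branching factor matters, compare rough game-tree sizes. The figures below — roughly 35 legal moves per turn over an 80-move chess game, versus roughly 250 moves per turn over a 150-move Go game — are widely cited approximations, not exact counts.

```python
import math

# Rough game-tree size is b**d for average branching factor b and depth d;
# compare orders of magnitude (log10) rather than the astronomical raw counts.
chess_b, chess_d = 35, 80   # widely cited approximations for chess
go_b, go_d = 250, 150       # and for Go

chess_magnitude = chess_d * math.log10(chess_b)  # log10(35**80)
go_magnitude = go_d * math.log10(go_b)           # log10(250**150)

print(round(chess_magnitude))  # ~124: close to the often-quoted ~10^120
print(round(go_magnitude))     # ~360: vastly beyond exhaustive search
```

The gap of more than 200 orders of magnitude is why Deep Blue-style brute-force search, which sufficed for chess, was never expected to crack Go.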
“The field of AI is typically benchmarked using complex games and problems, in this case mastering the game of Go,” said Babak Hodjat, cofounder and chief scientist at AI company Sentient Technologies. The AlphaGo win marks “a significant high point” in the complexity of problems that can now be tackled using machine learning, Hodjat said via email.
Go involves high-level strategic choices, such as “which battle do I want to play” or “which area of the board to control”, and several battles might be running in parallel, according to Precup. “This kind of reasoning is thought to be a hallmark of human thinking,” wrote Precup. There were earlier attempts at Go programs but these were too weak compared to human players, she added.
AlphaGo follows in the footsteps of the chess-playing Deep Blue computer, which beat Garry Kasparov in 1997. Another IBM computer, Watson, won the Jeopardy quiz show in 2011.
The DeepMind program is very different from Deep Blue, as the IBM program relied mostly on searching through a very large space of positions, guided by heuristics derived from human experts, Precup said. AlphaGo also has a powerful search component, but it learns how to play the game on its own rather than being “told” what people do, she added.
Despite all its engineering ingenuity, Deep Blue was designed to achieve a single purpose: winning a chess game, Yu of IMD said. “All of the time and energy that goes into the program wasn’t useful for solving any other problems,” he added.
Google is planning to test its AI technology in newer applications beyond gaming, such as healthcare and scientific research. “The core deep learning technology is quite good for any time series pattern classification problem,” Hodjat said. His company has used similar technology for its Sentient Aware e-commerce visual intelligence product.
The algorithms in AlphaGo are general-purpose and have been deployed in many situations, Precup said. The program relies on two kinds of learning, reinforcement learning and deep networks, both of which have been used in many applications such as human prosthetics and automated speech recognition. “One may need to tune the algorithms a bit but they are not dependent on the problem domain,” she said.
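The reinforcement-learning idea Precup describes — improving from reward feedback alone, with no human-provided strategy — can be reduced to a minimal sketch. The actions, rewards, and learning rate below are invented for illustration.

```python
# Minimal reinforcement-learning sketch: an agent nudges its action-value
# estimates toward observed rewards (a simple bandit-style update), learning
# which action is better without ever being "told" the answer.
alpha = 0.1  # learning rate

q = {"left": 0.0, "right": 0.0}  # initial action-value estimates
experience = [("right", 1.0), ("left", 0.0), ("right", 1.0)]  # (action, reward)

for action, reward in experience:
    # Move the estimate a fraction of the way toward the observed reward.
    q[action] += alpha * (reward - q[action])

best_action = max(q, key=q.get)
print(best_action)  # "right"
```

The same update principle, scaled up with deep networks as value estimators and self-play as the source of experience, is what lets a program improve beyond the human games it started from.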
A general purpose algorithm, capable of self-learning and mimicking reinforcement learning in humans, opens “a future with new possibilities beyond the realm of a human mind,” Yu said.
AlphaGo, however, falls short in the ability to understand human natural languages, an area where IBM scores, according to Yu. “By digesting millions of pages of medical journals and patient data, Watson provides recommendations—from additional blood tests to the latest clinical trials available—to doctors and physicians,” he said.
“If one day, the self-learning property of AlphaGo can be combined with Watson’s understanding of human language and turned into a general-purpose algorithm, the human advantage will surely reach its final limit,” he added.
Concern about the loss of the human advantage figured in the background during the contest between Lee and AlphaGo, with many commenting online that the South Korean was fighting on behalf of mankind in an epic battle with a computer.
But experts think that a victory in Go, a deterministic, perfect-information game with set rules, doesn’t mean the time has arrived when machines will overtake humans. “AI is quite good now in many cognitive applications that used to be the exclusive domain of humans,” Hodjat said. But it is still years away from achieving the broad and general abstracting power of human intelligence, he added.
“One thing that we do not have yet are ‘general-purpose’ AI machines, which use the same internal brain to do many different tasks, like play Go, understand text and play violin, for example,” said Precup. This is the next frontier, but we are still quite far from it, she added.
Microsoft said on Sunday that it was working on projects in the area of general intelligence. AI researchers have been able to develop tools to recognize words, for example, but have not been able to combine those skills as effortlessly as humans do, it added.