Go is a two-player board game invented in ancient China around three thousand years ago. The game is famous for its complexity and for the staggering number of possible moves. In 1997, IBM’s Deep Blue beat the reigning world chess champion, Garry Kasparov, but Go aficionados claimed that however intelligent a machine might be, it would never beat the best human players at Go. That assertion has now been disproved: in March 2016, AlphaGo, an AI developed by Google DeepMind, beat Lee Sedol, one of the top-ranked Go players in the world, in a 4-1 landslide.

Humans have explored the strategies of Go for over three thousand years, so how could a machine, in existence for only a year or so, so easily beat a human? The surprising yet fascinating secret is that it taught itself how to play Go, both by studying games between human players and by playing against itself. In chess, computers can rely on brute force, searching through enormous numbers of possible continuations many moves ahead. But in Go, because of the sheer complexity of the game, no computational power is great enough (yet) to “crack” the game this way.

Instead of programming in every possible scenario, computer scientists took a radically different approach to the problem: they used a recent technique in artificial intelligence, “deep learning,” to give the machine the ability to self-evolve through training.

It is very much like how a child would learn Go: he or she would start by learning the basic set of rules, and then improve by competing against other players. Since the machine never gets tired of training and never has to rest, it can learn at an incredibly rapid pace. In a year or so, the machine can play hundreds of millions of games.
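The idea of improving purely by playing against oneself can be sketched in a few lines of code. The toy below is not AlphaGo's actual algorithm (which combines deep neural networks with tree search); it is a hypothetical miniature using the simple game of Nim, where players alternately take 1–3 stones from a heap and whoever takes the last stone wins. Starting from zero knowledge, the program plays itself thousands of times and gradually rediscovers the optimal strategy: always leave your opponent a multiple of 4 stones.

```python
import random

def train(heap_size=12, episodes=30000, alpha=0.1, epsilon=0.2, seed=0):
    """Learn Nim purely by self-play (illustrative parameters only)."""
    rng = random.Random(seed)
    Q = {}  # Q[(stones_left, stones_taken)] -> estimated value of that move

    for _ in range(episodes):
        stones, history = heap_size, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < epsilon:  # occasionally try a random move
                move = rng.choice(moves)
            else:                       # otherwise play the best move so far
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move

        # The player who took the last stone won. Walk the game backwards,
        # nudging the winner's moves toward +1 and the loser's toward -1.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

Q = train()
# After self-play training, the agent leaves multiples of 4:
print(best_move(Q, 7))  # takes 3, leaving 4
print(best_move(Q, 5))  # takes 1, leaving 4
```

No one told the program about the "multiple of 4" rule; it emerges from nothing but wins, losses, and repetition, which is the essence of learning through self-play.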

This novel approach to creating intelligent machines already has wide applications in the real world. Take the digital assistants on your phone, for example. Instead of matching sounds to words one-to-one, the software starts from a loose set of rules and steadily improves its recognition as it takes in more user data. Self-driving cars are based on the same principle: the cars “learn” from the behavior of real drivers and gradually become better at driving. You may also be surprised at the accuracy of friend recommendations on Facebook. The algorithms extract key characteristics of your friends, your friends’ friends, and your interactions with them, and predict whom you may know.
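At its simplest, that last idea comes down to counting shared connections. The sketch below is a hypothetical illustration, not Facebook's actual algorithm (which uses many more signals): it ranks people you are not yet friends with by how many friends you already have in common.

```python
from collections import Counter

def suggest_friends(graph, user, top_n=3):
    """Rank non-friends of `user` by number of mutual friends."""
    friends = graph.get(user, set())
    scores = Counter()
    for friend in friends:
        for candidate in graph.get(friend, set()):
            # Skip the user themselves and people already befriended
            if candidate != user and candidate not in friends:
                scores[candidate] += 1
    return [person for person, _ in scores.most_common(top_n)]

# A tiny made-up social network
network = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "carol", "dave"},
    "carol": {"alice", "bob", "dave"},
    "dave":  {"bob", "carol", "eve"},
    "eve":   {"dave"},
}
print(suggest_friends(network, "alice"))  # ['dave']
```

Alice shares two friends with Dave (Bob and Carol) but none with Eve, so Dave tops the list. Real recommendation systems layer learned weights over many such signals, but the core "friends of friends" intuition is the same.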

Although a self-evolving and self-learning artificial intelligence may sound scary to many, its applications are in fact very limited. At this point, there is no need to worry about robots taking over the world. Current AI technology is remarkably good at executing specific tasks with perfect information, but it does not do well at tasks that involve uncertain variables and long-term planning. Nor do AIs possess any consciousness. In other words, the game of Go in an AI’s eyes is purely mathematical and probabilistic; the AI has no real understanding of the concepts of either “strategy” or “winning.”

In early January this year, a Go player named “Master” mysteriously appeared on major online Go-playing websites. It racked up an impressive sixty-win undefeated record against top-ranked players, including the current No. 1 player in the world, Ke Jie of mainland China. People speculated that it could only be an AI, and one far stronger than AlphaGo.

The name of this AI is certainly ominous: will AI become our “master” in the future?