Google's artificial intelligence just won a 2,500-year-old board game

The Chinese game of Go, considered the most complex board game in the world, may soon have a non-human world champion.

Last October, the world of artificial intelligence achieved a milestone some 20 years in the making. Behind closed doors at Google's London HQ, the reigning three-time European champion of the Chinese game Go sat down to compete against a cloud-based computer called AlphaGo.

This event was significant for a variety of reasons. Go, an ancient game more than 2,500 years old, is considered by many to be the most complex game in the world due to the sheer number of possible variations in a single match. While artificial intelligence, or AI, has managed to conquer games like chess and checkers against human opponents, Go has proven to be an elusive prize.

Just how difficult is it? According to Google DeepMind's Demis Hassabis, there are more moves in Go "than the number of atoms in the universe." To create a program that could beat a professional Go player like champion Fan Hui would be a massive achievement. As a result, companies like Google, Facebook, and Microsoft have all been vying for the honor. Last fall, after years of developing AlphaGo, Google gave it a shot against Fan Hui.
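A rough back-of-envelope calculation shows why Go dwarfs chess. The figures below are common estimates, not numbers from the article: roughly 250 legal moves per turn over a game of about 150 moves for Go, versus about 35 moves over 80 plies for chess, compared against the oft-cited ~10^80 atoms in the observable universe.

```python
# Back-of-envelope comparison of game-tree sizes (assumed figures,
# not taken from the article). Work in log10 to avoid huge numbers.
from math import log10

go_positions = 150 * log10(250)      # log10 of 250^150, i.e. ~10^360
chess_positions = 80 * log10(35)     # log10 of 35^80, i.e. ~10^124
atoms_in_universe = 80               # a common estimate: ~10^80 atoms

print(f"Go game tree:    ~10^{go_positions:.0f}")
print(f"Chess game tree: ~10^{chess_positions:.0f}")
print(f"Atoms:           ~10^{atoms_in_universe}")
```

Even on these rough numbers, Go's game tree is hundreds of orders of magnitude beyond both chess and the atom count, which is why brute-force search alone was never going to work.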

The result? AlphaGo won all five games.

Three-time European Go champion Fan Hui expresses frustration after losing yet another game to AlphaGo. (Photo: Google)

In a post announcing the milestone this week, Google explains the neural-network strategy it devised to make its program think like a human player.

"We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time (the previous record before AlphaGo was 44 percent). But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning."

You read that right: AlphaGo actually learned how to become a better player.
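The trial-and-error self-play Google describes can be illustrated with a toy example. This is only a sketch of the general idea of reinforcement learning through self-play, nothing like AlphaGo's actual networks: a program learns the simple stick game Nim (players alternately take 1-3 sticks; whoever takes the last stick wins) purely by playing against itself and nudging its move values toward whatever outcome followed. All the game rules and constants here are assumptions chosen for the demo.

```python
# Toy reinforcement learning by self-play: tabular value learning for Nim.
# The program starts knowing nothing and improves only by playing itself.
import random

N_STICKS = 10          # sticks on the table at the start
MOVES = (1, 2, 3)      # a player may take 1, 2 or 3 sticks
Q = {}                 # Q[(sticks_left, move)] -> learned value of that move
ALPHA, EPS = 0.5, 0.1  # learning rate and exploration rate

def legal(sticks):
    return [m for m in MOVES if m <= sticks]

def choose(sticks, explore=True):
    """Pick a move: mostly the best known one, occasionally a random trial."""
    moves = legal(sticks)
    if explore and random.random() < EPS:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((sticks, m), 0.0))

def play_training_game():
    history = []                      # (sticks, move) per ply, players alternating
    sticks = N_STICKS
    while sticks > 0:
        m = choose(sticks)
        history.append((sticks, m))
        sticks -= m
    # The player who took the last stick wins. Walk the game backwards,
    # crediting the winner's moves (+1) and penalizing the loser's (-1).
    reward = 1.0
    for state_move in reversed(history):
        q = Q.get(state_move, 0.0)
        Q[state_move] = q + ALPHA * (reward - q)
        reward = -reward              # alternate players each ply

random.seed(0)
for _ in range(20_000):
    play_training_game()

# After self-play, the greedy policy should have discovered winning endgame
# moves on its own, e.g. taking all remaining sticks when 3 or fewer are left.
for sticks in (1, 2, 3, 10):
    print(sticks, "->", choose(sticks, explore=False))
```

The key point mirrors the quote above: nobody tells the program which moves are good. It discovers strategy purely from the win/loss signal at the end of each self-played game, which is trial-and-error reinforcement learning in miniature.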

According to Google, the next big test will come in March, when AlphaGo competes head-to-head against Lee Sedol, the reigning world champion of Go for the past decade.

Beyond games, the search giant plans to use the lessons learned from playing Go and apply them to data analysis on real problems "from climate modeling to complex disease analysis."

Michael d'Estries (@michaeldestries) covers science, technology, art, and the beautiful, unusual corners of our incredible world.
