Computers school us on a 3,000 year old game

As technology advances, it's slowly becoming very clear that machines are getting better than us at more and more things. Take AlphaGo, for instance, an AI developed by Google's DeepMind to play (and win at) the game of Go. Now you're probably thinking, "AIs have been beating us at chess for years, so what's so special about an AI that plays Go?" Well, first I should probably explain what Go is (you can skip this part if you know the rules).

What is GO?

Go is a board game that originated in China more than 2,500 years ago. It is played with game pieces referred to as "stones": one player uses black stones and the other uses white (two players in total). Each turn you must place one stone on a vacant intersection (a "point") of a 19×19 grid; once placed, stones cannot move. The aim of the game is to control more of the board than your opponent. Along the way, if you surround a group of one or more opposing stones on all orthogonally adjacent points with your own stones, the surrounded stones are removed from the board.

An example of White removing Black's stones from the game by placing a stone on the point marked 'A'. Image credit: Frej Bjon via Wikimedia

(Note that this is only a brief overview; the game itself is quite complicated, so if you wish to learn more, try this link. Also keep in mind there are many different variants of the game.)
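The capture rule above is simple enough to sketch in code. Here is a minimal, illustrative Python version (the helper name and board encoding are my own, not from any Go library): a group of stones is captured when it has no "liberties", i.e. no empty points adjacent to it.

```python
# 0 = empty, 1 = black, 2 = white. A minimal sketch of the capture rule:
# flood-fill a group of same-coloured stones and collect its liberties
# (adjacent empty points). A group with zero liberties is captured.

def group_and_liberties(board, row, col):
    """Return the connected group containing (row, col) and its liberties."""
    size = len(board)
    color = board[row][col]
    group, liberties, stack = set(), set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == 0:
                    liberties.add((nr, nc))
                elif board[nr][nc] == color:
                    stack.append((nr, nc))
    return group, liberties

# A single black stone surrounded on all four sides by white stones has
# no liberties, so it would be removed from the board.
board = [[0, 0, 0, 0],
         [0, 2, 0, 0],
         [2, 1, 2, 0],
         [0, 2, 0, 0]]
group, libs = group_and_liberties(board, 2, 1)
print(len(group), len(libs))  # → 1 0 (one stone, zero liberties: captured)
```

A real Go engine would layer more rules on top of this (suicide, ko), but liberty counting is the core of the capture mechanic described above.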

Why do we care?

Go is a particularly difficult game for AI due to the incredibly large number of moves available each turn; on the first turn there are 19×19 = 361 different choices! Chess, for comparison, offers only 20 possible first moves. As most game-playing AIs rely on the ability to "search" through future moves to determine the best next move, the large number of options makes it very difficult to find a good move in a reasonable amount of time. Just looking two moves ahead from the start (one from each player) gives 361 × 360 = 129,960 different possibilities to consider!
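You can see how quickly this explodes with a few lines of arithmetic (ignoring captures, which free up points again, this slightly overcounts deep positions; it is only meant to show the trend):

```python
# Opening branching in Go: 361 intersections for the first stone, one
# fewer vacant point for each move after that.
points = 19 * 19
print(points)                      # → 361 choices on move one
print(points * (points - 1))       # → 129960 two-move sequences

# Sequences after n moves (each move takes one more point off the board):
def opening_sequences(n):
    total = 1
    for i in range(n):
        total *= points - i
    return total

print(opening_sequences(4))        # already over 16 billion sequences
```

Even four moves in, a brute-force search is hopeless, which is why AlphaGo needs something smarter than exhaustive look-ahead.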

So how does it work?

At its core, AlphaGo uses a concept called machine learning, more specifically an artificial neural network. These networks work somewhat like your brain: essentially a large network of "nodes" (like a network of neurons). The current state of the board is fed into this network through specially designated input nodes, and a decision is read from the output nodes. The nodes linking the inputs to the outputs each perform a simple operation on the data they receive, for example taking a weighted sum of their inputs (something simple). Through a large number of these simple operations, the input data is transformed into a decision about which possibilities to explore when looking ahead in the game.

A small example network, where the green nodes are the input, the yellow nodes are the output, and the blue nodes are intermediate nodes linking the input to the output. Image credit: Mysid via Wikimedia
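To make the "simple operations" concrete, here is a tiny feed-forward network in plain Python. The weights are arbitrary toy values of my own (AlphaGo's real networks are vastly larger and learned from data); each node just takes a weighted sum of its inputs and squashes the result between 0 and 1:

```python
import math

def layer(inputs, weights, biases):
    """Each node: weighted sum of inputs + bias, squashed to (0, 1)."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# Toy board features in (e.g. stone colours at three points), a single
# "how promising is this position?" score out.
inputs = [1.0, 0.0, -1.0]
hidden = layer(inputs, [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1])
output = layer(hidden, [[1.2, -0.7]], [0.0])
print(output)  # a single score between 0 and 1
```

Chaining many such layers, with weights tuned by training rather than chosen by hand, is what lets the network turn a raw board position into a useful judgement.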

AlphaGo develops and improves this network itself, both by studying recorded human games and by playing huge numbers of games against itself. In short, it learns by watching and playing.

Just how good is it?

In May 2017, at the Future of Go Summit, AlphaGo played Ke Jie, then ranked first in the world at Go. He was defeated 3–0, a perfect victory. In the first game Ke Jie did finish with a higher raw score, but Chinese rules require Black to score above a certain threshold (to offset the advantage of moving first), and he fell short of it.

Ke Jie was very impressed, stating that "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong… I would go as far as to say not a single human has touched the edge of the truth of Go." He even went on to study the games AlphaGo played and incorporate them into his own style!

AlphaGo is just another example of a problem once thought too difficult for a computer, now solved with some very interesting and slightly unexpected results. Neural networks are a big area of study in AI at the moment and are producing some fascinating programs (such as another Google project, DeepDream, which creates works of art!). It will be very interesting to see what comes next.


6 Responses to “Computers school us on a 3,000 year old game”

  1. codya says:

    As for that Dota AI, that’s really cool. It probably makes use of a neural network as well

  2. codya says:

    Well, so far no AI created has revolted against its creator, so I think we are safe…. for now

  3. Joshua Boyte says:

    Hey, this is super fascinating. I’ve also heard of AI bots beating professionals in Dota, an online game with millions of options for the AI to make every second. Amazing how far AI has come!

  4. Matthew Graham says:

    Does this mean we should be worried about Skynet in the near future? 🙂

  5. codya says:

    Thanks for the comment! What do you mean by that exactly?

  6. Hugh Rayner says:

    Sounds awesome. Any idea of the differences in playstyle? That would be really interesting.