One independent expert called it a breakthrough for AI with potentially far-reaching consequences.
The achievement was announced to coincide with the publication of a paper in the scientific journal Nature detailing the techniques used.
Earlier on Wednesday, DeepMind's chief executive, Demis Hassabis, said its AlphaGo software followed a three-stage process, which began with it analysing 30 million moves from games played by humans.
"It starts off by looking at professional games," he said.
"It learns what patterns generally occur - what sort are good and what sort are bad. If you like, that's the part of the program that learns the intuitive part of Go.
"It now plays different versions of itself millions and millions of times, and each time it gets incrementally better. It learns from its mistakes.
"The final step is known as the Monte Carlo Tree Search, which is really the planning stage.
"Now it has all the intuitive knowledge about which positions are good in Go, it can make long-range plans."
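The planning stage Hassabis mentions, Monte Carlo Tree Search, can be sketched in miniature. The toy game, function names and parameters below are illustrative assumptions, not DeepMind's code: AlphaGo guides its search with learned neural networks, whereas this bare-bones sketch falls back on plain random playouts to show only the search loop itself (selection, expansion, simulation, backpropagation).

```python
import math
import random

# Illustrative sketch only, not AlphaGo's implementation: plain MCTS on a
# toy take-away game (remove 1-3 stones; whoever takes the last stone wins).
# AlphaGo replaces the random rollouts below with learned value and policy
# networks; the surrounding search loop is the part sketched here.

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones remaining (the game state)
        self.parent = parent
        self.move = move          # move that led into this node
        self.children = []
        self.wins = 0.0           # wins for the player who moved INTO this node
        self.visits = 0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.stones and m not in tried]

def ucb1(child, parent_visits, c=1.4):
    # Upper Confidence Bound: trade off exploiting strong moves
    # against exploring rarely visited ones.
    return child.wins / child.visits + c * math.sqrt(
        math.log(parent_visits) / child.visits)

def rollout(stones):
    # Random playout; True if the player to move at `stones` wins.
    mover = True
    while True:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        if stones == 0:
            return mover
        mover = not mover

def mcts(stones, iterations=3000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried_moves() and node.children:
            node = max(node.children, key=lambda c: ucb1(c, node.visits))
        # 2. Expansion: add one untried child, if any remain.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: did the player to move at this state win?
        if node.stones == 0:
            win_for_mover = False   # the previous player took the last stone
        else:
            win_for_mover = rollout(node.stones)
        # 4. Backpropagation: flip perspective at each level up the tree.
        while node is not None:
            node.visits += 1
            if not win_for_mover:
                node.wins += 1      # a loss for the side to move is a win
            win_for_mover = not win_for_mover  # for the side that moved in
            node = node.parent
    # Play the most-visited move, the standard MCTS decision rule.
    return max(root.children, key=lambda c: c.visits).move
```

In this toy game any multiple of four stones is a lost position, so from five stones enough iterations steer the search toward taking one stone; self-play training, the second of Hassabis's stages, would amount to repeatedly pitting such a searcher against itself and learning from the results.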
Tested against rival Go-playing AIs, Google's system won 499 out of 500 matches. And last October, DeepMind invited Fan Hui, Europe's top player, to its London office for a series of games, each of which the AI won.
"Many of the best programmers in the world were asked last year how long it would take for a program to beat a top professional, and most of them were predicting 10-plus years," Mr Hassabis said.
"The reason it was quicker than people expected was the pace of the innovation going on with the underlying algorithms and also how much more potential you can get by combining different algorithms together."