Computer Go

This article is about the study of Go (game) in artificial intelligence. For the computer programming language called Go, see Go (programming language).

Not to be confused with Go software.

Field of artificial intelligence dedicated to creating a computer program that plays Go

Computer Go is the field of artificial intelligence (AI) dedicated to creating a computer program that plays the traditional board game Go. The game of Go has been a fertile subject of artificial intelligence research for decades, culminating in 2017 with AlphaGo Master winning three of three matches against Ke Jie, who at the time had continuously held the world No. 1 ranking for two years.[1][2]


Go is a complex board game that requires intuition and creative and strategic thinking.[3][4] It has long been considered a difficult challenge in the field of artificial intelligence (AI) and is considerably more difficult[5] to solve than chess. Many in the field of artificial intelligence consider Go to require more elements that mimic human thought than chess.[6] Mathematician I. J. Good wrote in 1965:[7]

Go on a computer? – In order to programme a computer to play a reasonable game of Go, rather than merely a legal game – it is necessary to formalise the principles of good strategy, or to design a learning programme. The principles are more qualitative and mysterious than in chess, and depend more on judgment. So I think it will be even more difficult to programme a computer to play a reasonable game of Go than of chess.

Prior to 2015,[8] the best Go programs only managed to reach amateur dan level.[9] On the small 9×9 board, the computer fared better, and some programs managed to win a fraction of their 9×9 games against professional players. Prior to AlphaGo, some researchers had claimed that computers would never defeat top humans at Go.[10]

Early decades[edit]

The first Go program was written by Albert Lindsey Zobrist in 1968 as part of his thesis on pattern recognition.[11] It introduced an influence function to estimate territory and Zobrist hashing to detect ko.

In April 1981, Jonathan K Millen published an article in Byte discussing Wally, a Go program with a 15x15 board that fit within the KIM-1 microcomputer's 1K RAM.[12] Bruce F. Webster published an article in the magazine in November 1984 discussing a Go program he had written for the Apple Macintosh, including the MacFORTH source.[13]

In 1998, very strong players were able to beat computer programs while giving handicaps of 25–30 stones, an enormous handicap that few human players would ever take. There was a case in the 1994 World Computer Go Championship where the winning program, Go Intellect, lost all three games against the youth players while receiving a 15-stone handicap.[14] In general, players who understood and exploited a program's weaknesses could win with much larger handicaps than typical players.[15]

21st century[edit]

Developments in Monte Carlo tree search and machine learning brought the best programs to high dan level on the small 9x9 board. In 2009, the first such programs appeared which could reach and hold low dan-level ranks on the KGS Go Server on the 19x19 board as well.

In 2010, at the 2010 European Go Congress in Finland, MogoTW played 19x19 Go against Catalin Țăranu (5p). MogoTW received a seven-stone handicap and won.[16]

In 2011, Zen reached 5 dan on the server KGS, playing games of 15 seconds per move. The account which reached that rank uses a cluster version of Zen running on a 26-core machine.[17]

In 2012, Zen beat Takemiya Masaki (9p) by 11 points at five stones handicap, followed by a 20-point win at four stones handicap.[18]

In 2013, Crazy Stone beat Yoshio Ishida (9p) in a 19×19 game at four stones handicap.[19]

The 2014 Codecentric Go Challenge, a best-of-five match in an even 19x19 game, was played between Crazy Stone and Franz-Jozef Dickhut (6d). No stronger player had ever before agreed to play a serious competition against a Go program on even terms. Franz-Jozef Dickhut won, though Crazy Stone won the first game by 1.5 points.[20]

2015 onwards: The deep learning era[edit]

Further information: AlphaGo versus Fan Hui, AlphaGo versus Lee Sedol, and AlphaGo versus Ke Jie

In October 2015, Google DeepMind program AlphaGo beat Fan Hui, the European Go champion, five times out of five in tournament conditions.[21]

In March 2016, AlphaGo beat Lee Sedol in the first three of five matches.[22] This was the first time that a 9-dan master had played a professional game against a computer without handicap.[23] Lee won the fourth match, describing his win as "invaluable".[24] AlphaGo won the final match two days later.[25][26]

In May 2017, AlphaGo beat Ke Jie, who at the time was ranked top in the world,[27][28] in a three-game match during the Future of Go Summit.[29]

In October 2017, DeepMind revealed a new version of AlphaGo, trained only through self play, that had surpassed all previous versions, beating the Ke Jie version in 89 out of 100 games.[30]

Since the basic principles of AlphaGo had been published in the journal Nature, other teams were able to produce high-level programs. By 2017, both Zen and Tencent's project Fine Art were capable of defeating very high-level professionals some of the time and the open source Leela Zero engine was released.

Obstacles to high-level performance[edit]

For a long time, it was a widely held opinion that computer Go posed a problem fundamentally different from computer chess. It was believed that methods relying on fast global search with relatively little domain knowledge would not be effective against human experts. Therefore, a large part of the computer Go development effort was during these times focused on ways of representing human-like expert knowledge and combining this with local search to answer questions of a tactical nature. The result was programs that handled many situations well but which had very pronounced weaknesses compared to their overall handling of the game. Also, these classical programs gained almost nothing from increases in available computing power per se, and progress in the field was generally slow.

A few researchers grasped the potential of probabilistic methods and predicted that they would come to dominate computer game-playing,[31] but many others considered a strong Go-playing program something that could be achieved only in the far future, as a result of fundamental advances in general artificial intelligence technology. The advent of programs based on Monte Carlo search (started in 2006) changed this situation in many ways with the first 9-dan professional Go players being defeated in 2013 by multicore computers, albeit with four-stone handicap.

Size of board[edit]

The large board (19×19, 361 intersections) is often noted as one of the primary reasons why a strong program is hard to create. The large board size prevents an alpha-beta searcher from achieving deep look-ahead without significant search extensions or pruning heuristics.

In 2002, a computer program called MIGOS (MIni GO Solver) completely solved the game of Go for the 5×5 board. Black wins, taking the whole board.[32]

Number of move options[edit]

Continuing the comparison to chess, Go moves are not as limited by the rules of the game. For the first move in chess, the player has twenty choices. Go players begin with a choice of 55 distinct legal moves, accounting for symmetry. This number rises quickly as symmetry is broken, and soon almost all of the 361 points of the board must be evaluated. Some moves are much more popular than others and some are almost never played, but all are possible.
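The figure of 55 symmetry-distinct opening moves can be checked directly by canonicalizing each of the 361 points under the board's eight symmetries; a minimal sketch:

```python
# Count the first moves on a 19x19 board that are distinct up to the
# board's eight symmetries (four rotations and four reflections).
N = 19

def orbit(x, y, n=N):
    # the 8 dihedral images of a point: four rotations plus their transposes
    pts = []
    for _ in range(4):
        x, y = y, n - 1 - x     # rotate 90 degrees
        pts.append((x, y))
        pts.append((y, x))      # reflect the rotated point
    return pts

# represent each point by the smallest member of its orbit, then count
canonical = {min(orbit(x, y)) for x in range(N) for y in range(N)}
print(len(canonical))  # 55
```

The same count follows from Burnside's lemma: (361 + 1 + 1 + 1 + 4·19) / 8 = 55.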

Evaluation function[edit]

While a material counting evaluation is not sufficient for decent play in chess, material balance and various positional factors like pawn structure are easy to quantify.

These types of positional evaluation rules cannot efficiently be applied to Go. The value of a Go position depends on a complex analysis to determine whether or not a group is alive, which stones can be connected to one another, and heuristics around the extent to which a strong position has influence, or the extent to which a weak position can be attacked.

More than one move can be regarded as the best depending on which strategy is used. In order to choose a move, the computer must evaluate different possible outcomes and decide which is best. This is difficult due to the delicate trade-offs present in Go. For example, it may be possible to capture some enemy stones at the cost of strengthening the opponent's stones elsewhere. Whether this is a good trade or not can be a difficult decision, even for human players. The computational complexity also shows here as a move might not be immediately important, but after many moves could become highly important as other areas of the board take shape.

Combinatorial problems[edit]

Sometimes it is mentioned in this context that various difficult combinatorial problems (in fact, any NP-hard problem) can be converted to Go-like problems on a sufficiently large board; however, the same is true for other abstract board games, including chess and minesweeper, when suitably generalized to a board of arbitrary size. NP-complete problems do not tend in their general case to be easier for unaided humans than for suitably programmed computers: it is doubtful that unaided humans would be able to compete successfully against computers in solving, for example, instances of the subset sum problem.


Endgame[edit]

Given that the endgame contains fewer possible moves than the opening (fuseki) or middle game, one might suppose that it is easier to play, and thus that a computer should be able to easily tackle it. In chess, computer programs generally perform well in endgames, especially once the number of pieces is reduced to the extent that it allows taking advantage of solved endgame tablebases.

The application of surreal numbers to the endgame in Go, a general game analysis pioneered by John H. Conway, has been further developed by Elwyn R. Berlekamp and David Wolfe and outlined in their book, Mathematical Go (ISBN 978-1-56881-032-4). While not of general utility in most playing circumstances, it greatly aids the analysis of certain classes of positions.
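As a hedged illustration of the kind of analysis this enables (using standard combinatorial game theory notation, not an example taken from the book itself): a simple gote exchange can be written as a "switch".

```latex
% A local endgame where Black's move secures a local score of x and
% White's move secures y (with x > y) is the switch
\{\,x \mid y\,\}, \qquad
\text{mean value} = \frac{x + y}{2}, \qquad
\text{temperature} = \frac{x - y}{2}.
% For instance, \{3 \mid 1\} is on average worth 2 points, and whoever
% moves there first gains 1 point; comparing temperatures tells a
% player which local endgame is most urgent.
```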

Nonetheless, although elaborate study has been conducted, Go endgames have been proven to be PSPACE-hard. There are many reasons why they are so hard:

  • Even if a computer can play each local endgame area flawlessly, we cannot conclude that its plays would be flawless in regard to the entire board. Additional areas of consideration in endgames include sente and gote relationships, prioritization of different local endgames, territory counting and estimation, and so on.
  • The endgame may involve many other aspects of Go, including 'life and death', which are also known to be NP-hard.[33][34]
  • Each of the local endgame areas may affect one another. In other words, they are dynamic in nature although visually isolated. This makes it difficult to reason about for computers and humans alike. This nature leads to some complex situations like Triple Ko,[35] Quadruple Ko,[36] Molasses Ko,[37] and Moonshine Life.[38]

Thus, traditional Go algorithms can't play the Go endgame flawlessly in the sense of computing a best move directly. Strong Monte Carlo algorithms can still handle normal Go endgame situations well enough, and in general, the most complicated classes of life-and-death endgame problems are unlikely to come up in a high-level game.[39]

Order of play[edit]

Monte-Carlo based Go engines have a reputation of being much more willing than human players to play tenuki, moves elsewhere on the board, rather than continue a local fight. Directly calculating when a specific local move is required can be difficult.[40] This was often perceived as a weakness early in these programs' existence.[41] That said, this tendency has persisted in AlphaGo's playstyle with dominant results, so this may be more of a "quirk" than a "weakness."[42]

Tactical search[edit]

One of the main concerns for a Go player is which groups of stones can be kept alive and which can be captured. This general class of problems is known as life and death. The most direct strategy for calculating life and death is to perform a tree search on the moves which potentially affect the stones in question, and then to record the status of the stones at the end of the main line of play.

However, within time and memory constraints, it is not generally possible to determine with complete accuracy which moves could affect the 'life' of a group of stones. This implies that some heuristic must be applied to select which moves to consider. The net effect is that for any given program, there is a trade-off between playing speed and life and death reading abilities.
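The basic primitive behind such reading is finding a chain of stones and its liberties. A minimal flood-fill sketch (the string-based board encoding is an assumption for illustration, not any program's real representation):

```python
# Flood-fill a chain of like-coloured stones and collect its liberties,
# the basic primitive underlying tactical life-and-death search.
def chain_and_liberties(board, x, y):
    color = board[y][x]                 # 'b' or 'w'
    chain, liberties, stack = set(), set(), [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if (cx, cy) in chain:
            continue
        chain.add((cx, cy))
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if 0 <= nx < len(board[0]) and 0 <= ny < len(board):
                if board[ny][nx] == ".":
                    liberties.add((nx, ny))     # adjacent empty point
                elif board[ny][nx] == color:
                    stack.append((nx, ny))      # same-coloured neighbour
    return chain, liberties

board = [".bw.",
         ".bw.",
         "..w.",
         "...."]
stones, libs = chain_and_liberties(board, 2, 0)  # the white chain
print(len(stones), len(libs))  # 3 5
```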

With Benson's algorithm, it is possible to determine the chains which are unconditionally alive and therefore would not need to be checked in the future for safety.

State representation[edit]

An issue that all Go programs must tackle is how to represent the current state of the game. For programs that use extensive searching techniques, this representation needs to be copied and/or modified for each new hypothetical move considered. This need places the additional constraint that the representation should either be small enough to be copied quickly or flexible enough that a move can be made and undone easily.

The most direct way of representing a board is as a one- or two-dimensional array, where elements in the array represent points on the board, and can take on a value corresponding to a white stone, a black stone, or an empty intersection. Additional data is needed to store how many stones have been captured, whose turn it is, and which intersections are illegal due to the Ko rule.
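A minimal sketch of such a representation, with illustrative field names (not taken from any particular program):

```python
import copy

# A flat array of intersections plus the extra data mentioned above:
# captures, side to move, and any point made illegal by the ko rule.
EMPTY, BLACK, WHITE = 0, 1, 2

class Position:
    def __init__(self, size=19):
        self.size = size
        self.board = [EMPTY] * (size * size)   # one cell per intersection
        self.captures = {BLACK: 0, WHITE: 0}   # prisoners taken by each side
        self.to_move = BLACK
        self.ko_point = None                   # intersection barred by ko, if any

    def copy(self):
        # search engines either copy the state per hypothetical move,
        # or make and undo moves on a single board
        return copy.deepcopy(self)

pos = Position(9)
pos.board[4 * 9 + 4] = BLACK   # a stone on the centre point of a 9x9 board
child = pos.copy()             # the child is independent of the parent
print(child.board[4 * 9 + 4] == BLACK and child is not pos)  # True
```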

Most programs, however, use more than just the raw board information to evaluate positions. Data such as which stones are connected in strings, which strings are associated with each other, which groups of stones are in risk of capture and which groups of stones are effectively dead are necessary to make an accurate evaluation of the position. While this information can be extracted from just the stone positions, much of it can be computed more quickly if it is updated in an incremental, per-move basis. This incremental updating requires more information to be stored as the state of the board, which in turn can make copying the board take longer. This kind of trade-off is indicative of the problems involved in making fast computer Go programs.

An alternative method is to have a single board and make and take back moves so as to minimize the demands on computer memory and have the results of the evaluation of the board stored. This avoids having to copy the information over and over again.

System design[edit]

New approaches to problems[edit]

Historically, GOFAI (Good Old Fashioned AI) techniques have been used to approach the problem of Go AI. More recently, neural networks have been used as an alternative approach. One example of a program which uses neural networks is WinHonte.[43]

These approaches attempt to mitigate the problems of the game of Go having a high branching factor and numerous other difficulties.

Computer Go research results are being applied to other similar fields such as cognitive science, pattern recognition and machine learning.[44] Combinatorial Game Theory, a branch of applied mathematics, is a topic relevant to computer Go.[45]: 150

Design philosophies[edit]

The only choice a program needs to make is where to place its next stone. However, this decision is made difficult by the wide range of impacts a single stone can have across the entire board, and the complex interactions various stones' groups can have with each other. Various architectures have arisen for handling this problem. The most popular use minimax tree search, knowledge-based systems, Monte Carlo methods, and machine learning, each described in the sections below.

Few programs use only one of these techniques exclusively; most combine portions of each into one synthetic system.

Minimax tree search[edit]

One traditional AI technique for creating game playing software is to use a minimax tree search. This involves playing out all hypothetical moves on the board up to a certain point, then using an evaluation function to estimate the value of that position for the current player. The move which leads to the best hypothetical board is selected, and the process is repeated each turn. While tree searches have been very effective in computer chess, they have seen less success in computer Go programs. This is partly because it has traditionally been difficult to create an effective evaluation function for a Go board, and partly because the large number of possible moves each side can make leads to a high branching factor. This makes this technique very computationally expensive. Because of this, many programs which use search trees extensively can only play on the smaller 9×9 board, rather than full 19×19 ones.
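The scheme just described can be sketched generically. Here `moves`, `apply`, and `evaluate` are assumed game-specific callbacks, and the example runs on a toy two-ply game tree rather than a Go position:

```python
# Depth-limited minimax with alpha-beta cut-offs.
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply, evaluate):
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)          # leaf: score the position
    if maximizing:
        value = float("-inf")
        for m in ms:
            value = max(value, alphabeta(apply(state, m), depth - 1,
                                         alpha, beta, False, moves, apply, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                   # opponent will avoid this branch
        return value
    else:
        value = float("inf")
        for m in ms:
            value = min(value, alphabeta(apply(state, m), depth - 1,
                                         alpha, beta, True, moves, apply, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# toy game tree: internal states are tuples of children, leaves are scores
tree = ((3, 5), (2, 9), (0, 7))
val = alphabeta(tree, 2, float("-inf"), float("inf"), True,
                moves=lambda s: list(s) if isinstance(s, tuple) else [],
                apply=lambda s, m: m,
                evaluate=lambda s: s)
print(val)  # 3
```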

There are several techniques that can greatly improve the performance of search trees in terms of both speed and memory. Pruning techniques such as alpha–beta pruning, Principal Variation Search, and MTD(f) can reduce the effective branching factor without loss of strength. In tactical areas such as life and death, Go is particularly amenable to caching techniques such as transposition tables. These can reduce the amount of repeated effort, especially when combined with an iterative deepening approach. In order to quickly store a full-sized Go board in a transposition table, a hashing technique for mathematically summarizing is generally necessary. Zobrist hashing is very popular in Go programs because it has low collision rates, and can be iteratively updated at each move with just two XORs, rather than being calculated from scratch. Even using these performance-enhancing techniques, full tree searches on a full-sized board are still prohibitively slow. Searches can be sped up by using large amounts of domain-specific pruning techniques, such as not considering moves where your opponent is already strong, and selective extensions like always considering moves next to groups of stones which are about to be captured. However, both of these options introduce a significant risk of not considering a vital move which would have changed the course of the game.
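Zobrist hashing as described above can be sketched in a few lines: one random bitstring per (point, colour), with XOR to add or remove a stone incrementally:

```python
import random

# One random 64-bit key per (intersection, colour); the position hash
# is the XOR of the keys of all stones on the board.
random.seed(0)
SIZE, COLORS = 19 * 19, 2            # colours: 0 = black, 1 = white
TABLE = [[random.getrandbits(64) for _ in range(COLORS)] for _ in range(SIZE)]

def toggle(h, point, color):
    # XOR is its own inverse, so the same call adds or removes a stone
    return h ^ TABLE[point][color]

h = 0
h = toggle(h, 60, 0)                  # Black plays point 60
h = toggle(h, 61, 1)                  # White plays point 61
# moving the white stone from 61 to 62 costs exactly two XORs:
h2 = toggle(toggle(h, 61, 1), 62, 1)
# undoing that move restores the original hash exactly:
h3 = toggle(toggle(h2, 62, 1), 61, 1)
print(h == h3)  # True
```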

Results of computer competitions show that pattern matching techniques for choosing a handful of appropriate moves combined with fast localized tactical searches (explained above) were once sufficient to produce a competitive program. For example, GNU Go was competitive until 2008.

Knowledge-based systems[edit]

Novices often learn a lot from the game records of old games played by master players. There is a strong hypothesis that suggests that acquiring Go knowledge is a key to making a strong computer Go program. For example, Tim Kinger and David Mechner argue that "it is our belief that with better tools for representing and maintaining Go knowledge, it will be possible to develop stronger Go programs." They propose two ways: recognizing common configurations of stones and their positions and concentrating on local battles. "Current programs are still lacking in both quality and quantity of knowledge."[45]: 151

After implementation, the use of expert knowledge has proved very effective in programming Go software. Hundreds of guidelines and rules of thumb for strong play have been formulated by both high-level amateurs and professionals. The programmer's task is to take these heuristics, formalize them into computer code, and utilize pattern matching and pattern recognition algorithms to recognize when these rules apply. It is also important to have a system for determining what to do in the event that two conflicting guidelines are applicable.

Most of the relatively successful results come from programmers' individual skills at Go and their personal conjectures about Go, but not from formal mathematical assertions; they are trying to make the computer mimic the way they play Go. "Most competitive programs have required 5–15 person-years of effort, and contain 50–100 modules dealing with different aspects of the game."[45]: 148 

This method has until recently been the most successful technique in generating competitive Go programs on a full-sized board. Some examples of programs which have relied heavily on expert knowledge are Handtalk (later known as Goemate), The Many Faces of Go, Go Intellect, and Go++, each of which has at some point been considered the world's best Go program.

Nevertheless, adding knowledge of Go sometimes weakens the program because some superficial knowledge might bring mistakes: "the best programs usually play good, master level moves. However, as every games player knows, just one bad move can ruin a good game. Program performance over a full game can be much lower than master level."[45]: 148 

Monte-Carlo methods[edit]

Main article: Monte-Carlo tree search

One major alternative to using hand-coded knowledge and searches is the use of Monte Carlo methods. This is done by generating a list of potential moves, and for each move playing out thousands of games at random on the resulting board. The move which leads to the best set of random games for the current player is chosen as the best move. The advantage of this technique is that it requires very little domain knowledge or expert input, the trade-off being increased memory and processor requirements. However, because the moves used for evaluation are generated at random, it is possible that a move which would be excellent except for one specific opponent response would be mistakenly evaluated as a good move. The result of this is programs which are strong in an overall strategic sense, but are imperfect tactically.[citation needed] This problem can be mitigated by adding some domain knowledge in the move generation and a greater level of search depth on top of the random evolution. Some programs which use Monte-Carlo techniques are Fuego,[46] The Many Faces of Go v12,[47] Leela,[48] MoGo,[49] Crazy Stone, MyGoFriend,[50] and Zen.
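The flat Monte Carlo idea described above can be sketched on a toy game rather than Go, to keep it self-contained: "take 1 or 2 counters from a pile; whoever takes the last counter wins". Every legal move is scored by the win rate of purely random continuations:

```python
import random

random.seed(1)

def random_playout(pile, me_to_move):
    # play the rest of the game uniformly at random; True if "me" wins
    while pile > 0:
        take = random.choice([1, 2]) if pile >= 2 else 1
        pile -= take
        if pile == 0:
            return me_to_move           # whoever took the last counter wins
        me_to_move = not me_to_move
    return not me_to_move

def flat_monte_carlo(pile, playouts=2000):
    # score every legal move by the win rate of random continuations
    scores = {}
    for take in (1, 2) if pile >= 2 else (1,):
        if pile - take == 0:
            scores[take] = 1.0          # taking the last counter wins outright
            continue
        wins = sum(random_playout(pile - take, me_to_move=False)
                   for _ in range(playouts))
        scores[take] = wins / playouts
    return max(scores, key=scores.get)

print(flat_monte_carlo(4))  # the winning move from a pile of 4 is to take 1
```

Even though the playouts are random, taking 1 from a pile of 4 wins about 75% of random continuations versus 50% for taking 2, so the statistics pick out the game-theoretically correct move.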

In 2006, a new search technique, upper confidence bounds applied to trees (UCT),[51] was developed and applied to many 9x9 Monte-Carlo Go programs with excellent results. UCT uses the results of the play outs collected so far to guide the search along the more successful lines of play, while still allowing alternative lines to be explored. The UCT technique along with many other optimizations for playing on the larger 19x19 board has led MoGo to become one of the strongest research programs. Successful early applications of UCT methods to 19x19 Go include MoGo, Crazy Stone, and Mango.[52] MoGo won the 2007 Computer Olympiad and won one (out of three) blitz games against Guo Juan, 5th Dan Pro, in the much less complex 9x9 Go. The Many Faces of Go[53] won the 2008 Computer Olympiad after adding UCT search to its traditional knowledge-based engine.
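The selection rule at the heart of UCT is UCB1: at each tree node, pick the child maximizing average reward plus an exploration bonus. The constant c = 1.414 (roughly √2) below is a common default, not a universal choice:

```python
import math

def uct_select(children, c=1.414):
    # children: list of (wins, visits) pairs for one node's children
    n_total = sum(n for _, n in children)
    def score(stats):
        w, n = stats
        if n == 0:
            return float("inf")         # always try unvisited children first
        # exploitation term + exploration term
        return w / n + c * math.sqrt(math.log(n_total) / n)
    return max(range(len(children)), key=lambda i: score(children[i]))

# a well-explored strong child (60/100) vs. a barely explored one (1/2):
print(uct_select([(60, 100), (1, 2)]))  # 1 -- the uncertain child is explored
```

As the visit counts grow, the exploration term shrinks and the search concentrates on the empirically best lines, which is exactly the behaviour described above.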

Machine learning[edit]

While knowledge-based systems have been very effective at Go, their skill level is closely linked to the knowledge of their programmers and associated domain experts. One way to break this limitation is to use machine learning techniques in order to allow the software to automatically generate rules, patterns, and/or rule conflict resolution strategies.

This is generally done by allowing a neural network or genetic algorithm to either review a large database of professional games, or play many games against itself or other people or programs. These algorithms are then able to utilize this data as a means of improving their performance. AlphaGo used this to great effect. Other programs using neural nets previously have been NeuroGo and WinHonte.

Machine learning techniques can also be used in a less ambitious context to tune specific parameters of programs that rely mainly on other techniques. For example, Crazy Stone learns move generation patterns from several hundred sample games, using a generalization of the Elo rating system.[54]


AlphaGo[edit]

Main article: AlphaGo

AlphaGo, developed by Google DeepMind, made a significant advance by beating a professional human player in October 2015, using techniques that combined deep learning and Monte Carlo tree search.[55] AlphaGo is significantly more powerful than previous Go programs, and the first to beat a 9 dan human professional in a game without handicaps on a full-sized board.

List of Go-playing computer programs[edit]

See also: Go software

  • AlphaGo, the first computer program to win in even matches against a professional human Go player
  • AYA[56] by Hiroshi Yamashita
  • BaduGI by Jooyoung Lee
  • Crazy Stone by Rémi Coulom (sold as Saikyo no Igo in Japan)
  • Darkforest by Facebook
  • Fine Art by Tencent
  • Fuego,[46] an open source Monte Carlo program
  • Goban,[57] a Macintosh OS X Go program by Sen:te (requires free Goban Extensions)[58]
  • GNU Go, an open source classical Go program
  • Go++[59] by Michael Reiss (sold as Strongest Go or Tuyoi Igo in Japan)
  • KataGo,[60] an open source Go program by David Wu with improvements[61] over AlphaGo Zero
  • Leela,[48] the first Monte Carlo program for sale to the public
  • Leela Zero,[48] a reimplementation of the system described in the AlphaGo Zero paper
  • The Many Faces of Go[47] by David Fotland (sold as AI Igo in Japan)
  • MyGoFriend[50] by Frank Karger
  • MoGo[62] by Sylvain Gelly; parallel version[49] by many people.
  • Pachi[63] open source Monte Carlo program by Petr Baudiš, online version Peepo[64] by Jonathan Chetwynd, with maps and comments as you play
  • Smart Go[65] by Anders Kierulf, inventor of the Smart Game Format
  • Steenvreter[66] by Erik van der Werf
  • Zen[67] by Yoji Ojima aka Yamato (sold as Tencho no Igo in Japan); parallel version by Hideki Kato.

Competitions among computer Go programs[edit]

Several annual competitions take place between Go computer programs, the most prominent being the Go events at the Computer Olympiad. Regular, less formal, competitions between programs used to occur on the KGS Go Server[68] (monthly) and the Computer Go Server[69] (continuous).

Prominent Go-playing programs include Crazy Stone, Zen, Aya, MoGo, The Many Faces of Go, Pachi and Fuego, all listed above; and Taiwanese-authored coldmilk, Dutch-authored Steenvreter, and Korean-authored DolBaram.


History[edit]

The first computer Go competition was sponsored by Acornsoft,[70] and the first regular ones by USENIX. They ran from 1984 to 1988. These competitions introduced Nemesis, the first competitive Go program from Bruce Wilcox, and G2.5 by David Fotland, which would later evolve into Cosmos and The Many Faces of Go.

One of the early drivers of computer Go research was the Ing Prize, a relatively large money award sponsored by Taiwanese banker Ing Chang-ki, offered annually between 1985 and 2000 at the World Computer Go Congress (or Ing Cup). The winner of this tournament was allowed to challenge young players at a handicap in a short match. If the computer won the match, the prize was awarded and a new prize announced: a larger prize for beating the players at a lesser handicap. The series of Ing prizes was set to expire either 1) in the year 2000 or 2) when a program could beat a 1-dan professional at no handicap for 40,000,000 NT dollars. The last winner was Handtalk in 1997, claiming 250,000 NT dollars for winning an 11-stone handicap match against three 11–13 year old amateur 2–6 dans. At the time the prize expired in 2000, the unclaimed prize was 400,000 NT dollars for winning a nine-stone handicap match.[71]

Many other large regional Go tournaments ("congresses") had an attached computer Go event. The European Go Congress has sponsored a computer tournament since 1987, and the USENIX event evolved into the US/North American Computer Go Championship, held annually from 1988–2000 at the US Go Congress.

Japan started sponsoring computer Go competitions in 1995. The FOST Cup was held annually from 1995 to 1999 in Tokyo. That tournament was supplanted by the Gifu Challenge, which was held annually from 2003 to 2006 in Ogaki, Gifu. The Computer Go UEC Cup has been held annually since 2007.

Rule formalization problems in computer-computer games[edit]

When two computers play a game of Go against each other, the ideal is to treat the game in a manner identical to two humans playing while avoiding any intervention from actual humans. However, this can be difficult during end game scoring. The main problem is that Go playing software, which usually communicates using the standardized Go Text Protocol (GTP), will not always agree with respect to the alive or dead status of stones.

While there is no general way for two different programs to "talk it out" and resolve the conflict, this problem is avoided for the most part by using Chinese, Tromp-Taylor, or American Go Association (AGA) rules in which continued play (without penalty) is required until there is no more disagreement on the status of any stones on the board. In practice, such as on the KGS Go Server, the server can mediate a dispute by sending a special GTP command to the two client programs indicating they should continue placing stones until there is no question about the status of any particular group (all dead stones have been captured). The CGOS Go Server usually sees programs resign before a game has even reached the scoring phase, but nevertheless supports a modified version of Tromp-Taylor rules requiring a full play out.
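For reference, a hedged sketch of what such a GTP exchange looks like. The commands are from the GTP version 2 draft; the moves and responses shown are illustrative, and the cleanup command a server sends is a server-specific extension:

```
# each successful engine response begins with "=", failures with "?"
boardsize 19
=

clear_board
=

play black Q16
=

genmove white
= D4

final_status_list dead
= Q16
```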

These rule sets mean that a program which was in a winning position at the end of the game under Japanese rules (when both players have passed) could lose because of poor play in the resolution phase, but this is not a common occurrence and is considered a normal part of the game under all of the area rule sets.

The main drawback to the above system is that some rule sets (such as the traditional Japanese rules) penalize the players for making these extra moves, precluding the use of additional playout for two computers. Nevertheless, most modern Go programs support Japanese rules against humans and are competent in both play and scoring (Fuego, Many Faces of Go, SmartGo, etc.).

Historically, another method for resolving this problem was to have an expert human judge the final board. However, this introduces subjectivity into the results and the risk that the expert would miss something the program saw.


Many programs are available that allow computer Go engines to play against each other and they almost always communicate via the Go Text Protocol (GTP).

GoGUI and its addon gogui-twogtp can be used to play two engines against each other on a single computer system.[72] SmartGo and Many Faces of Go also provide this feature.

To play as wide a variety of opponents as possible, the KGS Go Server allows Go engine vs. Go engine play as well as Go engine vs. human in both ranked and unranked matches. CGOS is a dedicated computer vs. computer Go server.

References
  1. ^"柯洁迎19岁生日 雄踞人类世界排名第一已两年" [Ke Jie celebrates his 19th birthday, having held the world No. 1 ranking for two years] (in Chinese). May 2017.
  2. ^"World's Go Player Ratings". 24 May 2017.
  3. ^Metz, Cade (9 March 2016). "Google's AI Wins First Game in Historic Match With Go Champion". WIRED.
  4. ^"AlphaGo victorious once again". 10 March 2016.
  5. ^Bouzy, Bruno; Cazenave, Tristan (9 August 2001). "Computer Go: An AI oriented survey". Artificial Intelligence. 132 (1): 39–103. doi:10.1016/S0004-3702(01)00127-8.
  6. ^Johnson, George (1997-07-29), "To Test a Powerful Computer, Play an Ancient Game", The New York Times, retrieved 2008-06-16
  7. ^"Go, Jack Good".
  8. ^Silver, David; Huang, Aja; Maddison, Chris J.; Guez, Arthur; Sifre, Laurent; Driessche, George van den; Schrittwieser, Julian; Antonoglou, Ioannis; Panneershelvam, Veda; Lanctot, Marc; Dieleman, Sander; Grewe, Dominik; Nham, John; Kalchbrenner, Nal; Sutskever, Ilya; Lillicrap, Timothy; Leach, Madeleine; Kavukcuoglu, Koray; Graepel, Thore; Hassabis, Demis (28 January 2016). "Mastering the game of Go with deep neural networks and tree search". Nature. 529 (7587): 484–489. Bibcode:2016Natur.529.484S. doi:10.1038/nature16961. ISSN 0028-0836. PMID 26819042. S2CID 515925.
  9. ^Wedd, Nick. "Human-Computer Go Challenges". Retrieved 2011-10-28.
  10. ^"'Huge leap forward': Computer that mimics human brain beats professional at game of Go".
  11. ^Albert Zobrist (1970), Feature Extraction and Representation for Pattern Recognition and the Game of Go. Ph.D. Thesis (152 pp.), University of Wisconsin. Also published as technical report
  12. ^Millen, Jonathan K (April 1981). "Programming the Game of Go". Byte. p. 102. Retrieved 18 October 2013.
  13. ^Webster, Bruce (November 1984). "A Go Board for the Macintosh". Byte. p. 125. Retrieved 23 October 2013.
  14. ^"CS-TR-339 Computer Go Tech Report". Retrieved 28 January 2016.
  15. ^See for instance intgofed.org Archived May 28, 2008, at the Wayback Machine
  16. ^"EGC 2010 Tampere News". Archived from the original on 14 August 2009. Retrieved 28 January 2016.
  17. ^"KGS Game Archives". Retrieved 28 January 2016.
  18. ^"Zen computer Go program beats Takemiya Masaki with just 4 stones!". Go Game Guru. Archived from the original on 2016-02-01. Retrieved 28 January 2016.
  19. ^"「アマ六段の力。天才かも」囲碁棋士、コンピューターに敗れる 初の公式戦" ["The strength of an amateur 6-dan. Perhaps a genius": Go professional defeated by a computer in first official match] (in Japanese). MSN Sankei News. Archived from the original on 24 March 2013. Retrieved 27 March 2013.
  20. ^"codecentric go challenge – Just another WordPress site". Retrieved 28 January 2016.
  21. ^Gibney, Elizabeth (2016). "Google AI algorithm masters ancient game of Go". Nature News & Comment. 529 (7587): 445–446. Bibcode:2016Natur.529.445G. doi:10.1038/529445a. PMID 26819021. S2CID 4460235. Retrieved 28 January 2016.
  22. ^"Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol". BBC News Online. 12 March 2016. Retrieved 12 March 2016.
  23. ^"Google's DeepMind defeats legendary Go player Lee Se-dol in historic victory". 9 March 2016. Retrieved 9 March 2016.
  24. ^"Artificial intelligence: Go master Lee Se-dol wins against AlphaGo program". BBC News Online. 13 March 2016. Retrieved 13 March 2016.
  25. ^"Google's AlphaGo AI beats Lee Se-dol again to win Go series 4-1". The Verge. 15 March 2016. Retrieved 15 March 2016.
  26. ^Metz, Cade. "After Win in China, AlphaGo's Designers Explore New AI". Wired.
  27. ^"World's Go Player Ratings". May 2017.
  28. ^"柯洁迎19岁生日 雄踞人类世界排名第一已两年" [Ke Jie celebrates his 19th birthday, having held the world No. 1 ranking for two years] (in Chinese). May 2017.
  29. ^Metz, Cade (2017-05-25). "Google's AlphaGo Continues Dominance With Second Win in China". Wired.
  30. ^Silver, David; Schrittwieser, Julian; Simonyan, Karen; Antonoglou, Ioannis; Huang, Aja; Guez, Arthur; Hubert, Thomas; Baker, Lucas; Lai, Matthew; Bolton, Adrian; Chen, Yutian; Lillicrap, Timothy; Fan, Hui; Sifre, Laurent; Driessche, George van den; Graepel, Thore; Hassabis, Demis (19 October 2017). "Mastering the game of Go without human knowledge" (PDF). Nature. 550 (7676): 354–359. Bibcode:2017Natur.550.354S. doi:10.1038/nature24270. ISSN 0028-0836. PMID 29052630. S2CID 205261034.
  31. ^Game Tree Searching with Dynamic Stochastic Control pp. 194–195
  32. ^"5x5 Go is solved". Retrieved 28 January 2016.
  33. ^On page 11: "Crasmaru shows that it is NP-complete to determine the status of certain restricted forms of life-and-death problems in Go." (See the following reference.) Erik D. Demaine, Robert A. Hearn (2008-04-22). "Playing Games with Algorithms: Algorithmic Combinatorial Game Theory". arXiv:cs/0106019.
  34. ^Marcel Crasmaru (1999). "On the complexity of Tsume-Go". Computers and Games. Lecture Notes in Computer Science. 1558. London, UK: Springer-Verlag. pp. 222–231. doi:10.1007/3-540-48957-6_15. ISBN .
  35. ^"Triple Ko".
  36. ^"Quadruple Ko".
  37. ^"Molasses Ko".
  38. ^"Moonshine Life".
  39. ^"Computer Go Programming".
  40. ^"example of weak play of a computer program". Archived from the original on 2012-07-10. Retrieved 2010-08-28.
  41. ^"Facebook trains AI to beat humans at Go board game – BBC News". BBC News. 27 January 2016. Retrieved 2016-04-24.
  42. ^Ormerod, David (12 March 2016). "AlphaGo shows its true strength in 3rd victory against Lee Sedol". Go Game Guru. Archived from the original on 13 March 2016. Retrieved 12 March 2016.
  43. ^"". Archived from the original on 3 July 2007. Retrieved 28 January 2016.
  44. ^Muhammad, Mohsin. Thinking games, Artificial Intelligence 134 (2002): p. 150.
  45. ^ abcdMüller, Martin (January 2002). "Computer Go". Artificial Intelligence. 134 (1–2): 145–179. doi:10.1016/S0004-3702(01)00121-7.
  46. ^ ab"Fuego".
  47. ^ abDavid Fotland. "Dan Level Go Software – Many Faces of Go".
  48. ^ abc"Sjeng – chess, audio and misc. software".
  49. ^ ab"Archived copy". Archived from the original on 2008-08-10. Retrieved 2008-06-03.
  50. ^ ab"MyGoFriend – Gold Medal Winner 15th Computer Olympiad, Go (9x9)". Archived from the original on 2010-12-08.
  51. ^"UCT".
  52. ^"Page not found (404)". Archived from the original on 2007-11-03.
  53. ^David Fotland. "Smart Games".
  54. ^"Computing Elo Ratings of Move Patterns in the Game of Go". Retrieved 28 January 2016.
  55. ^"Research Blog: AlphaGo: Mastering the ancient game of Go with Machine Learning". Google Research Blog. 27 January 2016.
  56. ^"Page ON/サービス終了のお知らせ" [Page ON / notice of end of service] (in Japanese). Archived from the original on 2006-12-11.
  57. ^"Goban. Play Go on Mac – Sen:te". Archived from the original on 2013-05-19. Retrieved 2013-06-14.
  58. ^"Goban Extensions – Sen:te". Archived from the original on 2016-05-18. Retrieved 2013-06-14.
  59. ^"Go++, Go playing program". Archived from the original on 2003-05-25. Retrieved 2020-07-27.
  60. ^"GitHub - lightvector/KataGo: GTP engine and self-play learning in Go".
  61. ^"arXiv:1902.10565 - Accelerating Self-Play Learning in Go".
  62. ^"Archived copy". Archived from the original on 2006-11-28. Retrieved 2007-02-21.
  63. ^"Pachi – Board Game of Go / Weiqi / Baduk".
  64. ^http://www.peepo.com Archived 2011-09-04 at the Wayback Machine
  65. ^Anders Kierulf. "SmartGo".
  67. ^"Zen (go program)".
  68. ^"Computer Go Tournaments on KGS".
  69. ^"9x9 Go Server". Archived from the original on 2007-01-19. Retrieved 2007-03-25.
  70. ^"Acorn 1984 The First Computer Go Tournament".
  71. ^David Fotland. "World Computer Go Championships". Retrieved 28 January 2016.
  72. ^Using GoGUI to play go computers against each other. Archived 2011-03-09 at the Wayback Machine

Further reading

  • Co-Evolving a Go-Playing Neural Network, written by Alex Lubberts & Risto Miikkulainen, 2001
  • Computer Game Playing: Theory and Practice, edited by M.A. Brauner (The Ellis Horwood Series in Artificial Intelligence), Halstead Press, 1983. A collection of computer Go articles. The American Go Journal, vol. 18, No 4. page 6. [ISSN 0148-0243]
  • A Machine-Learning Approach to Computer Go, Jeffrey Bagdis, 2007.
  • Minimalism in Ubiquitous Interface Design Wren, C. and Reynolds, C. (2004) Personal and Ubiquitous Computing, 8(5), pages 370–374. Video of computer Go vision system in operation shows interaction and users exploring Joseki and Fuseki.
  • Monte-Carlo Go, presented by Markus Enzenberger, Computer Go Seminar, University of Alberta, April 2004
  • Monte-Carlo Go, written by B. Bouzy and B. Helmstetter from Scientific Literature Digital Library
  • Static analysis of life and death in the game of Go, written by Ken Chen & Zhixing Chen, 20 February 1999
  • article describing the techniques underlying Mogo

External links

  • video: computer Go to come
  • Extensive list of computer Go events
  • All systems Go by David A. Mechner (1998), discusses the game where professional Go player Janice Kim won a game against program Handtalk after giving a 25-stone handicap.
  • Kinger, Tim and Mechner, David. An Architecture for Computer Go (1996)
  • Computer Go and Computer Go Programming pages at Sensei's Library
  • Computer Go bibliography
  • Another Computer Go Bibliography
  • Computer Go mailing list
  • Published articles about computer Go on Ideosphere gives current estimate of whether a Go program will be best player in the world
  • Information on the Go Text Protocol commonly used for interfacing Go playing engines with graphical clients and internet servers
  • The Computer Go Room on the K Go Server (KGS) for online discussion and running "bots"
  • Two Representative Computer Go Games, an article about two computer Go games played in 1999, one with two computer players, and the other a 29-stone handicap human–computer game
  • What A Way to Go describes work at Microsoft Research on building a computer Go player.
  • Cracking Go by Feng-hsiung Hsu, IEEE Spectrum magazine (October 2007) – Why it should be possible to build a Go machine stronger than any human player
  • computer-go-dataset, SGF datasets of 1,645,958 games