Anti-computer tactics

[Image: Deep Blue, a famous chess-playing computer that beat world champion Garry Kasparov in a human–computer match]

Anti-computer tactics are methods used by humans to try to beat computer opponents at various games, most typically board games such as chess and Arimaa. They are most associated with competitions against computer AIs that are playing to their utmost to win, rather than AIs merely programmed to be an interesting challenge that can be given intentional weaknesses and quirks by the programmer (as in many video game AIs). Such tactics are most associated with the era when AIs searched a game tree with an evaluation function looking for promising moves, often using alpha–beta pruning or other minimax-based algorithms to narrow the search. Against such algorithms, a common tactic is to play conservatively, aiming for a long-term advantage. The theory is that this advantage will manifest slowly enough that the computer is unable to notice it in its search, and so it won't play around the threat correctly. This may result in, for example, a subtle advantage that eventually turns into a winning chess endgame with a passed pawn. (Conversely, attempting to lure an AI into a short-term "trap", inviting it to play a move that seems reasonable to humans but is actually disastrous, will essentially never work against a computer in games of perfect information.)
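The kind of search these tactics target can be sketched as a depth-limited minimax with alpha–beta pruning. The sketch below assumes a hypothetical game-state interface; `legal_moves`, `apply`, `undo`, `is_terminal`, and `evaluate` are placeholder names, not taken from any particular engine:

```python
def alphabeta(state, depth, alpha, beta, maximizing):
    """Depth-limited minimax with alpha-beta pruning (minimal sketch).

    When `depth` reaches zero the search stops and falls back on a static
    evaluation; that cutoff is what anti-computer play tries to exploit.
    """
    if depth == 0 or state.is_terminal():
        return state.evaluate()  # heuristic guess at the search horizon
    if maximizing:
        value = float("-inf")
        for move in state.legal_moves():
            state.apply(move)
            value = max(value, alphabeta(state, depth - 1, alpha, beta, False))
            state.undo(move)
            alpha = max(alpha, value)
            if alpha >= beta:    # line already worse than a known alternative; prune
                break
        return value
    else:
        value = float("inf")
        for move in state.legal_moves():
            state.apply(move)
            value = min(value, alphabeta(state, depth - 1, alpha, beta, True))
            state.undo(move)
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value
```

Alpha–beta pruning only discards lines that provably cannot change the result, so the value returned matches plain minimax searched to the same depth.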

The field is most associated with the 1990s and early 2000s, when computers were very strong at games such as chess, yet still beatable. Even then, the efficacy of such tactics was questionable, with several of them, such as making unusual or suboptimal moves to quickly get the computer out of its opening book, proving ineffective in human–computer tournaments. The rise of machine learning has also dented the applicability of anti-computer tactics, as machine-learning-based players tend to play the long game as well as, if not better than, human players.

Common aspects

One aspect of designing a classic AI for games of perfect information is the horizon effect. A computer AI examines a game tree of possible moves and counter-moves, but unless a forced win is found in the tree, it must eventually stop exploring new possibilities. When it does, an evaluation function is called on the board state, which often uses rough heuristics to determine which side the position favors. In chess, these might include material advantage (extra pieces), control of the center, king safety, and pawn structure. Human players can exploit the horizon effect by using a strategy whose fruits become apparent only beyond the plies examined by the AI. For example, if the AI examines 10 plies ahead and a strategy will "pay off" in 12–20 plies (6–10 turns), the AI won't play around the looming threat it can't "see", much as a person cannot see "over the horizon" where a ship is hidden by the curvature of the earth. Similarly, to keep the horizon short, human players may want to keep the board state as complicated as possible. Simplifying the board by trading pieces lets the AI look "farther" into the future, since there are fewer options to consider, so trades are avoided when trying to exploit the horizon effect.
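As an illustration of the rough heuristics described above, here is a minimal evaluation function of the kind called at the search horizon. It uses the third-party python-chess library for board bookkeeping; the piece values and the small center-control bonus are illustrative assumptions, not tuned engine parameters:

```python
import chess  # third-party python-chess package (pip install chess)

# Conventional centipawn values; illustrative, not engine-tuned.
PIECE_VALUES = {
    chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
    chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0,
}
CENTER = {chess.D4, chess.E4, chess.D5, chess.E5}

def evaluate(board: chess.Board) -> int:
    """Score the position from White's point of view, in centipawns."""
    score = 0
    for square, piece in board.piece_map().items():
        value = PIECE_VALUES[piece.piece_type]
        if square in CENTER:          # crude "control of the center" bonus
            value += 10
        score += value if piece.color == chess.WHITE else -value
    return score
```

A plan whose material payoff arrives only beyond the last searched ply never changes these numbers, which is exactly what a horizon-exploiting player counts on.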

A tactic that works best against AIs that are very "deterministic", known to respond to a given threat in one specific way, is to force a situation where the human knows exactly how the AI will reply. If the human picks a situation they believe the AI handles poorly, they can reliably lure the AI into it. Even if the AI handles that particular play style well, the human's confidence that the AI will always choose it simplifies preparation: the human can study that one situation very closely, knowing the AI will always accept an invitation to play into that kind of board.
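The predictability being exploited often comes down to tie-breaking. In the hypothetical sketch below (`moves` and `score` stand in for an engine's move list and scoring function), a deterministic engine always returns the first best-scoring move in a fixed ordering, so a prepared opponent can know its reply in advance; random tie-breaking removes that guarantee:

```python
import random

def choose_reply(moves, score, rng=None):
    """Return a highest-scoring move.

    With rng=None, ties are broken by the fixed ordering of `moves`, so the
    engine always gives the same reply to the same position; that is the
    predictability a prepared human can exploit. Passing an rng breaks ties
    randomly and blunts such preparation.
    """
    best = max(score(m) for m in moves)
    candidates = [m for m in moves if score(m) == best]
    return candidates[0] if rng is None else rng.choice(candidates)

# Example: two equally scored replies; the deterministic engine always picks "a".
print(choose_reply(["a", "b"], score=len))                       # always "a"
print(choose_reply(["a", "b"], score=len, rng=random.Random()))  # "a" or "b"
```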

Chess

In the 1997 Deep Blue versus Garry Kasparov match, Kasparov played an anti-computer move at the start of the game to get Deep Blue out of its opening book.[1] Kasparov chose the unusual Mieses Opening, thinking that the computer would play the opening poorly if it had to play it by itself (that is, rely on its own skill rather than its opening book).[2] Kasparov played similar anti-computer openings in the other games of the match, but the tactic backfired.[3] Kasparov wrote after the second game, in which he chose the Ruy López: “We decided that using the same passive anti-computer strategy with black would be too dangerous. With white I could control the pace of the game much better and wait for my chances. With black it would be safer to play a known opening even if it was in Deep Blue's book especially if it was a closed opening where it would have difficulty finding a plan. The downside with this strategy as in all the games was that it wasn't my style either. While I was playing anti-computer chess I was also playing anti-Kasparov chess.”

The Brains in Bahrain was an eight-game chess match between the human chess grandmaster and then World Champion Vladimir Kramnik and the computer program Deep Fritz 7, held in October 2002. The match ended in a 4–4 tie, with two wins for each side and four draws, each worth half a point.[4]

Anti-computer chess games

Arimaa

Arimaa is a chess derivative specifically designed to be difficult for alpha–beta pruning AIs, inspired by Kasparov's loss to Deep Blue in 1997. It allows up to four steps per "move" for a player, greatly increasing the size of the search space, and games can reasonably end with a mostly full board and few captured pieces, avoiding endgame-tablebase-style "solved" positions that arise when few pieces remain. While human Arimaa players held out longer than chess players, they too fell to superior computer AIs in 2015.[5]
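A back-of-the-envelope calculation shows why several steps per move hurts a full-width searcher. The figures in the sketch below are illustrative assumptions (roughly 35 legal moves per position is a commonly cited chess average; the per-step figure of about 20 for Arimaa is a rough guess, and raising it to the fourth power ignores step orderings that reach the same position), not measured values:

```python
def leaf_nodes(per_turn_branching: float, turns: int) -> float:
    """Leaf count of a full-width game tree searched `turns` player-turns deep."""
    return float(per_turn_branching) ** turns

CHESS_PER_TURN = 35          # commonly cited average; illustrative
ARIMAA_PER_TURN = 20 ** 4    # ~4 steps per turn at ~20 options each; rough upper bound

for turns in (2, 3, 4):
    print(f"{turns} turns: chess ~{leaf_nodes(CHESS_PER_TURN, turns):.1e}, "
          f"Arimaa ~{leaf_nodes(ARIMAA_PER_TURN, turns):.1e}")
```

Under any fixed node budget, the Arimaa tree therefore reaches far fewer full turns of lookahead, which is the designed-in disadvantage for alpha–beta-style engines.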

References
