Chapter 9: Monte Carlo Tree Search, Current AI, Timeline Overview Flashcards
What are the downsides of minimax, and how can Monte Carlo methods be used for tree search?
Downsides of Minimax:
- Trees can get intractably big
- Large branching factor means minimax cannot search very deep
- Evaluation function may fail to capture intricacies of the problem.
In these cases, some kind of generic estimation technique could be useful. We can incorporate Monte Carlo methods into tree search by using Monte Carlo Tree Search (MCTS), which works by simulating games from a particular node and using the results of these simulations to evaluate game states.
Explain Monte Carlo Tree Search (MCTS), including the four general steps of the algorithm.
Monte Carlo Tree Search is an anytime, best-first search algorithm used in game-playing AI. The algorithm consists of these four steps (see the sketch after this list):
- Selection: The algorithm selects a node of the tree for evaluation.
- Expansion: The algorithm adds new leaf nodes to the tree.
- Simulation: The algorithm performs a random playout (sampling) from a particular leaf node.
- Backpropagation: The algorithm propagates the results of the random playout to the nodes in the path to the root node.
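A minimal sketch of how the four phases could fit together in one search loop. The phase functions here are hypothetical placeholders (each one is sketched under the later cards), not a fixed API; because the loop can be stopped after any iteration and still return an answer, the algorithm is anytime.
```python
def mcts(root, iterations, select, expand, simulate, backpropagate):
    """Run MCTS for a fixed budget; each iteration performs the four phases."""
    for _ in range(iterations):
        leaf = select(root)            # 1. Selection: descend to a leaf of the tree
        node = expand(leaf)            # 2. Expansion: possibly add children, pick one
        result = simulate(node)        # 3. Simulation: random playout from that node
        backpropagate(node, result)    # 4. Backpropagation: update the path to the root
    return root                        # an actual move is then chosen among root's children
```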
Explain the two main variables that a node stores within the tree in the case of MCTS.
- n(v): the number of times node v was visited.
- q(v): the total score obtained in node v. Here the sign depends on which player is to move in v.
In both cases, v denotes the current node.
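A minimal sketch of such a node as a plain Python class; the field names (state, parent, children, n, q) are illustrative, not prescribed by the slides.
```python
class Node:
    """One node v of the MCTS tree, storing the two key statistics."""
    def __init__(self, state, parent=None):
        self.state = state        # game state represented by this node
        self.parent = parent      # parent node (None for the root)
        self.children = []        # child nodes added during Expansion
        self.n = 0                # n(v): number of times this node was visited
        self.q = 0.0              # q(v): total score, signed w.r.t. the player to move in v
```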
How does MCTS select a node for evaluation?
The selection procedure starts from the root of the tree and keeps moving to a child until it reaches a leaf of the tree. At each step, the algorithm chooses the child by maximizing the Upper Confidence Bound for Trees (UCT):
uct(w) = q(w) / n(w) + c * sqrt(ln(n(v)) / n(w))
q(w) / n(w) is the average reward of child node w.
n(v) is the number of visits of v, the parent node of w.
Parameter c > 0 determines the exploration/exploitation trade-off.
Setting c = 2 is a common choice, but the best value is problem-dependent in general.
At every step of the descent, the child with the maximum UCT value is selected.
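A direct transcription of the UCT formula (using the natural logarithm), followed by a small made-up numerical example; the numbers are purely illustrative.
```python
import math

def uct(q_w, n_w, n_v, c=2.0):
    """UCT value of child w with parent v: exploitation term + exploration term."""
    return q_w / n_w + c * math.sqrt(math.log(n_v) / n_w)

# Hypothetical example: parent visited 100 times, two children compared.
print(uct(q_w=30, n_w=50, n_v=100))   # often-visited child, average reward 0.6
print(uct(q_w=4,  n_w=5,  n_v=100))   # rarely visited child, average reward 0.8
# The rarely visited child gets a larger exploration bonus, so it can still be selected.
```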
Explain how the search tree is expanded in the MCTS algorithm.
The approach for expanding the game tree in MCTS involves four steps, assuming the node v reached during Selection is a leaf node (see the sketch below):
- If leaf node v is a terminal state, move to the Backpropagation phase.
- If n(v) = 0, move to the Simulation phase with node v.
- If n(v) > 0, create new child nodes for node v by adding the states that can be legally reached from the state of node v.
- Then move to the Simulation phase with one of the children of node v.
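A sketch of this decision logic, assuming a node with fields n and children (as in the earlier node sketch) and hypothetical, game-specific helpers is_terminal, legal_successors and make_child (these names are not from the slides).
```python
import random

def expand(v, is_terminal, legal_successors, make_child):
    """Apply the expansion rules to leaf node v and return the node to simulate from."""
    if is_terminal(v):
        return v                   # terminal state: go straight to Backpropagation
    if v.n == 0:
        return v                   # never visited before: simulate from v itself
    # n(v) > 0: add every legally reachable state as a new child of v,
    # then continue with one of those children in the Simulation phase.
    v.children = [make_child(v, s) for s in legal_successors(v)]
    return random.choice(v.children)
```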
Explain how MCTS uses sampling to evaluate game states.
MCTS uses a simulation phase in order to obtain information about a game state. In the simulation phase the algorithm performs a random playout, in which it plays random (legal) moves until the game is finished. It keeps track of the score during these playouts and determines the score difference (or simply who won) at the end. This result is then backpropagated up the tree.
It is important to note that the moves performed during the random play-out are not added to the game-tree!
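A minimal random-playout sketch for a toy game (Nim: take 1-3 sticks, whoever takes the last stick wins) rather than any example from the slides; it returns a winner instead of a score difference. The states visited during the playout are never added to the tree.
```python
import random

def random_playout(sticks, player_to_move):
    """Play random legal moves until the game ends; return the winner (0 or 1)."""
    player = player_to_move
    while sticks > 0:
        sticks -= random.randint(1, min(3, sticks))  # random legal move
        player = 1 - player
    return 1 - player   # the player who took the last stick wins

# Example: estimate how often player 0 wins from 10 sticks under random play.
wins = sum(random_playout(10, 0) == 0 for _ in range(10_000))
print(wins / 10_000)
```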
Explain what is meant with backpropagation in MCTS.
After performing Simulation from leaf v, we update all nodes (including v) on the path to the root node. Each node w on this path is updated as follows:
Update n(w) = n(w) + 1
Update q(w) = q(w) + score difference (or who won, depending on the metric of choice)
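A sketch of this update for the win/loss variant, assuming nodes carry parent, n, q and player_to_move fields (illustrative names) and the playout result is the winning player. The sign handling shown is one common convention for "the sign depends on which player is to move in v".
```python
def backpropagate(v, winner):
    """Walk from the simulated node v back to the root, updating n(w) and q(w)."""
    w = v
    while w is not None:
        w.n += 1                     # n(w) = n(w) + 1
        # Score the playout from the point of view of the player who moved INTO w,
        # i.e. the opponent of the player to move in w.
        w.q += 1.0 if winner != w.player_to_move else 0.0
        w = w.parent                 # continue towards the root
```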
How is an actual move selected in MCTS?
- Max child: select the child w of root v that maximizes q(w) / n(w) (average win rate or score difference).
- Robust child: select the child w of root v that maximizes n(w) (number of visits).
The robust-child criterion works because the UCT selection mechanism visits promising children more often, so the child with the highest n(w) has repeatedly proven to be a reliably good game state.
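Both criteria as a small sketch over the root's children, assuming the illustrative node fields n and q plus a stored move on each child (hypothetical), with every child visited at least once.
```python
def best_move(root, criterion="robust"):
    """Choose the actual move to play from the statistics at the root's children."""
    if criterion == "max":
        best = max(root.children, key=lambda w: w.q / w.n)   # max child: best average score
    else:
        best = max(root.children, key=lambda w: w.n)          # robust child: most visits
    return best.move
```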
What were the main consequences of increased computing power (due to the introduction of more general workstations)?
- “Old” methods were finally getting the power they needed (e.g. Kasparov vs. Deep Blue in chess)
- Approximate, computationally expensive methods were becoming “good enough” (e.g. Monte Carlo methods and metaheuristics)
- Existing methods were becoming practical enough to embed in applications (e.g. expert systems are now just small features embedded in Microsoft Excel)
Which areas advanced with the increase in computing power and the shift to more approximate solutions?
- Machine learning (Decision trees, SVMs, regression)
- Combinatorial problems and optimization (planning, scheduling)
- Reasoning under uncertainty (Bayesian networks and statistics)
- Game playing (chess, backgammon, checkers)
Name several high-profile successes as part of the resurgence of AI:
2011: IBM’s Watson defeats reigning Jeopardy! champions
2012: Google trains neural nets that “recognize cats on YouTube”
2011-2014: Apple’s Siri, Google’s Google Now, and Microsoft’s Cortana bring natural-language AI to people’s smartphones
2016: Google DeepMind’s AlphaGo beats Go champion Lee Sedol.
What led to the “deep learning revolution”?
- GPU-accelerated neural nets enable practical deep learning
- Specialized neural net topologies successful on certain tasks (e.g. convolutional neural nets for images and video, LSTM networks for handwriting, speech and temporal data, transformers for NLP)
- Fueled by the availability of big data
- Combination with reinforcement learning led to breakthroughs in dynamic settings (game playing remains relevant as a “microworld”, as do e.g. robotics and autonomous vehicle control)
Keywords: GPU, Specialized, Big Data, Dynamics, Reinforcement