Term 2 Week 7: Mixed Strategies and Dynamic Games Flashcards
What is an issue with simple models compared to dynamic models (2)
-Simple models abstract away from important real-world features, such as the order of play, which is not modelled even where it could matter
-Therefore, our models must be able to account for dynamics
What do we assume with dynamic models (1)
-That we have complete information
How does the stop and go game change if it is made dynamic (1,3,4)
-(G,G) = (-5, -5), (G,S) = (10,0), (S,G) = (0,10), (S,S) = (0,0)
-Suppose the row player goes first
-The column player has 2 different decisions to make, depending on whether they observe a_r = go or a_r = stop (where a_r is the row player's action)
-The column player now has 4 strategies to choose from: go or stop when a_r = go, paired with go or stop when a_r = stop
-Say the row player chooses go
-The column player observes this, and realises stop gives them the best payoff
-This means there is no chance for miscoordinated beliefs
-However, the timing of the choice is important (who goes first); the column player's conditional best responses are sketched below
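A minimal Python sketch (my own illustration, not from the lecture) of the column player's conditional best responses, using the payoffs above written as (row payoff, column payoff):

```python
# Payoffs for (row action, column action), listed as (row payoff, column payoff).
PAYOFFS = {("G", "G"): (-5, -5), ("G", "S"): (10, 0),
           ("S", "G"): (0, 10),  ("S", "S"): (0, 0)}

def column_best_response(row_action):
    # The column player observes the row player's move, then picks whichever
    # reply maximises their own (second) payoff.
    return max(["G", "S"], key=lambda c: PAYOFFS[(row_action, c)][1])

print(column_best_response("G"))  # 'S': if row goes, column should stop
print(column_best_response("S"))  # 'G': if row stops, column should go
```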
Why are strategies in dynamic games more complex than in static games (2,1)
-A player's best response in a dynamic game usually changes based on what the other player does
-Players now pick conditional actions based on past observations
-A player's strategy in a dynamic game is a list of which actions will be taken by them at each decision node, a 'complete contingent plan of action' (the snippet below enumerates these plans for the column player)
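A small sketch (the observation labels are my own, not course notation) that enumerates the column player's complete contingent plans:

```python
from itertools import product

# Each plan assigns one action to each thing the column player might observe.
observations = ["a_r = go", "a_r = stop"]
plans = [dict(zip(observations, actions))
         for actions in product(["go", "stop"], repeat=len(observations))]
for plan in plans:
    print(plan)
# Prints the 4 strategies: (go, go), (go, stop), (stop, go), (stop, stop)
```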
How do we draw a game tree (5)
-Looks similar to a family tree
-Start at the top, then each player takes turns making their decisions
-Where the players make their decisions are the decision nodes
-Each new layer is a new turn
-At the end you have the terminal nodes, usually replaced with payoffs (one way to store such a tree in code is sketched below)
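One possible way to store such a tree in code, sketched under the assumption that nested dictionaries are an acceptable representation (payoffs as (row, column) from the stop and go game):

```python
# Internal nodes record whose turn it is and map each action to a child node;
# terminal nodes are just payoff tuples (row, column).
tree = {
    "player": "row",
    "children": {
        "G": {"player": "column",
              "children": {"G": (-5, -5), "S": (10, 0)}},
        "S": {"player": "column",
              "children": {"G": (0, 10), "S": (0, 0)}},
    },
}
```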
How do we represent imperfect information on a game tree (3)
-Have a game tree where player 1 picks between A and B, player 2 picks between X and Y and then player 1 picks between C and D
-Draw a circle around the decision nodes before C and D
-This represents the information set, as player 1 doesn't know whether player 2 chose X or Y (a rough code representation is sketched below)
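A rough sketch of recording that circle in code (the node labels and set name are made up for illustration):

```python
# Histories that player 1 cannot tell apart are grouped into one information set.
information_sets = {
    "p1_before_C_or_D": {("A", "X"), ("A", "Y")},  # p1 knows their own first move, not player 2's
}
```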
What is a simultaneous move game (1)
-When both players move at once, and hence neither knows what the other has chosen
How can we find Nash equilibrium of a simultaneous move game in normal form (go vs stop) (4,2)
-Suppose 2 players have a choice between go and stop
-For the red player, the columns are labelled go and stop
-For the blue player, the rows are labelled (go, go), (go, stop), (stop, go), (stop, stop) (go, go means go if red goes, go if red stops)
-The payoffs for each player (blue's payoff first; (G,G;G) means blue plays (go, go) and red plays go) are: (G,G;G) = (-5,-5), (G,S;G) = (-5,-5), (S,G;G) = (0,10), (S,S;G) = (0,10), (G,G;S) = (10,0), (G,S;S) = (0,0), (S,G;S) = (10,0), (S,S;S) = (0,0)
-To find the Nash equilibria, underline blue's highest payoff in each column and red's highest payoff in each row; a NE is any cell where both numbers are underlined
-In this case there are 3 Nash equilibria: (G,G;S), (S,G;G), (S,S;G) (the sketch below recovers the same three)
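A hedged Python sketch of the same procedure (payoffs copied from this card, blue's payoff first; the encoding is my own):

```python
from itertools import product

blue_strats = [("G", "G"), ("G", "S"), ("S", "G"), ("S", "S")]  # (reply if red goes, reply if red stops)
red_strats = ["G", "S"]

# Outcome payoffs (blue, red), keyed by (red's action, blue's reply to it).
OUTCOME = {("G", "G"): (-5, -5), ("G", "S"): (0, 10),
           ("S", "G"): (10, 0),  ("S", "S"): (0, 0)}

def payoffs(blue, red):
    reply = blue[0] if red == "G" else blue[1]
    return OUTCOME[(red, reply)]

nash = []
for b, r in product(blue_strats, red_strats):
    blue_best = all(payoffs(b, r)[0] >= payoffs(b2, r)[0] for b2 in blue_strats)
    red_best = all(payoffs(b, r)[1] >= payoffs(b, r2)[1] for r2 in red_strats)
    if blue_best and red_best:        # both payoffs would be 'underlined'
        nash.append((b, r))
print(nash)  # [(('G','G'),'S'), (('S','G'),'G'), (('S','S'),'G')]
```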
How can we illustrate Nash equilibrium of a simultaneous move game in extensive form (go vs stop) (4)
-Draw your game tree, with the first node being the red player's decision, and the second nodes representing what the blue player does in response
-The four terminal nodes have payoffs (blue first, as above): (G,G) = (-5,-5), (G,S) = (0,10), (S,G) = (10,0), (S,S) = (0,0)
-(G,G;S) is unnatural, as blue's plan to go after red goes is off the equilibrium path (to draw this one, draw blue going no matter what and red stopping; if blue actually observed red going, they would want to switch their planned action)
-You can mark any strategy profile on the tree by highlighting what blue would choose at each red node, then highlighting which node red actually takes us to (the sketch below traces the resulting path of play)
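A short sketch (my own encoding) that traces the path of play under the (G,G;S) profile and shows why the planned action at the unreached node looks unnatural:

```python
# Outcome payoffs (blue, red), keyed by (red's action, blue's reply), as above.
OUTCOME = {("G", "G"): (-5, -5), ("G", "S"): (0, 10),
           ("S", "G"): (10, 0),  ("S", "S"): (0, 0)}

red_action = "S"                    # red plans to stop
blue_plan = {"G": "G", "S": "G"}    # blue plans to go no matter what

print(OUTCOME[(red_action, blue_plan[red_action])])  # (10, 0): blue 10, red 0 on the path
# The node after red going is never reached, so blue's planned G there is never
# tested in play -- but if red did go, blue's G would pay -5 while S would pay 0:
print(OUTCOME[("G", "G")][0], OUTCOME[("G", "S")][0])  # -5 0
```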
What does it mean to be sequentially rational (1)
-A strategy is sequentially rational for player i if it is a best response at every decision node
What is the sequentially rational option in the stop and go game (3)
-The only sequentially rational NE is (S,G;G): blue stops if red goes and goes if red stops, while red goes
-This is because if red goes, it is optimal for blue to stop, and if red stops, it is optimal for blue to go
-To draw this, highlight red's go, then highlight blue's stop at the node where red went and blue's go at the node where red stopped
What is backward induction (2)
-The sequential rationality assumption gives access to a powerful technique for solving dynamic games: backward induction
-Look forward, but reason backward: work out the best response in the final stage, then the second-to-last stage, and so on (a recursive sketch follows below)
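A hedged recursive sketch of backward induction on the stop and go tree, reusing the nested-dict representation assumed earlier (payoffs as (row, column); the code is my own illustration):

```python
tree = {
    "player": "row",
    "children": {
        "G": {"player": "column", "children": {"G": (-5, -5), "S": (10, 0)}},
        "S": {"player": "column", "children": {"G": (0, 10), "S": (0, 0)}},
    },
}

def backward_induction(node, history="start"):
    # Terminal node: a payoff tuple (row, column), nothing left to decide.
    if isinstance(node, tuple):
        return node, {}
    idx = 0 if node["player"] == "row" else 1        # whose payoff to maximise here
    best_action, best_value, plan = None, None, {}
    for action, child in node["children"].items():
        value, sub_plan = backward_induction(child, history + "->" + action)
        plan.update(sub_plan)
        if best_value is None or value[idx] > best_value[idx]:
            best_action, best_value = action, value
    plan[history] = best_action                      # record the choice made at this node
    return best_value, plan

value, plan = backward_induction(tree)
print(value)  # (10, 0): row goes and column stops on the path of play
print(plan)   # {'start->G': 'S', 'start->S': 'G', 'start': 'G'}
```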
What are 2 issues with backward induction (2)
-It can't be used in games with imperfect information (where a player does not know which decision node they are at)
-It can’t be applied to games without a finite end point
What is a subgame (2)
-A subgame is any part of the game which begins at a single decision node and contains all successor nodes
-On our game tree, this is every section containing one decision node and everything under it (you can get subgames within subgames); see the sketch below
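A compact sketch (my own) that lists the subgame roots of the stop and go tree; with perfect information every decision node starts its own subgame:

```python
tree = {"player": "row", "children": {
    "G": {"player": "column", "children": {"G": (-5, -5), "S": (10, 0)}},
    "S": {"player": "column", "children": {"G": (0, 10), "S": (0, 0)}}}}

def subgame_roots(node, history="start"):
    if isinstance(node, tuple):          # terminal payoffs are not subgames
        return []
    return [history] + [root for action, child in node["children"].items()
                        for root in subgame_roots(child, history + "->" + action)]

print(subgame_roots(tree))  # ['start', 'start->G', 'start->S']
```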
What is a subgame perfect equilibrium (1,2)
-A strategy profile is a subgame perfect equilibrium if and only if it is a Nash equilibrium in every subgame
-Subgame perfect equilibria are a subset of Nash equilibria
-It is a refinement of NE: a way of selecting the Nash equilibria which are reasonable
What are ways to find subgame perfect equilibrium (2,2,2)
We can find SPE via:
-Backward induction
-The One-Shot Deviation Principle (OSDP): checking whether any player has an incentive to make a single deviation at a single information set
-Backward induction is constructive
-OSDP is guess and check
-If the game can be solved by backward induction, then the backward induction outcome is a SPE
-However, SPE goes beyond this and gives us solutions even in cases where backward induction cannot be applied (a one-shot deviation check is sketched below)
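A guess-and-check sketch of the one-shot deviation test for the stop and go game (the profile encoding and names are my own assumptions):

```python
# Outcome payoffs (row, column) for each (row action, column action) pair.
OUTCOME = {("G", "G"): (-5, -5), ("G", "S"): (10, 0),
           ("S", "G"): (0, 10),  ("S", "S"): (0, 0)}

def play(row_action, column_plan):
    """Payoffs when row moves first and column then follows their contingent plan."""
    return OUTCOME[(row_action, column_plan[row_action])]

def passes_one_shot_deviation(row_action, column_plan):
    # Row's only deviation is a different move at the root.
    if any(play(alt, column_plan)[0] > play(row_action, column_plan)[0] for alt in "GS"):
        return False
    # Column has a single deviation to check at each of their two nodes,
    # evaluated from that node onward (taking row's move to that node as given).
    for node in "GS":
        if any(OUTCOME[(node, alt)][1] > OUTCOME[(node, column_plan[node])][1] for alt in "GS"):
            return False
    return True

spe = ("G", {"G": "S", "S": "G"})       # the backward-induction profile (S,G; G)
other_ne = ("S", {"G": "G", "S": "G"})  # the (G,G; S) Nash equilibrium
print(passes_one_shot_deviation(*spe))       # True: no profitable single deviation
print(passes_one_shot_deviation(*other_ne))  # False: column would gain by switching to S after G
```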