Mixed Game Theory Flashcards

1
Q

What is the rationale behind a mixed strategy, how do you form one, and what are its limitations?

A

There are cases in which there is no pure strategy Nash equilibrium; in these cases mixed strategies arise. A game in strategic form does not always have a Nash equilibrium in which each player deterministically chooses one of his strategies.

However, players may instead randomly select from among their pure strategies with certain probabilities. A mixed strategy means using two or more strategies according to a well-defined probability distribution. It is thus most obviously relevant for games that are repeated, but mixed strategies can also arise in one-shot games.

Nash showed in 1951 that any finite strategic-form game has an equilibrium if mixed strategies are allowed. As before, an equilibrium is defined by a (possibly mixed) strategy for each player where no player can gain on average by unilateral deviation. Average (that is, expected) payoffs must be considered because the outcome of the game may be random.

The least intuitive aspect of a mixed equilibrium is that the probabilities depend on the opponent's payoffs, not on the player's own payoffs.
The idea is to keep your opponent from knowing what you are going to do, because as soon as they know, they will choose the counter-strategy that is optimal for them. You therefore want to make your opponent indifferent.
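
A minimal sketch of this indifference logic, using purely hypothetical payoffs in a 2x2 zero-sum game (the labels R1/R2, C1/C2 and the numbers are illustrative only):

```python
# Hypothetical 2x2 zero-sum game with no pure strategy Nash equilibrium.
# The row player's mixing probability is chosen to make the COLUMN player
# indifferent, so it is computed from the opponent's payoffs, not from the
# row player's own payoffs.
u11, u12 = 3, -1   # column player's payoff when row plays R1 and column plays C1 / C2
u21, u22 = -2, 4   # column player's payoff when row plays R2 and column plays C1 / C2

# Let p = Pr(row plays R1). The column player is indifferent when
#   p*u11 + (1-p)*u21 == p*u12 + (1-p)*u22
p = (u22 - u21) / ((u11 - u21) - (u12 - u22))

print(f"Row plays R1 with probability {p:.2f} and R2 with {1 - p:.2f}")  # 0.60 / 0.40
```

With this mix, either column choice gives the opponent the same expected payoff, so they cannot exploit any predictability in the row player's behaviour.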

The difficulties with this solution are:

  1. Games are often one shot, or not repeated frequently enough, so that the probabilities attached to different strategies cannot be inferred. Rationality is thus hard to illustrate.
  2. In many cases a decision maker cannot, or will not, say "OK, we'll roll a die to see which strategy we play", which is effectively what randomized strategizing boils down to.
2
Q

Setting the stage: Normandy example

A

Another example of a mixed strategy solution, the Normandy game (or Operation Overlord, from Dixit and Nalebuff p. 195), arises from the puzzle facing the German and Allied high commands in 1944. It was known that an invasion of Northern France would occur, and that there were only two real choices for where to attack: Calais and Normandy. Calais was easier to defend, but success there would be more valuable to the Allies as it would shorten the war. The game is set up the following way. C and N refer to Calais and Normandy. The Germans must decide where to focus their defence; the Allies must determine where to attack. First, the chances of success are laid out from the Allies' perspective, so percentages refer to the probability of a successful Allied landing.

For the Allies the problem is to choose probabilities of attacking Calais or Normandy that minimize the Germans' maximum, or best, outcome. Effectively this means choosing probabilities of attack that leave the Germans indifferent, in expected value terms, between defending one location and the other. To solve this mathematically, note that the value for the Allies can be taken as the negative of the value for the Germans.
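
As a rough sketch of that layout (the success percentages below are inferred from the calculation on the next card, so treat them as assumed rather than quoted):

```python
# Probability (in %) of a successful Allied landing for each combination of
# German defence and Allied attack, as used in the indifference calculation later.
allied_success = {
    ("defend Calais",   "attack Calais"):   20,
    ("defend Calais",   "attack Normandy"): 80,
    ("defend Normandy", "attack Calais"):  100,
    ("defend Normandy", "attack Normandy"): 60,
}

# The game is treated as zero-sum, so the German payoffs are simply the negatives.
german_payoff = {cell: -value for cell, value in allied_success.items()}
```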

3
Q

The intuition behind the math

A

For the Allies the problem is to choose probabilities of attacking Calais or Normandy that minimize the Germans' maximum, or best, outcome. Effectively this means choosing probabilities of attack that leave the Germans indifferent, in expected value terms, between defending one location and the other. To solve this mathematically, note that the value for the Allies can be taken as the negative of the value for the Germans.

The value to the Germans of defending Calais may be written as VG(C) = pA(C)(-20) + pA(N)(-80); this is the value to the Germans of a particular strategy choice given what the Allies are going to do: the value of the Germans defending Calais equals the probability of the Allies attacking Calais times that payoff, plus the probability of the Allies attacking Normandy times that payoff.

The value to the Germans of defending Normandy is VG(N) = pA(C)(-100) + pA(N)(-60); again, the probability of the Allies attacking Calais times that payoff, plus the probability of the Allies attacking Normandy times that payoff.
To find where they intersect (or where the Germans are indifferent), set the two expressions equal to one another and solve: -20 pA(C) - 80 pA(N) = -100 pA(C) - 60 pA(N), or 80 pA(C) = 20 pA(N), or pA(N) = 4 pA(C).
Since pA(C) = 1 - pA(N) (the two probabilities must add to one), substituting gives pA(N) = 4(1 - pA(N)) = 4 - 4 pA(N), or 5 pA(N) = 4, or pA(N) = 0.8, and thus pA(C) = 1 - 0.8 = 0.2.
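
A short sketch replicating this algebra (payoffs as in the expressions above):

```python
# German expected payoffs as a function of p_N = Pr(Allies attack Normandy).
def value_defend_calais(p_N):
    return (1 - p_N) * (-20) + p_N * (-80)

def value_defend_normandy(p_N):
    return (1 - p_N) * (-100) + p_N * (-60)

# Indifference: -20(1-p) - 80p = -100(1-p) - 60p  ->  80(1-p) = 20p  ->  p = 0.8
p_N = 80 / (80 + 20)

print(p_N)                         # 0.8
print(value_defend_calais(p_N))    # -68.0
print(value_defend_normandy(p_N))  # -68.0: the Germans are indifferent
```
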
4
Q

The intuition behind the mixed game diagram

A

The alternative solution method is by diagram (understand the intuition behind the diagram, how you would create it, and what you would do if a value changes). It effectively replicates the above procedure, but it also makes clear the logic of choosing the minimum point of the maximum outcome. The actual choice is made randomly, but according to the probability rule of the solution: for example, you would want a ten-sided die with "attack Normandy" on 8 of the 10 sides and then roll it, so that the random event comes up Normandy 80% of the time. You fix the probabilities beforehand and leave the actual outcome to the random draw just before you act; you act probabilistically because you cannot act with certainty.

The intuition behind the diagram is to compare the values of attacking one location or the other and the payoffs each choice yields to our opponent.

Probability of the Allies attacking Normandy: if it is 0, they are attacking Calais for certain (a pure strategy); if it is 1, they are attacking Normandy for certain.
The different lines show the value to the Germans of defending Normandy or Calais.

The Allies are asking: what is the worst we can make the Germans' best outcome? That worst point is at the intersection. If the probability of attacking Normandy were 0.9, the Germans would anticipate it and simply defend Normandy; if it were below 0.8, they would simply defend Calais. The probability that leaves them indifferent is attacking Normandy at 0.8.
I have to keep them guessing; I cannot let them know that I am choosing one option over the other.
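
A minimal sketch of actually playing the mix: the 0.8/0.2 probabilities are fixed in advance, but the realized choice is left to a random draw, so the opponent cannot predict it.

```python
import random

# Draw the Allied action according to the fixed equilibrium probabilities.
def allied_attack_choice(rng=random):
    return "Normandy" if rng.random() < 0.8 else "Calais"

# Any single draw is unpredictable, but over many plays the frequencies
# match the chosen probabilities.
draws = [allied_attack_choice() for _ in range(10_000)]
print(draws[:5])
print(draws.count("Normandy") / len(draws))   # roughly 0.8
```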

5
Q

How does a change in the payoff cause changes in the diagram (defending Calais becomes -100 instead of -80 on page 15)?

A

What happens if a payoff to one side goes up or down? How does that change the intersection, and how would that affect the probability of certain behaviours? If one payoff changes, for example the payoff of defending Calais (when the Allies attack Normandy) becomes -100 instead of -80, then the defend-Calais line becomes steeper, the intersection moves to the left, and the probability of the Allies attacking Normandy decreases.

In this scenario we are the Allies looking at the values for the Germans. We graph the value to them of defending Normandy, which runs from -100 (if we attack Calais) to -60 (if we attack Normandy), as one line, and the value of defending Calais, which runs from -20 to -80, as the other line. The two lines intersect somewhere, and that intersection gives the probability. The 0.8 probability is of the Allies attacking Normandy, which means a 0.2 probability of the Allies attacking Calais.
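
A sketch of this comparative-statics exercise, assuming the changed payoff is the defend-Calais value when the Allies attack Normandy (-100 instead of -80):

```python
# German expected payoffs as a function of p_N = Pr(Allies attack Normandy).
def value_defend_calais(p_N, payoff_if_normandy=-80):
    return (1 - p_N) * (-20) + p_N * payoff_if_normandy

def value_defend_normandy(p_N):
    return (1 - p_N) * (-100) + p_N * (-60)

# Original indifference:    80(1-p) = 20p  ->  p = 0.8
# With -100 instead of -80: 80(1-p) = 40p  ->  p = 2/3
p_old = 80 / (80 + 20)
p_new = 80 / (80 + 40)

print(p_old, p_new)   # 0.8  0.666...: attacking Normandy becomes less likely
print(value_defend_calais(p_new, payoff_if_normandy=-100))  # about -73.3
print(value_defend_normandy(p_new))                         # about -73.3 (indifferent again)
```

The steeper defend-Calais line pulls the intersection to the left, which is exactly the drop in the Normandy probability described above.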

6
Q

Criticism of game theory

  1. self-defeating
  2. different explanations outside of game theory that explain the selection among multiple equilibria + probabilistic decision making
  3. inconsistencies (backward induction) + accounting for rationality
A

Instrumental rationality can be self-defeating. The simplest way to consider this problem is the prisoner's dilemma. In PD games the purely "rational" individual should always defect, making the outcome Pareto inferior to one where they all cooperate. Thus, instrumental rationality makes people worse off, which in some sense violates the notion of rationality. At the other extreme may be the idea that norms of cooperation, or good behaviour, evolve to overcome the pitfalls of instrumental rationality. Moral imperatives (either embedded in the realization that instrumental rationality DOES lead to worse results, or in subconscious development of rules of thumb, or implicit, latent, or anticipated social enforcement of norms) mean that cooperative solutions can arise. There is an interesting debate about the capacity of humans to behave ethically or altruistically. Some would argue that humans have a higher moral conscience that allows them to put the interests of others before their own. Others suggest that altruism arises as a consequence of repetition and the understanding that there may be subsequent reciprocity. Still others have suggested that cooperative or altruistic behaviour has arisen instinctively or in evolutionary terms as a superior strategy for survival.

Indeterminacy: instrumental rationality does not always lead to a unique strategy selection. The problem with multiple-equilibrium games is that in practice we are more likely to see varied behaviour; other mechanisms, such as norms, can drive people to particular outcomes in a multiple-equilibrium scenario, as can constructivist accounts in which how we approach a problem depends on how we observe the different actors in the game. Others have argued that mixed strategy solutions are not really sensible, and that small differences in information or information analysis will lead to the selection of a specific equilibrium. Mixed strategy equilibria would certainly be very difficult to sell to politicians as a basis for making policy… imagine telling your minister that you are going to flip a coin to determine whether you will send your anti-terrorist squad to protect an airport or a nuclear reactor. This problem is perhaps most important for this class: in situations of conflict there are not only information and disinformation differences, there will also be a reluctance to adopt a probabilistic decision rule as opposed to a single definitive strategy selection. Distinguishing circumstances of actual mixed strategy choice from cases where there are minor informational, analytical, or perceptual differences is likely to be impossible. For example, dealing harshly with Iraq and in a more conciliatory fashion with North Korea might be part of a single coherent mixed strategy, or they might be completely separate decisions.

There are subtle inconsistencies in game theory principles. The most apparent is seen in the backward induction approach that makes a player contemplate strategies in response to behaviour that cannot occur if rationality is universal. Effectively, game theory's underlying assumptions need to be modified to allow for the fact that one or more players may be irrational, or may make mistakes. This kind of logic is interesting in a practical sense. While you might see yourself as playing completely rationally, you may be less sure about your opponent. If they make a move that you did not anticipate because you thought it "irrational" on their part, you need to seriously consider why. Is it because they have different (better? worse?) information than you do? Is it because they are irrational? If it is the latter, how will this affect how they respond to your behaviour? Could it be that your opponent is actually playing a "different game"? One possible argument about why Iraq failed to disclose, before the invasion, that it did not have any weapons of mass destruction is that it was not just playing against the US: it needed to keep up the possibility of having WMDs in order to discourage Iran from attacking it.
