Stats Flashcards

1
Q

What is an algebra over a sample space?

A

For a given sample space Ω, a collection F of events must satisfy certain conditions.
1. Ω ∈ F. (We must include event “something happens” in the set.)
2. A ∈ F ⇒ Ω \ A = A^C ∈ F. (If “A happens” is in set, so is “A doesn’t happen”.)
3. A, B ∈ F ⇒ A ∪ B ∈ F. (If “A happens” and “B happens” are events in set, so is “ A
and/or B happens”.)
A set F that satisfies these conditions is called an algebra (over Ω).

2
Q

What is an Atom?

A

Some events are indivisible and fundamental. We call these atoms. An event E ∈ F is an atom of F if:
1. E ≠ ∅
2. E ∩ A ∈ {∅, E} ∀A ∈ F
In words, each element of F contains either all of E, or none of E.

3
Q

What is a Probability Measure?

A

We say that P : F → R is a probability measure over (Ω, F) (where Ω is finite) iff:
1. P(A) ≥ 0 ∀A ∈ F. (All probabilities are non-negative.)
2. P(Ω) = 1. (Something certainly happens.)
3. P(A ∪ B) = P(A) + P(B) for any A, B ∈ F such that A ∩ B = ∅. (Probabilities of disjoint events are additive.)
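The axioms can be checked mechanically on a small finite sample space. A minimal sketch, assuming F is the full power set of Ω; the coin-toss weights are hypothetical:

```python
from itertools import chain, combinations

def powerset(omega):
    """All subsets of omega, as frozensets."""
    return [frozenset(s) for s in chain.from_iterable(
        combinations(omega, r) for r in range(len(omega) + 1))]

def is_probability_measure(omega, p):
    """Check the three axioms; p maps frozenset(event) -> probability."""
    events = powerset(omega)
    if any(p[e] < 0 for e in events):             # axiom 1: non-negativity
        return False
    if abs(p[frozenset(omega)] - 1.0) > 1e-9:     # axiom 2: P(Omega) = 1
        return False
    for a in events:                              # axiom 3: disjoint additivity
        for b in events:
            if not (a & b) and abs(p[a | b] - (p[a] + p[b])) > 1e-9:
                return False
    return True

# Hypothetical example: a fair coin, P built from equal point masses.
omega = {"H", "T"}
weights = {"H": 0.5, "T": 0.5}
P = {e: sum(weights[x] for x in e) for e in powerset(omega)}
```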

4
Q

What is a Probability Mass Function?

A

If F is an algebra containing finitely many atoms E1, . . . , En, a probability mass function, f, is a function defined for every atom as f(Ei) = pi with:
- p_i ∈ [0, 1] for each i
- ∑_{i=1}^n p_i = 1
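Checking the two conditions is a one-liner. A minimal sketch; the example vectors are hypothetical:

```python
def is_pmf(p):
    """True iff each p_i is in [0, 1] and the p_i sum to 1."""
    return all(0.0 <= pi <= 1.0 for pi in p) and abs(sum(p) - 1.0) < 1e-9

fair_die = [1/6] * 6    # f(E_i) = 1/6 for each of the six faces
not_a_pmf = [0.5, 0.6]  # sums to 1.1, so violates the second condition
```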

5
Q

What is a bet?

A

A bet, denoted b(M, A), pays reward M if A happens and nothing if A doesn’t happen.

Let m(M, A) denote the maximum you would pay for the bet (assuming you are the gambler). Equivalently m(M, A) denotes the minimum you would want to be paid in order to offer the bet (assuming you are the bookie).

6
Q

What is Symmetry of Bets?

A

If A1, . . . , Ak are disjoint/mutually exclusive and equally likely, with Ω = A1 ∪ . . . ∪ Ak (we say that they are collectively exhaustive events), then we should have
m(M, Ai) = m(M, Aj) for all i, j, so there is a constant c with m(M, Ai) = c for every i. Since the k bets together pay M with certainty, c = M/k, i.e.
m(M, Ai) = M/k.
We now define the following concept of probability:
P(Ai) = m(1, Ai) = 1/k

7
Q

When are probabilities coherent?

A

Consider a disjoint and exhaustive collection of events {A1, . . . , An}.
A collection of probabilities p1, . . . , pn for these events is coherent if:
1. ∀i ∈ {1, . . . , n} : p_i ∈ [0, 1]
2. ∑_{i=1}^n p_i = 1

8
Q

What are Dutch Books?

A

A Dutch book is a collection of bets which, for the seller:
- Cannot lead to a loss;
- Might lead to a profit.

A rational buyer would not accept such a collection of bets! If someone’s collection of probabilities is not coherent (equivalently, is incoherent), it is possible to construct a Dutch book to take advantage of that person.

If an individual sets their probabilities too low (summing to less than 1), a buyer can buy every bet at those prices for a guaranteed profit. If they set them too high (summing to more than 1), they will buy the bets at inflated prices and the seller is guaranteed a profit.
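A hedged numeric sketch of the first case: hypothetical prices on the disjoint, exhaustive pair {A, A^C} summing to 0.9. Buying both £1 bets costs 0.9, but exactly one must pay out £1, so the buyer is guaranteed a profit whatever happens:

```python
# Incoherent prices: they sum to 0.9 < 1 (hypothetical numbers).
prices = {"A": 0.4, "A_complement": 0.5}
cost = sum(prices.values())

# Whichever event occurs, exactly one of the two bets pays reward 1.
guaranteed_profit = 1.0 - cost   # positive whatever happens
```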

9
Q

How will a rational individual set their probabilities?

A

For any event A, any rational individual must have P(A) + P(A^C) = 1.

A rational individual must set
P(A) + P(B) = P(A ∪ B)
for any A, B ∈ F with A ∩ B = ∅.

10
Q

What is the Law of Total Probability?

A

Let B1, . . . , Bn be a partition of the sample space.
Let A ⊆ Ω be another event. We can prove that A can be written as:
A = ∪_{i=1}^n (B_i ∩ A)
Hence, by axiom 3,
P(A) = ∑_{i=1}^n P(B_i ∩ A)

11
Q

What is the Partition Theorem?

A

Let B1, . . . , Bn be a partition of the sample space, with each P(B_i) > 0.
Then
P(A) = ∑_{i=1}^n P(A|B_i) P(B_i)
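The theorem applies directly. A minimal sketch with hypothetical numbers: two factories produce 60%/40% of output, with defect rates 2% and 5%:

```python
p_B = [0.6, 0.4]            # P(B_i): the partition (hypothetical)
p_A_given_B = [0.02, 0.05]  # P(A | B_i): defect rates (hypothetical)

# Partition theorem: P(A) = sum_i P(A|B_i) P(B_i)
p_A = sum(pa * pb for pa, pb in zip(p_A_given_B, p_B))  # ≈ 0.032
```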

12
Q

What is Bayes’ Law?

A

If A and B are events of positive probability, then:
P(A|B) = P(B|A)P(A) / P(B)
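A hedged numeric sketch of Bayes’ law with hypothetical screening-test numbers (prevalence 1%, true-positive rate 95%, false-positive rate 5%); P(B) comes from the partition theorem:

```python
p_A = 0.01              # P(A): prevalence (hypothetical)
p_B_given_A = 0.95      # P(B|A): true-positive rate (hypothetical)
p_B_given_notA = 0.05   # P(B|A^C): false-positive rate (hypothetical)

# P(B) by the partition theorem over {A, A^C}.
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
# Bayes' law.
p_A_given_B = p_B_given_A * p_A / p_B   # ≈ 0.161
```

Even with a fairly accurate test, the posterior stays low because the prior is small.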

13
Q

What is the Expected Value of Imperfect Information?

A

The expected value of imperfect information (EVII) is the difference in expected value of the best decision made with access to an imperfect source of information regarding the outcome of
chance events, and the best decision made where no additional knowledge is available.

14
Q

What is the Expected Value of Perfect Information?

A

The expected value of perfect information (EVPI) is the difference in expected value of the best decision made with full knowledge of the outcome of chance events, and the best decision made where no additional knowledge is available.

15
Q

What are Maximin and Maximax strategies?

A

Maximin - maximise the worst case scenario
Maximax - maximise the best case scenario

16
Q

What is the Optimism-Pessimism Rule?

A

The maximin and maximax rules focus on the worst and best possible outcomes respectively, i.e. they assume we are totally pessimistic or totally optimistic. One might not be that extreme, so why not take one’s degree of optimism into account? Hence we introduce the optimism-pessimism rule: we consider both the best and worst outcome of each possible decision, and weight them by our individual degree of optimism. If we denote by α ∈ [0, 1] our degree of optimism, and by min(d) and max(d) the respective worst and best outcomes, then each decision d is scored as:
R̄(d) = (1 − α) · min(d) + α · max(d)
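The rule is a one-line weighted combination. A minimal sketch; the payoff table and α are hypothetical:

```python
def op_score(rewards, alpha):
    """Optimism-pessimism score: (1 - alpha)*min + alpha*max."""
    return (1 - alpha) * min(rewards) + alpha * max(rewards)

payoffs = {"safe": [40, 50], "risky": [0, 100]}  # outcomes per decision
alpha = 0.7                                      # fairly optimistic

best = max(payoffs, key=lambda d: op_score(payoffs[d], alpha))
# With alpha = 0.7 the risky option scores 70 against the safe option's 47.
```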

17
Q

What are the Axioms of Preferences?

A

A collection of preferences is considered rational if they obey the following four axioms:
1. Completeness. For any A, B we must have one and only one of the following:
A ≻ B A ∼ B A ≺ B
2. Transitivity: assumes that preferences are consistent across any three options.
A ⪰ B and B ⪰ C ⇒ A ⪰ C
3. Independence: a preference holds independently of the possibility of a third outcome.
If A ≻ B then for any t ∈ (0, 1]
tA + (1 − t)C ≻ tB + (1 − t)C
4. Continuity: there exists a tipping point between being better than and worse than a given
middle option. If A ≻ B ≻ C, there exists t ∈ (0, 1) such that
tA + (1 − t)C ∼ B
An alternative to the continuity axiom is the Archimedean axiom. It states that for A ≻ B ≻ C, there exist α, β ∈ (0, 1) such that
αA + (1 − α)C ≻ B ≻ βA + (1 − β)C

18
Q

How can we determine an individual’s level of risk from their utility function?

A

Consider a bet paying r∗ with probability α and r0 with probability 1 − α, and let m = f(α) be its certain monetary equivalent (CME). Three options:
- m < αr∗ + (1 − α)r0: the client will accept a lower EMV in exchange for reduced risk - they are risk averse.
- m = αr∗ + (1 − α)r0: the client values the bet at exactly its EMV - they are risk neutral (and so the EMV strategy is itself a risk-neutral strategy).
- m > αr∗ + (1 − α)r0: the client will pay more than the EMV, accepting increased risk - they are risk seeking.

19
Q

What is the Utility function?

A

We define the utility function as follows: U(m) = f^−1(m). The value U(x) for amount x is the probability α of getting reward r∗ (with the only alternative being getting reward r0) for which that bet has CME x.
We have U(r0) = 0, U(r∗) = 1 and U(r) ∈ [0, 1] for any possible reward r0 ⪯ r ⪯ r∗. We can define U(r) for any such r.

20
Q

How can we tell an individual’s level of risk from their utility function?

A

For a utility of the form U(x) = x^α, U(x) is concave when α < 1, and convex when α > 1.

  • If f(x) is strictly convex,
    f(px1 + (1 − p)x2) < pf(x1) + (1 − p)f(x2)
  • If f(x) is strictly concave,
    f(px1 + (1 − p)x2) > pf(x1) + (1 − p)f(x2)

For any bet with probability p of reward r1 and probability (1 − p) of reward r2, a concave utility function with α < 1 implies that the utility of the expectation is greater than the expectation of the utility. The client’s utility of the EMV is higher than their expected utility for the bet. A concave utility means our client is risk averse.

Conversely, for the same bet, a convex utility function with α > 1 implies the expectation of utility is greater than the utility of expectation. The client’s utility of the EMV is lower than their expected utility for the bet. A convex utility means the client is risk seeking.

21
Q

What is Risk Premium?

A

A risk premium, defined as CME − EMV, is what a person pays for access to a bet.
(A risk-neutral client always has a risk premium of £0, as their CME is equal to the EMV.)

22
Q

How can we find an individual’s CME?

A

The CME is x such that U(x) = Expected Utility.
ie. find the expected utility and take the inverse of the utility function with that value.
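The recipe above can be sketched directly, assuming a hypothetical utility U(x) = √x (so U⁻¹(u) = u²) and a bet paying 100 or 0 with probability 0.5 each:

```python
import math

def U(x):
    return math.sqrt(x)   # hypothetical concave (risk-averse) utility

def U_inv(u):
    return u ** 2         # its inverse

expected_utility = 0.5 * U(100) + 0.5 * U(0)  # = 5.0
cme = U_inv(expected_utility)                 # = 25.0
emv = 0.5 * 100 + 0.5 * 0                     # = 50.0
# CME < EMV, consistent with a concave (risk-averse) utility.
```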

23
Q

What is a Purely Competitive Game?

A

A purely competitive game is one where any gain made by one player is precisely matched by the loss to the other player. Every purely competitive game is also (equivalent to) a zero-sum game.

24
Q

What is a Symmetric game?

A

A game is symmetric if
R(d, δ) = S(δ, d)
This is to say that if you swap the players’ decisions, you also swap the rewards to each player - their positions are interchangeable. Rock Paper Scissors is a symmetric game, as is the Prisoner’s Dilemma. This is frequently the case when D = ∆, but, as the dating example shows, it’s not guaranteed.

25
Q

What is the Expected reward from a game?

A

Just as with decision problems, we can consider expected (utility of) rewards in game theory.
For Player 1, the expected reward of move d_i is
R̄(d_i) = E[R(d_i, δ)] = ∑_{j=1}^m R(d_i, δ_j) p_j
with the expectation taken over the actions of Player 2.

26
Q

When is a game separable?

A

We call a game separable if there exist functions (not necessarily unique)
r1 : D → R
s1 : D → R
r2 : ∆ → R
s2 : ∆ → R
such that
R(d, δ) = r1(d) + r2(δ)
S(d, δ) = s1(d) + s2(δ)
Or, equivalently, for any (di, δj ) ∈ D × ∆
R(di, δj ) = r1(di) + r2(δj )
S(di, δj ) = s1(di) + s2(δj )

27
Q

How can we separate players’ contributions to reward functions?

A

We can separate each player’s contribution to the reward functions:
R̄(d_i) = r1(d_i) + ∑_{j=1}^m r2(δ_j) p_j
S̄(δ_j) = ∑_{i=1}^n s1(d_i) q_i + s2(δ_j)

28
Q

What is the strategy for separable games?

A

Separable games are simple to find good strategies for. From the last equations, Player 1 can only influence first part of R. Player 2 can only influence second part of S. So each player may as well choose the decision which maximises the term they can affect.
Player 1’s strategy should depend only on r1. They should choose a strategy from
D∗:= {d∗ ∈ D|r1(d∗) ≥ r1(di), i = 1, . . . , n}
Player 2’s strategy should depend only on s2. They should choose a strategy from
∆∗ = {δ∗ ∈ ∆|s2(δ∗) ≥ s2(δj ), j = 1, . . . , m}

29
Q

How can we tell if a game is separable?

A

For a game with a 2x2 payoff matrix,
(a b)
(c d)
the game is separable if a − c = b − d.
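The test is easy to code. A minimal sketch; both example matrices are hypothetical:

```python
def separable_2x2(a, b, c, d):
    """Separability test for payoff matrix (a b; c d): a - c == b - d."""
    return abs((a - c) - (b - d)) < 1e-9

# (3 1; 2 0): a - c = 1 and b - d = 1, so separable.
# (1 0; 0 1): a - c = 1 but b - d = -1, so not separable.
```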

30
Q

What is Common Knowledge?

A

Common knowledge is every piece of information that a) is known by all players, b) is known by all players to be known to all other players, c) and so on at every higher level (everyone knows that everyone knows that . . . ).

31
Q

What is a dominant move?

A

We call a move dominant if it is at least as good as every other possible move, regardless of how an opponent acts. That is, move d∗ is said to dominate all other strategies if ∀j, ∀d_i ≠ d∗:
R(d∗, δ_j) ≥ R(d_i, δ_j)
It is said to strictly dominate those strategies if ∀j, ∀d_i ≠ d∗:
R(d∗, δ_j) > R(d_i, δ_j)

32
Q

If a player has a dominant strategy, what is the optimal move?

A

If a game has a payoff matrix such that Player 1 has a dominant strategy d∗, then the optimal move for Player 1 is d∗, irrespective of p.

33
Q

How should a player respond to a dominant move?

A
  • Player 1, being rational, plays move d∗.
  • Player 2, knowing Player 1 is rational, knows they’ll play move d∗.
  • Player 2 can exploit this knowledge to play optimal move given Player 1 plays d∗.
  • Player 2 plays move δ∗ such that
    ∀j : S(d∗, δ∗) ≥ S(d∗, δ_j)
  • If there are several possible δ∗, then they can choose arbitrarily.
34
Q

What is Iterated Strict Domination?

A

The idea behind iterated strict domination (ISD): remove your opponent’s obvious bad moves from consideration, as they’re not going to play them.

Basically, remove any dominated strategies from the matrix.

35
Q

What are Purely Competitive Games?

A

In a purely competitive game, one player’s reward is improved only at an equivalent cost to the other player’s reward.
This means that if
R(d′, δ) = R(d, δ) + x
then
S(d′, δ) = S(d, δ) − x

Such games are also called zero-sum, as the two entries in each cell of the payoff matrix sum to 0.

36
Q

What is a Mixed Strategy?

A

A mixed strategy for Player 1 is a probability distribution over D. If a player has mixed strategy x = (x1, . . . , xn), they will play move di with probability xi. A mixed strategy can be defined by using a randomization device, such as a spinner to select a move.

A pure strategy is a mixed strategy in which exactly one of the xi is non-zero, and is therefore equal to 1.

37
Q

What are the expected payoffs for each case of mixed strategy games?

A

Case By Case:
1. Player 1 plays mixed strategy x and player 2 plays pure strategy δj ?
2. Player 1 plays pure strategy di and player 2 plays mixed strategy y?
3. Player 1 plays mixed strategy x and player 2 plays mixed strategy y?

In Case 1, all the uncertainty is in Player 1’s own move, so Player 1 has expectation
∑_{i=1}^n x_i R(d_i, δ_j)
In Case 2, all uncertainty for Player 1 comes from Player 2’s strategy, so Player 1 has expectation
∑_{j=1}^m R(d_i, δ_j) y_j
In Case 3, Player 1 has both their own randomness and the uncertainty of Player 2 to consider, giving expectation
∑_{i=1}^n ∑_{j=1}^m x_i R(d_i, δ_j) y_j = x^T M y
(Note here the benefit of expressing a zero-sum game as a pay-off matrix.)
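The three cases collapse into the single bilinear form x^T M y, since a pure strategy is just a degenerate mixed one. A minimal sketch with a hypothetical 2x2 zero-sum payoff matrix:

```python
def expected_reward(x, M, y):
    """Player 1's expectation: sum_i sum_j x_i R(d_i, delta_j) y_j."""
    return sum(x[i] * M[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

M = [[1, -1],
     [-1, 1]]       # hypothetical matching-pennies style payoffs
x = [0.5, 0.5]      # Player 1 mixed strategy
y = [0.5, 0.5]      # Player 2 mixed strategy
# Uniform mixing on both sides gives expectation 0 for this M.
```

Setting x or y to a unit vector like [1, 0] recovers Cases 1 and 2.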

38
Q

What is the value for the two players and how are they linked?

A

V1 = max_x min_y x^T M y can be seen as the highest value Player 1 can be sure to get without knowing Player 2’s decisions. Similarly, V2 = min_y max_x x^T M y is the lowest value Player 2 can hold Player 1 to.

(Von Neumann Minimax Theorem)
V1 and V2 as defined previously satisfy V1 = V2.
The unique value V := V1 = V2 is called the value of the game.

39
Q

How do we work out the probabilities of a mixed strategy?

A

This depends on the number of moves each player has in the game.

NOTE: Always look for dominant moves

For a 2x2 game, simply consider the expected payoff x^T M y with x = (x1, 1 − x1) and y = (y1, 1 − y1), and choose the value of x1 that maximises Player 1’s payoff given the worst possible y (and likewise for y1).

For a 2x3 game, again use x = (x, 1 − x) and find Player 1’s expected payoff against each of Player 2’s pure strategies. Then choose x to maximise the payoff given the worst response (i.e. choose x so the payoff is as high as possible and cannot be reduced by a different move from Player 2).

For a 3x3 game, proceed as for 2x3, but with (x, y, 1 − x − y) as Player 1’s strategy. This yields a system of 3 linear equations (in V, x and y) that we can solve.
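For the 2x2 case with no dominant moves, equalising Player 2’s responses gives a standard closed form, sketched below for a hypothetical payoff matrix (a b; c d):

```python
def solve_2x2(a, b, c, d):
    """Equalising mixed strategies for a 2x2 zero-sum game (no saddle point)."""
    denom = a - b - c + d
    x1 = (d - c) / denom              # P(Player 1 plays row 1)
    y1 = (d - b) / denom              # P(Player 2 plays column 1)
    value = (a * d - b * c) / denom   # value of the game
    return x1, y1, value

x1, y1, v = solve_2x2(1, -1, -1, 1)   # matching pennies
# Both players mix 50/50 and the value of the game is 0.
```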

40
Q

What is Pareto Optimality?

A

A collection of strategies (one per player) in a game is said to be (strongly) Pareto optimal/efficient if no change can be made which will improve one player’s rewards without harming any other player.

A collection of strategies is weakly Pareto optimal if no change can be made which will improve all players’ rewards.

In a game of pure conflict (zero-sum), all sets of pure strategies are Pareto optimal.

41
Q

What is a pay-off polygon and how can we find the Pareto optimal pairs from it?

A

The pay-off polygon is a plot of all pay-outs for the players. The points on the upper right boundary are the Pareto optimal pairs.

42
Q

What is a Nash Equilibrium?

A

A collection of strategies (one per player) in a game is said to be a Nash equilibrium if no player can improve their reward by unilaterally changing their strategy.

In a two-players game, mixed strategies x and y comprise a Nash equilibrium if:
∀x′: R(x, y) ≥ R(x′, y)
∀y′: S(x, y) ≥ S(x, y′)
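The definition can be checked move-by-move for pure strategies. A minimal sketch using the standard Prisoner’s Dilemma payoffs (the encoding below is our own):

```python
R = [[-1, -3],   # Player 1 payoffs: row = P1 move, column = P2 move
     [0, -2]]    # moves: 0 = cooperate, 1 = defect
S = [[-1, 0],    # Player 2 payoffs
     [-3, -2]]

def is_pure_nash(i, j):
    """True iff no unilateral deviation improves either player's payoff."""
    best_for_1 = all(R[i][j] >= R[k][j] for k in range(2))
    best_for_2 = all(S[i][j] >= S[i][l] for l in range(2))
    return best_for_1 and best_for_2

# (defect, defect) is the unique pure Nash equilibrium here.
```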

43
Q

What is interchangeability?

A

Two pairs (x, y) and (x′, y′) are interchangeable with respect to some property if (x′, y) and (x, y′) have the same property.
In a two-player game, that means that if x and x′ are two strategies for Player 1, y and y′ two strategies for Player 2, such that (x, y) and (x′, y′) are two Nash equilibria, then (x, y′) and (x′, y) are also Nash equilibria.

44
Q

What is solvability in the Nash sense?

A

A game is Nash solvable if all equilibrium pairs are interchangeable (with respect to being equilibrium pairs).

All zero-sum games are Nash solvable.

45
Q

What is solvability in the Strict sense?

A

A game is solvable in the strict sense if:
- amongst the Pareto optimal pairs, there is at least one equilibrium pair
- the equilibrium Pareto optimal pairs are interchangeable.
The solution to such a game is the set of equilibrium Pareto optimal pairs.

In a zero-sum game, all strategies are Pareto optimal and so this reduces to the notion of Nash solvability: all zero-sum games are solvable in the strict sense.

46
Q

What is solvability in the completely weak sense?

A

A game is solvable in the completely weak sense if after iterated elimination of dominated strategies, the reduced game is solvable in the strict sense.
The solution is then the strict solution of the reduced game.

In a reduced zero-sum game, no strategies are dominated. So this reduces to the notion of solvability in the strict sense: all zero-sum games are solvable in the completely weak sense.

47
Q

What is Transferable Utility?

A

A coalitional game is said to have transferable utility when:
- the payoffs to a coalition may be freely redistributed among its members;
- this is satisfied whenever there is a universal currency used for exchange in the system;
- it means that each coalition can be assigned a single value as its payoff.

i.e. the payoff to a coalition can be represented by one amount, which is then distributed among its members.

48
Q

What is a Coalitional Game with Transferable Utility?

A

A coalitional game with transferable utility is a pair (N, v), where
- N is a finite set of players, indexed by i;
- v : 2^N → R is a characteristic function that associates with each coalition S ⊆ N a real-valued payoff v(S) that the coalition’s members can distribute among themselves.
We assume that v(∅) = 0.

49
Q

What are the 4 Fair Payoff Axioms?

A

A fair payoff distribution must satisfy four axioms:
- Symmetry: if two agents contribute the same, they should receive the same pay-off.
Two players i and j are said to be interchangeable if they always contribute the same amount to every coalition of the other agents:
for every coalition S with i ∉ S and j ∉ S, v(S ∪ {i}) = v(S ∪ {j}).
Such interchangeable agents should receive the same payoffs: x_i = x_j.
- Dummy player: agents that do not add value to any coalition should get what they earn on their own.
Agent i is said to be a dummy player if the amount that i contributes to any coalition is exactly the amount that i is able to achieve alone:
for every coalition S with i ∉ S, v(S ∪ {i}) − v(S) = v({i}).
A dummy player should receive a payment equal to exactly the amount they achieve on their own: x_i = v({i}).
- Additivity: if two games are combined, the value a player gets should be the sum of the values it gets in the individual games.
Consider two coalitional games over the same set of agents, with characteristic functions v⁽¹⁾ and v⁽²⁾. If we re-model the setting as a single game in which each coalition S achieves a payoff of v⁽¹⁾(S) + v⁽²⁾(S), each agent’s payment should be the sum of the payments they would have received under the two separate games:
if x⁽¹⁾ and x⁽²⁾ are payment distributions in the games (N, v⁽¹⁾) and (N, v⁽²⁾) respectively, then x⁽³⁾_i = x⁽¹⁾_i + x⁽²⁾_i, where x⁽³⁾ is the payment distribution in the game (N, v⁽¹⁾ + v⁽²⁾).
- Efficiency: the sum of all players’ rewards should equal the overall reward of the grand coalition:
∑_{i∈N} x_i = v(N)

50
Q

What is the Shapley Value?

A

Given a coalitional game (N, v), there is a unique payoff division x that divides the full payoff of the grand coalition and that satisfies the Symmetry, Dummy player, Additivity and Efficiency
axioms.
This payoff division is called the Shapley value, and is defined as follows.

x_i = (1/|N|!) ∑_{S ⊆ N\{i}} |S|! (|N| − |S| − 1)! [v(S ∪ {i}) − v(S)]
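The Shapley value can equivalently be computed by averaging each player’s marginal contribution over all orderings of the players. A minimal sketch with a hypothetical two-player characteristic function:

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Average marginal contribution of each player over all orderings."""
    phi = {i: 0.0 for i in players}
    for order in permutations(players):
        coalition = frozenset()
        for i in order:
            phi[i] += v(coalition | {i}) - v(coalition)
            coalition = coalition | {i}
    return {i: phi[i] / factorial(len(players)) for i in phi}

# Hypothetical game: v({1}) = 1, v({2}) = 2, v({1,2}) = 4.
table = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 2,
         frozenset({1, 2}): 4}
x = shapley([1, 2], lambda S: table[frozenset(S)])
# x = {1: 1.5, 2: 2.5}; note x[1] + x[2] = v({1,2}) (efficiency).
```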

51
Q

What is Prospect theory?

A

Instead of looking directly at rewards, prospect theory looks at gains and losses relative to a reference point.

It does so by considering a probability weighting function π(p) and a value function v(x). So unlike EUT, where the value of a bet with rewards x1, . . . , xn and probabilities p1, . . . , pn is
U = ∑_{i=1}^n p_i · U(x_i)
prospect theory assigns the value
V = ∑_{i=1}^n π(p_i) · v(x_i)

52
Q

What should a weighting function be?

A

A weighting function π(·) should be such that its graph is:
- regressive - intersects the diagonal from above
- asymmetric - has a fixed point at about 1/3
- inverse s-shaped - concave on an initial interval and convex beyond it
- reflective - assigns equal weight to a given loss-probability as to the corresponding gain-probability

We can combine this with the standard concave-convex shape of the value function to get the fourfold pattern of risk attitudes:
- risk averse for gains of high probability
- risk averse for losses of small probability
- risk seeking for losses of high probability
- risk seeking for gains of small probability

53
Q

What is Tversky & Kahneman’s Probability Weighting Function?

A

Defined by
w(p) = p^γ / [p^γ + (1 − p)^γ]^(1/γ), 0 ≤ γ < 1, 0 ≤ p ≤ 1
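A minimal sketch of this weighting function, using the often-quoted fitted value γ = 0.61 (an empirical estimate, not part of the definition):

```python
def w(p, gamma=0.61):
    """Tversky & Kahneman probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Small probabilities are overweighted, large ones underweighted:
# w(0.01) > 0.01 while w(0.9) < 0.9, and w(0) = 0, w(1) = 1.
```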

see notes for example graphs.

54
Q

What is the Prelec function?

A

Probability weighting function given by
w(p) = e^[−β((− ln p)^α)], α > 0, β > 0

55
Q

What is the Tversky & Kahneman utility function?

A

v(x) = x^α if x ≥ 0
v(x) = −λ(−x)^β if x < 0
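A minimal sketch of this value function, using the commonly quoted fitted parameters α = β = 0.88 and λ = 2.25 (empirical estimates, not part of the definition):

```python
def v(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky & Kahneman value function: concave for gains, convex for losses."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

# Loss aversion: a loss of 10 hurts more than a gain of 10 pleases,
# i.e. abs(v(-10)) > v(10).
```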