Conditional Probability Flashcards
Conditional probability and the notation P(A|B)
Suppose the person who gave you the die rolled it behind your back and told you the roll was odd. Now what is the probability that the roll was a 3?
We define the conditional probability of event A, given that B has occurred, as follows.
P(A|B) = P(A & B) / P(B). That is, P(A|B) is the probability that both A and B occur, divided by the probability that B occurs.
Here A = {3} and B = {1, 3, 5}, so A is a subset of B and the probability of both A and B happening is just the probability of A, namely 1/6. Since P(B) = 3/6 = 1/2, we get P(A|B) = (1/6)/(1/2) = 1/3, the reciprocal of the number of odd numbers between 1 and 6 (inclusive).
The probability is conditional on this new information: given that the roll was odd, the probability that it was a 3 is now one third.
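The die example above can be checked by direct enumeration of the sample space. This is a minimal sketch using exact fractions; the event names `A` and `B` mirror the definition P(A|B) = P(A & B)/P(B).

```python
# Sketch: verify P(roll is 3 | roll is odd) by enumerating a fair die.
from fractions import Fraction

outcomes = range(1, 7)                   # fair six-sided die
B = {x for x in outcomes if x % 2 == 1}  # conditioning event: roll is odd
A = {3}                                  # event of interest: roll is a 3

p_B = Fraction(len(B), 6)                # P(B) = 3/6
p_A_and_B = Fraction(len(A & B), 6)      # P(A & B) = 1/6, since A is a subset of B
p_A_given_B = p_A_and_B / p_B
print(p_A_given_B)  # 1/3
```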
Bayes’ Rule
P(B|A) = P(B & A)/P(A) = P(A|B) * P(B) / P(A). This is a simple form of Bayes' Rule, which relates the two conditional probabilities.
Suppose we don't know P(A) itself, but only its conditional probabilities: the probability that A occurs if B occurs and the probability that it occurs if B doesn't occur. These are P(A|B) and P(A|~B), respectively, where ~B denotes 'not B' or 'the complement of B'.
Substituting P(A) = P(A|B) * P(B) + P(A|~B) * P(~B) (the law of total probability) into the denominator gives:
P(B|A) = P(A|B) * P(B) / ( P(A|B) * P(B) + P(A|~B) * P(~B) )
A classic application is diagnostic testing, where D is the event of having a disease and + is a positive test result:
P(D|+) = P(+|D) * P(D) / ( P(+|D) * P(D) + P(+|~D) * P(~D) )
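The diagnostic-test form can be sketched numerically. The sensitivity, specificity, and prevalence values below are hypothetical, chosen only to illustrate how a rare disease keeps P(D|+) low even for an accurate test.

```python
# Sketch of the diagnostic-test form of Bayes' Rule.
# All three numbers are made up for illustration:
sensitivity = 0.99   # P(+|D): test is positive given disease
specificity = 0.95   # P(-|~D), so P(+|~D) = 1 - specificity
prevalence = 0.001   # P(D): disease is rare in the population

p_pos_given_no_d = 1 - specificity
# Bayes' Rule with the law-of-total-probability denominator:
p_d_given_pos = (sensitivity * prevalence) / (
    sensitivity * prevalence + p_pos_given_no_d * (1 - prevalence)
)
print(round(p_d_given_pos, 4))  # 0.0194
```

Despite a 99% sensitive test, a positive result here implies only about a 2% chance of disease, because false positives from the large healthy population dominate the denominator.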
iid
We'll conclude with iid. Random variables are said to be iid if they are independent and identically distributed. Independent means "statistically unrelated to one another"; identically distributed means "all have been drawn from the same population distribution".
Random variables that are iid are the default model for random samples, and many important results in statistics assume that variables are iid.
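An iid random sample can be sketched as repeated independent draws from one fixed distribution; each call below is independent of the others and uses the same (identical) distribution, here a standard normal chosen for illustration.

```python
# Sketch: 1000 iid draws from a standard normal distribution N(0, 1).
import random

random.seed(0)  # fixed seed so the sketch is reproducible
sample = [random.gauss(0, 1) for _ in range(1000)]

# With iid draws, the sample mean should land near the population mean of 0.
mean = sum(sample) / len(sample)
print(mean)
```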