Discrete Probability Theory Flashcards
Probability vocab:
- Experiment
- Sample space
- Event
Experiment:
A procedure that yields one of a given set of possible outcomes.
Sample space and event:
Sample space: the set of possible outcomes.
Event: a subset of the sample space.
Laplace’s definition of the probability of an event with finitely many possible outcomes:
If S is a finite nonempty sample space of equally likely outcomes, and E is an event, that
is, a subset of S, then the probability of E is
p(E) = |E|/|S|
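A quick worked example of Laplace's definition (the fair die and the names S and E are my own illustration, not from the card):

```python
from fractions import Fraction

# Laplace: p(E) = |E| / |S| when all outcomes are equally likely.
# Example: roll a fair six-sided die; E = "the outcome is even".
S = {1, 2, 3, 4, 5, 6}               # sample space
E = {s for s in S if s % 2 == 0}     # event, a subset of S

p_E = Fraction(len(E), len(S))
print(p_E)                           # 1/2
```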
THM:
probability of complement
proof too
Let E be an event in a sample space S. The probability of the event Ē = S − E, the complementary event of E, is given by
p(Ē) = 1 − p(E).
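A possible proof sketch for the "proof too" prompt (one standard argument via Laplace's definition; another uses p(S) = 1 with E and Ē disjoint):

```latex
% Since \bar{E} = S - E with E \subseteq S, we have |\bar{E}| = |S| - |E|, so
\[
  p(\bar{E}) = \frac{|\bar{E}|}{|S|} = \frac{|S| - |E|}{|S|}
             = 1 - \frac{|E|}{|S|} = 1 - p(E).
\]
```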
Probability of Union:
proof too
Let E1 and E2 be events in the sample space S. Then
p(E1 ∪ E2) = p(E1) + p(E2) − p(E1 ∩ E2).
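A possible proof sketch for this card's "proof too" prompt, using inclusion–exclusion on the counts (the same steps work with sums of p(s) in the general case):

```latex
% |E_1 \cup E_2| = |E_1| + |E_2| - |E_1 \cap E_2| by inclusion-exclusion, so
\begin{align*}
  p(E_1 \cup E_2)
    &= \frac{|E_1| + |E_2| - |E_1 \cap E_2|}{|S|} \\
    &= \frac{|E_1|}{|S|} + \frac{|E_2|}{|S|} - \frac{|E_1 \cap E_2|}{|S|}
     = p(E_1) + p(E_2) - p(E_1 \cap E_2).
\end{align*}
```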
Probabilistic reasoning
A common problem is determining which of two events is more likely.
Think of the Monty Hall three-door puzzle.
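As a sanity check on the Monty Hall intuition, here is a small simulation sketch (the door labels, trial count, and function name are my own choices): switching wins about 2/3 of the time, staying about 1/3.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One trial of the 3-door game; returns True if the player gets the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that hides a goat and is not the player's pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
wins_switch = sum(monty_hall_trial(switch=True) for _ in range(trials))
wins_stay = sum(monty_hall_trial(switch=False) for _ in range(trials))
print(wins_switch / trials)  # ~0.667
print(wins_stay / trials)    # ~0.333
```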
Let S be the sample space of an experiment with a finite or countable number of outcomes. We
assign a probability p(s) to each outcome s. We require that two conditions be met:
(i) 0 ≤ p(s) ≤ 1 for each s ∈ S
(ii) ∑_{s∈S} p(s) = 1.
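A minimal sketch of a non-uniform assignment that satisfies both conditions, using a made-up loaded die on which 3 is twice as likely as each other face:

```python
from fractions import Fraction

# Loaded die (my own example): p(3) = 2/7, p(s) = 1/7 otherwise.
p = {1: Fraction(1, 7), 2: Fraction(1, 7), 3: Fraction(2, 7),
     4: Fraction(1, 7), 5: Fraction(1, 7), 6: Fraction(1, 7)}

# Condition (i): every p(s) lies in [0, 1].
assert all(0 <= ps <= 1 for ps in p.values())
# Condition (ii): the probabilities sum to 1.
assert sum(p.values()) == 1
```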
What if the sample space is infinite?
When S is countably infinite, ∑_{s∈S} p(s) must be a convergent infinite series (summing to 1).
When the sample space is not countable, integral calculus is used to study the probabilities (continuous probability, beyond this section).
Probability Distribution?
The function p from the set of all outcomes of the sample space S is called a probability
distribution.
what is p? (I know)
A probability distribution is the mathematical function that gives the probabilities of occurrence of the different possible outcomes of an experiment.
Uniform distribution
probability assignment
Suppose that S is a set with n elements. The uniform distribution assigns the probability 1∕n
to each element of S.
The probability of the event E:
Definition 2 (using Definition 1):
The probability of the event E is the sum of the probabilities of the outcomes in E. That is,
p(E) = ∑_{s∈E} p(s).
(Note that when E is an infinite set, ∑_{s∈E} p(s) is a convergent infinite series.)
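Continuing the loaded-die sketch above (still a made-up example), Definition 2 computes p(E) by summing p(s) over the outcomes in E:

```python
from fractions import Fraction

# Loaded die from the earlier sketch: p(3) = 2/7, p(s) = 1/7 otherwise.
p = {s: Fraction(1, 7) for s in range(1, 7)}
p[3] = Fraction(2, 7)

E = {1, 3, 5}                   # event: the outcome is odd
p_E = sum(p[s] for s in E)      # Definition 2: p(E) = sum of p(s) over s in E
print(p_E)                      # 4/7
```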
Selecting an element of S at random:
The experiment of selecting an element from a sample space with a uniform distribution is called selecting an element of S at random.
Probabilities of Complement and Union:
These formulas remain valid; think of them in terms of Definition 2 of Section 7.2.
If E1, E2, … is a sequence of pairwise disjoint events in a sample space S, then
p(⋃_i Ei) is?
If E1, E2, … is a sequence of pairwise disjoint events in a sample space S, then
p(⋃_i Ei) = ∑_i p(Ei).
(Note that this theorem applies when the sequence E1, E2, … consists of a finite number or a
countably infinite number of pairwise disjoint events.)
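A small worked example of the countably infinite case (my own choice of distribution): take S = {1, 2, 3, …} with p(k) = 1/2^k and the pairwise disjoint events Ei = {2i}, the individual even outcomes.

```latex
\[
  p\Bigl(\bigcup_{i \ge 1} E_i\Bigr)
    = \sum_{i \ge 1} p(E_i)
    = \sum_{i \ge 1} \frac{1}{2^{2i}}
    = \sum_{i \ge 1} \left(\tfrac{1}{4}\right)^{i}
    = \frac{1/4}{1 - 1/4}
    = \frac{1}{3}.
\]
```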
Conditional Probability:
Let E and F be events with p(F) > 0. The conditional probability of E given F, denoted by
p(E ∣ F), is defined as
p(E ∣ F) = p(E ∩ F) / p(F) .
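A small worked example of the definition (the die and the events are my own illustration):

```python
from fractions import Fraction

# Roll a fair die; F = "the outcome is even", E = "the outcome is at least 4".
# Compute p(E | F) = p(E ∩ F) / p(F).
S = {1, 2, 3, 4, 5, 6}
E = {s for s in S if s >= 4}
F = {s for s in S if s % 2 == 0}

p = lambda A: Fraction(len(A), len(S))   # uniform distribution on S
print(p(E & F) / p(F))                   # 2/3
```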
Independent Events:
The events E and F are independent if and only if
p(E ∩ F) = p(E)p(F).
When
two events are independent, the occurrence of one of the events gives no information about the
probability that the other event occurs.
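A quick check of the definition on a made-up pair of events that happen to be independent:

```python
from fractions import Fraction

# Roll a fair die: E = "even outcome", F = "outcome is at most 4".
S = {1, 2, 3, 4, 5, 6}
E = {2, 4, 6}
F = {1, 2, 3, 4}

p = lambda A: Fraction(len(A), len(S))
print(p(E & F) == p(E) * p(F))  # True: 1/3 == (1/2) * (2/3)
```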
Types of independence:
The events E1, E2, … , En are pairwise independent if and only if p(Ei ∩ Ej) = p(Ei)p(Ej) for all pairs of integers i and j with 1 ≤ i < j ≤ n.
These events are mutually independent if p(Ei1 ∩ Ei2 ∩ ⋯ ∩ Eim) = p(Ei1)p(Ei2) ⋯ p(Eim) whenever ij, j = 1, 2, … , m, are integers with 1 ≤ i1 < i2 < ⋯ < im ≤ n and m ≥ 2.
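The classic two-coin example shows the two notions differ; the sketch below (outcome and event names are mine) checks that three events can be pairwise independent without being mutually independent:

```python
from fractions import Fraction
from itertools import product

S = set(product("HT", repeat=2))            # flip two fair coins
E1 = {s for s in S if s[0] == "H"}          # first coin is heads
E2 = {s for s in S if s[1] == "H"}          # second coin is heads
E3 = {s for s in S if s[0] == s[1]}         # the two coins match

p = lambda A: Fraction(len(A), len(S))

# Pairwise independent: every pair multiplies.
print(p(E1 & E2) == p(E1) * p(E2))          # True
print(p(E1 & E3) == p(E1) * p(E3))          # True
print(p(E2 & E3) == p(E2) * p(E3))          # True
# Not mutually independent: the triple intersection fails.
print(p(E1 & E2 & E3) == p(E1) * p(E2) * p(E3))  # False (1/4 != 1/8)
```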
Bernoulli trial :
Each performance of an experiment with two possible outcomes is
called a Bernoulli trial, after James Bernoulli, who made important contributions to probability
theory. In general, a possible outcome of a Bernoulli trial is called a success or a failure.
If p
is the probability of a success and q is the probability of a failure, it follows that p + q = 1.
The probability of exactly k successes in n independent Bernoulli trials
THEOREM:
prove it too.
The probability of exactly k successes in n independent Bernoulli trials, with probability of
success p and probability of failure q = 1 − p, is
C(n, k) p^k q^(n−k).
Considered as a function of k, we
call this function the binomial distribution.
b(k; n, p) = C(n, k) p^k q^(n−k).
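A hedged proof sketch for the "prove it too" prompt above (the standard counting argument):

```latex
% The outcome of n independent trials can be recorded as a string of n symbols,
% S (success) or F (failure). By independence, any particular string with
% exactly k successes has probability p^k q^{n-k}, and there are C(n, k) such
% strings, so
\[
  p(\text{exactly } k \text{ successes}) = \binom{n}{k} p^{k} q^{\,n-k}.
\]
```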
The sum of the probabilities that there are k successes when n independent Bernoulli
trials are carried out, for k = 0, 1, 2, … , n, equals?
∑_{k=0}^{n} b(k; n, p) = (p + q)^n = 1
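A numeric spot check of this identity (n and p below are my own example values):

```python
from math import comb

# Verify that the binomial probabilities b(k; n, p) = C(n, k) p^k q^(n-k)
# sum to (p + q)^n = 1 for one choice of n and p.
n, p = 10, 0.3
q = 1 - p
total = sum(comb(n, k) * p**k * q**(n - k) for k in range(n + 1))
print(total)  # 1.0, up to floating-point rounding
```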
Random Variable :
A random variable is a function from the sample space of an experiment to the set of real
numbers. That is, a random variable assigns a real number to each possible outcome.
Remark: Note that a random variable is a function. It is not a variable, and it is not random!
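A tiny example of a random variable as a function (the coin-flip setup and the name X are my own illustration): X = number of heads when a fair coin is flipped twice.

```python
from itertools import product

# Each outcome is a pair of coin results; X maps an outcome to a real number.
S = list(product("HT", repeat=2))
X = {outcome: outcome.count("H") for outcome in S}
print(X)  # {('H','H'): 2, ('H','T'): 1, ('T','H'): 1, ('T','T'): 0}
```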