Exam 2 (pt. 1) Flashcards
Chapter 4 Content
A random variable X assigns a numerical value X(s) to . . .
each possible outcome s of the experiment
The source of the randomness in a random variable is _______________, in which a sample outcome s ∈ S is chosen
according to a ___________
the experiment itself
probability function P
A random variable X is said to be discrete if there is a finite list of values a1, a2, . . . , an or an infinite list of values a1, a2, … such that . . .
P(X = aj) ∈ [0, 1]
∑j P(X = aj) = 1
If X is a discrete r.v., then the finite or countably infinite set of values x such that P(X = x) > 0 is called the ______ of X.
support
The _____________ specifies the probabilities of all events associated with the r.v.
distribution of a random variable
The probability mass function (PMF) of a discrete r.v. X is the function pX given by _________. Note that this is ________
if x is in the support of X, and _________ otherwise.
pX (x) = P(X = x)
positive, zero
The cumulative distribution function (CDF) of an r.v. X is the function F given by __________
F(x) = P(X ≤ x)
The expected value of a discrete r.v. X whose distinct possible values are x1, x2, . . ., is defined by . . .
E(X) = ∑j xj P(X = xj)
The _____________ of X is a weighted average of the possible values that X can take on, weighted by their probabilities
expected value
For any r.v.s X, Y and any constant c there are 4 primary manipulations to know for the E(X):
1. E(c) = c
2. E(X + Y) = E(X) + E(Y)
3. E(X + c) = E(X) + c
4. E(cX) = cE(X)
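A quick Python sketch checking these four rules on a small hand-built PMF (the values and probabilities below are made up for illustration, not from the cards):

```python
# Verify the four expectation rules on a toy discrete PMF {value: probability}.

def expect(pmf):
    """E(X) for a discrete PMF given as {value: probability}."""
    return sum(x * p for x, p in pmf.items())

X = {0: 0.2, 1: 0.5, 2: 0.3}   # a made-up discrete r.v. with E(X) = 1.1
c = 4.0

# 1. E(c) = c: a constant r.v. puts all its mass on c.
assert expect({c: 1.0}) == c

# 2. E(X + Y) = E(X) + E(Y); here X and Y are independent, so the joint
#    PMF of X + Y is built by multiplying marginal probabilities.
Y = {0: 0.6, 3: 0.4}
joint_sum = {}
for x, px in X.items():
    for y, py in Y.items():
        joint_sum[x + y] = joint_sum.get(x + y, 0.0) + px * py
assert abs(expect(joint_sum) - (expect(X) + expect(Y))) < 1e-12

# 3. E(X + c) = E(X) + c: shift every value by c.
shifted = {x + c: p for x, p in X.items()}
assert abs(expect(shifted) - (expect(X) + c)) < 1e-12

# 4. E(cX) = c E(X): scale every value by c.
scaled = {c * x: p for x, p in X.items()}
assert abs(expect(scaled) - c * expect(X)) < 1e-12
```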
The variance of an r.v. X is ________
Var(X) = E[(X − E(X))^2] = E(X^2) − (E(X))^2
For any r.v.s X, Y and any constant c there are 4 primary manipulations to know for the Var(X):
1. Var(X + c) = Var(X)
2. Var(cX) = c^2 Var(X)
3. Var(X + Y) = Var(X) + Var(Y) if X and Y are independent
4. Var(X) >= 0 always, with Var(X) = 0 if and only if X is a constant (with probability 1)
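The variance rules can be checked the same way on a toy PMF (all values below are illustrative):

```python
# Verify the four variance rules on a toy discrete PMF {value: probability}.

def expect(pmf):
    return sum(x * p for x, p in pmf.items())

def var(pmf):
    mu = expect(pmf)
    return sum((x - mu) ** 2 * p for x, p in pmf.items())

X = {0: 0.2, 1: 0.5, 2: 0.3}   # made-up r.v.; Var(X) = 0.49
c = 3.0

# 1. Var(X + c) = Var(X): shifting does not change spread.
shifted = {x + c: p for x, p in X.items()}
assert abs(var(shifted) - var(X)) < 1e-12

# 2. Var(cX) = c^2 Var(X).
scaled = {c * x: p for x, p in X.items()}
assert abs(var(scaled) - c**2 * var(X)) < 1e-12

# 3. Var(X + Y) = Var(X) + Var(Y) for independent X and Y.
Y = {0: 0.6, 3: 0.4}
joint = {}
for x, px in X.items():
    for y, py in Y.items():
        joint[x + y] = joint.get(x + y, 0.0) + px * py
assert abs(var(joint) - (var(X) + var(Y))) < 1e-12

# 4. Var is never negative, and a constant r.v. achieves the minimum, 0.
assert var({c: 1.0}) == 0.0
```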
An r.v. X is said to have the Bernoulli distribution with
parameter p if . . .
P(X = 1) = p and P(X = 0) = 1 − p
where 0 < p < 1
A Bernoulli distribution is notated as . . .
X ∼ Bern(p)
where p = P(X = 1)
An experiment that can result in either a “success” or a “failure” (but not both) is called a ___________
Bernoulli trial
What is the PMF of a Bernoulli distribution?
pX(x) = p^x (1 − p)^(1−x) for x ∈ {0, 1}; that is, P(X = 1) = p and P(X = 0) = 1 − p
What is the CDF of a Bernoulli distribution?
F(x) = 0 for x < 0, F(x) = 1 − p for 0 ≤ x < 1, and F(x) = 1 for x ≥ 1
Any r.v. whose possible values are 0 and 1 has a ___________, with p the probability of the r.v. equaling 1. This number p is called the ___________ of the distribution
Bern(p) distribution
parameter
Let X ∼ Bern(p). Then E(X) = ________
E(X) = 1 · p + 0 · (1 − p) = p
Let X ∼ Bern(p). Then Var(X) = ________
Var(X) = p(1 − p)
Suppose that n independent Bernoulli trials are performed, each with the same success probability p. Let X be the number of successes. The distribution of X is called the ________ distribution
Binomial
A Binomial distribution is notated as . . .
X ∼ Bin(n, p)
parameters n and p, where n is the number of trials and p is the success probability
What is the PMF of a Binomial distribution?
P(X = k) = (n choose k) p^k (1 − p)^(n−k) for k = 0, 1, . . . , n
Let X ∼ Bin(n,p). Then E(X) = ________
E(X) = np
Let X ∼ Bin(n,p). Then Var(X) = ________
Var(X) = np(1 − p)
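A minimal sketch verifying the Binomial PMF, mean, and variance formulas numerically (n = 10 and p = 0.3 are arbitrary illustrative choices):

```python
from math import comb

def binom_pmf(k, n, p):
    # P(X = k) = C(n, k) p^k (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.3
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]

assert abs(sum(pmf) - 1) < 1e-12                       # PMF sums to 1
mean = sum(k * pk for k, pk in enumerate(pmf))
assert abs(mean - n * p) < 1e-12                       # E(X) = np = 3
var_ = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))
assert abs(var_ - n * p * (1 - p)) < 1e-12             # Var(X) = np(1-p) = 2.1
```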
If X ∼ Bern(p), then by definition, X ∼ Bin(____)
(1, p)
Let X ∼ Bin(n, p). How do we denote the
failure probability of a Bernoulli trial? Then the distribution becomes _____
q = 1 - p
n − X ∼ Bin(n, q), since the number of failures is n minus the number of successes
Let X ∼ Bin(4, 1/2). How do you find F(1.5) = P(X ≤ 1.5)?
Sum the PMF over all values of the support that are less than or equal to 1.5:
F(1.5) = P(X = 0) + P(X = 1) = 1/16 + 4/16 = 5/16
Let X ∼ Bin(4, 1/2). What is P(X > 1)?
P(X > 1) = 1 − P(X ≤ 1) = 1 − 5/16 = 11/16
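These two Bin(4, 1/2) computations can be checked exactly with fractions:

```python
from math import comb
from fractions import Fraction

def pmf(k, n=4, p=Fraction(1, 2)):
    # Bin(4, 1/2) PMF, computed exactly with rational arithmetic.
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# F(1.5) = P(X <= 1.5): sum the PMF over support values <= 1.5, i.e. k = 0, 1.
F_15 = pmf(0) + pmf(1)
assert F_15 == Fraction(5, 16)

# P(X > 1) is the complement of P(X <= 1).
assert 1 - F_15 == Fraction(11, 16)
```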
Consider a sequence of independent Bernoulli trials, each with
the same success probability p ∈ (0, 1), with trials performed until a success occurs. Let X be the number of failures before the first successful trial. Then X has the_____________
Geometric distribution
A Geometric distribution is notated as . . .
X ∼ Geom(p)
with p as the success probability
What is the PMF of a Geometric distribution starting at 0?
P(X = k) = (1 − p)^k p for k = 0, 1, 2, . . . (k failures, then a success)
Let X ∼ Geom(p). Then E(X) = ________
E(X) = (1 - p)/p
Let X ∼ Geom(p). Then Var(X) = ________
Var(X) = (1 - p)/p^2
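A numerical check of the Geometric mean and variance, truncating the infinite support where the tail is negligible (p = 0.25 is an arbitrary illustrative choice):

```python
def geom_pmf(k, p):
    # P(X = k) = (1-p)^k * p: k failures before the first success, k = 0, 1, ...
    return (1 - p) ** k * p

p = 0.25
# Truncate the infinite support; (0.75)^400 is negligibly small.
pmf = [geom_pmf(k, p) for k in range(400)]
mean = sum(k * pk for k, pk in enumerate(pmf))
var_ = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))

assert abs(sum(pmf) - 1) < 1e-9
assert abs(mean - (1 - p) / p) < 1e-9       # E(X) = (1-p)/p = 3
assert abs(var_ - (1 - p) / p**2) < 1e-9    # Var(X) = (1-p)/p^2 = 12
```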
In a sequence of independent Bernoulli trials with success probability p, if X is the number of failures before the rth
success, then X is said to have the ______
Negative Binomial distribution
A Negative binomial distribution is notated as . . .
X ∼ NBin(r, p)
where r is the desired number of successes and p is the probability of success
What is the PMF of a Negative binomial distribution?
P(X = k) = (k + r − 1 choose r − 1) p^r (1 − p)^k for k = 0, 1, 2, . . .
Let X ∼ NBin(r,p). Then E(X) = ________
E(X) = r(1 − p)/p
Let X ∼ NBin(r,p). Then Var(X) = ________
Var(X) = r(1 - p)/p^2
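The same truncation trick verifies the Negative Binomial formulas (r = 3 and p = 0.4 are arbitrary illustrative choices):

```python
from math import comb

def nbin_pmf(k, r, p):
    # P(X = k) = C(k + r - 1, r - 1) p^r (1-p)^k: k failures before the r-th success.
    return comb(k + r - 1, r - 1) * p**r * (1 - p) ** k

r, p = 3, 0.4
# Truncate the infinite support; the tail beyond k = 300 is negligible here.
pmf = [nbin_pmf(k, r, p) for k in range(300)]
mean = sum(k * pk for k, pk in enumerate(pmf))
var_ = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))

assert abs(sum(pmf) - 1) < 1e-9
assert abs(mean - r * (1 - p) / p) < 1e-9       # E(X) = r(1-p)/p = 4.5
assert abs(var_ - r * (1 - p) / p**2) < 1e-9    # Var(X) = r(1-p)/p^2 = 11.25
```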
An r.v. X has the __________ with parameter λ if it explains the distribution of . . .
1. the number of successes in a particular region or _________
2. a large number of trials, each with a ___________.
Poisson distribution
interval of time
small probability of success
What is the PMF of a Poisson distribution?
P(X = k) = e^(−λ) λ^k / k! for k = 0, 1, 2, . . .
A Poisson distribution is notated as . . .
X ∼ Pois(λ)
Where parameter λ is interpreted as the rate of occurrence of these rare events
Let X ~ Pois(λ). Then E(X) = _________
E(X) = λ
Let X ~ Pois(λ). Then Var(X) = _________
Var(X) = λ
The Poisson paradigm is also called the law of rare events. The interpretation of “rare” is that the _____ are small, not that _______ is small.
success probabilities p
λ
Given X ~ Bin(n,p) where n is large, p is small, and np is moderate, we can approximate the Bin(n, p) PMF by the
_____________
Pois(np) PMF
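A sketch of the approximation in action, with n large, p small, and np moderate (n = 1000 and p = 0.003 are arbitrary illustrative choices):

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def pois_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

# n large, p small, np moderate: the Bin(n, p) PMF is close to Pois(np).
n, p = 1000, 0.003
lam = n * p   # 3.0
max_gap = max(abs(binom_pmf(k, n, p) - pois_pmf(k, lam)) for k in range(20))
assert max_gap < 0.01   # pointwise agreement is tight across the bulk of the support
```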
Let C be a finite, nonempty set of numbers. Choose one of these numbers uniformly at random (i.e., all values in C are equally likely). Call the chosen number X. Then X is said to
have the _________________
Discrete Uniform distribution
A Discrete Uniform Distribution is notated as . . .
X ~ Unif(C)
Where parameter C is the finite, nonempty set of numbers
What is the PMF of a Discrete Uniform distribution?
P(X = x) = 1/|C| for x ∈ C, and 0 otherwise
Let X ~ Unif(C). Then E(X) = __________
(a + b)/2
where C = {a, a + 1, . . . , b} is a set of consecutive integers with minimum a and maximum b
Let X ~ Unif(C). Then Var(X) = __________
Var(X) = ((b − a + 1)^2 − 1)/12
where C = {a, a + 1, . . . , b} is a set of consecutive integers with minimum a and maximum b
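A quick check of the consecutive-integer case, using a fair die as the example (the die, i.e. C = {1, . . . , 6}, is an illustrative choice; the closed forms assume C is a run of consecutive integers):

```python
# Discrete uniform on consecutive integers C = {a, a+1, ..., b}.
a, b = 1, 6                      # e.g. a fair six-sided die
C = range(a, b + 1)
n = len(C)
pmf = {x: 1 / n for x in C}      # P(X = x) = 1/|C| for x in C

mean = sum(x * p for x, p in pmf.items())
var_ = sum((x - mean) ** 2 * p for x, p in pmf.items())

assert abs(mean - (a + b) / 2) < 1e-12                     # E(X) = 3.5
assert abs(var_ - ((b - a + 1) ** 2 - 1) / 12) < 1e-12     # Var(X) = 35/12
```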
For an experiment with sample space S, an r.v. X, and a function g : R → R, g(X) is the r.v. that maps s to _____________
g(X(s)) for all s ∈ S
Given a discrete r.v. X with a known PMF, how can we find the PMF of Y = g(X)?
The support of Y is the set of all g(x) with x in the support of X, and for each such y,
P(Y = y) = ∑ P(X = x), summed over all x in the support of X with g(x) = y
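A sketch of this grouping: sum P(X = x) over all x that map to the same y (the uniform-on-{−2, . . . , 2} example and Y = X² are made up for illustration):

```python
from collections import defaultdict

def pmf_of_g(pmf_x, g):
    """PMF of Y = g(X): group the mass of X by the value g(x)."""
    pmf_y = defaultdict(float)
    for x, p in pmf_x.items():
        pmf_y[g(x)] += p
    return dict(pmf_y)

# Example: X uniform on {-2, -1, 0, 1, 2}, Y = X^2.
pmf_x = {x: 0.2 for x in (-2, -1, 0, 1, 2)}
pmf_y = pmf_of_g(pmf_x, lambda x: x * x)
assert pmf_y == {4: 0.4, 1: 0.4, 0: 0.2}   # e.g. P(Y = 4) = P(X = -2) + P(X = 2)
```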
Random variables X and Y are said to be independent if . . .
P(X ≤ x, Y ≤ y) = P(X ≤ x)P(Y ≤ y)
or for discrete cases
P(X = x, Y = y) = P(X = x)P(Y = y)
If X ∼ Bin(n, p), Y ∼ Bin(m, p), and X is independent of Y, then . . .
X + Y ∼ Bin(n + m, p)
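This fact can be checked by convolving the two PMFs, which is how the distribution of a sum of independent r.v.s is computed directly (n = 5, m = 7, p = 0.3 are arbitrary illustrative choices):

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def convolve(pmf_x, pmf_y):
    """PMF of X + Y for independent X, Y, given as lists indexed by value."""
    out = [0.0] * (len(pmf_x) + len(pmf_y) - 1)
    for i, px in enumerate(pmf_x):
        for j, py in enumerate(pmf_y):
            out[i + j] += px * py
    return out

n, m, p = 5, 7, 0.3
X = [binom_pmf(k, n, p) for k in range(n + 1)]
Y = [binom_pmf(k, m, p) for k in range(m + 1)]
Z = convolve(X, Y)   # distribution of X + Y

# X + Y ~ Bin(n + m, p): the convolution matches the Bin(12, 0.3) PMF term by term.
expected = [binom_pmf(k, n + m, p) for k in range(n + m + 1)]
assert all(abs(z - e) < 1e-12 for z, e in zip(Z, expected))
```

The key requirement is that both Binomials share the same success probability p; a sum of Binomials with different p's is not Binomial.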
Random variables X and Y are said to be conditionally independent given an r.v. Z if for all x, y ∈ R and all z in the
support of Z, if . . .
P(X ≤ x, Y ≤ y | Z = z) = P(X ≤ x | Z = z)P(Y ≤ y | Z = z)
or for discrete cases
P(X = x, Y = y | Z = z) = P(X = x | Z = z)P(Y = y | Z = z)
For any discrete r.v.s X and Z, the function P(X = x|Z = z),
when considered as a function of x for fixed z, is called the
_________________ of X given Z = z.
conditional PMF