7.4 Expected Value and Variance Flashcards
What is Expected Value?
What information does it give?
The expected value of a random variable is the sum over all elements in a sample space of the
product of the probability of the element and the value of the random variable at this element.
Consequently, the expected value is a weighted average of the values of a random variable.
It provides a central point for the distribution of values of this random variable.
Definition 1:
Expected value:
The expected value, also called the expectation or mean, of the random variable X on the
sample space S is equal to
E(X) = ∑_{s∈S} p(s)X(s).
The deviation of X at s ∈ S is X(s) − E(X), the difference between the value of X and the
mean of X.
If S = {x1, x2, … , xn}, then E(X) = ∑_{i=1}^{n} p(xi)X(xi).
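As a quick sanity check of this finite-sample-space formula (my own illustration, not an example from the text), the sketch below computes E(X) for a single roll of a fair six-sided die, where each outcome s = 1, …, 6 has probability 1/6 and X(s) = s:

```python
from fractions import Fraction

# Fair die: outcomes 1..6, each with probability 1/6, and X(s) = s.
p = Fraction(1, 6)
expected = sum(p * s for s in range(1, 7))  # E(X) = sum over s of p(s) * X(s)
print(expected)  # 7/2, i.e., 3.5
```

The result 7/2 is the weighted average of the values 1 through 6, matching the "central point" interpretation above.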
When there are infinitely many elements of the sample space, the expectation is?
The expectation is defined only when the infinite series in the definition is absolutely convergent. In particular, the expectation of a random variable on an infinite sample space is finite if it exists.
THEOREM 1
What if the number of outcomes is large? It is inconvenient to use Definition 1 directly.
Instead, we can find the expected value of a random variable by grouping together all outcomes
assigned the same value by the random variable:
If X is a random variable and p(X = r) is the probability that X = r, so that
p(X = r) = ∑_{s∈S, X(s)=r} p(s), then
E(X) = ∑_{r∈X(S)} p(X = r)r.
Proof:
Suppose that X is a random variable with range X(S), and let p(X = r) be the probability that
the random variable X takes the value r. Consequently, p(X = r) is the sum of the
probabilities of the outcomes s such that X(s) = r. Grouping the terms of ∑_{s∈S} p(s)X(s)
according to the value r = X(s) then gives E(X) = ∑_{r∈X(S)} p(X = r)r.
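To see Theorem 1 in action, here is a small sketch (an assumed example, not from the text) for the sum of two fair dice: rather than summing over all 36 outcomes as in Definition 1, we group outcomes by the value r of X and compute ∑_{r∈X(S)} p(X = r)·r:

```python
from fractions import Fraction
from collections import defaultdict

# Sample space: 36 equally likely ordered pairs (d1, d2); X((d1, d2)) = d1 + d2.
p_outcome = Fraction(1, 36)
p_value = defaultdict(Fraction)        # p(X = r), built by grouping outcomes
for d1 in range(1, 7):
    for d2 in range(1, 7):
        p_value[d1 + d2] += p_outcome  # all outcomes with the same value r

# Theorem 1: E(X) = sum over r in X(S) of p(X = r) * r.
expected = sum(p_r * r for r, p_r in p_value.items())
print(expected)  # 7
```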
Thm 2:
Expected number of successes in Bernoulli trials.
prove it.
The expected number of successes when n mutually independent Bernoulli trials are performed, where p is the probability of success on each trial, is np.
Proof in the textbook.
Using linearity of expectations, the result holds even when the trials are not necessarily independent; see Example 5 for that proof.
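A hedged numerical sketch (my own example, not from the textbook) comparing a Monte Carlo estimate of the expected number of successes with the exact value np from Theorem 2:

```python
import random

def simulated_mean_successes(n, p, reps=100_000):
    """Estimate E(X), where X counts successes in n independent Bernoulli(p) trials."""
    total = 0
    for _ in range(reps):
        total += sum(1 for _ in range(n) if random.random() < p)
    return total / reps

n, p = 10, 0.3
print(simulated_mean_successes(n, p))  # close to 3.0
print(n * p)                           # exact value np from Theorem 2
```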
Linearity of Expectations
expected value of sum of random variables.
Thm 3:
prove it:
How to use this to solve problems?
Expected values are linear.
If Xi, i = 1, 2, … , n with n a positive integer, are random variables on S, and if a and b are
real numbers, then
(i) E(X1 + X2 + ⋯ + Xn) = E(X1) + E(X2) + ⋯ + E(Xn)
(ii) E(aX + b) = aE(X) + b.
For the proof, see the textbook: (i) follows by induction on the number of variables; (ii) is straightforward algebra.
To solve: The key step is to express a random variable whose expectation we
wish to find as the sum of random variables whose expectations are easy to find.
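A quick numerical check of both parts of Theorem 3 (an illustrative sketch with a made-up distribution, not from the text):

```python
from fractions import Fraction

# Two random variables (not necessarily independent) on S = {0, 1, 2},
# with an arbitrary made-up probability distribution p.
p  = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}
X1 = {0: 1, 1: 3, 2: 5}
X2 = {0: 2, 1: 2, 2: 8}

def E(X):
    """Expected value of X under the distribution p (Definition 1)."""
    return sum(p[s] * X[s] for s in p)

# (i) E(X1 + X2) = E(X1) + E(X2)
print(E({s: X1[s] + X2[s] for s in p}) == E(X1) + E(X2))  # True

# (ii) E(aX + b) = a E(X) + b, for example a = 3, b = -1
a, b = 3, -1
print(E({s: a * X1[s] + b for s in p}) == a * E(X1) + b)  # True
```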
Expected number of inversions in a permutation :
The ordered pair (i, j) is called an
inversion in a permutation of the first n positive integers if i < j but j precedes i in the
permutation. For instance, there are six inversions in the permutation 3, 5, 1, 4, 2; these
inversions are
(1, 3), (1, 5), (2, 3), (2, 4), (2, 5), (4, 5).
See Example 7 for the full solution; a verification sketch follows below.
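Following the key step above, write the number of inversions as the sum, over all C(n, 2) pairs, of the indicator variable for "(i, j) is an inversion"; by symmetry each indicator has expectation 1/2, so by linearity the expected number of inversions in a randomly chosen permutation of 1, …, n is n(n − 1)/4. A brute-force sketch (my own check, not from the text) confirms this for n = 5:

```python
from itertools import permutations
from fractions import Fraction

def inversions(perm):
    """Count pairs of positions (a, b), a < b, whose values are out of order."""
    return sum(1 for a in range(len(perm))
                 for b in range(a + 1, len(perm))
                 if perm[a] > perm[b])

n = 5
perms = list(permutations(range(1, n + 1)))
average = Fraction(sum(inversions(q) for q in perms), len(perms))
print(average)                   # 5
print(Fraction(n * (n - 1), 4))  # n(n - 1)/4 = 5, matching the linearity argument
```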
Average-case computational complexity, interpreted as computing the expected value of a random variable:
Computing the average-case computational complexity of an algorithm can be interpreted as computing the expected value of a random variable. Let the sample space of an experiment be
the set of possible inputs aj, j = 1, 2, … , n, and let X be the random variable that assigns to aj
the number of operations used by the algorithm when given aj as input.
Based on our knowledge of
the input, we assign a probability p(aj) to each possible input value aj.
Then, the average-case complexity of the algorithm is
E(X) = ∑_{j=1}^{n} p(aj)X(aj).
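As a concrete illustration (my own sketch, assuming equally likely inputs; not a worked example from this section), consider linear search for a target known to be in a list of n distinct items: input aj means the target sits in position j, X(aj) = j comparisons, and with p(aj) = 1/n the average-case cost is E(X) = (n + 1)/2:

```python
from fractions import Fraction

# Linear search, target equally likely to be in any of the n positions:
# p(a_j) = 1/n and X(a_j) = j comparisons to find it.
n = 10
p = Fraction(1, n)
expected_comparisons = sum(p * j for j in range(1, n + 1))
print(expected_comparisons)   # 11/2
print(Fraction(n + 1, 2))     # (n + 1)/2, the closed form
```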
Geometric Distribution :
A random variable X has a geometric distribution with parameter p if p(X = k) = (1 − p)^(k−1) p for k = 1, 2, 3, …, where p is a real number with 0 ≤ p ≤ 1.
Where does geometric distribution arise? Application?
Geometric distributions arise in many applications because they are used to study the time required before a particular event happens, such as the time required before we find an object with
a certain property, the number of attempts before an experiment succeeds, the number of times
a product can be used before it fails, and so on.
Thm 4:
Expected value of X when X has a geometric distribution:
If the random variable X has the geometric distribution with parameter p, then E(X) = 1∕p.
For the proof, see Example 10.
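A hedged simulation sketch (my own example, assuming 0 < p ≤ 1) of Theorem 4: draw geometric samples as "number of trials up to and including the first success" and compare the sample mean with 1/p.

```python
import random

def geometric_sample(p):
    """Number of independent Bernoulli(p) trials up to and including the first success."""
    k = 1
    while random.random() >= p:  # failure with probability 1 - p
        k += 1
    return k

p, reps = 0.25, 100_000
sample_mean = sum(geometric_sample(p) for _ in range(reps)) / reps
print(sample_mean)  # close to 1/p = 4.0
print(1 / p)
```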
Definition 3:
Independent random variable :
The random variables X and Y on a sample space S are independent if
p(X = r1 and Y = r2) = p(X = r1) ⋅ p(Y = r2),
or, in words, if the probability that X = r1 and Y = r2 equals the product of the probability
that X = r1 and the probability that Y = r2, for all real numbers r1 and r2.
THM 5:
If X and Y are independent random variables on a sample space S, then E(XY) = ?
(Check Example 12 too.)
If X and Y are independent random variables on a sample space S, then E(XY) = E(X)E(Y).
Proved in the textbook.
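A small numerical check of Theorem 5 (an assumed example, not from the text), taking X and Y to be the values of two independent fair dice on the sample space of 36 equally likely pairs:

```python
from fractions import Fraction

# Sample space: 36 equally likely pairs (d1, d2); X = d1 and Y = d2 are independent.
pairs = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]
p = Fraction(1, 36)

E_X  = sum(p * d1 for d1, _ in pairs)         # 7/2
E_Y  = sum(p * d2 for _, d2 in pairs)         # 7/2
E_XY = sum(p * d1 * d2 for d1, d2 in pairs)   # 49/4

print(E_XY == E_X * E_Y)  # True, as Theorem 5 predicts
```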
Definition 4
Variance :
Let X be a random variable on a sample space S. The variance of X, denoted by V(X), is
V(X) = ∑_{s∈S} (X(s) − E(X))^2 p(s).
That is, V(X) is the weighted average of the square of the deviation of X. The standard
deviation of X, denoted 𝜎(X), is defined to be √V(X).
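A direct computation from Definition 4 (an illustrative sketch, not from the text) for one roll of a fair die, using E(X) = 7/2 from earlier:

```python
from fractions import Fraction
import math

# Fair die: X(s) = s for s = 1..6, each with probability 1/6.
p = Fraction(1, 6)
E = sum(p * s for s in range(1, 7))             # E(X) = 7/2

V = sum(p * (s - E) ** 2 for s in range(1, 7))  # V(X) = sum of (X(s) - E(X))^2 p(s)
print(V)                                        # 35/12
print(math.sqrt(V))                             # standard deviation, about 1.708
```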
What information does variance give?
The expected value of a random variable tells us its average value, but nothing about how widely its values are distributed.
The variance of a random variable helps us characterize how widely its values are distributed.
In particular, it provides a measure of how widely X is distributed about its expected value.