7.4 Expected Value and Variance Flashcards

1
Q

What is Expected Value?

What information does it give?

A

The expected value of a random variable is the sum over all elements in a sample space of the
product of the probability of the element and the value of the random variable at this element.
Consequently, the expected value is a weighted average of the values of a random variable.

It provides a central point for the distribution of values of this random variable.
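A minimal numeric illustration (my example, not from the card): the expected value of one fair die roll, computed as a probability-weighted average of its values.

```python
# Expected value of one fair die as a probability-weighted average.
outcomes = [1, 2, 3, 4, 5, 6]
p = 1 / 6                       # uniform probability for each outcome
expected = sum(p * x for x in outcomes)
print(expected)                 # 3.5, the central point of the distribution
```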

2
Q

Definition 1 :

Expected value :

A

The expected value, also called the expectation or mean, of the random variable X on the sample space S is equal to
E(X) = ∑_{s∈S} p(s)X(s).

The deviation of X at s ∈ S is X(s) − E(X), the difference between the value of X and the
mean of X.

When S = {x1, x2, …, xn}, E(X) = ∑_{i=1}^{n} p(xi)X(xi).
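A sketch of Definition 1 in code (my example, under the assumption of two fair dice with X = their sum): sum p(s)·X(s) over every outcome s of the sample space.

```python
from itertools import product

# Definition 1 applied directly: S = all 36 ordered rolls of two fair dice,
# X(s) = sum of the two faces, p(s) = 1/36 for each outcome.
S = list(product(range(1, 7), repeat=2))
p = 1 / len(S)
E = sum(p * (a + b) for (a, b) in S)
print(E)                        # 7.0
```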

3
Q

When there are infinitely many elements of the sample space, the expectation is?

A

The expectation is defined only when the infinite series in Definition 1 is absolutely convergent. In particular, the expectation of a random variable on an infinite sample space is finite if it exists.

4
Q

THEOREM 1

What if the number of possible outcomes is large, so that it is inconvenient to use Definition 1?

proof:

A

We can find the expected value of a random variable by grouping together all outcomes assigned the same value by the random variable.
If X is a random variable and p(X = r) is the probability that X = r, so that
p(X = r) = ∑_{s∈S, X(s)=r} p(s), then
E(X) = ∑_{r∈X(S)} p(X = r)·r.

proof :
Suppose that X is a random variable with range X(S), and let p(X = r) be the probability that the random variable X takes the value r. Consequently, p(X = r) is the sum of the probabilities of the outcomes s such that X(s) = r. Grouping the terms of ∑_{s∈S} p(s)X(s) by the value r = X(s) then gives
E(X) = ∑_{r∈X(S)} ∑_{s∈S, X(s)=r} p(s)·r = ∑_{r∈X(S)} p(X = r)·r.
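A sketch of Theorem 1 in code (my example, again with the sum of two fair dice): group the 36 outcomes by the value r = X(s) and compute E(X) = ∑ r·p(X = r) instead of summing outcome by outcome.

```python
from collections import Counter
from itertools import product

# Group outcomes by their value r, then weight each r by p(X = r).
S = list(product(range(1, 7), repeat=2))
counts = Counter(a + b for (a, b) in S)          # number of outcomes with each sum r
E = sum(r * (n / len(S)) for r, n in counts.items())
print(E)                                         # 7.0, matching Definition 1
```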

5
Q

Thm 2:

Expected number of successes in Bernoulli trials.

prove it.

A

The expected number of successes when n mutually independent Bernoulli trials are performed, where p is the probability of success on each trial, is np.

Proof: in the textbook.

Using linearity of expectations, the result can also be proved when the trials are not necessarily independent; see Example 5. A numeric check follows.
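A simulation-based check of Theorem 2 (my sketch; the values n = 10 and p = 0.3 are assumptions for illustration): the average number of successes should be close to np = 3.

```python
import random

# Simulate many batches of n Bernoulli trials with success probability p
# and compare the average number of successes to n*p.
n, p, runs = 10, 0.3, 100_000
total = sum(sum(random.random() < p for _ in range(n)) for _ in range(runs))
print(total / runs)   # close to 3.0 = n*p
```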

6
Q

Linearity of Expectations

expected value of a sum of random variables.

Thm 3:

prove it:

how to use this to solve problems?

A

Expected values are linear.
If Xi, i = 1, 2, …, n, with n a positive integer, are random variables on S, and if a and b are
real numbers, then
(i) E(X1 + X2 + ⋯ + Xn) = E(X1) + E(X2) + ⋯ + E(Xn)
(ii) E(aX + b) = aE(X) + b.

For the proof, see the textbook:
(i) follows by induction from the two-variable case;
(ii) is straightforward algebra.

To solve: The key step is to express a random variable whose expectation we
wish to find as the sum of random variables whose expectations are easy to find.
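A small sketch of this key step (my example): to find the expected sum of two dice, write X = X1 + X2 and use linearity, with no joint distribution needed; part (ii) is checked with assumed constants a = 2, b = 1.

```python
# Linearity: E(X1 + X2) = E(X1) + E(X2), and E(aX + b) = a*E(X) + b.
E_one_die = sum((1 / 6) * x for x in range(1, 7))   # 3.5
print(E_one_die + E_one_die)                        # E(X1 + X2) = 7.0
print(2 * E_one_die + 1)                            # E(2X + 1) = 2*3.5 + 1 = 8.0
```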

7
Q

Expected number of inversions in a permutation :

A

The ordered pair (i, j) is called an
inversion in a permutation of the first n positive integers if i < j but j precedes i in the
permutation. For instance, there are six inversions in the permutation 3, 5, 1, 4, 2; these
inversions are
(1, 3), (1, 5), (2, 3), (2, 4), (2, 5), (4, 5).

By linearity, each of the C(n, 2) pairs is an inversion with probability 1∕2 in a uniformly random permutation, so the expected number of inversions is n(n − 1)∕4; see Example 7 for the full solution. A numeric check follows.
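A simulation check of the n(n − 1)∕4 result (my sketch; n = 5 and the run count are assumptions for illustration):

```python
import random
from itertools import combinations

# Average the inversion count of random permutations of 1..n and compare
# with n(n-1)/4; an inversion is a pair (i, j), i < j, with j preceding i.
n, runs = 5, 20_000

def inversions(perm):
    return sum(perm.index(j) < perm.index(i)
               for i, j in combinations(range(1, n + 1), 2))

total = 0
for _ in range(runs):
    perm = list(range(1, n + 1))
    random.shuffle(perm)
    total += inversions(perm)
print(total / runs, n * (n - 1) / 4)   # both near 5.0 for n = 5
```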

8
Q

Average-case computational complexity: how can computing it be interpreted as computing the expected value of a random variable?

A

Computing the average-case computational complexity of an algorithm can be interpreted as computing the expected value of a random variable. Let the sample space of an experiment be the set of possible inputs aj, j = 1, 2, …, n, and let X be the random variable that assigns to aj the number of operations used by the algorithm when given aj as input. Based on our knowledge of the input, we assign a probability p(aj) to each possible input value aj. Then, the average-case complexity of the algorithm is
E(X) = ∑_{j=1}^{n} p(aj)X(aj).
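A sketch in this spirit (my example, with assumed setup: linear search where the target is equally likely to be at any of the n positions and finding it at position j costs j comparisons):

```python
# Average-case cost of linear search under a uniform distribution on inputs:
# E(X) = sum over j of p(aj) * X(aj) = (1/n) * (1 + 2 + ... + n) = (n + 1)/2.
n = 10
p = 1 / n                                  # uniform probability for each input
E = sum(p * j for j in range(1, n + 1))
print(E)                                   # 5.5 comparisons on average
```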

9
Q

Geometric Distribution :

A

A random variable X has a geometric distribution with parameter p if p(X = k) = (1 − p)^(k−1) p for k = 1, 2, 3, …, where p is a real number with 0 ≤ p ≤ 1.
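A quick numeric sanity check (my sketch; p = 0.25 and the truncation point are assumptions): the geometric probabilities should sum to 1.

```python
# The series sum over k of (1 - p)^(k-1) * p converges to 1; a truncated
# partial sum gets very close.
p = 0.25
total = sum((1 - p) ** (k - 1) * p for k in range(1, 200))
print(total)   # very close to 1.0
```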

10
Q

Where does geometric distribution arise? Application?

A

Geometric distributions arise in many applications because they are used to study the time required before a particular event happens, such as the time required before we find an object with
a certain property, the number of attempts before an experiment succeeds, the number of times
a product can be used before it fails, and so on.

11
Q

Thm 4:
expected value of X, where X has a geometric distribution:

proof:

A

If the random variable X has the geometric distribution with parameter p, then E(X) = 1∕p.

Proof: see Example 10. A numeric check follows.
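A truncated-series check of Theorem 4 (my sketch; p = 0.25 is an assumption for illustration):

```python
# E(X) = sum over k of k * (1 - p)^(k-1) * p should equal 1/p.
p = 0.25
E = sum(k * (1 - p) ** (k - 1) * p for k in range(1, 500))
print(E, 1 / p)   # both about 4.0
```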

12
Q

Definition 3:

Independent random variable :

A

The random variables X and Y on a sample space S are independent if
p(X = r1 and Y = r2) = p(X = r1) ⋅ p(Y = r2),
or in words, if the probability that X = r1 and Y = r2 equals the product of the probabilities
that X = r1 and Y = r2, for all real numbers r1 and r2.

13
Q

THM 5:

If X and Y are independent random variables on a sample space S, then E(XY) = ?

Check Example 12 too.

A

If X and Y are independent random variables on a sample space S, then E(XY) = E(X)E(Y).

Proved in the textbook. A numeric check follows.
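A numeric check of Theorem 5 (my example): for two independent fair dice, E(XY) = E(X)E(Y); for the dependent pair (X, X) the identity fails, since E(X·X) = E(X²) ≠ E(X)².

```python
from itertools import product

# Independent case: X = first die, Y = second die.
S = list(product(range(1, 7), repeat=2))
p = 1 / len(S)
EX  = sum(p * a for (a, b) in S)          # 3.5
EXY = sum(p * a * b for (a, b) in S)      # 12.25 = 3.5 * 3.5
EXX = sum(p * a * a for (a, b) in S)      # E(X^2) = 91/6 ≈ 15.17, not 12.25
print(EXY, EX * EX, EXX)
```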

14
Q

Definition 4

Variance :

A

Let X be a random variable on a sample space S. The variance of X, denoted by V(X), is
V(X) = ∑_{s∈S} (X(s) − E(X))^2 p(s).
That is, V(X) is the weighted average of the square of the deviation of X. The standard
deviation of X, denoted 𝜎(X), is defined to be √V(X).
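A sketch of Definition 4 in code (my example): variance and standard deviation of one fair die, computed directly as the weighted average of squared deviations.

```python
from math import sqrt

# V(X) = sum over s of (X(s) - E(X))^2 * p(s); sigma(X) = sqrt(V(X)).
outcomes = range(1, 7)
p = 1 / 6
E = sum(p * x for x in outcomes)                 # 3.5
V = sum(p * (x - E) ** 2 for x in outcomes)      # 35/12 ≈ 2.9167
print(V, sqrt(V))
```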

15
Q

What information does variance give?

A

The expected value of a random variable tells us its average value, but nothing about how widely its values are distributed.

The variance of a random variable helps us characterize this spread. In particular, it provides a measure of how widely the values of X are distributed about its expected value.

16
Q

Theorem 6 provides a useful simple expression for the variance of a random variable. prove it.

A

If X is a random variable on a sample space S, then V(X) = E(X^2) − E(X)^2. A proof sketch follows.
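A standard proof sketch (the card asks for the proof but leaves it out), expanding the square in Definition 4 and using linearity together with ∑_{s∈S} p(s) = 1:

V(X) = ∑_{s∈S} (X(s) − E(X))^2 p(s)
= ∑_{s∈S} X(s)^2 p(s) − 2E(X) ∑_{s∈S} X(s)p(s) + E(X)^2 ∑_{s∈S} p(s)
= E(X^2) − 2E(X)·E(X) + E(X)^2
= E(X^2) − E(X)^2.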

17
Q

We can use Theorems 3 and 6 to derive an alternative formula for V(X) that provides some
insight into the meaning of the variance of a random variable.
COROLLARY 1
prove it.

A

If X is a random variable on a sample space S and E(X) = 𝜇, then V(X) = E((X − 𝜇)^2).
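One short derivation (not written out on the card), using linearity (Theorem 3) and Theorem 6:

E((X − 𝜇)^2) = E(X^2 − 2𝜇X + 𝜇^2) = E(X^2) − 2𝜇E(X) + 𝜇^2 = E(X^2) − 𝜇^2 = V(X).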

18
Q

THEOREM 7 BIENAYME’S FORMULA

Prove it:

A

If X and Y are two independent random variables on a sample space S, then V(X + Y) = V(X) + V(Y). Furthermore, if Xi, i = 1, 2, …, n, with n a positive integer, are pairwise independent random variables on S, then
V(X1 + X2 + ⋯ + Xn) = V(X1) + V(X2) + ⋯ + V(Xn).
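A numeric check of Bienaymé's formula (my example): for two independent fair dice, V(X + Y) should equal V(X) + V(Y) = 35∕12 + 35∕12 = 35∕6.

```python
from itertools import product

# Compute variances over the 36-outcome joint sample space.
S = list(product(range(1, 7), repeat=2))
p = 1 / len(S)

def var(weighted_values):
    E = sum(p_i * v for p_i, v in weighted_values)
    return sum(p_i * (v - E) ** 2 for p_i, v in weighted_values)

VX  = var([(p, a) for (a, b) in S])        # 35/12 ≈ 2.9167
VXY = var([(p, a + b) for (a, b) in S])    # 35/6  ≈ 5.8333
print(VXY, 2 * VX)                         # equal, as the theorem predicts
```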

19
Q

THEOREM 8 CHEBYSHEV’S INEQUALITY

How likely is it that a random variable takes a value far from its expected value?

prove it:

A

Chebyshev's inequality provides an upper bound on the probability that the value of a random variable differs from the expected value of the random variable by more than a specified amount.

Let X be a random variable on a sample space S with probability function p. If r is a positive real number, then
p(|X(s) − E(X)| ≥ r) ≤ V(X)∕r^2. A proof sketch follows.
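A standard proof sketch (the card asks for the proof but does not include it): let A = {s ∈ S : |X(s) − E(X)| ≥ r}, so that p(|X(s) − E(X)| ≥ r) = ∑_{s∈A} p(s) = p(A). Each s ∈ A contributes at least r^2 p(s) to the variance, so

V(X) = ∑_{s∈S} (X(s) − E(X))^2 p(s) ≥ ∑_{s∈A} (X(s) − E(X))^2 p(s) ≥ ∑_{s∈A} r^2 p(s) = r^2 p(A),

and dividing by r^2 gives p(A) ≤ V(X)∕r^2.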