1.2, 1.3: Expectation, Covariance and Matrix Algebra Flashcards

1
Q

What is the expected value of a random variable?

A

The expected value of a random variable X is its mean in the population:

E(X) = ∑(x) x P(X = x)

i.e. multiply each value x the variable can take (e.g. hours of sleep) by the proportion of people with that value, and sum over all values.

2
Q

In the equation for the expected value:
E(X) = ∑(x) x P(X = x)

what are the rules if….

  1. Y = bX
  2. Y = X + a
A
  1. If Y = bX then E(Y) = E(bX) = bE(X).
    - If Y is some constant times X, then the expected value of Y equals the constant times the expected value of X
  2. If Y = X + a then E(Y) = E(X + a) = E(X) + a.
    - If Y is some constant plus X, then the expected value of Y equals the constant plus the expected value of X

Example: if E(X) = 0.5 and Y = 2X + 3, then
E(Y) = 2E(X) + 3 = 2 · 0.5 + 3 = 4
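These rules are easy to check numerically; a minimal R sketch, using a made-up two-point distribution with E(X) = 0.5:

```r
# Made-up distribution for illustration: P(X = 0) = P(X = 1) = 0.5
x <- c(0, 1)
p <- c(0.5, 0.5)
EX <- sum(x * p)            # E(X) = 0.5
EY <- sum((2 * x + 3) * p)  # E(2X + 3) computed from the definition
EY                          # 4, equal to 2 * EX + 3
```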

3
Q

What is the expected value of Y if Y = X^2?

A

No general rules exist for other transformations, e.g., E(X^2) or E[g(X)].

4
Q

How do we deal with these other transformations when they arise?

A

Once we arrive at one of these expressions we need to stop, as we can't go any further; this is true for any expression E[g(X)] unless g is composed of the previous two rules.

An example that involves E(X^2) is the variance:
Var(X) = E[(X − μ)^2]

We can simplify using the rules of expectation:
Var(X) = E[(X − μ)^2]
= E(X^2 − 2μX + μ^2)
= E(X^2) − 2μE(X) + μ^2
= E(X^2) − 2μ^2 + μ^2    (since E(X) = μ)
= E(X^2) − μ^2
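The identity Var(X) = E(X^2) − μ^2 can be checked numerically in R; the discrete distribution below is made up for illustration:

```r
# Made-up discrete distribution
x <- c(1, 2, 3)
p <- c(0.2, 0.5, 0.3)
mu   <- sum(x * p)            # E(X)
EX2  <- sum(x^2 * p)          # E(X^2)
VarX <- sum((x - mu)^2 * p)   # Var(X) straight from the definition
VarX - (EX2 - mu^2)           # 0: both routes agree
```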
5
Q

What does the covariance of two variables quantify?

A

The covariance of two random variables X and Y quantifies how they co-vary in the population.

6
Q

What is the equation for covariance?

A

Cov(X,Y) = E[(X − EX)(Y − EY)]

i.e. the expected value of the product of the deviations of each random variable from its respective population mean

7
Q

Also give the shorthand notation and the special case for covariance.

A

• Shorthand: 〈X,Y〉
• Special case: 〈X,X〉 = E[(X − EX)^2] = Var(X)
  - The covariance of a random variable with itself is E[(X − EX)(X − EX)] = E[(X − EX)^2], which is exactly the variance

8
Q

Name and give the four rules of covariance

A
  1. 〈X,Y〉 = 〈Y,X〉 (symmetry)
  2. 〈X,X〉 = Var(X) (variance)
  3. 〈bX,Y〉 = b〈X,Y〉 (scalar multiplication)
  4. 〈X,Y + Z〉 = 〈X,Y〉 + 〈X,Z〉 (sum)

Suppose Y = −4X and Z = 2 + X. Then
〈Y, Z〉 = 〈−4X, 2 + X〉 = −4〈X, 2 + X〉 (rule 3)

= −4(〈X, 2〉 + 〈X, X〉) (rule 4; covariance with a constant is 0)

= −4(0 + Var(X)) = −4 Var(X) (rule 2)
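A quick simulation of this example in R. Because sample covariances obey the same four rules, the equality holds exactly (up to rounding), not just approximately; the seed and sample size are arbitrary:

```r
set.seed(1)          # arbitrary seed
X <- rnorm(1e4)
Y <- -4 * X
Z <- 2 + X
cov(Y, Z)            # equals -4 * var(X)
-4 * var(X)
```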

9
Q

We often need to compute the variance of a sum over random variables.

• Let U = X + Y + Z. What is Var(U)?

A
Var(U) = 〈U,U〉
= 〈U,X + Y + Z〉
= 〈U,X〉 + 〈U,Y〉 + 〈U,Z〉
= 〈X + Y + Z,X〉 + 〈X + Y + Z,Y〉 + 〈X + Y + Z,Z〉
= 〈X,X〉 + 〈X,Y〉 + 〈X,Z〉
+ 〈Y,X〉 + 〈Y,Y〉 + 〈Y,Z〉
+ 〈Z,X〉 + 〈Z,Y〉 + 〈Z,Z〉
= 〈X,X〉 + 〈Y,Y〉 + 〈Z,Z〉 + 2〈X,Y〉 + 2〈X,Z〉 + 2〈Y,Z〉
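Sample variances and covariances satisfy the same expansion, so the result can be confirmed in R (the three correlated variables below are made up for illustration):

```r
set.seed(1)                    # arbitrary seed
X <- rnorm(1000)
Y <- rnorm(1000) + 0.5 * X     # correlated with X
Z <- rnorm(1000) - 0.3 * Y     # correlated with Y
lhs <- var(X + Y + Z)
rhs <- var(X) + var(Y) + var(Z) +
  2 * cov(X, Y) + 2 * cov(X, Z) + 2 * cov(Y, Z)
all.equal(lhs, rhs)            # TRUE
```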
10
Q

Imagine doing that with U = X1 + X2 + ··· + Xp! What makes it easier to carry out these computations?

A

Matrix algebra

11
Q

Why is matrix algebra often useful when it comes to data?

A

Data often come in the form of a matrix, and much of the math is easier in terms of matrix algebra.
Many multivariate techniques involve operations on covariance matrices:
•PCA & Factor Analysis
•MANOVA

12
Q

What is a matrix?

A

A matrix is simply a list of numbers laid out in two dimensions, e.g. the 2 × 4 matrix:

A = ( a11 a12 a13 a14 )
    ( a21 a22 a23 a24 )

13
Q

Describe five conventions of matrices: how variables denoting matrices are written, how we describe the structure of a particular matrix, how we refer to this structure, how the individual elements are denoted, and what we write rather than the full matrix.

A

• variables denoting matrices are bold upper case
• we say A is a “two by four matrix” (i.e., rows first)
• number of rows and number of columns are the dimensions
• the element of A in row i and column j is denoted aij
• we often write
A = (aij )
instead of the full matrix

14
Q

what does a3. and a.8 refer to respectively?

A
  • The i-th row of A is the row matrix ai.
  • The j-th column of A is the column matrix a.j
  • We can write A = (a.1; a.2; a.3; a.4)

a3. would be the 3rd row of a matrix
a.8 would be the 8th column of a matrix

15
Q

What is meant by the transpose of A?

A

The transpose of A swaps rows and columns:
A^T = (aij)^T = (aji)

A = ( a11 a12 a13 a14 )  =>  A^T = ( a11 a21 )
    ( a21 a22 a23 a24 )            ( a12 a22 )
                                   ( a13 a23 )
                                   ( a14 a24 )
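In R the transpose is t(); a minimal sketch with an arbitrary 2 × 4 matrix:

```r
A <- matrix(1:8, nrow = 2)  # a 2 x 4 matrix, filled column-wise
dim(A)                      # 2 4
dim(t(A))                   # 4 2: rows and columns are swapped
all(t(t(A)) == A)           # TRUE: transposing twice gives A back
```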
16
Q

Why would we transpose a matrix?

A

Two matrices A and B can be added or subtracted only if their shapes are the same, so we can transpose a matrix to match the shape of another in order to carry out such calculations.

17
Q

How are these calculations carried out, and what allows two matrices to be added and subtracted in this manner?

A

Addition and subtraction are element-wise:
A ± B = (aij ± bij) (by definition)

Obviously A + B = B + A and A − B = −(B − A)

E.g.: B = (|1,-1|, |2,0|, |0,3|) and C = (|2,3,-1|, |-1,3,-2|), so
B + C^T = (|1 + 2, -1 - 1|, |2 + 3, 0 + 3|, |0 - 1, 3 - 2|)
= (|3, -2|, |5, 3|, |-1, 1|)
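The same example in R: B + C fails because the shapes differ, but B + t(C) is element-wise addition of two matrices of the same shape:

```r
B <- matrix(c(1, -1, 2, 0, 0, 3), 2, 3)
C <- matrix(c(2, 3, -1, -1, 3, -2), 3, 2)
B + t(C)
##      [,1] [,2] [,3]
## [1,]    3    5   -1
## [2,]   -2    3    1
```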

18
Q

When does matrix multiplication work?

A

Two matrices A and B can be multiplied if they are conformable: the product AB is only defined if A has the same number of columns as B has rows.

19
Q

What are the dimensions of AB?

A

dimensions of AB are number of rows of A by number of columns of B

so if the dimensions are as follows:
A(p x q) x B(q x r) = C(p x r)
•note that the inner dimensions must match (q)
•the outer dimensions are the dimensions of the result

20
Q

In the following matrices describe how you would carry out one multiplication calculation

A = (|1, 4|, |2, 5|, |3, 6|) 
B = (|7, 9, 11|, |8, 10, 12|)
A
You would multiply the rows of A by the columns of B, like:

            ( 7    8 )
            ( 9   10 )
            ( 11  12 )
( 1 2 3 )  (AB)11 (AB)12
( 4 5 6 )  (AB)21 (AB)22

(AB)11 = 1 · 7 + 2 · 9 + 3 · 11 = 58

21
Q

This quickly becomes very tedious and error prone! What can we use to solve this?

A

R makes it very easy:

B = matrix(c(1,-1,2,0,0,3), 2, 3)
C = matrix(c(2,3,-1,-1,3,-2),3, 2)
B %*% C
##      [,1] [,2]
## [1,]    8    5
## [2,]   -5   -5
22
Q

Is matrix multiplication commutative?

A

In general
BC ≠ CB
• Matrix multiplication is not commutative
This makes sense when you think about the rows and columns: in the other order the dimensions may not even be conformable.

23
Q

What is meant by an identity matrix?

A

The matrix I for which IA = AI = A for any square matrix A is the identity matrix. It consists of all 0s except for 1s on the diagonal.

You can generate one easily in R:
diag(3)
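A quick check in R that diag(3) behaves as the identity (the matrix A below is arbitrary):

```r
I <- diag(3)             # 3 x 3 identity: 1s on the diagonal, 0s elsewhere
A <- matrix(1:9, 3, 3)   # arbitrary 3 x 3 matrix
all(I %*% A == A)        # TRUE
all(A %*% I == A)        # TRUE
```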

24
Q

What role does the identity matrix play in scalar algebra?

A

The identity matrix plays the role of 1 in scalar algebra:
• The inverse of a number a is the number x for which
xa = ax = 1. Obviously x = a^−1 = 1/a
• Analogously, the inverse of a square matrix A is the matrix X for which
XA = AX = I.
We denote X = A^−1.

25
Q

How do we get the inverse of a matrix in R?

A

A = B %*% C

solve(A) # inverse of A
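A quick sanity check, reusing B and C from the multiplication example: solve(A) %*% A should return the identity (up to rounding):

```r
B <- matrix(c(1, -1, 2, 0, 0, 3), 2, 3)
C <- matrix(c(2, 3, -1, -1, 3, -2), 3, 2)
A <- B %*% C                 # 2 x 2, so an inverse can exist
round(solve(A) %*% A, 10)    # the 2 x 2 identity
```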

26
Q

How do you get the inverse of a 2 x 2 matrix? (One case worth knowing by heart)

A

(|a, c|, |b, d|)^-1 = 1 / (ad - bc) (|d, -c|, |-b, a|)

if ad - bc ≠ 0

e.g. (|3, -2|, |4, 1|)^-1 = 1 / (3 · 1 - 4 · (-2)) (|1, 2|, |-4, 3|)
= (|1/11, 2/11|, |-4/11, 3/11|)
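The hand formula can be checked against solve() in R for this example:

```r
A    <- matrix(c(3, -2, 4, 1), 2, 2)         # columns (3, -2) and (4, 1)
detA <- 3 * 1 - 4 * (-2)                     # ad - bc = 11
Ainv <- matrix(c(1, 2, -4, 3), 2, 2) / detA  # the formula's answer
all.equal(Ainv, solve(A))                    # TRUE
```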

27
Q

How would you solve the following simultaneous equations using matrix algebra?

3x + 4y = 3
−2x + y = 1

A

3x + 4y = 3
−2x + y = 1

⇔ (|3, -2|, |4, 1|) · (|x, y|) = (|3, 1|)
⇔ (|3, -2|, |4, 1|)^-1 · (|3, -2|, |4, 1|) · (|x, y|) = (|3, -2|, |4, 1|)^-1 · (|3, 1|)
⇔ (|x, y|) = (|3, -2|, |4, 1|)^-1 · (|3, 1|) = (|-1/11, 9/11|)

Note that the inverse must multiply both sides from the left.
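In R, solve(A, b) computes A^-1 b in one step:

```r
A <- matrix(c(3, -2, 4, 1), 2, 2)  # coefficient matrix, columns (3, -2) and (4, 1)
b <- c(3, 1)
solve(A, b)                        # x = -1/11, y = 9/11
```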

28
Q

When is there a solution for simultaneous equations in a graphical sense?

A

When the two lines intersect. If they are parallel then there is no solution. If they lie on the same line then there are infinitely many solutions.

29
Q

How does this characteristic of simultaneous equations apply to matrices?

A

If the equations are solvable as simultaneous equations (there is a unique intersection), then an inverse of the coefficient matrix exists.

30
Q

What is the simple solution to finding out if an inverse of a matrix exists?

A

The determinant of a square matrix can tell us this. It measures the volume of the parallelogram formed by its columns.
• If det A ≠ 0 then A^−1 exists
• Very tedious to compute, but for 2 × 2 matrices:
|A| = det(|a11, a21|, |a12, a22|) = a11 a22 - a12 a21
e.g.:
det(|3, -2|, |4, 1|) = 3 · 1 - (-2) · 4 = 3 + 8 = 11

31
Q

Inverses of matrices are only defined for what kind of matrices?

A

Square matrices

32
Q

How can you determine the determinant of a matrix in R?

A

det(A)

33
Q

What two matrices are important matrices in science?

A

Correlation and covariance matrices

34
Q

How would you get a covariance matrix of a dataset ‘ability’ in R?

A

S = ability.cov$cov

35
Q

Explain the concept of generalised variance and why it is needed

A

In multivariate statistics we often have many variables and sometimes need to quantify differences in variance between groups, so we need one central measure of generalised variance: the variance of the entire vector of dependent variables.

36
Q

How do we calculate the generalised variance?

A

We simply take the determinant, det(S), as our measure. This is interpreted as the collective variance of the variables taken together.

37
Q

Define the concepts of eigenvalues and eigenvectors

A

All p × p matrices A have associated scalars λj and vectors xj for which
A xj = λj xj

i.e. if you multiply A by the vector xj you get a vector that is a scalar multiple of that same vector. This relationship defines eigenvalues and eigenvectors: λj is an eigenvalue, and any vector proportional to xj is an eigenvector.

38
Q

How can you find the values in A xj = λj xj?

A

Formally, they are found by solving |A − λI| = 0: λ is chosen so that subtracting λ times the identity matrix from A gives a matrix with determinant 0. That is quite difficult to do by hand, but luckily computers can do it quite efficiently. We can compute it in R through:

S = ability.cov$cov[2:4, 2:4] # spatial tests
R = cov2cor(S)
eigen(R)

This returns both the eigenvalues and the eigenvectors
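You can verify the defining relation A xj = λj xj for, say, the first eigenpair returned:

```r
S <- ability.cov$cov[2:4, 2:4]          # spatial tests, as above
R <- cov2cor(S)
e <- eigen(R)
lhs <- as.vector(R %*% e$vectors[, 1])  # A x1
rhs <- e$values[1] * e$vectors[, 1]     # lambda1 x1
all.equal(lhs, rhs)                     # TRUE
```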

39
Q

How many solutions are there generally to a p x p matrix A?

A

There are at most p unique solutions for λj .

40
Q

Can you compute the eigenvalues from a covariance or correlation matrix?

A

Both: eigenvalues can be computed from either a covariance or a correlation matrix.

41
Q

In what analysis are these eigenvalues and eigenvectors used? How are they used?

A

The eigenvectors are used as the principal components in principal component analysis (PCA), and the eigenvalues are the variances of those components. Often this is conducted on a covariance matrix instead of a correlation matrix.
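A sketch of that connection in R, using the built-in ability.cov data: the square roots of the eigenvalues of the covariance matrix match the component standard deviations that princomp reports (and its loadings are the eigenvectors, up to sign).

```r
S <- ability.cov$cov
e <- eigen(S)
p <- princomp(covmat = S)                  # PCA from the covariance matrix
sqrt(e$values)                             # sqrt of component variances
unname(p$sdev)                             # the same numbers
all.equal(unname(p$sdev), sqrt(e$values))  # TRUE
```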