Proofs Flashcards

1
Q

If a1, a2, . . . , ak are mutually orthogonal vectors in V (i.e. ai′aj = 0 for every pair i ≠ j) and all the vectors are non-zero, then the set of vectors is linearly independent.

1.7.3

A

Proof

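Illustration (not the required proof): a quick NumPy check that mutually orthogonal non-zero vectors form a linearly independent set. The three vectors are an arbitrary example, not from the text.

```python
import numpy as np

# Three mutually orthogonal, non-zero vectors in R^4 (arbitrary example).
a1 = np.array([1.0, 1.0, 0.0, 0.0])
a2 = np.array([1.0, -1.0, 0.0, 0.0])
a3 = np.array([0.0, 0.0, 2.0, 0.0])

A = np.column_stack([a1, a2, a3])
print(A.T @ A)                    # diagonal matrix: off-diagonal entries are 0 (pairwise orthogonality)
print(np.linalg.matrix_rank(A))   # 3, i.e. the set {a1, a2, a3} is linearly independent
```
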
2
Q

Let A: n × m = [ a(1) , a(2) , . . . , a(m) ]. The null space of A′ and the orthogonal complement of the column space of A are the same.

1.9.1

A

Proof

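Illustration (not the proof): for a random A, an SVD-based basis of the null space of A′ is orthogonal to every column of A, consistent with null(A′) = V(A)⊥. The specific matrix is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))     # n x m with n > m, so the null space of A' is non-trivial

# Basis for null(A'): right singular vectors of A' beyond its rank.
U, s, Vt = np.linalg.svd(A.T)
r = np.sum(s > 1e-10)
N = Vt[r:].T                        # columns span null(A')

print(np.allclose(A.T @ N, 0))      # True: each basis vector z satisfies A'z = 0
print(np.allclose(N.T @ A, 0))      # True: each z is orthogonal to every column of A
```
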
3
Q

For any matrix A: n × m, AA′ is symmetric, positive semi-definite and has the same rank as A.

1.9.14

A

Proof

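Illustration (not the proof): a NumPy check of the three claims for an arbitrary rank-deficient matrix built as a product of random factors.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))   # 4 x 6 matrix of rank 2 by construction

S = A @ A.T
print(np.allclose(S, S.T))                                  # symmetric
print(np.all(np.linalg.eigvalsh(S) >= -1e-8))               # eigenvalues >= 0, i.e. positive semi-definite
print(np.linalg.matrix_rank(S), np.linalg.matrix_rank(A))   # 2 2: same rank as A
```
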
4
Q

(i) If A^c : m × n is a conditional inverse of A: n × m and the equations
Ax = y
are consistent (i.e. a solution exists for x), then x1 = A^c y is a solution of
Ax = y.

(ii) If x1 = A^c y is a solution of the equations Ax = y for every y for which these equations are consistent, then A^c must be a conditional inverse of A.

1.10.3

A

Proof

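Illustration of part (i) only (not the proof): the Moore–Penrose pseudo-inverse is one particular conditional inverse (it satisfies A A^c A = A), so for a consistent right-hand side y, x1 = A^c y solves Ax = y. The rank-deficient matrix is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5 x 4 matrix of rank 3
Ac = np.linalg.pinv(A)                                          # one particular conditional inverse

print(np.allclose(A @ Ac @ A, A))    # defining property of a conditional inverse

x0 = rng.standard_normal(4)
y = A @ x0                           # y lies in the column space of A, so Ax = y is consistent
x1 = Ac @ y
print(np.allclose(A @ x1, y))        # x1 = A^c y is indeed a solution
```
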
5
Q

Let A be an n × m matrix. If (A′A)^c is a conditional inverse of A′A, then {(A′A)^c }′ is also a conditional inverse of A′A. Conversely, if {(A′A)^c }′ is a conditional inverse of A′A, then (A′A)^c is also a conditional inverse of A′A.

1.10.4

A

Proof

6
Q

The system of equations
A′Ax = A′y (A of size n × m)
for determining x is consistent, and if x is any solution, then Ax is uniquely determined.

1.11.2

A

Proof

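Illustration (not the proof): with a rank-deficient design the normal equations have many solutions, but Ax is the same for all of them. The matrix is an arbitrary example, and the pseudo-inverse stands in for a conditional inverse of A′A.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))   # 6 x 4 design of rank 2
y = rng.standard_normal(6)

x1 = np.linalg.pinv(A.T @ A) @ A.T @ y      # one solution of A'Ax = A'y
U, s, Vt = np.linalg.svd(A.T @ A)
x2 = x1 + 5.0 * Vt[-1]                      # add a null vector of A'A: another solution

print(np.allclose(A.T @ A @ x1, A.T @ y))   # x1 satisfies the normal equations
print(np.allclose(A.T @ A @ x2, A.T @ y))   # so does x2
print(np.allclose(A @ x1, A @ x2))          # yet Ax is uniquely determined
```
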
7
Q

For A′Ax = A′y (A of size n × m), x is a solution if and only if Ax is the projection of y onto the vector space generated by the columns of A (equivalently, x′A′ is the projection of y′ onto the vector space generated by the rows of A′).

1.11.3

A

Proof

8
Q

For any vector y, A(A′A)^c A′y is the projection of y onto the vector space V(A) generated by the columns of A: n × m, where (A′A)^c is any conditional inverse of A′A (equivalently, y′A(A′A)^c A′ is the projection of y′ onto the vector space generated by the rows of A′).

1.11.4

A

Proof

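Illustration (not the proof): with the pseudo-inverse standing in as the conditional inverse, P = A(A′A)^c A′ behaves as the projection onto V(A): it is idempotent, Py lies in V(A), and y − Py is orthogonal to every column of A. The matrix and vector are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((7, 3))
y = rng.standard_normal(7)

P = A @ np.linalg.pinv(A.T @ A) @ A.T
proj = P @ y

print(np.allclose(P @ P, P))              # idempotent: projecting twice changes nothing
print(np.allclose(A.T @ (y - proj), 0))   # residual y - Py is orthogonal to the columns of A
print(np.linalg.matrix_rank(np.column_stack([A, proj])) == np.linalg.matrix_rank(A))  # Py lies in V(A)
```
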
9
Q

If L is a linear function of k independent variables x1, x2, . . . , xk and L = a′x = x′a, where a′ = (a1, a2, . . . , ak), then ∂L/∂x = a.

1.13.1

A

Proof

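Illustration (not the proof): a finite-difference check that the gradient of L(x) = a′x is a, for an arbitrary a and evaluation point.

```python
import numpy as np

a = np.array([2.0, -1.0, 0.5])
L = lambda x: a @ x                     # L(x) = a'x = x'a

x0 = np.array([0.3, 0.7, -1.2])
eps = 1e-6
grad = np.array([(L(x0 + eps * e) - L(x0 - eps * e)) / (2 * eps) for e in np.eye(3)])
print(np.allclose(grad, a))             # True: dL/dx = a
```
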
10
Q

Let Y ~ normalp(0 ; Ip), the standard multivariate normal vector.

Derive the moment generating function (MGF) of the multivariate normal distribution.

A

Proof

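A compressed outline of the usual argument (not the worked proof), assuming the definition MX(t) = E[exp(t′X)] and writing X = μ + Σ^{1/2}Y with Y the standard normal vector above:

```latex
M_Y(t) = E\bigl[e^{t'Y}\bigr]
       = \prod_{i=1}^{p} E\bigl[e^{t_i Y_i}\bigr]
       = \prod_{i=1}^{p} e^{t_i^2/2}
       = e^{t't/2},
\qquad Y \sim N_p(0, I_p),

M_X(t) = E\bigl[e^{t'(\mu + \Sigma^{1/2}Y)}\bigr]
       = e^{t'\mu}\, M_Y\bigl(\Sigma^{1/2} t\bigr)
       = \exp\Bigl\{ t'\mu + \tfrac{1}{2}\, t'\Sigma t \Bigr\},
\qquad X \sim N_p(\mu, \Sigma).
```
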
11
Q

If X: p × 1 ~ normalp(μ ; Σ), then Y: q × 1 = CX + b is normalq(Cμ + b ; CΣC′) distributed, where C: q × p is of rank q ≤ p.

2.2.1

A

Proof

12
Q

Let X : p × 1 ~ normalp(μ; Σ), then the elements of X are statistically independently distributed if and only if (iff) the elements of X are uncorrelated, i.e. iff Σ is a diagonal matrix.

2.2.2

A

Proof

13
Q

Let X: p × 1 ~ normal(μ ; Σ) be partitioned as in (2.3.2). The marginal distribution of Xi is the normal(μi ; Σii) distribution.

2.3.1

A

Proof

14
Q

Let X: p × 1 ~ normal(μ ; Σ) be partitioned as in (2.3.2). The sub-vectors X1 and X2 are statistically independently distributed iff X1 and X2 are uncorrelated, i.e. iff Σ12 = 0: q × (p – q).

2.3.2

A

Proof

15
Q

Let X: p × 1 ~ normalp(μ ; Σ), then Q = (X – μ)′Σ^–1(X – μ) ~ χ2(p).

2.6.1

A

Proof

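Illustration (not the proof): a Monte Carlo check that Q = (X – μ)′Σ^–1(X – μ) has the χ2(p) mean p and variance 2p. The mean vector and covariance factor are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
p = 3
mu = np.array([1.0, -2.0, 0.5])
L = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, -0.3, 1.0]])
Sigma = L @ L.T                                     # a valid covariance matrix

X = mu + rng.standard_normal((100_000, p)) @ L.T    # rows are draws from normal_p(mu, Sigma)
D = X - mu
Q = np.einsum('ij,jk,ik->i', D, np.linalg.inv(Sigma), D)

print(Q.mean(), Q.var())                            # approximately p = 3 and 2p = 6
```
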
16
Q

Let X: p × 1 ~ normalp(0 ; σ^2 Ip) and let A: p × p be a real, symmetric matrix of rank r ≤ p. Then the quadratic form Q = X′AX/σ^2 ~ χ2(r) iff A is idempotent.

2.6.2

A

Proof

17
Q

Let X: p × 1 ~ normalp(μ ; Σ) and let A: p × p be a real symmetric matrix. Then
(i) E(X′AX) = tr(AΣ) + μ′Aμ
(ii) var(X′AX) = 2tr{(AΣ)^2} + 4μ′AΣAμ.

2.6.3

A

Proof
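
Illustration of (i) only (not the proof): a Monte Carlo check of E(X′AX) = tr(AΣ) + μ′Aμ for an arbitrary symmetric A and an arbitrary normal distribution.

```python
import numpy as np

rng = np.random.default_rng(6)
p = 3
mu = np.array([0.5, 1.0, -1.0])
L = np.tril(rng.standard_normal((p, p))) + 2 * np.eye(p)
Sigma = L @ L.T
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, -1.0],
              [0.0, -1.0, 1.0]])                    # real symmetric

X = mu + rng.standard_normal((200_000, p)) @ L.T    # rows are draws from normal_p(mu, Sigma)
Q = np.einsum('ij,jk,ik->i', X, A, X)

print(Q.mean())                                     # Monte Carlo estimate of E(X'AX)
print(np.trace(A @ Sigma) + mu @ A @ mu)            # tr(A Sigma) + mu'A mu
```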

18
Q

Define the least squares estimates

3.3.1

A

A set of functions of y, namely ^β1 = ^β1(y), . . . , ^βm = ^βm(y), such that the values bj = ^βj (j = 1, 2, . . . , m) minimise q defined in (3.3.1) for a given value y of the vector variate, is called a set of least squares estimates for the {βj}.
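
A short NumPy sketch, assuming q in (3.3.1) is the residual sum of squares (y – Aβ)′(y – Aβ): for a full-column-rank design, lstsq returns the unique minimiser, i.e. the least squares estimates. The design and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((10, 3))                # n x m design matrix of full column rank
beta_true = np.array([1.0, -2.0, 0.5])
y = A @ beta_true + 0.1 * rng.standard_normal(10)

beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
q = lambda b: np.sum((y - A @ b) ** 2)          # q(b) = (y - Ab)'(y - Ab)
print(beta_hat)
print(q(beta_hat) <= q(beta_true))              # True: beta_hat minimises q for this y
```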

19
Q

A necessary and sufficient condition for the linear function k′π of the parameters π to be linearly estimable is that rank(A) = rank([A′, k]′), where [A′, k]′ is the matrix obtained by adjoining the row vector k′ to the matrix A. This condition can also be written as rank(A′) = rank[A′, k], or as the requirement that k′ is a linear combination of the rows of A.

3.4.1

A

Proof
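
Illustration (not the proof): the rank criterion applied to a rank-deficient one-way layout y_ij = μ + αi + e_ij with two groups, so π = (μ, α1, α2). The design matrix and the particular k vectors are my own example.

```python
import numpy as np

A = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 0, 1],
              [1, 0, 1]], dtype=float)           # rank 2, but 3 parameters

def estimable(k):
    """k'pi is estimable iff adjoining the row k' to A does not increase the rank."""
    return np.linalg.matrix_rank(np.vstack([A, k])) == np.linalg.matrix_rank(A)

print(estimable(np.array([0.0, 1.0, -1.0])))     # True:  alpha1 - alpha2 is estimable
print(estimable(np.array([1.0, 1.0, 0.0])))      # True:  mu + alpha1 is estimable
print(estimable(np.array([0.0, 1.0, 0.0])))      # False: alpha1 on its own is not estimable
```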

20
Q

A linear function, say e′Y, of the variates Y1, Y2, . . . , Yn belongs to errors if and only if its expected value is zero, irrespective of the real values of the parameters π.

3.4.2

A

Proof

21
Q

If k′π is any estimable linear function of the parameters π1, π2, . . . , πm, then
(i) there exists a unique linear function c′Y of the variates Y1, Y2, . . . , Yn such that c belongs to the column space V(A) and c′Y is a u.l.e. of k′π;
(ii) var(c′Y) < variance of any other u.l.e. of k′π.

3.5.1

A

Proof

22
Q

If k′π is linearly estimable, then its best estimator is p′A′Y, where the m-component vector p satisfies the equations p′A′A = k′.

3.5.2

A

Proof

23
Q

If k′π is linearly estimable, its best estimator is k′^Π, where ^Π is any solution of the equations A′Aπ = A′Y.

3.5.3

A

Proof
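
Illustration (not the proof), reusing the one-way design from 3.4.1: for an estimable k′π, the value k′^Π is the same for every solution ^Π of the normal equations, as the theorem requires of the best estimator. The pseudo-inverse stands in for a conditional inverse of A′A.

```python
import numpy as np

rng = np.random.default_rng(8)
A = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 0, 1],
              [1, 0, 1]], dtype=float)            # rank-deficient one-way design
Y = rng.standard_normal(4)
k = np.array([0.0, 1.0, -1.0])                    # alpha1 - alpha2 is estimable

pi_hat = np.linalg.pinv(A.T @ A) @ A.T @ Y        # one solution of A'A pi = A'Y
n = np.array([1.0, -1.0, -1.0])                   # A n = 0, so pi_hat + n is another solution
pi_alt = pi_hat + n

print(np.allclose(A.T @ A @ pi_alt, A.T @ Y))     # pi_alt also satisfies the normal equations
print(k @ pi_hat, k @ pi_alt)                     # k' pi_hat is identical for both solutions
```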

24
Q

var(best estimator) = var(k′^Π)

var 1

A

Proof

25
Q

var(best estimator)
= var(k′(A′A)^c A′Y)

var 2

A

Proof

26
Q

var(best estimator)
= var(c′Y)

var 3

A

Proof
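
A hedged sketch (not the book's derivation) of how the three variance expressions in the var cards above connect, assuming k′π is estimable so that k′ = p′A′A for some p, var(Y) = σ^2 I, c = Ap, and using 1.10.4 (the transpose of a conditional inverse of A′A is again a conditional inverse):

```latex
\operatorname{var}(k'\hat{\Pi})
  = \operatorname{var}\bigl\{k'(A'A)^{c}A'Y\bigr\}
  = \sigma^{2}\, k'(A'A)^{c}A'A\{(A'A)^{c}\}'k
  = \sigma^{2}\, k'(A'A)^{c}A'A\{(A'A)^{c}\}'A'Ap
  = \sigma^{2}\, k'(A'A)^{c}A'Ap
  = \sigma^{2}\, k'(A'A)^{c}k
  = \sigma^{2}\, p'A'Ap
  = \operatorname{var}(c'Y).
```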

27
Q

Cov[(K′^Π) ; (K′^Π)′] = cov{K′(A′A)^c A′Y ; (K′(A′A)^c A′Y)′}

cov 1

A

Proof