Algebra 2 Flashcards

1
Q

What is a Vector Space?

A

A vector space over a field F is an abelian group V with an additional operation F × V → V, called scalar multiplication, (α, v) → αv, satisfying the following axioms for all α, β ∈ F and u, v ∈ V:
1. α(u + v) = αu + αv,
2. (α + β)v = αv + βv,
3. (αβ)v = α(βv),
4. 1v = v.

2
Q

What are the ‘easy’ properties of a vector space?

A
1. α0 = 0 for all α ∈ F;
2. 0v = 0 and (−1)v = −v for all v ∈ V;
3. −(αv) = (−α)v = α(−v) for all α ∈ F and v ∈ V.
3
Q

What is a subspace?

A

A subspace of V is a non-empty subset W ⊆ V such that
u, v ∈ W ⇒ u + v ∈ W and v ∈ W, α ∈ F ⇒ αv ∈ W.
These two conditions can be replaced by the single condition
u, v ∈ W, α, β ∈ F ⇒ αu + βv ∈ W.

4
Q

Is the Intersection of two subspaces a subspace?

A

If W1 and W2 are subspaces of V then so is W1 ∩ W2.

Proof. Let u, v ∈ W1 ∩ W2 and α ∈ F. Then u + v ∈ W1 (because W1 is a subspace) and u + v ∈ W2 (because W2 is a subspace). Hence u + v ∈ W1 ∩ W2. Similarly, we get αv ∈ W1 ∩ W2, so W1 ∩ W2 is a subspace of V.

5
Q

What is the sum of two subspaces?

A

Let W1, W2 be subspaces of the vector space V. Then W1 + W2 is defined to be the set of vectors v ∈ V such that v = w1 + w2 for some w1 ∈ W1, w2 ∈ W2. Or, if you prefer,
W1 + W2 = {w1 + w2 | w1 ∈ W1, w2 ∈ W2}.

6
Q

Is W1 + W2 a subspace?

A

If W1, W2 are subspaces of V then so is W1 + W2. In fact, it is the smallest (with respect to the order ⊆) subspace that contains both W1 and W2.

Proof. Let u, v ∈ W1 + W2. Then u = u1 + u2 for some u1 ∈ W1, u2 ∈ W2 and v = v1 + v2 for some v1 ∈ W1, v2 ∈ W2. Then u + v = (u1 + v1) + (u2 + v2) ∈ W1 + W2. Similarly, if α ∈ F
then αv = αv1 + αv2 ∈ W1 + W2. Thus W1 + W2 is a subspace of V.
Any subspace of V that contains both W1 and W2 must contain W1 + W2, so it is the smallest such subspace.

7
Q

What is a Vector Sequence and Linear Combination of vectors?

A

By a vector sequence we understand a finite sequence v1, v2, . . . , vn of elements of a vector space V.
Vectors of the form α1v1 + α2v2 + · · · + αnvn for α1, α2, . . . , αn ∈ F are called linear combinations of v1, v2, . . . , vn.

8
Q

When is a vector sequence linearly dependent?

A

Let V be a vector space over the field F. The vector sequence v1, v2, . . . , vn is called linearly dependent if there exist scalars α1, α2, . . . , αn ∈ F, not all zero, such that
α1v1 + α2v2 + · · · + αnvn = 0.

The sequence v1, v2, . . . , vn is called linearly independent if it is not linearly dependent. In other words, it is linearly independent if the only scalars α1, α2, . . . , αn ∈ F that satisfy the above equation are α1 = 0, α2 = 0, . . . , αn = 0.

9
Q

What is the check for linear dependence?

A

The vector sequence v1, . . . , vn ∈ V is linearly dependent if and only if either v1 = 0 or, for some r with 2 ≤ r ≤ n, vr is a linear combination of v1, . . . , vr−1.

Proof. If v1 = 0 then by putting α1 = 1 and αi = 0 for i > 1 we get α1v1 + · · · + αnvn = 0, so v1, v2, . . . , vn is linearly dependent.
If vr is a linear combination of v1, . . . , vr−1, then vr = α1v1 + · · · + αr−1vr−1 for some α1, . . . , αr−1 ∈ F, and so α1v1 + · · · + αr−1vr−1 − 1 · vr = 0; again v1, v2, . . . , vn is linearly dependent.
Conversely, suppose that v1, v2, . . . , vn is linearly dependent, and that αi are scalars, not all zero, satisfying α1v1 + α2v2 + · · · + αnvn = 0. Let r be maximal with αr ≠ 0; then α1v1 + α2v2 + · · · + αrvr = 0. If r = 1 then α1v1 = 0, which is only possible if v1 = 0. Otherwise, we get
vr = −(α1/αr)v1 − · · · − (αr−1/αr)vr−1.
In other words, vr is a linear combination of v1, . . . , vr−1.

10
Q

Is the set of all linear combinations a subspace?

A

Let v1, . . . , vn be a vector sequence. Then the set of all linear combinations α1v1 + α2v2 + · · · + αnvn of v1, . . . , vn forms a subspace of V.

11
Q

What is the span of a vector sequence and when does a sequence span the vector space?

A

The span of a vector sequence is the set of all linear combinations of that sequence.

The sequence v1, . . . , vn spans V if the span of the sequence is V. In other words, this means that every vector v ∈ V is a linear combination α1v1 + α2v2 + · · · + αnvn of v1, . . . , vn.

12
Q

What is a Basis?

A

The vector sequence v1, . . . , vn in V forms a basis of V if it is linearly independent and spans V.

13
Q

How can every vector in a vector space be written as a linear combination of a basis?

A

The vector sequence v1, . . . , vn forms a basis of V if and only if every v ∈ V can be written uniquely as v = α1v1 + α2v2 + · · · + αnvn; that is, the coefficients α1, . . . , αn are uniquely determined by the vector v.

For the proof, suppose v = α1v1 + · · · + αnvn = β1v1 + · · · + βnvn. Then (α1 − β1)v1 + · · · + (αn − βn)vn = v − v = 0, so by linear independence αi = βi for all i; that is, the coefficients are equal.

14
Q

What is the Basis Theorem?

A

Suppose that v1, . . . , vm and w1, . . . , wn are both finite bases of the vector space V. Then m = n. In other words, all finite bases of V contain the same number of vectors.

15
Q

What is the Dimension of a vector space?

A

The number n of vectors in a basis of the finite-dimensional vector space V is called the dimension of V and we write dim(V) = n.

16
Q

What is Sifting?

A

There is an important process, which we shall call sifting, which can be applied to any sequence of vectors v1, v2, . . . , vn in a vector space V. We consider each vector vi in turn. If it is zero, or a linear combination of the preceding vectors v1, . . . , vi−1, then we remove it from the list. The output of sifting is a new linearly independent vector sequence with the same span as the original one.
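
As an illustration (a minimal numpy sketch, not from the notes; the function name is mine), sifting can be mimicked numerically: a vector is removed exactly when appending it fails to increase the rank of the kept ones, i.e. when it is zero or a linear combination of them.

import numpy as np

def sift(vectors):
    # Keep v only if it is not zero and not a linear combination of the
    # kept vectors, i.e. only if appending it increases the rank.
    kept = []
    for v in vectors:
        if np.linalg.matrix_rank(np.array(kept + [v])) > len(kept):
            kept.append(v)
    return kept

vs = [np.array([1., 0., 0.]),
      np.array([2., 0., 0.]),   # multiple of the first vector: removed
      np.array([0., 1., 0.]),
      np.array([1., 1., 0.])]   # sum of the two kept vectors: removed
print(sift(vs))                 # keeps [1,0,0] and [0,1,0]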

17
Q

How does the length of a vector sequence that spans V and one that is linearly independent differ?

A

Suppose that vector sequence v1, . . . , vn spans V and that the vector sequence w1, . . . , wm ∈ V is linearly independent. Then m ≤ n.

Proof. The idea is to place the wi one by one in front of the sequence v1, . . . , vn, sifting each time.
Since v1, . . . , vn spans V, the sequence w1, v1, . . . , vn is linearly dependent, so when we sift, at least one vj is deleted. We then place w2 in front of the resulting sequence and sift again. Then we put w3 in front of the result, and sift again, and carry on doing this for each wi in turn. Since w1, . . . , wm are linearly independent, none of them is ever deleted. Each time we place a vector in front of a sequence which spans V, the extended sequence is linearly dependent, and hence at least one vj gets eliminated. In total we append m vectors wi, and each time at least one vj is eliminated, so we must have m ≤ n.

18
Q

How are bases related to a vector sequence that spans V?

A

Suppose that the vector sequence v1, . . . , vr spans the vector space V. Then there is a subsequence of v1, . . . , vr which forms a basis of V.

Proof. We sift the vectors v1, . . . , vr. The vectors that we remove are linear combinations of the preceding vectors, and so by Lemma 1.4.9, the remaining vectors still span V. After sifting, no vector is zero or a linear combination of the preceding vectors (or it would have been removed), so by Lemma 1.4.2, the remaining vector sequence is linearly independent. Hence, it is a basis of V.

19
Q

If a vector space can be spanned by a finite sequence, is there a basis?

A

If a vector space V is spanned by a finite sequence, then it admits a basis.

20
Q

How is the length of a basis related to the dimension of the vector space?

A

Let V be a vector space of dimension n over F. Then any sequence of n vectors which spans V is a basis of V, and no n − 1 vectors can span V.

21
Q

How can a linearly independent sequence of vectors be extended to a basis?

A

Let V be a finite-dimensional vector space over F, and suppose that the vector sequence v1, . . . , vr is linearly independent in V. Then we can extend the sequence to a basis v1, . . . , vn of V, where n ≥ r.

Proof. Suppose that dim(V) = n and let w1, . . . , wn be any basis of V. We sift the combined sequence v1, . . . , vr, w1, . . . , wn.
Since w1, . . . , wn spans V, the result is a basis of V by Theorem 1.4.11. Since v1, . . . , vr is linearly independent, none of them can be a linear combination of the preceding vectors, and hence none of the vi are deleted in the sifting process. Thus the resulting basis contains v1, . . . , vr.

22
Q

How many vectors are needed to form a basis?

A

Let V be a vector space of dimension n over F. Then any n linearly independent vectors form a basis of V and no n + 1 vectors can be linearly independent.

23
Q

What is a Linear Transformation?

A

Let U, V be two vector spaces over the same field F. A linear transformation or linear map T from U to V is a function T : U → V such that
(i) T(u1 + u2) = T(u1) + T(u2) for all u1, u2 ∈ U;
(ii) T(αu) = αT(u) for all α ∈ F and u ∈ U.

24
Q

What are the known properties of linear maps?

A

Let T : U → V be a linear map. Then
(i) T(0U) = 0V;
(ii) T(−u) = −T(u) for all u ∈ U.
Proof.
(i) T(0U) = T(0U + 0U) = T(0U) + T(0U), so T(0U) = 0V.
(ii) Just put α = −1 in the definition of linear map.

25
Q

How are linear maps uniquely determined by their action on a basis?

A

Let U, V be vector spaces over F, let u1, . . . , un be a basis of U and let v1, . . . , vn be any sequence of n vectors in V. Then there is a unique linear map T : U → V with T(ui) = vi for 1 ≤ i ≤ n.

Proof. Let u ∈ U. Then, since u1, . . . , un is a basis of U, by Proposition 1.4.6, there exist uniquely determined α1, . . . , αn ∈ F with u = α1u1 + · · · + αnun. Hence, if T exists at all, then we must have
T(u) = T(α1u1 + · · · + αnun) = α1v1 + · · · + αnvn,
and so T is uniquely determined.
On the other hand, it is routine to check that the map T : U → V defined by the above equation is indeed a linear map, so T does exist and is unique.

26
Q

What are the Operations on Linear Maps?

A

We define the operations of addition, scalar multiplication, and composition on linear maps.
Let T1 : U → V and T2 : U → V be two linear maps, and let α ∈ F be a scalar.
Addition: We define a map T1 + T2 : U → V by the rule (T1 + T2)(u) = T1(u) + T2(u) for u ∈ U.
Scalar multiplication: We define a map αT1 : U → V by the rule (αT1)(u) = αT1(u) for u ∈ U.
Now let T1 : U → V and T2 : V → W be two linear maps.
Composition: We define a map T2T1 : U → W by (T2T1)(u) = T2(T1(u)) for u ∈ U. In particular, we define T^2 = TT and T^(i+1) = T^i T for i ≥ 2.

27
Q

What are the Image and Kernel of a linear map?

A

Let T : U → V be a linear map. The image of T, written as im(T) is defined to be the set of vectors v ∈ V such that v = T(u) for some u ∈ U.
The kernel of T, written as ker(T) is defined to be the set of vectors u ∈ U such that T(u) = 0V.
Or, if you prefer:
im(T) = {T(u) | u ∈ U}; ker(T) = {u ∈ U | T(u) = 0V}.

28
Q

How do the Image and Kernel relate to subspaces?

A

(i) im(T) is a subspace of V;
(ii) ker(T) is a subspace of U;
(iii) T is injective if and only if ker(T) = {0}.

Proof.
(i) We must show that im(T) is closed under addition and scalar multiplication. Let v1, v2 ∈ im(T). Then v1 = T(u1), v2 = T(u2) for some u1, u2 ∈ U. Then
v1 + v2 = T(u1) + T(u2) = T(u1 + u2) ∈ im(T); αv1 = αT(u1) = T(αu1) ∈ im(T), so im(T) is a subspace of V.
(ii) Similarly, we must show that ker(T) is closed under addition and scalar multiplication. Let u1, u2 ∈ ker(T). Then
T(u1 + u2) = T(u1) + T(u2) = 0V + 0V = 0V; T(αu1) = αT(u1) = α0V = 0V, so u1 + u2, αu1 ∈ ker(T) and ker(T) is a subspace of U.
(iii) The “only if” is clear: if T is injective, then ker(T) = T^−1(0V) contains only the zero vector. To prove the “if”, suppose ker(T) = {0} and T(u) = T(v). Then T(u − v) = T(u) − T(v) = 0V, so u − v ∈ ker(T), hence u − v = 0 and u = v.

29
Q

What is the Dimension Formula?

A

Let V be a finite-dimensional vector space, and let W1, W2 be subspaces of V. Then
dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2).

30
Q

When are two subspaces complementary?

A

Two subspaces W1, W2 of V are called complementary if W1 ∩ W2 = {0} and W1 + W2 = V. In this case, we say that V is the direct sum of the subspaces W1 and W2, and we write V = W1 ⊕ W2.

Also,
If V = W1 ⊕ W2 is a finite-dimensional vector space, then dim(V) = dim(W1) + dim(W2).

31
Q

What is the Rank and Nullity of a linear map?

A

(i) dim(im(T)) is called the rank of T;
(ii) dim(ker(T)) is called the nullity of T.

32
Q

What is the Dimension Theorem?

A

Let U, V be vector spaces over F with U finite-dimensional,
and let T : U → V be a linear map. Then
dim(im(T)) + dim(ker(T)) = dim(U);
i.e., rank(T) + nullity(T) = dim(U).
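
A quick numeric illustration with numpy (a sketch of mine, taking U = R^n and T = L_A for a matrix A): the kernel dimension is computed from an explicit kernel basis, and the two dimensions add up to dim(U).

import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])             # a map from R^3 to R^2
n = A.shape[1]                           # dim(U)
rank = np.linalg.matrix_rank(A)          # dim(im(T))
# rows of Vt with (near-)zero singular value span ker(L_A)
U, sv, Vt = np.linalg.svd(A)
kernel_basis = Vt[int(np.sum(sv > 1e-10)):]
nullity = kernel_basis.shape[0]
print(rank, nullity, rank + nullity == n)   # 1 2 True
print(np.allclose(A @ kernel_basis.T, 0))   # True: they really lie in the kernel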

33
Q

If T is a linear map between vector spaces with the same dimension, then what are the equivalent properties and what does it mean for T to be singular?

A

Let T : U → V be a linear map, where dim(U) = dim(V) = n. Then the following properties of T are equivalent:
(i) T is surjective;
(ii) rank(T) = n;
(iii) nullity(T) = 0;
(iv) T is injective;
(v) T is bijective.

Proof. T is surjective ⇔ im(T) = V, so clearly (i) ⇒ (ii). But if rank(T) = n, then dim(im(T)) = dim(V) so (by Corollary 1.4.15) a basis of im(T) is a basis of V, and hence im(T) = V.
Thus (i) ⇔ (ii).
(ii) ⇔ (iii) follows directly from Theorem 2.2.7.
(iii) ⇔ (iv) is part (iii) of Proposition 2.2.2.
Finally, (v) holds if and only if (i) and (iv) both hold, and we have shown that (i) and (iv) are equivalent, so (v) is equivalent to each of them.

If the conditions in the above are met, then T is called a non-singular linear map. Otherwise, T is called singular.

34
Q

What is the Change of Basis Matrix?

A

Let f1, . . . , fn be a basis of F^n. Write each element of the standard basis e1, . . . , en in this basis. The change of basis matrix is the matrix P = (e1, . . . , en) ∈ F^n,n whose columns are these coordinate vectors.

35
Q

What is a Euclidean Form?

A

A euclidean form on V is a map τ : V × V → R such that
(i) τ(α1v1 + α2v2, w) = α1τ(v1, w) + α2τ(v2, w) for all v1, v2, w ∈ V and α1, α2 ∈ R,
(ii) τ(v, w) = τ(w, v) for all v, w ∈ V,
(iii) τ(v, v) > 0 for all v ∈ V \ {0}.

36
Q

What is a Euclidean Space?

A

A euclidean space is a pair (V, τ) where V is a vector space over R and τ is a euclidean form in V.

37
Q

What is length in a Euclidean Space and what is the inequality?

A

Let (V, τ) be a euclidean space. For v ∈ V, we define its length by ||v|| = √τ(v, v).

For all v, w ∈ V we have the Cauchy–Schwarz inequality: |τ(v, w)| ≤ ||v|| · ||w||.
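
A quick numeric check (my own example), taking τ to be the standard dot product on R^3; any euclidean form works the same way.

import numpy as np

v = np.array([1., 2., 2.])
w = np.array([3., 0., 4.])
lhs = abs(v @ w)                              # |tau(v, w)|
rhs = np.linalg.norm(v) * np.linalg.norm(w)   # ||v|| * ||w||
print(lhs, rhs, lhs <= rhs)                   # 11.0 15.0 True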

38
Q

What is an angle in a Euclidean Space?

A

The angle ϕ between any two non-zero vectors v and w is given by
τ(v, w) = ||v|| · ||w|| · cos ϕ or
ϕ = arccos(τ(v, w)/(||v|| · ||w||)).

39
Q

What is an Orthonormal vector sequence?

A

A vector sequence v1, . . . , vn of a euclidean space (V, τ) is called orthonormal if ||vi|| = 1 for all i and the angle between each vi and vj with i ≠ j is equal to π/2. An orthonormal basis is a basis that is an orthonormal sequence.

40
Q

What are the properties of an orthonormal sequence?

A

Suppose v1, . . . , vn is an orthonormal sequence in a euclidean space (V, τ).
1. If v = α1v1 + . . . + αnvn, then αi = τ(v, vi).
2. The sequence v1, . . . , vn is linearly independent.

Proof.
If v = α1v1 + . . . + αnvn, then
τ(v, vi) = τ(α1v1 + . . . + αnvn, vi) = α1τ(v1, vi) + . . . + αnτ(vn, vi) = αiτ(vi, vi) = αi,
since τ(vj, vi) = 0 for j ≠ i and τ(vi, vi) = 1.
The second statement follows immediately: if α1v1 + . . . + αnvn = 0, then αi = τ(0, vi) = 0 for every i, so the sequence is linearly independent.

41
Q

What is the Gram-Schmidt Process?

A

Let V be a euclidean space of dimension n. Suppose that, for some r with 0 ≤ r ≤ n, f1, . . . , fr is an orthonormal sequence. Then f1, . . . , fr can be extended to an orthonormal basis f1, . . . , fn of V.

The extension is built by the Gram–Schmidt process: take any vector v outside the span of f1, . . . , fr, subtract its projections τ(v, fi)fi onto the vectors found so far, and normalise the result to obtain fr+1; repeat until n vectors are found.
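
A minimal sketch of the process in numpy, assuming V = R^n with the standard dot product (the function name is mine):

import numpy as np

def gram_schmidt(vectors):
    # Orthonormalise a linearly independent sequence: subtract projections
    # onto the vectors built so far, then normalise to length 1.
    basis = []
    for v in vectors:
        for f in basis:
            v = v - (v @ f) * f          # remove the component along f
        basis.append(v / np.linalg.norm(v))
    return basis

f1, f2 = gram_schmidt([np.array([1., 1., 0.]),
                       np.array([1., 0., 1.])])
print(np.round(f1 @ f2, 12))             # 0.0: orthogonal
print(np.round(np.linalg.norm(f1), 12))  # 1.0: unit length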

42
Q

How does a matrix map vectors in different vector spaces?

A

Let T : U → V be a linear map with matrix A = (αij) with respect to chosen bases of U and V. Then T(u) = v if and only if Au~ = v~, where u~ and v~ are the coordinate column vectors of u and v in these bases.

43
Q

What is the matrix for composition of linear maps?

A

Let T1 : V → W be a linear map with l × m matrix A = (αij) and let T2 : U → V be a linear map with m × n matrix B = (βij). Then the matrix of the composite map T1T2 : U → W is AB.

44
Q

What is the Change of Basis Matrix?

A

We consider a vector space V with two bases: the “old” basis e1, . . . , en and the “new” basis f1, . . . , fn.
Write each element of the old basis e1, . . . , en in the new basis. The change of basis matrix is the matrix P = (e1, . . . , en) ∈ F^n,n whose columns are these coordinate vectors.

45
Q

How does a vector map with a new basis?

A

Let v ∈ V, let v~ ∈ F^n be its coordinate vector in the old basis, and let v_ ∈ F^n be its coordinate vector in the new basis. Then
Pv~ = v_.

46
Q

How is the change of basis matrix invertible?

A

The change of basis matrix is invertible. More precisely, if P is the change of basis matrix from the basis of ei-s to the basis of fi-s, and Q is the change of basis matrix from the basis of fi-s to the basis of ei-s, then P = Q^−1.

Proof. By Proposition 4.1.2, Pv~ = v_ and Qv_ = v~ for all v ∈ V. It follows that PQu = u and QPu = u for all u ∈ F^n. Hence, In = QP = PQ, and so P = Q^−1.

47
Q

What is the main change of basis theorem?

A

Let us set up all notation to state the main theorem. Let T : U → V be a linear map, where dim(U) = n, dim(V) = m. Choose an “old” basis e1, . . . , en of U and an “old” basis f1, . . . ,fm
of V. Let A be the matrix of T in these bases.
Now choose new bases e’1, . . . , e’n of U and f’1, . . . ,f’m of V. Let B be the matrix of T in these bases.
Finally, let the n × n matrix P be the basis change matrix from {ei} to {e’i}, and let the m × m matrix Q be the basis change matrix from {fi} to {f’i}.
Theorem 4.1.4. With the above notation, we have QA = BP, or equivalently B = QAP^−1.

Proof. Fix u ∈ U and let v = T(u). By Proposition 3.3.1, we have Au~ = v~ and Bu_ = v_. By Proposition 4.1.2, Pu~ = u_ and Qv~ = v_. Hence,
QAu~ = Qv~ = v_ = Bu_ = BPu~.
Since this is true for all column vectors u~ ∈ F^n,1, this implies that QA = BP. We know that P is invertible from Corollary 4.1.3, so multiplying on the right by P^−1 gives B = QAP^−1.
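
A numeric consistency check of the theorem in numpy (my own sketch with randomly chosen matrices; random square matrices are invertible with probability 1):

import numpy as np
rng = np.random.default_rng(0)

n, m = 3, 2
A = rng.standard_normal((m, n))        # matrix of T in the old bases
P = rng.standard_normal((n, n))        # basis change matrix in U
Q = rng.standard_normal((m, m))        # basis change matrix in V
B = Q @ A @ np.linalg.inv(P)           # the theorem: B = Q A P^-1

u_old = rng.standard_normal(n)         # coordinates of some u in the old basis
v_old = A @ u_old                      # coordinates of T(u) in the old basis
u_new, v_new = P @ u_old, Q @ v_old    # the same vectors in the new bases
print(np.allclose(B @ u_new, v_new))   # True: B maps new coordinates correctly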

48
Q

When is a linear operator orthogonal?

A

A linear operator T:V → V is said to be orthogonal if it preserves the scalar product on V. That is, if τ(T(v), T(w)) = τ(v, w) for all v, w ∈ V.

Since length and angle can be defined in terms of the scalar product, an orthogonal linear map preserves distance and angle, so geometrically it is a rigid map. In R2, for example, an orthogonal map is either a rotation about the origin or a reflection about a line through the origin.

49
Q

How is an orthogonal linear map invertible?

A

An orthogonal linear operator T is invertible.

Proof. By Corollary 2.2.8, it suffices to show that ker(T) = {0}. Pick v ∈ ker(T). Then ||v|| = ||T(v)|| = ||0|| = 0, so v = 0.

50
Q

What does an orthogonal linear operator do to an orthonormal basis?

A

Let e1, . . . , en be an orthonormal basis of V. A linear operator T is orthogonal if and only if T(e1), . . . , T(en) is an orthonormal basis of V.

51
Q

When is a matrix orthogonal?

A

A matrix A ∈ F^n,n is called orthogonal if A^T A = I

52
Q

When is a linear operator orthogonal if its matrix is orthogonal?

A

A linear operator T : V → V on a euclidean space is orthogonal if and only if its matrix A (with respect to an orthonormal basis of V) is orthogonal.

Proof. Let c1, c2, . . . , cn be the columns of the matrix A. Then c1^T, . . . , cn^T are the rows of A^T. Hence, the (i, j)-th entry of A^T A is ci^T cj = ci · cj.
Let e1, e2, . . . , en be an orthonormal basis. Then ci = T(ei), and everything follows from Proposition 4.2.3: T is orthogonal if and only if T(e1), . . . , T(en) is an orthonormal sequence, if and only if c1, . . . , cn is an orthonormal sequence, if and only if A^T A = In, i.e. A is orthogonal.
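
For example, the matrix of a rotation of R^2 with respect to the standard orthonormal basis is orthogonal; a quick numpy check (my own example):

import numpy as np

t = 0.7                                        # any rotation angle
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])        # rotation about the origin
print(np.allclose(A.T @ A, np.eye(2)))         # True: A^T A = I
print(np.linalg.norm(A @ np.array([3., 4.])))  # 5.0: lengths are preserved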

53
Q

What is Smith’s Normal Form of a Matrix?

A

By applying elementary row and column operations, a matrix A ∈ F^m×n can be brought into the block form
(Is 0s,n−s)
(0m−s,s 0m−s,n−s),
where Is denotes the s × s identity matrix, and 0kl the k × l zero matrix.

This matrix is said to be in Smith Normal Form.

To put a matrix in SNF:
- Start with the pivot at position (1,1).
- Make the pivot entry 1 by elementary row and column operations.
- Make all entries below and to the right of the pivot equal to 0.
- Move the pivot to (2,2) and carry on.
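
A minimal sketch of this procedure in numpy, assuming the field is R and using floating-point arithmetic (the function name and tolerance are my own choices):

import numpy as np

def smith_normal_form(A, tol=1e-12):
    # Reduce A over R to the block form (I_s 0; 0 0) by row/column operations.
    A = A.astype(float).copy()
    m, n = A.shape
    s = 0
    while s < min(m, n):
        rows, cols = np.nonzero(np.abs(A[s:, s:]) > tol)
        if len(rows) == 0:                 # nothing left to clear
            break
        i, j = rows[0] + s, cols[0] + s    # a non-zero entry to use as pivot
        A[[s, i]] = A[[i, s]]              # R2: move it into row s
        A[:, [s, j]] = A[:, [j, s]]        # C2: move it into column s
        A[s] /= A[s, s]                    # R3: make the pivot equal to 1
        for k in range(m):                 # R1: clear the pivot column
            if k != s:
                A[k] -= A[k, s] * A[s]
        for k in range(n):                 # C1: clear the pivot row
            if k != s:
                A[:, k] -= A[s, k] * A[:, s]
        s += 1
    return A, s                            # s is the rank of A

S, s = smith_normal_form(np.array([[2., 4., 6.],
                                   [1., 2., 4.]]))
print(S)   # [[1. 0. 0.] [0. 1. 0.]]
print(s)   # 2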

54
Q

What is the rank of a linear map?

A

Let T : U → V be a linear map, where dim(U) = n, dim(V) = m. Let e1, . . . , en be a basis of U and let f1, . . . ,fm be a basis of V.

Rank(T) is the size of the largest linearly independent subset of T(e1), . . . , T(en).

Proof. im(T) is spanned by the vectors T(e1), . . . , T(en), and by Theorem 1.4.11, some subset of these vectors forms a basis of im(T). By definition of basis, this subset has size dim(im(T)) = rank(T), and by Corollary 1.4.15 no larger subset of T(e1), . . . , T(en) can be linearly independent.

55
Q

How are left and right multiplication by a matrix linear maps?

A

Let A be an m × n matrix over F with rows r1, r2, . . . , rm ∈ F^1,n and columns c1, c2, . . . , cn ∈ F^m. Left and right multiplication by A are the linear maps
LA : F^n → F^m, LA(x) = Ax,
RA : F^1,m → F^1,n, RA(y) = yA.

56
Q

What is the row-space, row rank and column-space, column rank of a matrix?

A

(i) The row-space of A is the image of RA: it is the subspace of F^1,n spanned by the rows r1, . . . ,rm of A. The row rank of A is equal to
* the dimension of the row-space of A,
* or the rank of RA
* or, by Lemma 5.1.3, the size of the largest linearly independent subset of r1, . . . ,rm.
(ii) The column-space of A is the image of LA: it is the subspace of F^m spanned by the columns c1, . . . , cn of A. The column rank of A is equal to
* the dimension of the column-space of A,
* or the rank of LA
* or, by Lemma 5.1.3, the size of the largest linearly independent subset of c1, . . . , cn.

57
Q

How can we find the row/column rank of a matrix?

A

We can calculate the row and column ranks by applying the sifting process to the row and column vectors, respectively.

58
Q

How do elementary row/column operations affect the row/column rank?

A

Applying R1, R2 or R3 to a matrix A does not change the row or column rank of A. The same is true for C1, C2 and C3.

59
Q

How are the row/column rank related to the SNF of a matrix?

A

Let s be the number of non-zero rows in the Smith normal form of a matrix A. Then both row rank of A and column rank of A are equal to s.

Proof. Since elementary operations preserve both ranks, it suffices to compute the ranks of a matrix in Smith normal form, and these are clearly both equal to s.

60
Q

How are the row rank and column rank related?

A

Since both ranks can be calculated from the SNF of a matrix, where they coincide,
Row Rank = Column Rank.

We will go on to refer to this common value simply as the Rank of the matrix.

61
Q

When are two matrices equivalent?

A

Two m × n matrices A and B are said to be equivalent if there exist invertible P and Q with B = QAP; that is, if they represent the same linear map.

62
Q

What are the equivalent properties about equivalent matrices?

A

Let A, B ∈ F^m,n. The following conditions on A and B are equivalent.
(i) A and B are equivalent.
(ii) A and B represent the same linear map with respect to different bases.
(iii) A and B have the same rank.
(iv) B can be obtained from A by application of elementary row and column operations.

63
Q

How is any matrix related to the SNF matrix?

A

Any m × n matrix A is equivalent to the matrix Es, where s = rank(A).

Es =
(Is 0s,n−s)
(0m−s,s 0m−s,n−s)

64
Q

What is a system of linear equations?

A

We can write the system as Ax = b, or LA(x) = b,
where A = (αij) ∈ F^m,n is the m × n matrix of coefficients,
x = (x1, . . . , xn)^T is the unknown column vector in F^n,
b = (β1, . . . , βm)^T is a given column vector in F^m,
and LA : F^n → F^m is the linear map given by left multiplication by A.

65
Q

What is the nullspace of a matrix?

A

We have Ax = b or LA(x) = b
The case when b = 0, or equivalently when βi = 0 for 1 ≤ i ≤ m, is called the homogeneous case. Here the set of solutions is
ker(LA) = {x ∈ F^n | LA(x) = 0} = {x ∈ F^n | Ax = 0},
which is sometimes called the nullspace of the matrix A.

66
Q

What is the Augmented matrix?

A

For a system Ax = b of m equations in n unknowns, the augmented matrix is defined to be,
A~ = (A | b) =
(α11 α12 . . . α1n | β1)
(α21 α22 . . . α2n | β2)
( ·   ·        ·   |  · )
(αm1 αm2 . . . αmn | βm)

67
Q

How do we solve a system of equations with the augmented matrix.

A

Apply ROW operations only (NOT COLUMN operations) to the augmented matrix in order to eliminate as many variables as possible; that is, reduce it to (something close to) row reduced echelon form.

68
Q

What is a matrix in row echelon form?

A

Let A = (αij) be an m × n matrix over the field F. For the i-th row, let αi,c(i) be the first (leftmost) non-zero entry in that row; in other words, αi,c(i) ≠ 0 while αij = 0 for all j < c(i). By convention, c(i) = ∞ if αij = 0 for all j.
A matrix satisfying the following properties is said to be in row echelon form:
(i) All zero rows are below all non-zero rows.
(ii) Let r1, . . . , rs be the non-zero rows. Then each ri with 1 ≤ i ≤ s has 1 as its first non-zero entry. (In other words, αi,c(i) = 1.)
(iii) c(1) < c(2) < · · · < c(s).
(iv) αk,c(i) = 0 for all k > i.

69
Q

What is a matrix in row reduced echelon form?

A

A matrix is said to be in row reduced echelon form if it is in row echelon form and satisfies the following stronger version of property (iv):
(v) αk,c(i) = 0 for all k ≠ i.

70
Q

What is the intuition behind row echelon form and the reduced version?

A

Here is the intuition behind these forms:
* The number of non-zero rows in a row echelon form of A is the rank of A
* The row reduced echelon form of A solves the system of linear equations.

71
Q

How do we put a matrix in row reduced echelon form?

A

At any stage of the procedure, we are looking at the entry αij in a particular position (i, j) of the matrix; (i, j) is called the pivot position, and αij the pivot entry. We start with (i, j) = (1, 1) and proceed as follows.
1. If αij and all entries below it in its column are zero (i.e. if αkj = 0 for all k ≥ i), then move the pivot one place to the right, to (i, j + 1), and repeat Step 1, or terminate if j = n.
2. If αij = 0 but αkj ≠ 0 for some k > i, then apply R2 and interchange ri and rk.
3. At this stage αij ≠ 0. If αij ≠ 1, then apply R3 and multiply ri by αij^−1.
4. At this stage αij = 1. If, for any k ≠ i, αkj ≠ 0, then apply R1 and subtract αkj times ri from rk.
5. At this stage, αkj = 0 for all k ≠ i. If i = m or j = n then terminate. Otherwise, move the pivot diagonally down to the right, to (i + 1, j + 1), and go back to Step 1.

ONLY ROW OPERATIONS.
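
A minimal numpy sketch of this procedure over R (floating point; the function name and tolerance are my own):

import numpy as np

def rref(A, tol=1e-12):
    # Row reduced echelon form, following the pivot procedure above.
    A = A.astype(float).copy()
    m, n = A.shape
    i = 0
    for j in range(n):                        # Steps 1/5: move the pivot right
        if i == m:
            break
        pivots = [k for k in range(i, m) if abs(A[k, j]) > tol]
        if not pivots:
            continue                          # column is zero at and below the pivot
        A[[i, pivots[0]]] = A[[pivots[0], i]] # Step 2 (R2): swap a usable row up
        A[i] /= A[i, j]                       # Step 3 (R3): make the pivot 1
        for k in range(m):                    # Step 4 (R1): clear the column
            if k != i:
                A[k] -= A[k, j] * A[i]
        i += 1                                # Step 5: pivot moves down-right
    return A

A = np.array([[0., 2., 4.],
              [1., 1., 1.],
              [1., 3., 5.]])
print(rref(A))   # [[1. 0. -1.] [0. 1. 2.] [0. 0. 0.]]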

72
Q

What is the Row Reduced Echelon form of a square matrix?

A

The row reduced echelon form of an invertible n × n matrix A is In.

73
Q

How are invertible matrices a product of elementary operations?

A

An invertible matrix is a product of elementary matrices.

Proof. The sequence of row operations in the proof of Proposition 5.3.3 can be written as
A → E1A → E2E1A → · · · → Er Er−1 · · · E1A = In.
Since elementary matrices are invertible and their inverses are also elementary,
A = (Er Er−1 · · · E1)^−1 = E1^−1 E2^−1 · · · Er^−1.

74
Q

How do we find the inverse of a matrix using row operations?

A

We have a matrix A ∈ F^n,n. Consider the matrix
(A | In), which has size n × 2n.
Now reduce the A-side of this matrix to row reduced echelon form, applying each row operation to the whole augmented matrix; at the end, the In-side will be the inverse of A.

See example.
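
A minimal sketch of this Gauss–Jordan procedure in numpy (the function name is mine; it assumes A is invertible):

import numpy as np

def inverse_by_row_ops(A):
    # Row reduce the augmented matrix (A | I_n); the right half becomes A^-1.
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        k = i + np.argmax(np.abs(M[i:, i]))   # pick the largest usable pivot
        M[[i, k]] = M[[k, i]]                 # R2: swap it into place
        M[i] /= M[i, i]                       # R3: make the pivot 1
        for r in range(n):                    # R1: clear the rest of the column
            if r != i:
                M[r] -= M[r, i] * M[i]
    return M[:, n:]

A = np.array([[2., 1.],
              [1., 1.]])
print(inverse_by_row_ops(A))                              # [[ 1. -1.] [-1.  2.]]
print(np.allclose(A @ inverse_by_row_ops(A), np.eye(2)))  # True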

75
Q

What is the formula for the determinant?

A

The determinant of an n × n matrix A = (aij) is the scalar quantity
det(A) = ∑φ∈Sn sign(φ) a1φ(1) a2φ(2) · · · anφ(n),
where sign(φ) = +1 if the permutation φ is even and −1 if it is odd.
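
A direct (and very slow, n! terms) implementation of this formula in Python, checked against numpy's determinant (helper names are mine):

import numpy as np
from itertools import permutations

def sign(p):
    # +1 for an even permutation, -1 for an odd one, by counting inversions.
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det_leibniz(A):
    n = A.shape[0]
    return sum(sign(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

A = np.array([[2., 1., 0.],
              [3., -1., 2.],
              [5., -2., 0.]])
print(det_leibniz(A), np.linalg.det(A))   # 18.0 and 18.0 (up to rounding)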

76
Q

What are staircase matrices?

A

They are block matrices with square blocks B1, . . . , Bn along the diagonal and all 0s in one corner.
A matrix is upper staircase when the 0s are at the bottom left, and lower staircase when the 0s are at the top right.

77
Q

What is the determinant of a staircase matrix?

A

det(Aupper) = det(Alower) = det(B1) · det(B2)· · · det(Bn).
Where the Bi are the blocks along the diagonal.

78
Q

When is a matrix triangular?

A

A triangular matrix is one that is staircase with blocks of size 1.

79
Q

What is the determinant of a triangular matrix?

A

If A = (aij) is upper (or lower) triangular, then det(A) = a11a22 · · · ann is the product of the entries on the main diagonal of A.

80
Q

How do we compute the determinant of a matrix?

A

Use elementary row/column operations to get the matrix into an easier form to find the determinant, e.g. triangular.
For a 2 x 2 matrix it is easy enough to find anyway.

81
Q

How do row and column operations affect the value of the determinant?

A

Elementary row and column operations affect the determinant of a matrix as follows.
(i) det(In) = 1.
(ii) Let B result from A by applying R2 or C2 (swap). Then det(B) = − det(A).
(iii) If A has two equal rows or columns, then det(A) = 0.
(iv) Let B result from A by applying R1 or C1 (addition). Then det(B) = det(A).
(v) Let B result from A by applying R3 or C3 (multiplication). Then det(B) = λ det(A).

82
Q

What is a minor of a matrix?

A

Let A = (aij) be an n × n matrix. Let Aij be the (n − 1) × (n − 1) matrix obtained from A by deleting the i-th row and the j-th column of A. Now let Mij = det(Aij). Then Mij is called the
(i, j)-th minor of A.

e.g. if A =
(2 1 0)
(3 −1 2)
(5 −2 0),
then A12 =
(3 2)
(5 0)

83
Q

What is a cofactor of a matrix?

A

We define the (i, j)-th cofactor of A as
cij = (−1)^(i+j) Mij = (−1)^(i+j) det(Aij).
In other words, cij is equal to Mij if i + j is even, and to −Mij if i + j is odd.

84
Q

What does it mean to expand a determinant by a row or column?

A

Expanding det(A) by the i-th row means writing
det(A) = ai1ci1 + ai2ci2 + · · · + aincin,
the sum of each entry of that row times its cofactor; expanding by a column is analogous.

e.g. normally, when computing the determinant of a 3 × 3 matrix, you would expand with respect to the first row.

85
Q

What is the Adjoint Matrix?

A

Let A ∈ F^n,n be an n × n matrix. We define the adjoint matrix adj(A) of A to be the n × n matrix of which the (i, j)-th element is the cofactor cji. In other words, it is the transpose of the matrix of cofactors.

86
Q

How do a matrix and its adjoint multiply?

A

A adj(A) = det(A)In = adj(A)A
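
A small numpy check of this identity, building adj(A) from cofactors (helper names are mine):

import numpy as np

def adj(A):
    # Transpose of the matrix of cofactors: adj(A)[i, j] = c_ji.
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            Aij = np.delete(np.delete(A, i, axis=0), j, axis=1)  # drop row i, col j
            C[i, j] = (-1) ** (i + j) * np.linalg.det(Aij)       # cofactor c_ij
    return C.T

A = np.array([[2., 1., 0.],
              [3., -1., 2.],
              [5., -2., 0.]])
lhs = A @ adj(A)
print(np.allclose(lhs, np.linalg.det(A) * np.eye(3)))   # True
print(np.allclose(adj(A) @ A, lhs))                     # True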

87
Q

What is the determinant of a transposed matrix?

A

Let A = (aij) be an n × n matrix. Then det(A^T) = det(A).

88
Q

What is the determinant of the product of an elementary matrix with any matrix?

A

If E is an n × n elementary matrix, and B is any n × n matrix, then det(EB) = det(E) det(B).

Proof. E is one of the three types of elementary matrix, corresponding to the row operations R1, R2 and R3, and multiplying B on the left by E has the effect of applying R1, R2 or R3 to B, respectively. Hence, by Theorem 6.2.1, det(EB) = det(B), − det(B), or λ det(B), respectively. But by considering the special case B = In, we see that det(E) = 1, −1 or λ, respectively, and so det(EB) = det(E) det(B) in all three cases.

89
Q

When does a matrix have a determinant of 0?

A

For an n × n matrix A, det(A) = 0 if and only if A is singular.

Proof. A can be reduced to Smith normal form by elementary row and column operations. By Theorem 5.1.5, none of these operations affects the rank of A, so they do not affect whether A is singular or non-singular. By Theorem 6.2.1, they do not affect whether det(A) = 0 or det(A) ≠ 0. So we can assume that A is in Smith normal form.
Then rank(A) is the number of non-zero rows of A and, by Proposition 6.1.4, det(A) = a11a22 · · · ann. Thus det(A) = 0 if and only if ann = 0, if and only if rank(A) < n.

90
Q

What is the determinant of a product of matrices?

A

For any two n × n matrices A and B, we have det(AB) = det(A) det(B).

Proof. Suppose first that det(A) = 0. Then we have rank(A) < n by Theorem 7.1.3. Let T1, T2 : V → V be linear maps corresponding to A and B, where dim(V) = n. Then AB
corresponds to T1T2 (by Theorem 3.4.2). By Corollary 2.2.8, rank(A) = rank(T1) < n implies that T1 is not surjective. But then T1T2 cannot be surjective, so rank(T1T2) = rank(AB) < n.
Hence det(AB) = 0 so det(AB) = det(A) det(B).
On the other hand, if det(A) ≠ 0, then A is non-singular, and hence invertible, so by Theorem 5.3.4, A is a product E1E2 · · · Er of elementary matrices Ei. The result now follows from the above lemma:
det(AB) = det(E1) det(E2 · · · ErB) = det(E1) det(E2) det(E3 · · · ErB) = · · · = det(E1) det(E2) · · · det(Er) det(B) = det(E1E2 · · · Er) det(B) = det(A) det(B).

91
Q

What is the determinant of an inverse matrix?

A

If A ∈ F^n,n is invertible, then det(A^−1) = det(A)^−1

Proof. AA^−1 = In implies that 1 = det(In) = det(AA^−1) = det(A) det(A^−1).

92
Q

What is the determinant of an orthogonal matrix?

A

If A ∈ R^n,n is orthogonal, then det(A) = ±1.

Proof. Since det(A) = det(A^T) and A^T = A^−1, det(A) = det(A^−1) = det(A)^−1. Hence,
det(A)^2 = 1 and det(A) = ±1.

93
Q

What is the Trace of a matrix?

A

The trace of the matrix A = (aij) is the sum of its diagonal entries:
Tr(A) = ∑i aii.

94
Q

What are eigenvectors and eigenvalues?

A

Let T : V → V be a linear operator, where V is a vector space over F. Suppose that for some non-zero vector v ∈ V and some scalar λ ∈ F, we have T(v) = λv. Then v is called an eigenvector of T, and λ is called the eigenvalue of T corresponding to v.

Note that the zero vector is not an eigenvector. (Allowing it would not be a good idea, because T(0) = λ0 for every λ.) However, the zero scalar 0F may sometimes be an eigenvalue.

Also, let A be an n × n matrix over F. Suppose that, for some non-zero column vector x ∈ F^n and some scalar λ ∈ F, we have Ax = λx. Then x is called an eigenvector of A, and λ is called the eigenvalue of A corresponding to x.

95
Q

How do we find the eigenvalues of a matrix?

A

Let A be an n × n matrix. Then λ is an eigenvalue of A if and only if λ is a root of the characteristic polynomial det(A − xIn).

Proof. Suppose that λ is an eigenvalue of A. Then Ax = λx for some non-zero x ∈ F^n. This is equivalent to Ax = λInx, or (A − λIn)x = 0. But this says exactly that x is in the kernel of L_{A−λIn}. By Corollary 2.2.8, the matrix A − λIn is singular. By Theorem 7.1.3,
0 = det(A − λIn) = det(A − xIn)|x=λ.
Conversely, if λ is a root of det(A − xIn), then det(A − λIn) = 0 and A − λIn is singular. By Corollary 2.2.8, there exists a non-zero x ∈ F^n with (A − λIn)x = 0, which is equivalent to Ax = λInx = λx, and so λ is an eigenvalue of A.
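
In numpy this can be checked directly: np.poly(A) returns the coefficients of the characteristic polynomial det(xIn − A) (same roots, up to an overall sign), and its roots agree with the eigenvalues (example values are mine):

import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])
coeffs = np.poly(A)            # [1., -4., 3.], i.e. x^2 - 4x + 3
print(np.roots(coeffs))        # [3. 1.]: roots of the characteristic polynomial
print(np.linalg.eigvals(A))    # [3. 1.]: the eigenvalues (same, up to ordering)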

96
Q

What is the theorem for diagonal matrices?

A

Let T : V → V be a linear map. Then the matrix of T is diagonal with respect to some basis of V if and only if V has a basis consisting of eigenvectors of T.
Equivalently, let A be an n × n matrix over F. Then A is similar to a diagonal matrix if and only if the space F^n has a basis of eigenvectors of A.

97
Q

What are the eigenvalues for a triangular matrix?

A

Suppose that the matrix A is upper (or lower) triangular. Then the eigenvalues of A are just the diagonal entries aii of A.

Proof. In this case, A − xIn is also upper triangular. By Corollary 6.1.4,
det(A − xIn) = ∏i=1,n (aii − x),
so the eigenvalues are the aii.

98
Q

Are eigenvectors linearly independent?

A

Let λ1, . . . , λr be distinct eigenvalues of T : V → V, and let v1, . . . , vr be corresponding eigenvectors. (So T(vi) = λivi
for 1 ≤ i ≤ r.) Then v1, . . . , vr are linearly independent.

99
Q

If a linear map has the same number of eigenvalues as its dimension, how is it diagonalisable?

A

If the linear map T : V → V (or equivalently the n × n matrix A) has n distinct eigenvalues, where n = dim(V), then T (or A) is diagonalisable.

100
Q

What is the square of a 2 x 2 matrix related to?

A

If A ∈ F^2,2, then A^2 = Tr(A)A − det(A)I2.

For the proof, just compute the right-hand side directly.
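
A one-line numeric check of this identity (the 2 × 2 case of the Cayley–Hamilton relation) in numpy, with an example matrix of my own:

import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
rhs = np.trace(A) * A - np.linalg.det(A) * np.eye(2)
print(np.allclose(A @ A, rhs))   # True: A^2 = Tr(A) A - det(A) I_2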

101
Q

What is the adjoint linear map?

A

Let T : V → V be a linear operator on a euclidean space (V, τ). There exists a unique linear operator S such that for all v, w ∈ V,
τ(T(v), w) = τ(v, S(w)).
This unique linear operator S is called the adjoint of T, and we write it as T∗. We say T is self-adjoint if T∗ = T.

102
Q

If we have an orthonormal basis in V and A the matrix of T, what is the matrix of T*?

A

Fix an orthonormal basis f1, . . . ,fn of V. Let A be the matrix of a linear operator T in this basis. Then A^T is the matrix of the adjoint operator T∗ in the same basis.

103
Q

What are the eigenvalues of a symmetric matrix?

A

Let A be an n × n real symmetric matrix. Then A has an eigenvalue in R, and all complex eigenvalues of A lie in R.

104
Q

What is the Spectral Theorem?

A

Let V be a euclidean space of dimension n. If T : V → V is a self adjoint linear operator, there is an orthonormal basis f1, . . . ,fn of V consisting of eigenvectors of T.
An n × n real symmetric matrix is orthogonally similar to a diagonal matrix.

105
Q

How are eigenvectors orthogonal?

A

Let A be a real symmetric matrix, and let λ1, λ2 be two distinct eigenvalues of A, with corresponding eigenvectors v1, v2. Then v1 · v2 = 0.

106
Q

What is the main SVD theorem?

A

Suppose T : (V, τV) → (W, τW) is a linear map of rank n between euclidean spaces. Then there exist unique positive numbers γ1 ≥ γ2 ≥ . . . ≥ γn > 0, called the singular values of T, and orthonormal bases of V and W such that the matrix of T with respect to these bases is

(D 0)
(0 0)

where D = diag(γ1, . . . , γn) is the diagonal matrix with diagonal entries γ1, . . . , γn.
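
numpy computes exactly this kind of decomposition for a matrix; a small check (my own example) that A = U D V^T with orthonormal bases and decreasing singular values:

import numpy as np

A = np.array([[3., 0.],
              [4., 5.]])
U, sv, Vt = np.linalg.svd(A)   # orthonormal bases and singular values
print(sv)                      # decreasing: gamma_1 >= gamma_2 > 0
print(np.allclose(U @ np.diag(sv) @ Vt, A))   # True: A = U D V^T
print(np.allclose(U.T @ U, np.eye(2)),
      np.allclose(Vt @ Vt.T, np.eye(2)))      # True True: bases are orthonormal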