Linear Algebra Flashcards
Define a field
A set F with two binary operations + and × is a field if both (F, +, 0) and
(F \ {0}, ×, 1) are abelian groups and the distributive law holds:
(a + b)c = ac + bc, for all a, b, c ∈ F.
Define the characteristic of F
The smallest positive integer p such that
1 + 1 + · · · + 1 (p times) = 0
is called the characteristic of F. If no such p exists, the characteristic of F is defined to be zero.
If such a p exists, it is necessarily prime.
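Added example (my own, not from the notes): in F₂ = {0, 1} we have 1 + 1 = 0, so char(F₂) = 2; more generally char(Fₚ) = p for the field with p elements, while char(Q) = char(R) = char(C) = 0.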
Define a vector space V over a field F in terms of groups
A vector space V over a field F is an abelian group (V, +, 0) together with a scalar multiplication F × V → V such that for all a, b ∈ F, v, w ∈ V :
(1) a(v + w) = av + aw
(2) (a + b)v = av + bv
(3) (ab)v = a(bv)
(4) 1·v = v
Let V be a vector space over F
Define a set S ⊆ V being linearly independent
A set S ⊆ V is linearly independent if whenever a₁, · · · , aₙ ∈ F and s₁, · · · , sₙ ∈ S are distinct,
a₁s₁ + · · · + aₙsₙ = 0 ⇒ a₁ = · · · = aₙ = 0.
Let V be a vector space over F
Define what it means for a set S ⊆ V to be spanning
A set S ⊆ V is spanning if for all v ∈ V there exist a₁, · · · , aₙ ∈ F and s₁, · · · , sₙ ∈ S with
v = a₁s₁ + · · · + aₙsₙ.
Let V be a vector space over F
Define what it means for a set S ⊆ V to be a basis of V
A set B ⊆ V is a basis of V if B is spanning and linearly independent. The size of B is the
dimension of V.
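Added example (my own, not from the notes): in F² the set {(1, 0), (0, 1)} is linearly independent and spanning, hence a basis, so dim F² = 2; the set {(1, 0), (2, 0)} is neither, since 2·(1, 0) − (2, 0) = 0 and (0, 1) is not in its span.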
Define a linear map/transformation
Suppose V and W are vector spaces over F. A map T : V → W is a linear
transformation (or just linear map) if for all a ∈ F and v, v′ ∈ V,
T(av + v′) = aT(v) + T(v′)
What is a bijective linear map called?
an isomorphism of vector spaces.
What is the assignment T ↦ ᵦ’[T]ᵦ, for B a basis of V (of size n) and B’ a basis of W (of size m)?
an isomorphism of vector spaces from Hom(V, W)
to the space of (m×n)-matrices over F. It takes composition of maps to multiplication of matrices.
In particular, if T : V → V and B and B′ are two different bases of V with ᵦ’[Id]ᵦ the change of basis matrix, then:
ᵦ’[T]ᵦ’ = ???
ᵦ’[T]ᵦ’ = ᵦ’[Id]ᵦ ᵦ[T]ᵦ ᵦ[Id]ᵦ’ with ᵦ’[Id]ᵦ ᵦ[Id]ᵦ’ = I the identity matrix
Define a ring
A non-empty set R with two binary operations + and × is a ring if (R, +, 0) is an
abelian group, the multiplication × is associative and the distributive laws hold: for all a, b, c ∈ R,
(a + b)c = ac + bc and a(b + c) = ab + ac.
Define a commutative ring
The ring R is called commutative if for all a, b ∈ R we have ab = ba.
Define a ring homomorphism
A map φ : R → S between two rings is a ring homomorphism if for all
r, r′ ∈ R:
φ(r + r’) = φ(r) + φ(r’) and φ(rr’) = φ(r)φ(r’).
Define a ring isomorphism
A bijective ring homomorphism is called a ring isomorphism.
Define an ideal
A non-empty subset I of a ring R is an ideal if for all s, t ∈ I and r ∈ R we have s − t ∈ I and sr, rs ∈ I.
What is the first isomorphism theorem? (rings)
The kernel Ker(φ) := φ⁻¹(0) of a ring homomorphism φ : R → S is an ideal, its image Im(φ) is a subring of S, and φ induces an isomorphism of rings R/Ker(φ) ≅ Im(φ)
Prove the first isomorphism theorem (rings)
Exercise
What is the “division algorithm” for polynomials?
Let f(x), g(x) ∈ F[x] be two polynomials with g(x) ≠ 0. Then there exist q(x), r(x) ∈ F[x] such that f(x) = q(x)g(x) + r(x) and deg r(x) < deg g(x) (with the convention deg 0 = −∞).
Prove the “division algorithm” for polynomials
If deg f(x) < deg g(x), put q(x) = 0, r(x) = f(x). Assume now that deg f(x) ≥ deg g(x)
and let
f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₀
g(x) = bₖxᵏ + bₖ₋₁xᵏ⁻¹ + … + b₀
Then
deg( f(x) − (aₙ/bₖ)xⁿ⁻ᵏ g(x) ) < n.
By induction on deg f − deg g, there exist s(x), t(x) such that
f(x) − (aₙ/bₖ)xⁿ⁻ᵏ g(x) = s(x)g(x) + t(x) and deg t(x) < deg g(x).
Hence put q(x) = (aₙ/bₖ)xⁿ⁻ᵏ + s(x) and r(x) = t(x)
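Added worked example (my own, not from the notes): dividing f(x) = x³ + x + 1 by g(x) = x² + 1 in Q[x] gives f(x) = x·g(x) + 1, so q(x) = x and r(x) = 1, with deg r = 0 < 2 = deg g.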
For all f(x) ∈ F[x] and a ∈ F,
f(a) = 0 ⇒ ???
For all f(x) ∈ F[x] and a ∈ F,
f(a) = 0 ⇒ (x − a)|f(x).
For all f(x) ∈ F[x] and a ∈ F,
f(a) = 0 ⇒ (x − a)|f(x).
Prove it
By the division algorithm for polynomials there exist q(x), r(x) such that
f(x) = q(x)(x − a) + r(x),
where r(x) = r is constant (as deg r(x) < 1). Evaluating at a gives
0 = f(a) = q(a)(a − a) + r = r,
so r = 0 and (x − a) | f(x).
Assume f ≠ 0. If deg f ≤ n then f has [ ] roots
Assume f ≠ 0. If deg f ≤ n then f has at most n roots.
Assume f ≠ 0. If deg f ≤ n then f has at most n roots.
Prove it
Follows by induction on n from the previous fact (f(a) = 0 ⇒ (x − a) | f(x)): if a is a root of f then f(x) = (x − a)g(x) with deg g ≤ n − 1, and since F has no zero divisors every root of f other than a is a root of g, so f has at most 1 + (n − 1) = n roots.
Let a(x), b(x) ∈ F[x] be two polynomials. Let c(x) be a monic polynomial of highest degree dividing
both a(x) and b(x), and write c = gcd(a, b) (also written, less commonly, hcf(a, b)).
Let a, b ∈ F[x] be non-zero polynomials and let gcd(a, b) = c. Then there exist
s, t ∈ F[x] such that:
a(x)s(x) + b(x)t(x) =
Let a, b ∈ F[x] be non-zero polynomials and let gcd(a, b) = c. Then there exist
s, t ∈ F[x] such that:
a(x)s(x) + b(x)t(x) = c(x)
Let a(x), b(x) ∈ F[x] be two polynomials. Let c(x) be a monic polynomial of highest degree dividing
both a(x) and b(x), and write c = gcd(a, b) (also written, less commonly, hcf(a, b)).
Let a, b ∈ F[x] be non-zero polynomials and let gcd(a, b) = c. Then there exist
s, t ∈ F[x] such that:
a(x)s(x) + b(x)t(x) = c(x)
Prove it
If c ≠ 1, divide a and b by c. We may thus assume deg(a) ≥ deg(b) and gcd(a, b) = 1, and
will proceed by induction on deg(a) + deg(b).
By the Division Algorithm there exist q, r ∈ F[x] such that
a = qb + r with deg(b) > deg(r).
Then deg(a) + deg(b) > deg(b) + deg(r) and gcd(b, r) = 1.
If r = 0 then b(x) = λ is constant since gcd(a, b) = 1. Hence
a(x) + b(x)(1/λ)(1 − a(x)) = 1.
Assume r ≠ 0. Then by the induction hypothesis, there exist s′, t′ ∈ F[x] such that
bs′ + rt′ = 1.
Hence,
bs′ + (a − qb)t′ = 1, i.e. at′ + b(s′ − qt′) = 1.
So we may put s = t′ and t = s′ − qt′.
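Added worked example (my own, not from the notes): for a(x) = x² − 1 and b(x) = x² − x in Q[x], gcd(a, b) = x − 1, and indeed a(x)·1 + b(x)·(−1) = x − 1, so we may take s = 1 and t = −1.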
Let A ∈ Mn(F) and f(x) = aₖxᵏ + · · · + a₀ ∈ F[x]. Then
f(A) := [ ]
aₖAᵏ + · · · + a₀I ∈ Mn(F).
Let A ∈ Mn(F) and f(x) = aₖxᵏ + · · · + a₀ ∈ F[x]. Then
f(A) := aₖAᵏ + · · · + a₀I ∈ Mn(F).
Since AᵖAʳ = AʳAᵖ and λA = Aλ for p, r ≥ 0 and λ ∈ F, it follows that for all f(x), g(x) ∈ F[x] we have
f(A)g(A) =
Av = λv ⇒
f(A)g(A) = g(A)f(A); Av = λv ⇒ f(A)v = f(λ)v
For all A ∈ Mn(F), there exists a non-zero polynomial f(x) ∈ F[x] such that f(A) = ?
For all A ∈ Mn(F), there exists a non-zero polynomial f(x) ∈ F[x] such that f(A) = 0
For all A ∈ Mn(F), there exists a non-zero polynomial f(x) ∈ F[x] such that
f(A) = 0
Prove it
Note that dim Mₙ(F) = n² is finite. Hence {I, A, A², · · · , Aᵏ}, as a subset of Mₙ(F), is linearly dependent for k ≥ n² (it has k + 1 > n² elements). So there exist scalars aᵢ ∈ F, not all zero, such that aₖAᵏ + · · · + a₀I = 0, and f(x) = aₖxᵏ + · · · + a₀ is a non-zero annihilating polynomial.
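Added example (my own, not from the notes): for the 2 × 2 matrix A with rows (0, 1) and (0, 0), A² = 0, so f(x) = x² is a non-zero annihilating polynomial (of degree 2 ≤ n² = 4).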
For any (n × n)-matrix A,
the assignment f(x) ↦ f(A) defines a ring homomorphism
EA : F[x] → ???
EA : F[x] → Mₙ(F)
For any (n × n)-matrix A, the assignment f(x) ↦ f(A) defines a ring homomorphism EA : F[x] → Mₙ(F)
“For all A ∈ Mn(F), there exists a non-zero polynomial f(x) ∈ F[x] such that f(A) = 0” tells us what about the kernel?
The kernel is non-zero
For any (n × n)-matrix A, the assignment f(x) ↦ f(A) defines a ring homomorphism EA : F[x] → Mₙ(F)
As F[x] is commutative, so is the [ ]
As F[x] is commutative, so is the image of EA; that is, f(A)g(A) = g(A)f(A) for all polynomials f and g.
What is the minimal polynomial of A?
The minimal polynomial of A, denoted by mA(x), is the monic polynomial
p(x) of least degree such that p(A) = 0.
Thm: If f(A) = 0 then mA divides ... Furthermore mA is [ ] (hence showing that mA is well-defined)
If f(A) = 0 then mA | f. Furthermore mA is unique (hence showing that mA is well-defined).
If f(A) = 0 then mA | f. Furthermore mA is unique (hence showing that mA is well-defined).
Prove it
By the division algorithm, there exist polynomials q, r with deg r < deg mA such that
f = q·mA + r.
Evaluating both sides at A gives r(A) = 0. By the minimality property of mA,
r = 0 and mA divides f.
To show uniqueness, let m be another monic polynomial of minimal degree with m(A) = 0. Then by the above mA | m. Also m and mA must have the same degree, and so
m = a·mA for some a ∈ F. Since both polynomials are monic it follows that a = 1 and m = mA.
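Added example (my own, not from the notes): for A = I (the n × n identity), mA(x) = x − 1; for the 2 × 2 matrix A with rows (0, 1) and (0, 0), mA(x) = x², since A ≠ 0 but A² = 0.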
Define the characteristic polynomial of A
The characteristic polynomial of A is defined as
χA(x) = det(A − xI).
χA(x) = (-1)ⁿxⁿ + …..????
χA(x) = (-1)ⁿxⁿ + (-1)ⁿ⁻¹tr(A)xⁿ⁻¹ + … + det(A)
Proof see lin alg 2 (prelims)
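Added example (my own, not from the notes): for the 2 × 2 matrix A with rows (1, 2) and (3, 4), χA(x) = (1 − x)(4 − x) − 6 = x² − 5x − 2, which matches (−1)²x² + (−1)tr(A)x + det(A) with tr(A) = 5 and det(A) = −2.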
λ is an eigenvalue of A
⇔ ?? (χA(x))
⇔ ??? (mA(x))
λ is an eigenvalue of A
⇔ λ is a root of χA(x)
⇔ λ is a root of mA(x)
λ is an eigenvalue of A
⇔ λ is a root of χA(x)
⇔ λ is a root of mA(x)
Prove it
χA(λ) = 0 ⇔ det(A − λI) = 0 ⇔ A − λI is singular ⇔ ∃ v ≠ 0 : (A − λI)v = 0 ⇔ ∃ v ≠ 0 : Av = λv ⇒ mA(λ)v = mA(A)v = 0 ⇒ mA(λ) = 0 (as v ≠ 0).
Conversely, assume λ is a root of mA. Then mA(x) = g(x)(x − λ) for some polynomial g. By minimality of mA, we have g(A) ≠ 0. Hence there exists w ∈ Fⁿ such that g(A)w ≠ 0. Put
v = g(A)w; then
(A − λI)v = mA(A)w = 0,
and v is a λ-eigenvector for A.
Let C, P, A be (n × n)-matrices such that C = P⁻¹AP. Then mC(x) = mA(x), since:
f(C) = f(P⁻¹AP) = P⁻¹f(A)P
for all polynomials f. Thus
0 = mC(C) = [ ] and so mC(A) = 0, and mA | mC. Likewise mC | mA and therefore mA = [ ] as both are monic.
Let C, P, A be (n × n)-matrices such that C = P⁻¹AP. Then mC(x) = mA(x), since:
f(C) = f(P⁻¹AP) = P⁻¹f(A)P
for all polynomials f. Thus
0 = mC(C) = [P⁻¹mC(A)P] and so mC(A) = 0, and mA | mC. Likewise mC | mA and therefore mA = [mC] as both are monic.
Let V be a finite dimensional vector space and T : V → V a linear transformation. Define the minimal polynomial
Define the minimal polynomial of T as
mT(x) = mA(x)
where A = ᵦ[T]ᵦ with respect to some basis B of V. As mA(x) = mP⁻¹AP(x), the definition of
mT(x) is independent of the choice of basis.
For a linear transformation T : V → V define its characteristic polynomial
define its characteristic polynomial
as χT(x) = χA(x)
where A = ᵦ[T]ᵦ with respect to some basis B of V. As χA(x) = χP⁻¹AP(x), the definition of χT(x) is independent of the choice of basis.
What does it mean for a field to be algebraically closed?
A field F is algebraically closed if every non-constant polynomial in F[x] has a root in F.
What is the fundamental theorem of algebra
The field of complex numbers C is algebraically closed.
What is an algebraic closure of F?
An algebraically closed field F¯ containing F with the property that there does not
exist a smaller algebraically closed field L with
F¯ ⊋ L ⊇ F
is called an algebraic closure of F.
Every field has an algebraic [ ]
Every field F has an algebraic closure F¯.
Let V be a vector space over a field F and let U be a subspace.
What is the quotient space?
The set of cosets V/U = {v + U | v ∈ V} with the operations (v + U) + (w + U) := (v + w) + U and a(v + U) := av + U, for v, w ∈ V and a ∈ F, is a vector space, called the quotient space.
The set of cosets V/U = {v + U | v ∈ V} with the operations (v + U) + (w + U) := (v + w) + U and a(v + U) := av + U, for v, w ∈ V and a ∈ F, is a vector space, called the quotient space. Prove it
We need to check that the operations are well-defined. Assume v + U = v′ + U and w + U = w′ + U. Then v = v′ + u and w = w′ + ũ for some u, ũ ∈ U. Hence
(v + U) + (w + U) = (v + w) + U = (v′ + u + w′ + ũ) + U = (v′ + w′) + U (as u + ũ ∈ U) = (v′ + U) + (w′ + U).
Similarly,
a(v + U) = av + U = (av′ + au) + U = av′ + U (as au ∈ U) = a(v′ + U).
That these operations satisfy the vector space axioms follows immediately from the fact that the operations in V satisfy them.
Let E be a basis of U, and extend E to a basis B of V (we assume this is possible, which we certainly know to be the case at least for V finite dimensional). Define B¯ := {e + U | e ∈ B \ E} ⊆ V/U. What is the set B¯ a basis of?
V/U
Let E be a basis of U, and extend E to a basis B of V (we assume this is possible, which we certainly know to be the case at least for V finite dimensional). Define B¯ := {e + U | e ∈ B \ E} ⊆ V/U. The set B¯ is a basis of V/U. Prove it
pg 12
Let U ⊂ V be vector spaces, with E a basis for U, and F ⊂ V a set of vectors
such that
{v + U : v ∈ F} is a basis for the quotient V /U. Then the union E ∪ F is a basis for ??
V
Let U ⊂ V be vector spaces, with E a basis for U, and F ⊂ V a set of vectors
such that
{v + U : v ∈ F} is a basis for the quotient V /U. Then the union E ∪ F is a basis for V
Prove it
Exercise
If V is finite dimensional then
dim(V ) = dim(U) + [ ]
dim(V ) = dim(U) + dim(V /U).
Let T : V → W be a linear map of vector spaces
over F. Then….
What is the first isomorphism theorem?
Then
T¯ : V /Ker(T) → Im(T)
v + Ker(T) ↦ T(v)
is an isomorphism of vector spaces.
Let T : V → W be a linear map of vector spaces over F. Then T¯ : V/Ker(T) → Im(T), v + Ker(T) ↦ T(v), is an isomorphism of vector spaces.
Prove it
Proof. It follows from the first isomorphism theorem for groups that T¯ is an isomorphism of
(abelian) groups. T¯ is also compatible with scalar multiplication. Thus T¯ is a linear isomorphism.
If T : V → W is a linear transformation and V is finite dimensional, then
(Rank-nullity theorem)
dim(V ) = dim(Ker(T)) + dim(Im(T)).
If T : V → W is a linear transformation and V is finite dimensional, then dim(V ) = dim(Ker(T)) + dim(Im(T)).
prove it
Use dim(V ) = dim(U) + dim(V /U), with U = ker(T). Then
dim(V ) = dim(Ker(T)) + dim(V /Ker(T)).
By the First Isomorphism Theorem also:
dim(V /Ker(T)) = dim(Im(T)).
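Added example (my own, not from the notes): for T : F³ → F², T(x, y, z) = (x, y), we have Ker(T) = {(0, 0, z)} of dimension 1 and Im(T) = F² of dimension 2, so dim(V) = 3 = 1 + 2.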
Let T : V → W be a linear map and let A ⊆ V, B ⊆ W be subspaces
The formula T¯(v + A) := T(v) + B gives a well-defined linear map of quotients
T¯ : V /A → W/B if and only if [ ]
T(A) ⊆ B.
Let T : V → W be a linear map and let A ⊆ V, B ⊆ W be subspaces
The formula T¯(v + A) := T(v) + B gives a well-defined linear map of quotients
T¯ : V /A → W/B if and only if T(A) ⊆ B.
Prove it
Assume T(A) ⊆ B. Now T¯ will be linear if it is well-defined. Assume v + A = v′ + A. Then
v = v′ + a for some a ∈ A. So
T¯(v + A) = T(v) + B by definition
= T(v′ + a) + B
= T(v′) + T(a) + B as T is linear
= T(v’) + B as T(A) ⊆ B
= T¯(v’ + A).
Hence T¯ is well-defined. Conversely, assume that T¯ is well-defined and let a ∈ A. Then
B = 0_{W/B} = T¯(0_{V/A}) = T¯(A) = T¯(a + A) = T(a) + B.
Thus T(a) ∈ B, and so T(A) ⊆ B.
Assume now that V and W are finite dimensional. Let B = {e₁, · · · , eₙ} be a basis for V with E = {e₁, · · · , eₖ} a basis for a subspace A ⊆ V (so k ≤ n). Let B′ = {e′₁, · · · , e′ₘ} be a basis for W with E′ = {e′₁, · · · , e′ℓ} a basis for a subspace B ⊆ W. The induced bases for V /A and W/B are given by
B¯ =
B¯’ =
B¯ = {eₖ₊₁ + A, · · · , eₙ + A} and B¯′ = {e′ₗ₊₁ + B, · · · , e′ₘ + B}
Let T : V → W be a linear map such that T(A) ⊆ B. Then T induces a map T¯ on quotients by Lemma 3.7 and restricts to a linear map
T|A : A → B with T|A(v) = [ ]
T(v) for v ∈ A.
What is the block matrix decomposition for ᵦ’[T]ᵦ?
Top left: ɛ′[T|A]ɛ
Top right: *
Bottom left: 0
Bottom right: ᵦ¯’[T¯]ᵦ¯
where, writing ᵦ’[T]ᵦ = (aᵢⱼ), the block ᵦ¯’[T¯]ᵦ¯ = (aᵢⱼ)
for ℓ+1 ≤ i ≤ m, k+1 ≤ j ≤ n.
Prove the block matrix decomposition for ᵦ’[T]ᵦ
For j ≤ k, T(eⱼ) ∈ B and hence aᵢⱼ = 0 for i > ℓ, and aᵢⱼ is equal to the (i, j)-entry of ɛ′[T|A]ɛ for i ≤ ℓ. To identify the bottom right corner of the matrix, note that for j > k,
T¯(eⱼ + A) = T(eⱼ) + B
= a₁ⱼe′₁ + … + aₘⱼe′ₘ + B
= aₗ₊₁,ⱼ(e′ₗ₊₁ + B) + … + aₘⱼ(e′ₘ + B).
Let T : V → V be a linear transformation.
What does it mean for a subspace to be T-invariant?
A subspace U ⊆ V is called T-invariant if T(U) ⊆ U.
By the result of the previous section, such a T induces a map T¯ : V /U → V /U
Let T : V → V be a linear transformation. Let S : V → V be another linear map. If U is T- and S-invariant, then U is also invariant under the following maps: 1. the zero map 2. [ ] 3. aT, ∀a ∈ F 4. [ ] 5. [ ]
- the zero map
- the identity map
- aT, ∀a ∈ F
- S + T
- S ◦ T
Let T : V → V be a linear transformation.
Let S : V → V be
another linear map.
If U is T- and S-invariant, then U is invariant under any polynomial p(x) evaluated at [ ].
p(T) induces a map of quotients [ ]
evaluated at T
p(T)¯ : V/U → V/U (the whole of p(T) is barred)
Let T : V → V be a linear transformation and assume U ⊆ V is T-invariant.
Then
χT (x) =
Note that this formula does not hold for the minimal polynomial
χT(x) = χT|U(x) × χT¯(x), where T¯ : V/U → V/U is the induced map.
Note that this formula does not hold for the minimal polynomial
Let T : V → V be a linear transformation and assume U ⊆ V is T-invariant.
Then
χT(x) = χT|U(x) × χT¯(x)
Prove it
Middle pg 16
Extend a basis E for U to a basis B of V
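Added example (my own, not from the notes): for T : F² → F² with matrix rows (1, 1) and (0, 2), the subspace U = span{e₁} is T-invariant with χT|U(x) = 1 − x, the induced map T¯ on F²/U is multiplication by 2, so χT¯(x) = 2 − x, and indeed χT(x) = (1 − x)(2 − x).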
Let V be a finite-dimensional vector space, and let T : V → V be a linear map such that its characteristic polynomial is a product of linear factors. Then, there exists a basis B
of V such that ᵦ[T]ᵦ is [ ]
Upper triangular
Let V be a finite-dimensional vector space, and let T : V → V be a linear map such that its characteristic polynomial is a product of linear factors. Then, there exists a basis B
of V such that ᵦ[T]ᵦ is upper triangular.
Prove it
By induction on the dimension of V
End pg 16
If A is an n×n matrix with a characteristic polynomial that is a product of linear
factors, then there exists an (n × n)-matrix P such that P⁻¹AP is [ ]
upper triangular.
Let A be an upper triangular (n × n)-matrix with diagonal entries λ₁, . . . , λₙ.
Then
∏ᵢ₌₁ⁿ (A − λᵢI) = ??
∏ᵢ₌₁ⁿ (A − λᵢI) = 0
Let A be an upper triangular (n × n)-matrix with diagonal entries λ₁, . . . , λₙ.
Then
∏ᵢ₌₁ⁿ (A − λᵢI) = 0
Prove it
Let e₁, . . . , eₙ be the standard basis vectors for Fⁿ. Since A is upper triangular, (A − λₙI)v ∈ span(e₁, . . . , eₙ₋₁) for all v ∈ Fⁿ, and more generally (A − λᵢI)w ∈ span(e₁, . . . , eᵢ₋₁) for all w ∈ span(e₁, . . . , eᵢ). Hence, since Im(A − λₙI) ⊆ span(e₁, . . . , eₙ₋₁), Im((A − λₙ₋₁I)(A − λₙI)) ⊆ span(e₁, . . . , eₙ₋₂), and so on, we have that ∏ᵢ₌₁ⁿ (A − λᵢI) = 0 as required.
What is the Cayley-Hamilton theorem?
If T : V → V is a linear transformation and V is a finite dimensional vector space, then χT (T) = 0. Hence, in particular, mT (x) | χT (x).
Prove the Cayley-Hamilton theorem
Bottom pg 19
V is a vector space
What does it mean for V to be the direct sum of subspaces W1, …, Wr
V = W1 ⊕ … ⊕ Wr
if every vector v ∈ V can be written uniquely as a sum
v = w₁ + · · · + wᵣ with wᵢ ∈ Wᵢ.
If V is the direct sum of the subspaces W1, … Wr. Describe the basis of V in terms of the bases of Wi/
For each i, let Bi be a basis for Wi. Then
B = ∪ᵢBᵢ
is a basis for V.
Suppose V is the direct sum of the subspaces W₁, …, Wᵣ, and assume from now on that V is finite dimensional. If T : V → V is a linear map such that each Wᵢ is T-invariant, then the matrix of T with respect to the basis B is block diagonal.
(What does ᵦ[T]ᵦ look like?)
What is χT(x) in terms of the characteristic polynomials of the restrictions T|Wᵢ?
ᵦ[T]ᵦ is block diagonal, with i-th diagonal block the matrix of T|Wᵢ with respect to Bᵢ, and χT(x) = χT|W₁(x) × · · · × χT|Wᵣ(x) (pg 21 top).
Assume f(x) = a(x)b(x) with gcd(a, b) = 1 and f(T) = 0. Then V = Ker(a(T)) ⊕ [ ] is a T-invariant direct sum decomposition
V = Ker(a(T)) ⊕ Ker(b(T))
Assume f(x) = a(x)b(x) with gcd(a, b) = 1 and f(T) = 0. Then
V = Ker(a(T)) ⊕ Ker(b(T))
is a T-invariant direct sum decomposition. Furthermore, if f = mT is the minimal polynomial of T and a and b are monic, then
mT|Ker(a(T))(x) =
mT|Ker(b(T))(x) =
mT|Ker(a(T))(x) = a(x)
mT|Ker(b(T))(x) = b(x)
Assume f(x) = a(x)b(x) with gcd(a, b) = 1 and f(T) = 0. Then
V = Ker(a(T)) ⊕ Ker(b(T))
is a T-invariant direct sum decomposition. Furthermore, if f = mT is the minimal polynomial of T and a and b are monic, then
mT|Ker(a(T))(x) = a(x)
mT|Ker(b(T))(x) = b(x)
Prove it
pg 21
What is the Primary Decomposition Theorem?
Let mT be the minimal polynomial and write
it in the form
mT(x) = f₁(x)ᵃ¹ · · · fᵣ(x)ᵃʳ, where the fᵢ are distinct monic irreducible polynomials. Put Wᵢ := Ker(fᵢ(T)ᵃⁱ). Then
1) V = W₁ ⊕ · · · ⊕ Wᵣ
2) Wᵢ is T-invariant
3) mT|Wᵢ = fᵢᵃⁱ
Prove the primary decomposition theorem
pg 22
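Added example (my own, not from the notes): let A be the 3 × 3 block diagonal matrix with blocks (2) and the 2 × 2 block with rows (3, 1), (0, 3). Then mA(x) = (x − 2)(x − 3)², W₁ = Ker(A − 2I) = span{e₁}, W₂ = Ker((A − 3I)²) = span{e₂, e₃}, F³ = W₁ ⊕ W₂, and the restrictions to W₁ and W₂ have minimal polynomials (x − 2) and (x − 3)².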
There exist unique distinct monic irreducible polynomials f₁, …, fᵣ ∈ F[x]
and integers nᵢ ≥ aᵢ > 0 (1 ≤ i ≤ r) such that
mT(x) =
and
χT(x) =
mT(x) = f₁(x)ᵃ¹ · · · fᵣ(x)ᵃʳ and χT(x) = ± f₁(x)ⁿ¹ · · · fᵣ(x)ⁿʳ
There exist unique distinct monic irreducible polynomials f₁, …, fᵣ ∈ F[x]
and integers nᵢ ≥ aᵢ > 0 (1 ≤ i ≤ r) such that
mT(x) = f₁(x)ᵃ¹ · · · fᵣ(x)ᵃʳ
and
χT(x) = ± f₁(x)ⁿ¹ · · · fᵣ(x)ⁿʳ
Prove it
Mid pg 22
T is triangularisable (over a given field)
⇐⇒ χT [ ]
⇐⇒ [ ]
⇐⇒ mT [ ]
⇐⇒ χT factors as a product of linear polynomials (over that field)
⇐⇒ each fᵢ is linear
⇐⇒ mT factors as a product of linear polynomials
T is diagonalisable ⇐⇒ mT [….]
mT factors as a product of distinct linear polynomials
T is diagonalisable ⇐⇒ mT factors as a product of distinct linear polynomials.
Prove it
Bottom pg 22
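Added example (my own, not from the notes): the 2 × 2 matrix with rows (1, 1), (0, 1) has mA(x) = (x − 1)², so it is triangularisable but not diagonalisable; the diagonal matrix with entries 1, 2 has mA(x) = (x − 1)(x − 2), a product of distinct linear factors, and is diagonalisable.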
Let V be finite dimensional and T : V → V be a linear transformation
What does nilpotent mean?
If Tⁿ = 0 for some n > 0
then T is called nilpotent.
If T is nilpotent, then its minimal polynomial has the form mT (x) = [ ]
If T is nilpotent, then its minimal polynomial has the form mT (x) = xᵐ for some m
If T is nilpotent, then its minimal polynomial has the form mT (x) = xᵐ for some m and there exists a basis B of V such that:
ᵦ[T]ᵦ =
a matrix that is zero everywhere except on the diagonal next to the main diagonal, whose entries are 0s and 1s (pg 24 top)
If T is nilpotent, then its minimal polynomial has the form mT (x) = xᵐ for some m and there exists a basis B of V such that:
ᵦ[T]ᵦ is zero everywhere except on the diagonal next to the main diagonal, whose entries are 0s and 1s (pg 24)
Prove it
Pg 24-26:
Very long
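Added example (my own, not from the notes): T : F³ → F³, T(x, y, z) = (0, x, y), satisfies T³ = 0 but T² ≠ 0, so mT(x) = x³; its matrix with respect to the standard basis is zero except for two 1s on the diagonal next to the main diagonal.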
Let V be finite dimensional and T : V → V be a linear transformation. Assume
mT (x) = (x−λ)ᵐ for some m. Then, there exists a basis B of V such that ᵦ[T]ᵦ is block diagonal with blocks of the form:
Jᵢ(λ) := λIᵢ + Jᵢ, the i × i matrix with λ in each diagonal entry, 1 in each entry of the diagonal next to the main diagonal, and 0 elsewhere,
with 1 ≤ i ≤ m.
Let V be finite dimensional and T : V → V be a linear transformation. Assume
mT (x) = (x−λ)ᵐ for some m. Then, there exists a basis B of V such that ᵦ[T]ᵦ is block diagonal with blocks of the form:
Jᵢ(λ) := λIᵢ + Jᵢ, the i × i matrix with λ in each diagonal entry, 1 in each entry of the diagonal next to the main diagonal, and 0 elsewhere,
with 1 ≤ i ≤ m.
Prove it
pg 27 middle
Let V be finite dimensional and let T : V → V be a linear map with minimal
polynomial
mT(x) = (x − λ₁)ᵐ¹ · · · (x − λᵣ)ᵐʳ. Then there exists a basis B of V such that ᵦ[T]ᵦ is [ ] and each diagonal block is of the form [ ]
block diagonal
Jᵢ(λⱼ) for some 1 ≤ i ≤ mⱼ and 1 ≤ j ≤ r.
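Added example (my own, not from the notes): a 4 × 4 block diagonal matrix with blocks J₂(2), J₁(2), J₁(3) has mT(x) = (x − 2)²(x − 3) and χT(x) = (x − 2)³(x − 3).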
Let V be a vector space over F. What is a dual?
Its dual V’ is the vector space of linear maps
from V to F, i.e. V′ = Hom(V, F).
Let V be a vector space over F. What is a linear functional?
Its dual V’ is the vector space of linear maps
from V to F, i.e. V′ = Hom(V, F). Its elements are called linear functionals.
Let V be finite dimensional and let B = {e₁, . . . , eₙ} be a basis for V. Define the dual e′ᵢ of eᵢ (relative to B) by e′ᵢ(eⱼ) = δᵢⱼ. What is the dual basis? What does the assignment eᵢ ↦ e′ᵢ define? dim V = ?
Then B′ := {e′₁, . . . , e′ₙ} is a basis for V′, the dual basis. The assignment eᵢ ↦ e′ᵢ (extended linearly) defines an isomorphism of vector spaces V ≅ V′; in particular, dim V = dim V′.
Let V be finite dimensional and let B = {e₁, . . . , eₙ} be a basis for V. Define the
dual e′ᵢ of eᵢ (relative to B) by
e′ᵢ(eⱼ) = δᵢⱼ.
Then B′ := {e′₁, . . . , e′ₙ} is a basis for V′, the dual basis. The assignment eᵢ ↦ e′ᵢ (extended linearly) defines an isomorphism of vector spaces V ≅ V′; in particular, dim V = dim V′.
Prove it
Bottom pg 29
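Added example (my own, not from the notes): for V = F² with standard basis e₁, e₂, the dual basis consists of the coordinate functionals e′₁(x, y) = x and e′₂(x, y) = y.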
Let V be a finite dimensional vector space. Then, V → (V′)′ =: V′′ defined by v ↦ Eᵥ is a natural linear [ ]
How is Eᵥ defined?
What does natural mean?
isomorphism
Ev(f) := f(v) for f ∈ V’
“Natural” here means independent of a choice of basis
Let V be a finite dimensional vector space. Then, V → (V′)′ =: V′′ defined by v ↦ Eᵥ is a natural linear isomorphism.
Ev(f) := f(v) for f ∈ V’
Prove it
pg 30
When V has dimension n, the kernel of a non-zero linear functional f : V → F is of dimension n − 1.
The preimage f⁻¹({c}) for a constant c ∈ F is called a [ ] (not necessarily containing zero) of dimension n − 1.
hyperplane
When V has dimension n, the kernel of a non-zero linear functional f : V → F is of dimension n − 1.
The preimage f⁻¹({c}) for a constant c ∈ F is called a hyperplane (not necessarily containing zero) of dimension n − 1.
When V = Fⁿ (column vectors) every hyperplane
is defined by an equation:
a₁b₁ + · · · + aₙbₙ = c
for a fixed scalar c and a fixed non-zero b = (b₁, . . . , bₙ) ∈ (Fⁿ)ᵗ (row vectors); here a = (a₁, . . . , aₙ) ∈ Fⁿ is the variable.
Let U ⊆ V be a subspace of V
What is an annihilator of U?
Define the annihilator of U to be:
U⁰ = {f ∈ V’: f(u) = 0 for all u ∈ U}.
Annihilators:
For f ∈ V′: f ∈ U⁰ iff [ ]
f|U = 0
Annihilators:
[ ] is a subspace of V′
U⁰
Prove that U⁰ is a subspace of V’
Top pg 31
Let V be finite dimensional and U ⊆ V be a subspace. Then dim(U⁰) = [ ]
dim(V ) − dim(U)
Let V be finite dimensional and U ⊆ V be a subspace. Then dim(U⁰) = dim(V ) − dim(U)
Prove it
mid pg 31
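Added example (my own, not from the notes): for V = F³ and U = span{e₁}, U⁰ = {f ∈ V′ : f(e₁) = 0} = span{e′₂, e′₃}, which has dimension 3 − 1 = 2.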
Let U, W be subspaces of V . Then
1) U ⊆ W ⇒ [ ]
2) (U + W)⁰= [ ]
3) U⁰ + W⁰ ⊆ ([ ])⁰ and equal if [ ]
1) U ⊆ W ⇒ W⁰ ⊆ U⁰
2) (U + W)⁰= U⁰ ∩ W⁰
3) U⁰ + W⁰ ⊆ (U ∩ W)⁰ and equal if dim(V ) is finite.
Let U, W be subspaces of V . Then
1) U ⊆ W ⇒ W⁰ ⊆ U⁰
2) (U + W)⁰= U⁰ ∩ W⁰
3) U⁰ + W⁰ ⊆ (U ∩ W)⁰ and equal if dim(V ) is finite.
Prove it
pg 32
Let U be a subspace of a finite dimensional vector space V . Under the natural
map V → V′′ (:= (V′)′) given by v ↦ Eᵥ, how is U mapped?
It is mapped isomorphically to
U⁰⁰ (:= (U⁰)⁰)
Let U be a subspace of a finite dimensional vector space V . Under the natural
map V → V′′ (:= (V′)′) given by v ↦ Eᵥ, U is mapped isomorphically to
U⁰⁰ (:= (U⁰)⁰)
Prove it
Bottom pg 32
Let U ⊆ V be a subspace. Then there exists a natural isomorphism
U⁰ ≃ (V/U)′ given by f ↦ f¯ where f¯(v + U) :=
f¯(v + U) := f(v) for v ∈ V
Let U ⊆ V be a subspace. Then there exists a natural isomorphism
U⁰ ≃ (V/U)′ given by f ↦ f¯ where f¯(v + U) := f(v) for v ∈ V
Prove it
pg 33
What is a dual map?
Let T : V → W be a linear map of vector spaces. Define the dual map by
T′ : W′ → V′, f ↦ f ◦ T
Note that f ◦ T : V → W → F is linear, and hence f ◦ T ∈ V’
The dual map T’ is a [ ] map
linear
Prove that T’ is a linear map
Let f, g ∈ W′ and λ ∈ F. We need to show T′(f + λg) = T′(f) + λT′(g) (an identity of functionals on V). So let v ∈ V. Then
T′(f + λg)(v) = ((f + λg) ◦ T)(v) = (f + λg)(Tv) = f(Tv) + λg(Tv) = T′(f)(v) + λT′(g)(v) = (T′(f) + λT′(g))(v),
as required.
Let V and W be two finite dimensional vector spaces. The assignment T ↦ T′ defines a natural isomorphism from [ ] to [ ]
Hom(V, W)
Hom(W′, V′)
Let V and W be two finite dimensional vector spaces. The assignment T ↦ T′ defines a natural isomorphism from Hom(V, W) to Hom(W′, V′).
Prove it
pg 34
Let V and W be finite dimensional, and let B𝓌 and Bᵥ be bases for W and V .
Then, for any linear map T : V → W
(ᵦ𝓌[T]ᵦᵥ)ᵗ =
(ᵦ𝓌[T]ᵦᵥ)ᵗ = ᵦ′ᵥ[T′]ᵦ′𝓌
where B′𝓌 and B′ᵥ are the dual bases.
Let V and W be finite dimensional, and let B𝓌 and Bᵥ be bases for W and V .
Then, for any linear map T : V → W
(ᵦ𝓌[T]ᵦᵥ)ᵗ = ᵦ′ᵥ[T′]ᵦ′𝓌
where B’𝓌 and B’ᵥ are the dual bases
Prove it
end pg 34
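Added example (my own, not from the notes): if T : F² → F² has matrix with rows (1, 2), (3, 4) with respect to the standard basis, then with respect to the dual basis the dual map T′ has the transposed matrix, with rows (1, 3), (2, 4).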