Linear Algebra 2 Flashcards
What is a determinantal mapping?
A mapping D : Mn(R) → R is determinantal if it is
(a) multilinear in the columns:
D[· · · , bᵢ + cᵢ, · · · ] = D[· · · , bᵢ, · · · ] + D[· · · , cᵢ, · · · ]
D[· · · , λaᵢ, · · · ] = λD[· · · , aᵢ, · · · ] for λ ∈ R
(b) alternating:
D[· · · , aᵢ, aᵢ₊₁, · · · ] = 0 when aᵢ = aᵢ₊₁
(c) and D(Iₙ) = 1 for Iₙ the n × n identity matrix.
Let D : Mn(R) → R be a determinantal map. Then
(1) D[· · · , aᵢ, aᵢ₊₁, · · · ] = −D[· · · , aᵢ₊₁, aᵢ, · · · ]
(2) D[· · · , aᵢ, · · · , aⱼ, · · · ] = 0 when aᵢ = aⱼ, i ≠ j.
(3) D[· · · , aᵢ, · · · , aⱼ, · · · ] = −D[· · · , aⱼ, · · · , aᵢ, · · · ] when i ≠ j.
Let n ∈ N. What is a permutation? What is Sn?
A permutation σ is a bijective map from the set {1, 2, · · · , n} to itself. The set of all such permutations is denoted Sn
What is a transposition?
An element σ ∈ Sn which switches two elements 1 ≤ i < j ≤ n and fixes the others is called a transposition
For each n ∈ N there exists a unique determinantal function D : Mn(R) → R and
it is given explicitly by the expansion [ ]
We write this unique function as det(·) or sometimes | · |
D[a₁, · · · , aₙ] = Σ_{σ∈Sₙ} sign(σ) a_{σ(1),1} · · · a_{σ(n),n}
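The permutation expansion can be checked directly in code. Below is a minimal Python sketch (the function name `leibniz_det` and the test matrix are illustrative; numpy is used only for the comparison) that sums over all permutations, computing sign(σ) by counting inversions:

```python
from itertools import permutations
import numpy as np

def leibniz_det(A):
    """Compute det(A) directly from the permutation expansion."""
    n = A.shape[0]
    total = 0.0
    for sigma in permutations(range(n)):
        # sign(sigma) = (-1)^(number of inversions)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        sign = -1 if inversions % 2 else 1
        prod = 1.0
        for col in range(n):
            prod *= A[sigma[col], col]   # a_{sigma(1),1} ... a_{sigma(n),n}
        total += sign * prod
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(leibniz_det(A))   # 8.0, agreeing with np.linalg.det(A)
```

This is exponential in n, so it is only a sanity check, not how determinants are computed in practice.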
For σ ∈ Sn, we have sign(σ) = sign(σ⁻¹)
Prove it
Follows since σ ∘ σ⁻¹ is the identity map, which can be written as a product of 0 transpositions, an even number. Hence sign(σ) sign(σ⁻¹) = sign(σ ∘ σ⁻¹) = 1, so sign(σ) = sign(σ⁻¹)
Write det(A) in terms of Aᵀ. Prove it
det(A) = det(Aᵀ)
Proof:
det(Aᵀ) = Σ_{σ∈Sₙ} sign(σ) a_{1,σ(1)} · · · a_{n,σ(n)} = Σ_{σ∈Sₙ} sign(σ) a_{σ⁻¹(1),1} · · · a_{σ⁻¹(n),n} (reordering each product) = Σ_{τ∈Sₙ} sign(τ) a_{τ(1),1} · · · a_{τ(n),n} (substituting τ = σ⁻¹ and using sign(σ) = sign(σ⁻¹)) = det(A)
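As a quick numerical sanity check of det(A) = det(Aᵀ), assuming numpy (the random matrix is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
# det is invariant under transposition
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))   # True
```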
The map det : Mn(R) → R is [ ] and alternating in the rows of a matrix.
multilinear
det(A) = Σ_{σ∈Sₙ} [ ]
Σ_{σ∈Sₙ} sign(σ) a_{1,σ(1)} · · · a_{n,σ(n)}
Let A, B ∈ Mn(R). Then
(i) det(A) ≠ 0 ⇔
(ii) det(AB) =
(i) det(A) ≠ 0 ⇔ A is invertible
(ii) det(AB) = det(A) det(B)
Let A ∈ Mn(R) and let E be an elementary matrix. Then det(EA) = [ ]
det(E) det(A)
What is the determinant of an upper triangular matrix?
The product of its diagonal entries
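A numerical illustration, assuming numpy (the matrix is illustrative):

```python
import numpy as np

# An upper triangular matrix: determinant = product of diagonal entries
U = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 7.0]])
print(np.prod(np.diag(U)))   # 42.0
print(np.linalg.det(U))      # also 42 (up to rounding)
```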
Let V be a vector space of dimension n over R.
Let T : V → V be a linear transformation, B a basis for V , and Mᴮᵦ (T) the matrix for T with respect to initial and final basis B. We define
det(T) := [ ]
det(T) := det(Mᴮᵦ (T))
Let V be a vector space of dimension n over R.
Let T : V → V be a linear transformation, B a basis for V
The determinant of T is independent of [ ]
the choice of basis B
Let S, T : V → V be linear transformations. Then
(i) [ ] ⇔ T is invertible.
(ii) [ ] = det(S) det(T)
(i) det(T) ≠ 0 ⇔ T is invertible.
(ii) det(ST) = det(S) det(T)
Define eigenvector
Let V be a vector space over R and T : V → V be a linear transformation
A vector v ∈ V is called an eigenvector of T if v ≠ 0 and T v = λv for some
λ ∈ R.
Define eigenvalue
We call λ ∈ R an eigenvalue of T if T v = λv for some nonzero v ∈ V
λ is an eigenvalue of T ⇔ [ ]
λ is an eigenvalue of T ⇔ Ker(T − λI) ≠ {0}
λ is an eigenvalue of T ⇔ Ker(T − λI) ≠ {0}
Prove it
λ is an eigenvalue of T ⇔ ∃v ∈ V, v ≠ 0, T v = λv
⇔ ∃v ∈ V, v ≠ 0, (T − λI)v = 0 ⇔ Ker(T − λI) ≠ {0}
The following statements are equivalent:
(a) λ is an eigenvalue of T
(b) Ker(T − λI) ≠ [ ]
(c) T − λI is not [ ]
(d) det(T − λI) = [ ]
(a) λ is an eigenvalue of T
(b) Ker(T − λI) ≠ {0}
(c) T − λI is not invertible
(d) det(T − λI) = 0.
For A ∈ Mn(R). What is the characteristic polynomial of A?
the characteristic polynomial of A is defined as det(A−xIₙ)
For T : V → V a linear transformation, let A be the matrix for T with respect to some basis B
What is the characteristic polynomial of T?
We denote the characteristic polynomial of T by χT (x), and of a matrix A by χA(x)
The characteristic polynomial of T is defined as det(A − xIₙ).
Describe the link between eigenvalues and characteristic polynomials
Let T : V → V be a linear transformation. Then λ is an eigenvalue of T if and
only if λ is a root of the characteristic polynomial χT (x) of T
Let T : V → V be a linear transformation. Then λ is an eigenvalue of T if and
only if λ is a root of the characteristic polynomial χT (x) of T
Prove it
(⇒) Suppose λ is an eigenvalue of T. Then det(T − λI) = 0, and so det(A − λIₙ) = 0 for any matrix A for T. (If A is a matrix for T, then A − λIₙ is the corresponding matrix for T − λI.) So λ is a root of χT(x) = det(A − xIₙ).
(⇐) Suppose λ is a root of χT(x) = det(A − xIₙ) for some (equivalently, every) matrix A for T. Then det(A − λIₙ) = 0, and so det(T − λI) = 0. Thus λ is an eigenvalue of T.
For T : V → V a linear transformation the trace tr(T) is defined to be [ ]
For T : V → V a linear transformation the trace tr(T) is defined to be tr(A)
where A is any matrix for T.
For A ∈ Mn(R),
χA(x) = (−1)ⁿxⁿ + (−1)ⁿ⁻¹tr(A)xⁿ⁻¹ + · · · + [ ]
Similarly
χT (x) =
χA(x) = (−1)ⁿxⁿ + (−1)ⁿ⁻¹tr(A)xⁿ⁻¹ + · · · + det(A)
χT(x) = (−1)ⁿxⁿ + (−1)ⁿ⁻¹tr(T)xⁿ⁻¹ + · · · + det(T)
Let A ∈ Mn(C) have eigenvalues λ1, λ2, · · · , λn ∈ C (not necessarily distinct).
Then tr(A) =
det(A) =
tr(A) = λ1 + λ2 + · · · + λn and det(A) = λ1 · · · λn
Let λ1, · · · , λm (m ≤ n) be the distinct eigenvalues of T and v1, · · · , vm be
corresponding eigenvectors. Then v1, · · · , vm are linearly [ ]
independent
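A numerical illustration, assuming numpy: with distinct eigenvalues, stacking eigenvectors as columns gives a full-rank (hence invertible) matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 5.0]])        # distinct eigenvalues 2 and 5
eigvals, P = np.linalg.eig(A)     # columns of P are eigenvectors
print(np.linalg.matrix_rank(P))   # 2: the eigenvectors span R^2
```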
When is a linear map T : V → V diagonalisable?
if V has a basis consisting of
eigenvectors for T
When is a matrix A ∈ Mn(R) diagonalisable?
if the map it defines by acting on (column) vectors in Rⁿ is diagonalisable.
A matrix A ∈ Mn(R) is diagonalisable if and only if there exists an invertible
matrix P such that [ ]
B := P⁻¹AP is a diagonal matrix (in which case, the diagonal entries in B are
the eigenvalues, and the columns in P the corresponding eigenvectors)
A matrix A ∈ Mn(R) is diagonalisable if and only if there exists an invertible
matrix P such that B := P⁻¹AP is a diagonal matrix
Prove it
Assume A is diagonalisable and let v1, . . . , vn be the basis of eigenvectors and λ1, . . . , λn the eigenvalues (possibly with repetition of eigenvalues). Using the notation in Section 1, define
P = [v1, · · · , vn] and B the diagonal matrix with entries λ1, · · · , λn. Then P is invertible since its
columns are linearly independent, and the equation [λ1v1, · · · , λnvn] = [Av1, · · · Avn] is the same as P B = AP, that is B = P⁻¹AP.
Conversely, given that B := P⁻¹AP is diagonal, the columns of P must be n linearly independent eigenvectors
of A, and the entries of B the corresponding eigenvalues (since PB = AP)
Let V be a vector space of dimension n. Suppose a linear map T : V → V (matrix
A ∈ Mn(R), respectively) has n distinct eigenvalues. Then T (A, respectively) is [ ]
diagonalisable
Let V be a vector space of dimension n. Suppose a linear map T : V → V (matrix
A ∈ Mn(R), respectively) has n distinct eigenvalues. Then T (A, respectively) is diagonalisable
Prove it
Assume T has n distinct eigenvalues. For each of the n distinct eigenvalues λi there is at least one eigenvector vi (by definition). The n eigenvectors v1, · · · , vn are linearly independent, and thus form a basis for V . (The statement for matrices A follows by viewing A as a
map on Rⁿ.)
Suppose χT (x) (χA(x), respectively) has n distinct roots in R. Then T (A,
respectively) is [ ]
diagonalisable over R
How do you diagonalise a matrix?
Let A ∈ Mn(R).
(1) Compute χA(x) = det(A − xI) and find its roots λ ∈ R (real eigenvalues).
(2) For each eigenvalue λ, find a basis for Ker(A − λI) using, for example, row-reduction (this
gives you linearly independent eigenvectors for each eigenvalue).
(3) Collect together all these eigenvectors. If you have n of them put them as columns in a matrix
P, and the corresponding eigenvalues as the diagonal entries in a matrix B. Then B = P⁻¹AP and you have diagonalised A. If you have < n eigenvectors you cannot diagonalise A (over R).
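The three steps can be sketched numerically with numpy (here np.linalg.eig stands in for the hand computation of roots and kernels; the matrix is illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])               # eigenvalues 5 and 2 (distinct)
eigvals, P = np.linalg.eig(A)  # steps (1)+(2): eigenvalues and eigenvectors
B = np.diag(eigvals)           # step (3): eigenvalues on the diagonal of B
# Check B = P^{-1} A P:
print(np.allclose(B, np.linalg.inv(P) @ A @ P))   # True
```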
let T : V → V be a linear transformation
What is an eigenspace?
Let λ be an eigenvalue for T. Then
Eλ := Ker(T − λI) = {v ∈ V : T v = λv}
is called the eigenspace for λ. (This is just the set of all eigenvectors of T with eigenvalue λ, along with the zero vector.)
Is Eλ(eigenspace) a subspace of V?
Yes since it is the kernel of the map T − λI
Define geometric multiplicity
Let λ be an eigenvalue of T. The dimension of Eλ is called the geometric
multiplicity of λ
Define algebraic multiplicity
The multiplicity of λ as a root of the characteristic polynomial χT (x) is called
the algebraic multiplicity of λ
Let λ be an eigenvalue of T. The geometric multiplicity of λ is [ ] to the algebraic multiplicity of λ
less than or
equal
Let λ be an eigenvalue of T. The geometric multiplicity of λ is less than or
equal to the algebraic multiplicity of λ
Prove it
Let’s denote these multiplicities gλ and aλ respectively.
Extend a basis for Eλ to a basis for V. Then the matrix for T with respect to this basis has the block form
( λI_gλ   * )
(   0     C )
for some matrix C. Hence the matrix for T − xI has the block form
( (λ − x)I_gλ   * )
(   0           C − xI )
and so det(T − xI) = (λ − x)^gλ h(x)
for h(x) := det(C − xI) ∈ R[x]. Since (λ − x)^gλ divides χT(x), we must have gλ ≤ aλ
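A concrete example where the inequality is strict, sketched in numpy: the matrix [[1, 1], [0, 1]] has eigenvalue 1 with algebraic multiplicity 2 but geometric multiplicity 1.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0
# geometric multiplicity = dim Ker(A - lam*I) = n - rank(A - lam*I)
g = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
print(g)   # 1, while the algebraic multiplicity of lam = 1 is 2
```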
Let λ1, · · · , λr (r ≤ n) be the distinct eigenvalues of T. Then the eigenspaces
Eλ1, · · · , Eλr form a [ ]
a direct sum Eλ1 ⊕ · · · ⊕ Eλr
The Gram-Schmidt procedure
Given a basis v₁, · · · , vₙ of a real inner product space, set e₁ := v₁/‖v₁‖ and, for k = 2, · · · , n, wₖ := vₖ − Σ_{i<k} ⟨vₖ, eᵢ⟩eᵢ and eₖ := wₖ/‖wₖ‖. Then e₁, · · · , eₙ is an orthonormal basis with span{e₁, · · · , eₖ} = span{v₁, · · · , vₖ} for each k. (See pg 17 of the notes.)
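A minimal Python sketch of the procedure over Rⁿ with the standard dot product (the function name `gram_schmidt` is illustrative; the input vectors are assumed linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalise a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for e in basis:
            w = w - np.dot(w, e) * e   # remove the component along e
        basis.append(w / np.linalg.norm(w))
    return basis

e1, e2 = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
print(np.isclose(np.dot(e1, e2), 0.0))   # True: orthogonal
```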
Let A ∈ Mn(R) be a symmetric matrix, that is Aᵀ = A. Now A may be thought of as a linear
transformation on Cⁿ and so in particular has (counting multiplicities) n eigenvalues in C
The eigenvalues of A all lie in [ ]
The eigenvalues of A all lie in R
Let A ∈ Mn(R) be a symmetric matrix, that is Aᵀ = A. Now A may be thought of as a linear
transformation on Cⁿ and so in particular has (counting multiplicities) n eigenvalues in C
The eigenvalues of A all lie in R
Prove it
Let λ ∈ C be an eigenvalue of A with eigenvector v ∈ Cⁿ, so Av = λv with v ≠ 0. On the one hand,
(Av)ᵀv̄ = vᵀAᵀv̄ = vᵀAv̄ (as Aᵀ = A) = vᵀ(λ̄v̄) (as A is real, Av̄ = λ̄v̄) = λ̄ vᵀv̄
On the other hand,
(Av)ᵀv̄ = (λv)ᵀv̄ = λ vᵀv̄
Writing vᵀ = (v₁, · · · , vₙ) we see
vᵀv̄ = v₁v̄₁ + · · · + vₙv̄ₙ = |v₁|² + · · · + |vₙ|² > 0
since v ≠ 0. Thus we can cancel vᵀv̄ and get λ̄ = λ, that is, λ ∈ R
Let A ∈ Mn(R) be symmetric. Then the space Rⁿ has an orthonormal basis consisting of eigenvectors of A. That is, there exists an orthogonal real matrix R (so Rᵀ = R⁻¹) such that R⁻¹AR is [ ]
diagonal with real entries.
What is the Spectral theorem for real symmetric matrices?
A real symmetric matrix A ∈ Mn(R) has real eigenvalues and there exists an orthonormal basis for Rⁿ consisting of eigenvectors for A
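Numerically, np.linalg.eigh illustrates the theorem for a symmetric matrix: it returns real eigenvalues and an orthogonal matrix of eigenvectors (the example matrix is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                         # symmetric
eigvals, R = np.linalg.eigh(A)
print(eigvals)                                     # real: [1. 3.]
print(np.allclose(R.T @ R, np.eye(2)))             # True: R is orthogonal
print(np.allclose(R.T @ A @ R, np.diag(eigvals)))  # True: R^T A R diagonal
```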
Now let V be a real vector space with inner product ⟨·, ·⟩. We call a linear map T : V → V self-adjoint (or symmetric) if….
⟨Tu, v⟩ = ⟨u, Tv⟩ for all u, v ∈ V
What is the Spectral theorem for self-adjoint operators on a real inner product space?
A self-adjoint map T on a finite dimensional real inner product space V has real eigenvalues and there
exists an orthonormal basis for V consisting of eigenvectors of T.
What is a quadratic form in n variables x₁, …, xₙ over R?
a homogeneous degree 2
polynomial
Q(x₁, · · · , xₙ) = Σⁿᵢ,ⱼ₌₁ aᵢⱼxᵢxⱼ = (x₁ · · · xₙ)A(x₁ · · · xₙ)ᵀ, A = (aᵢⱼ)
with real coefficients. We can and do assume A is symmetric. Then we can find an orthogonal change of variable
(y₁ · · · yₙ) = (x₁ · · · xₙ)P, Pᵀ = P⁻¹, so that
Q(y₁, · · · , yₙ) = λ₁y₁² + · · · + λₙyₙ²
where λ₁, · · · , λₙ ∈ R are the (all real) eigenvalues of the symmetric matrix A.
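A small numpy sketch of this diagonalisation for the illustrative form Q(x₁, x₂) = 2x₁² + 2x₁x₂ + 2x₂², whose symmetric matrix is [[2, 1], [1, 2]]:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, P = np.linalg.eigh(A)   # eigenvalues 1 and 3, P orthogonal
x = np.array([1.0, 2.0])
y = P.T @ x                      # orthogonal change of variable
Q_x = x @ A @ x                  # original form evaluated at x
Q_y = eigvals[0] * y[0]**2 + eigvals[1] * y[1]**2
print(np.isclose(Q_x, Q_y))      # True: same value in the new variables
```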
What is a quadric?
A quadric is the set of points in R³ satisfying a degree 2 equation
f(x₁, x₂, x₃) = Σ³ᵢ,ⱼ₌₁ aᵢⱼxᵢxⱼ + Σ³ᵢ₌₁ bᵢxᵢ + c = 0
with A = (aᵢⱼ) ∈ M3(R) symmetric and non-zero, and b₁, b₂, b₃, c ∈ R
Classify a quadric to be:
an ellipsoid
µ₁Y₁² + µ₂Y₂² + µ₃Y₃² = 1
Classify a quadric to be:
∅
µ₁Y₁² + µ₂Y₂² + µ₃Y₃² = - 1
Classify a quadric to be:
{0}
µ₁Y₁² + µ₂Y₂² + µ₃Y₃² = 0
Classify a quadric to be:
1-sheet Hyperboloid
µ₁Y₁² + µ₂Y₂² - µ₃Y₃² = 1
Classify a quadric to be:
2-sheet Hyperboloid
µ₁Y₁² + µ₂Y₂² - µ₃Y₃² = - 1
Classify a quadric to be:
Cone
µ₁Y₁² + µ₂Y₂² - µ₃Y₃² = 0
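The classification above depends only on the signs of the eigenvalues of the quadratic part A; a small numpy sketch (the helper name `signature` is illustrative) computes this signature numerically:

```python
import numpy as np

def signature(A):
    """Return (#positive, #negative, #zero) eigenvalues of symmetric A."""
    eigvals = np.linalg.eigvalsh(A)
    pos = int(np.sum(eigvals > 1e-10))
    neg = int(np.sum(eigvals < -1e-10))
    return pos, neg, A.shape[0] - pos - neg

A = np.diag([1.0, 2.0, -3.0])
print(signature(A))   # (2, 1, 0): the hyperboloid/cone family of forms
```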