Midterm 3 Flashcards
Diagonal matrix
matrix where the only non-zero entries are on the diagonal
similar matrices
a matrix A is similar to a matrix D if A = PDP^-1
P is an invertible matrix
A and D have the same eigenvalues and determinant!!!
- they have the same characteristic polynomial and therefore the same eigenvalues
If two matrices have the same eigenvalues, does that necessarily mean they are similar to each other?
FALSE, only the converse is true: similar matrices always have the same eigenvalues, but matrices with the same eigenvalues need not be similar (see the sketch below)
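A small sketch (assuming numpy) of the standard counterexample: the 2x2 identity and the shear matrix share the eigenvalue 1 (twice), but PIP^-1 = I for every invertible P, so nothing other than I is similar to I:

import numpy as np

I2 = np.eye(2)
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
print(np.linalg.eigvals(I2))     # [1. 1.]
print(np.linalg.eigvals(shear))  # [1. 1.] - same eigenvalues, yet not similar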
Diagonalization
factoring a matrix A as A = PDP^-1, where D is a diagonal matrix and P is an invertible matrix
- very useful for computing A^k for large k
A^k = PD^kP^-1
Algebraic multiplicity
the number of times an eigenvalue is repeated as a root of the characteristic polynomial
Geometric multiplicity
the number of linearly independent eigenvectors for a given eigenvalue
the dimension of Nul(A - λI) for that specific λ
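A small sketch (assuming sympy) contrasting the two multiplicities: for the example matrix [[2,1],[0,2]], λ = 2 has algebraic multiplicity 2 but geometric multiplicity 1, so the matrix is not diagonalizable:

import sympy as sp

A = sp.Matrix([[2, 1],
               [0, 2]])
lam = sp.symbols('lam')
print(sp.factor((A - lam*sp.eye(2)).det()))  # (lam - 2)**2 -> algebraic multiplicity 2
print(len((A - 2*sp.eye(2)).nullspace()))    # 1 -> geometric multiplicity 1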
Singular
NOT INVERTIBLE
free variables, linearly dependent columns
Nonsingular = invertible!
Diagonalization Formula
A = PDP^-1
P: matrix whose columns are the linearly independent eigenvectors
D: diagonal matrix of the corresponding eigenvalues (in the same order as the columns of P)
Allows us to compute A^k for large k
A^k = PD^kP^-1
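A numeric sketch (assuming numpy) of this shortcut: eig returns P (columns = eigenvectors) and the entries of D, and PD^kP^-1 matches A^k. The matrix A here is just an example:

import numpy as np

A = np.array([[4.0, -2.0],
              [1.0,  1.0]])         # eigenvalues 2 and 3 (distinct -> diagonalizable)
vals, P = np.linalg.eig(A)          # columns of P are eigenvectors
D = np.diag(vals)
k = 10
Ak = P @ np.linalg.matrix_power(D, k) @ np.linalg.inv(P)
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True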
The Diagonalization Theorem
An nxn matrix A is diagonalizable if and only if A has n linearly independent eigenvectors
A and P have the same size (both are n x n)
A is diagonalizable if and only if there are enough eigenvectors to form a basis of Rn : eigenvector basis
Steps to Diagonalize a Matrix
- find the eigenvalues using the characteristic polynomial: det(A - λI) = 0
- find the linearly independent eigenvectors of A: plug each λ into (A - λI)v = 0
- solve for the null space in parametric vector form
- IF the total number of eigenvectors is NOT equal to the number of columns in A, then A is not diagonalizable
- construct P from the eigenvectors
- construct D using the corresponding eigenvalues (the sketch below runs these steps)
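The same steps run through sympy (an assumption; any CAS or hand computation works) on a small example matrix:

import sympy as sp

A = sp.Matrix([[1, 3],
               [3, 1]])
lam = sp.symbols('lam')
# step 1: characteristic polynomial det(A - λI) = 0
print(sp.factor((A - lam*sp.eye(2)).det()))   # (lam - 4)*(lam + 2) -> eigenvalues 4, -2
# remaining steps: sympy finds the eigenvectors and assembles P and D
P, D = A.diagonalize()                        # raises an error if A is not diagonalizable
print(P, D)
print(sp.simplify(P * D * P.inv() - A))       # zero matrix confirms A = PDP^-1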
Theorem - Eigenvalues and Diagonalizable
An nxn matrix with n distinct eigenvalues is diagonalizable
- if v1, …, vn are eigenvectors corresponding to n distinct eigenvalues of matrix A, then {v1, …, vn} is linearly independent, therefore A is diagonalizable
BUT it is not necessary for an nxn matrix to have n distinct eigenvalues to be diagonalizable (e.g., the identity matrix repeats the eigenvalue 1 but is already diagonal)
Theorem - Matrices whose Eigenvalues are Not Distinct
Geometric multiplicity of λ is at least 1 and at most the algebraic multiplicity of λ
A matrix is diagonalizable IF AND ONLY IF the sum of the dimensions of the eigenspaces (Nul(A - λI)) equals n (the number of columns)
in that case total geometric multiplicity = n, THEREFORE each eigenvalue's geometric multiplicity has to equal its algebraic multiplicity
the characteristic polynomial of A factors completely into linear factors - the roots can be real or complex
DIAGONALIZABILITY AND INVERTIBILITY
they have no correlation with each other
- a matrix can be diagonalizable but not invertible because it can have an eigenvalue of 0
- a matrix can be invertible but not diagonalizable, e.g. the shear matrix
1,1
0,1
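A quick check of both directions (assuming sympy): a matrix with a 0 eigenvalue can still be diagonalizable, while the shear above is invertible (det = 1) but not diagonalizable:

import sympy as sp

diag0 = sp.Matrix([[0, 0],
                   [0, 1]])      # eigenvalue 0 -> singular, yet already diagonal
shear = sp.Matrix([[1, 1],
                   [0, 1]])      # det = 1 -> invertible
print(diag0.det(), shear.det())                              # 0 1
print(diag0.is_diagonalizable(), shear.is_diagonalizable())  # True False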
Complex number
a + bi
i = sqrt(-1)
Complex eigenvalue
eigenvalue that is a complex number a + bi
if b = 0, then λ is a real eigenvalue
Complex eigenvector
an eigenvector with at least one complex (non-real) entry, corresponding to a complex eigenvalue
Complex number Space ℂn
the space of all vectors with n complex entries
ℂ2
the space of vectors with 2 complex entries
the entries may also be purely real, since real numbers are complex numbers with b = 0
Conjugate of a complex number
the conjugate of (a+bi) is (a-bi)
Complex conjugate of a vector x
x with a bar on top of it; take the conjugate of each entry of x
Re x
the real parts of a complex vector x
an entry CAN be 0
Im x
the imaginary parts of a complex vector x
an entry can be 0
We can identify ℂ with R2
a + bi <-> (a,b)
we can add and multiply complex numbers
Add: like normal, (2-3i) + (-1+i) = 1-2i; similar to matrix addition (entrywise)
Multiply: FOIL!!! (use i^2 = -1) - NOT matrix multiplication
absolute value of a complex number a + bi
sqrt(a^2 + b^2)
we can write complex numbers in polar form
(a,b) = a + bi = r(cosφ + isinφ), where r = |a + bi|
a is the real part and b is the imaginary part
Argument of lambda = a + bi
the angle φ that the point (a, b) makes with the positive Re axis in the complex plane
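A sketch using only Python's standard-library cmath: polar() returns the modulus r and the argument φ together, and rect() rebuilds the number from them:

import cmath

z = 3 - 1j
r, phi = cmath.polar(z)    # r = |z| = sqrt(10), phi = argument (negative: below the Re axis)
print(r, phi)
print(cmath.rect(r, phi))  # reconstructs (3-1j), up to rounding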
Finding complex eigenvalues and complex eigenvectors
- det(A - λI) = 0 to get the eigenvalues λ; the complex roots are the complex eigenvalues
- Solve (A-λI)x = 0 for one complex eigenvalue λ to get its eigenvector; you should get one "free variable"
- the other eigenvalue is the conjugate λbar, and its eigenvector is the conjugate xbar of the first - no need to solve again (see the sketch below)
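A numeric sketch (assuming numpy) on the 90-degree rotation matrix, whose eigenvalues are the conjugate pair ±i:

import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # rotation by 90 degrees
vals, vecs = np.linalg.eig(R)
print(vals)                  # [0.+1.j 0.-1.j] - a conjugate pair
print(vecs)                  # the two eigenvector columns are conjugates of each other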
Re x and Im x
xbar = vector whose entries are the complex conjugates of the entries in x
for example:
x = (3-i, i, 2) => (3, 0, 2) + i(-1, 1, 0)
Re x is the first, Im x is the second
xbar = (3+i,-i,2)
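The same example checked with numpy (an assumption), via .real, .imag, and conj():

import numpy as np

x = np.array([3 - 1j, 1j, 2])
print(x.real)      # [ 3.  0.  2.] -> Re x
print(x.imag)      # [-1.  1.  0.] -> Im x
print(np.conj(x))  # [3.+1.j 0.-1.j 2.-0.j] -> xbar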
Properties of Complex Conjugate Matrices
you can find the conjugates first and then multiply together:
(rx)bar = rbar·xbar, (Bx)bar = Bbar·xbar, (BC)bar = Bbar·Cbar, (rB)bar = rbar·Bbar
r being a scalar
uppercase being matrices and x being vectors
conjugate of (x + y) = xbar + ybar
conjugate of Av equals A·vbar when A is a real matrix
Im(x·xbar) = 0, since x·xbar = |x|^2 is real
(xy)bar = xbar·ybar
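A quick spot check of two of these properties with random complex data - a sketch assuming numpy; the matrix B and vector x here are made up for illustration:

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
# (Bx)bar == Bbar·xbar
print(np.allclose(np.conj(B @ x), np.conj(B) @ np.conj(x)))    # True
# (rx)bar == rbar·xbar, with r = 3i
print(np.allclose(np.conj(3j * x), np.conj(3j) * np.conj(x)))  # True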
Complex Eigenvalues and Complex Eigenvector Come in Pairs!!!
for a real matrix, complex (non-real) eigenvalues come in conjugate pairs - no such thing as an odd number of them
Rotation Dilation Matrix
matrix in the form of
a,-b
b,a
the eigenvalues are: a + bi, a - bi
the modulus (length) of the eigenvalue is r = sqrt(a^2 + b^2)
the argument (angle) of the eigenvalue is tan^-1(b/a)
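A sketch (assuming numpy) with a = b = 1: eigenvalues 1 ± i, modulus sqrt(2), angle 45 degrees:

import numpy as np

a, b = 1.0, 1.0
M = np.array([[a, -b],
              [b,  a]])
print(np.linalg.eigvals(M))          # [1.+1.j 1.-1.j]
print(np.hypot(a, b))                # r = sqrt(a^2 + b^2) ≈ 1.414
print(np.degrees(np.arctan2(b, a)))  # 45.0 degrees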
Euler’s Formula
e^(iφ) = cosφ + isinφ
multiplying two complex numbers: (r1e^(iφ1))(r2e^(iφ2)) = r1r2·e^(i(φ1+φ2)) - multiply the moduli, add the angles
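A standard-library cmath check that multiplying complex numbers multiplies the moduli and adds the angles:

import cmath

z1 = cmath.rect(2.0, cmath.pi / 6)  # r1 = 2, φ1 = 30 degrees
z2 = cmath.rect(3.0, cmath.pi / 3)  # r2 = 3, φ2 = 60 degrees
r, phi = cmath.polar(z1 * z2)
print(r)                            # 6.0 = r1 * r2
print(phi, cmath.pi / 2)            # φ1 + φ2 = 90 degrees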
Complex numbers and Polynomials
if λ is a complex root of the characteristic polynomial of a real matrix A, then λbar is also a root of that real polynomial
λbar is an eigenvalue of A as well, with eigenvector vbar
Inner Product or Dot Product
a scalar
u*v
uTv
Vector Length
||v|| = sqrt(v*v) = sqrt(v1^2 + v2^2 +…+vn^2)
Unit Vector
vector whose length is 1
Vector normalization
Dividing a nonzero vector by its length to make it a unit vector
(1/||v||)*v
Distance between two vectors
dist(u,v) = || u - v||
Orthogonal vectors
Two vectors are orthogonal if their dot product equals 0
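One sketch (assuming numpy) covering the last few cards - dot product, length, normalization, distance, and an orthogonality check; u and v are example vectors:

import numpy as np

u = np.array([3.0, 4.0])
v = np.array([-4.0, 3.0])
print(u @ v)                  # 0.0 -> u and v are orthogonal
print(np.linalg.norm(u))      # 5.0 = sqrt(3^2 + 4^2)
print(u / np.linalg.norm(u))  # [0.6 0.8] - the unit vector in u's direction
print(np.linalg.norm(u - v))  # dist(u, v)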
Orthogonal complements
the set of ALL vectors that are orthogonal to every vector in a subspace W, written W⊥
e.g. in R3, if W is a plane through the origin, W⊥ is the perpendicular line through the origin (and vice versa); for a matrix A, (Row A)⊥ = Nul A
For a subspace to be in Rn
a subspace (contains the zero vector and is closed under addition and scalar multiplication) whose vectors each have n entries
R1 means that the vectors have one entry
e.g. the span of just [1]