Linear Algebra Flashcards
Properties of regular Markov Chains
There is a unique SSV (steady-state vector), or SSPV (steady-state probability vector). (If the Markov Chain is not regular then there can be multiple steady-state vectors.)
All initial state vectors x0 converge to the unique SSV x, i.e. xk → x as k approaches infinity.
As k approaches infinity, P^k approaches a transition matrix whose columns all equal the SSPV.
(These properties hold only for regular Markov Chains; see the worked example below.)
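A minimal worked example of these properties (the matrix is chosen here purely for illustration): solve (P − I)x = 0, then scale the solution into a probability vector to obtain the SSPV.
\[
P = \begin{pmatrix} 0.9 & 0.5 \\ 0.1 & 0.5 \end{pmatrix}, \qquad
(P - I)\mathbf{x} = \begin{pmatrix} -0.1 & 0.5 \\ 0.1 & -0.5 \end{pmatrix}\mathbf{x} = \mathbf{0}
\;\Rightarrow\; \mathbf{x} = t\begin{pmatrix} 5 \\ 1 \end{pmatrix}.
\]
\[
\text{Scaling so the entries sum to 1:}\quad
\mathbf{q} = \begin{pmatrix} 5/6 \\ 1/6 \end{pmatrix}, \qquad
P^k \to \begin{pmatrix} 5/6 & 5/6 \\ 1/6 & 1/6 \end{pmatrix} \text{ as } k \to \infty.
\]
Here q is the unique SSPV, and every initial probability vector x0 satisfies P^k x0 → q.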
Regular Markov Chain
A Markov Chain where the transition matrix is regular
Regular Transition Matrix
A regular transition matrix is a stochastic matrix P such that some power P^k has all entries positive. (Positive means strictly greater than 0, so no entry of that power may be 0.)
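For example (matrix chosen for illustration), a transition matrix may contain a 0 and still be regular, because a later power is positive:
\[
P = \begin{pmatrix} 0 & 0.5 \\ 1 & 0.5 \end{pmatrix}, \qquad
P^2 = \begin{pmatrix} 0.5 & 0.25 \\ 0.5 & 0.75 \end{pmatrix}.
\]
Every entry of P^2 is strictly positive, so P is regular even though P itself has a zero entry.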
When must you add a parameter when solving
When a column of the row-reduced matrix has no leading entry; the corresponding variable is free, so you assign it a parameter.
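A small illustrative case: in the row-reduced system below, column 2 has no leading entry, so x2 is free and gets a parameter.
\[
\begin{pmatrix} 1 & -2 \\ 0 & 0 \end{pmatrix}\mathbf{x} = \mathbf{0}
\;\Rightarrow\; x_1 - 2x_2 = 0, \quad x_2 = t, \quad
\mathbf{x} = t\begin{pmatrix} 2 \\ 1 \end{pmatrix}.
\]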
Why does Ax = λx not have a unique solution
From the definition of an eigenvector and eigenvalue, Ax = λx if and only if (A−λI)x = 0. We know that A−λI is invertible if and only if (A−λI)x = 0 has only the unique solution x = 0. But x cannot equal 0, as that contradicts the definition of an eigenvector, so (A−λI)x = 0 cannot have a unique solution.
Equivalently, det(A−λI) = 0, so A−λI is not invertible, and therefore (A−λI)x = 0 does not have the unique solution x = 0; it has infinitely many solutions.
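For example (matrix chosen for illustration), the eigenvalues come from det(A − λI) = 0, and each eigenvalue gives infinitely many eigenvectors:
\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad
\det(A - \lambda I) = (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3) = 0
\;\Rightarrow\; \lambda = 1,\ 3.
\]
\[
\lambda = 3:\quad (A - 3I)\mathbf{x} = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}\mathbf{x} = \mathbf{0}
\;\Rightarrow\; \mathbf{x} = t\begin{pmatrix} 1 \\ 1 \end{pmatrix},\ t \neq 0.
\]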
Diagonalisation Theorem
For an n by n matrix A, the following statements are equivalent: A is diagonalisable (D = P^-1AP for some invertible P and diagonal D); A has n linearly independent eigenvectors; every eigenvalue of A has algebraic multiplicity equal to its geometric multiplicity.
What does it mean to diagonalise a matrix
It is to express a diagonalisable matrix A in the form A = PDP^-1
Why is diagonalisation useful
Because it makes calculating powers of a matrix much more efficient: if A is diagonalisable with D = P^-1AP, then for all k ≥ 1, A^k = PD^kP^-1.
When is a matrix diagonalisable
An n by n matrix A is diagonalisable if there exists a diagonal matrix D and an invertible matrix P so that D = P^-1AP
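Continuing the illustrative matrix from above, here is a full diagonalisation and the resulting power formula:
\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \quad
P = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}, \quad
P^{-1} = \frac{1}{2}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, \quad
D = P^{-1}AP = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}.
\]
\[
A^k = PD^kP^{-1} = \frac{1}{2}\begin{pmatrix} 3^k + 1 & 3^k - 1 \\ 3^k - 1 & 3^k + 1 \end{pmatrix}, \qquad k \geq 1.
\]
The columns of P are the eigenvectors (1, −1) and (1, 1), listed in the same order as the eigenvalues 1 and 3 on the diagonal of D.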
Can you scale an eigenspace
Yes. Any nonzero scalar multiple of an eigenvector is still an eigenvector for the same eigenvalue, so you can rescale to clear fractions when a calculated eigenvector is not a whole number and whole numbers would make later calculations easier. A good time to do this is when forming the matrix P from A's eigenvectors; it works because P^-1 changes correspondingly.
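For instance (numbers chosen for illustration), if row reduction produces the eigenvector (1/2, 1), scale it by 2 and use (1, 2) as the column of P instead:
\[
\mathbf{v} = \begin{pmatrix} 1/2 \\ 1 \end{pmatrix} \;\longrightarrow\; 2\mathbf{v} = \begin{pmatrix} 1 \\ 2 \end{pmatrix},
\qquad A(2\mathbf{v}) = 2(A\mathbf{v}) = 2\lambda\mathbf{v} = \lambda(2\mathbf{v}),
\]
so the scaled vector is still an eigenvector for the same eigenvalue.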
Stochastic Matrix
An n by n matrix P is a stochastic matrix if its columns are probability vectors.
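For instance (entries chosen for illustration):
\[
P = \begin{pmatrix} 0.2 & 0.7 \\ 0.8 & 0.3 \end{pmatrix}
\]
Each column has non-negative entries that sum to 1 (0.2 + 0.8 = 1 and 0.7 + 0.3 = 1), so each column is a probability vector and P is stochastic.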
Can diagonal matrices have zeros on the diagonals
Yes
Eigenvalues of a triangular matrix and powers of a diagonal matrix
D^k has eigenvalues (d11)^k, (d22)^k, …, (dnn)^k.
This is because D^k is also diagonal, and the eigenvalues of a diagonal (or, more generally, triangular) matrix are its diagonal entries.
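For example (entries chosen for illustration):
\[
D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad
D^3 = \begin{pmatrix} 8 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix},
\]
so the eigenvalues of D^3 are 2^3 = 8, (−1)^3 = −1 and 0^3 = 0, the cubes of the diagonal entries. (This also shows that a diagonal matrix may have zeros on its diagonal.)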
When are matrices similar
Let A and B be n by n matrices. A is similar to B if there is an invertible matrix P so that P^-1AP = B. Equivalently, if P is invertible and AP = PB holds, then P^-1AP = B, so A is similar to B.
Properties of similar matrices
If A is similar to B then B is similar to A
If A is similar to B and B is similar to C then A is similar to C
A is similar to A
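A brief sketch of why these three properties hold, using the definition P^-1AP = B:
\[
P^{-1}AP = B \;\Rightarrow\; (P^{-1})^{-1}B\,P^{-1} = A, \qquad
(PQ)^{-1}A(PQ) = Q^{-1}(P^{-1}AP)Q = Q^{-1}BQ = C, \qquad
I^{-1}AI = A.
\]
The first equation shows B is similar to A (via the invertible matrix P^-1), the second gives transitivity when Q^-1BQ = C, and the third gives reflexivity.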
Determinant of the inverse matrix
det(A^-1) = 1/det(A)
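This follows from the product rule det(AB) = det(A)det(B):
\[
1 = \det(I) = \det(AA^{-1}) = \det(A)\det(A^{-1})
\;\Rightarrow\; \det(A^{-1}) = \frac{1}{\det(A)}.
\]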
Properties of matrices that are similar
If A is similar to B, then (proof sketch below):
det(A) = det(B)
det(A-λI) = det(B-λI)
This means A and B have the same eigenvalues as they have the same characteristic polynomial.
A is invertible <=> B is invertible
A and B have the same trace
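Sketch of the first two properties, writing B = P^-1AP and using the product rule for determinants:
\[
\det(B) = \det(P^{-1})\det(A)\det(P) = \frac{1}{\det(P)}\det(A)\det(P) = \det(A),
\]
\[
\det(B - \lambda I) = \det\!\big(P^{-1}(A - \lambda I)P\big) = \det(A - \lambda I).
\]
The second line uses λI = λP^-1P = P^-1(λI)P.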
What is proof by contradiction
You assume the opposite of what you want to prove and show that this assumption leads to a contradiction, which means the original statement must be true.
For example, to prove that a matrix is not diagonalisable, first assume that it is diagonalisable, then derive from that assumption something contradictory; you can then conclude that the matrix is not diagonalisable.
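A concrete illustration (a standard example matrix): suppose the matrix below were diagonalisable.
\[
A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
\]
Its only eigenvalue is 1 (it is triangular), so D would have to be the identity matrix, giving A = PDP^-1 = PIP^-1 = I. But A ≠ I, a contradiction, so A is not diagonalisable.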
Properties of triangular matrices
The determinant of a triangular matrix is the product of its diagonal entries. Since A−λI is also triangular, det(A−λI) is the product of the terms (aii − λ), so the eigenvalues of a triangular matrix are its diagonal entries; that is, the roots of det(A−λI) are the diagonal entries of A. (The roots of the characteristic equation are the eigenvalues.)
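For example (entries chosen for illustration):
\[
A = \begin{pmatrix} 2 & 5 & -1 \\ 0 & 3 & 4 \\ 0 & 0 & 7 \end{pmatrix}, \qquad
\det(A) = 2 \cdot 3 \cdot 7 = 42, \qquad
\det(A - \lambda I) = (2-\lambda)(3-\lambda)(7-\lambda),
\]
so the eigenvalues of A are its diagonal entries 2, 3 and 7.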
Trace of a matrix
It is the sum of all the eigenvalues of an n by n matrix, counted with multiplicity
Determinant of a matrix from eigenvalues
The product of all the eigenvalues of an n by n matrix, counted with multiplicity, gives its determinant
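A quick check with the illustrative 2 by 2 matrix used earlier, whose eigenvalues are 1 and 3:
\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}:\qquad
\operatorname{tr}(A) = 2 + 2 = 4 = 1 + 3, \qquad
\det(A) = 2 \cdot 2 - 1 \cdot 1 = 3 = 1 \cdot 3.
\]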
How do row operations affect determinants
If B is obtained from A by swapping 2 rows then det(B) = -det(A)
If B is obtained from A by multiplying one row by a scalar c then det(B) = c·det(A)
If B is obtained from A by adding a multiple of one row of A to another row of A then det(B) = det(A)
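For example, starting from a general 2 by 2 matrix with det(A) = ad − bc:
\[
\det\begin{pmatrix} c & d \\ a & b \end{pmatrix} = cb - ad = -\det(A), \qquad
\det\begin{pmatrix} 3a & 3b \\ c & d \end{pmatrix} = 3(ad - bc) = 3\det(A),
\]
\[
\det\begin{pmatrix} a & b \\ c + 2a & d + 2b \end{pmatrix} = a(d + 2b) - b(c + 2a) = ad - bc = \det(A).
\]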
Do you need to bracket when expanding coefficients for determinants
Yes. For example, if |A| = ad − bc, then 3|A| = 3(ad − bc), not 3ad − bc.
N distinct eigenvalues and diagonalisability
If an n by n matrix A has n distinct eigenvalues then A is diagonalisable. (No repeated eigenvalues.)
Proof: If A has n distinct eigenvalues, then the corresponding eigenvectors are linearly independent, and a matrix with n linearly independent eigenvectors is diagonalisable.
Note: A matrix can be diagonalisable even if its eigenvalues are not all distinct.
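For example, the 2 by 2 identity matrix has the repeated eigenvalue 1, yet it is diagonalisable (it is already diagonal):
\[
I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\]
has eigenvalue 1 with algebraic multiplicity 2, but every nonzero vector is an eigenvector, so there are still 2 linearly independent eigenvectors.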
Definition of trace
It is the sum of the diagonal entries of an n by n matrix