Linear Algebra Flashcards
Properties of regular Markov Chains
There is a unique SSV (steady-state vector) or SSPV (steady-state probability vector). (If the Markov chain is not regular then there can be multiple steady-state vectors.)
All initial state vectors x0 converge to the unique SSV x, i.e. xk → x as k approaches infinity.
As k approaches infinity, the regular transition matrix P^k approaches a transition matrix whose columns are all the SSPV.
(Only for regular Markov chains; a numerical sketch follows.)
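A minimal numpy sketch of the convergence property, using a made-up 2 by 2 regular transition matrix (the matrix and the power 50 are illustrative choices, not from the cards):

```python
import numpy as np

# Hypothetical regular transition matrix: columns are probability
# vectors and every entry is positive, so P is regular.
P = np.array([[0.9, 0.3],
              [0.1, 0.7]])

# P^k converges to a matrix whose columns are all the SSPV.
print(np.linalg.matrix_power(P, 50))
# Both columns approach the SSPV (0.75, 0.25).
```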
Regular Markov Chain
A Markov Chain where the transition matrix is regular
Regular Transition Matrix
A regular transition matrix is a stochastic matrix P such that some power P^k is positive, i.e. every entry of P^k is strictly greater than 0. (A positive matrix cannot include any 0 entries.)
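A hedged example of this: the matrix below (made up for illustration) has a zero entry, so P itself is not positive, but P^2 is, so P is still regular.

```python
import numpy as np

P = np.array([[0.5, 1.0],
              [0.5, 0.0]])   # stochastic, but has a 0 entry

# P^2 has all entries strictly positive, so P is regular.
print(np.linalg.matrix_power(P, 2))   # [[0.75, 0.5], [0.25, 0.5]]
```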
When must you add a parameter when solving
When there is no leading entry for that column (that variable is free, so it becomes a parameter)
Why does Ax = λx not have a unique solution
From the definition of an eigenvector and eigenvalue, Ax = λx if and only if (A-λI)x = 0. We know that A-λI is invertible if and only if (A-λI)x = 0 has the unique solution x = 0. But x cannot be equal to 0, as that contradicts the definition of an eigenvector. Therefore (A-λI)x = 0 has no unique solution.
Equivalently, det(A-λI) = 0, which means that A-λI is not invertible, and so (A-λI)x = 0 does not have the unique solution x = 0; it has infinitely many solutions.
Diagonalisation Theorem
The following are equivalent: A is diagonalisable (D = P^-1AP for some invertible P and diagonal D); A has n linearly independent eigenvectors; each eigenvalue of A has algebraic multiplicity equal to its geometric multiplicity.
What does it mean to diagonalise a matrix
It is to express a diagonalisable matrix A in the form A = PDP^-1
Why is diagonalisation useful
Because it makes calculating powers of matrices more efficient. This is because if A is diagonalisable with D = P^-1AP, then for all k ≥ 1, A^k = PD^kP^-1. (See the sketch below.)
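A minimal numpy sketch of A^k = PD^kP^-1, using a made-up diagonalisable matrix (its eigenvalues are 5 and 2, which are distinct, so its eigenvectors are linearly independent):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
D = np.diag(eigvals)            # D = P^-1 A P

k = 5
Ak = P @ np.linalg.matrix_power(D, k) @ np.linalg.inv(P)
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))   # True
```

Raising D to a power only requires raising its diagonal entries to that power, which is why this route is cheaper than repeated matrix multiplication.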
When is a matrix diagonalisable
An n by n matrix A is diagonalisable if there exists a diagonal matrix D and an invertible matrix P so that D = P^-1AP
Can you scale an eigenspace
Yes. It is useful when the eigenvector you calculated has fractional entries and a whole-number version would make later calculations easier. A good time to do this is when you build the matrix P from matrix A's eigenvectors. It works because any non-zero scalar multiple of an eigenvector is still an eigenvector, and P^-1 changes correspondingly.
Stochastic Matrix
An n by n matrix P is a stochastic matrix if its columns are probability vectors.
Can diagonal matrices have zeros on the diagonals
Yes
Eigenvalues of a triangular matrix and powers of a diagonal matrix
D^k has eigenvalues (d11)^k, (d22)^k, …, (dnn)^k.
This is because D^k is also diagonal.
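A quick numpy check with illustrative diagonal entries:

```python
import numpy as np

D = np.diag([2.0, -1.0, 0.5])
print(np.linalg.matrix_power(D, 3))   # diag(8, -1, 0.125)
```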
When are matrices similar
Let A and B be n by n matrices. A is similar to B if there is an invertible matrix P so that P^-1AP = B. If P is invertible and AP = PB holds, then P^-1AP = B, meaning that A is similar to B.
Properties of similar matrices
If A is similar to B then B is similar to A
If A is similar to B and B is similar to C then A is similar to C
A is similar to A
Determinant of the inverse matrix
det(A^-1) = 1/det(A)
Properties of matrices that are similar
If A is similar to B then
det(A) = det(B)
det(A-λI) = det(B-λI)
This means A and B have the same eigenvalues as they have the same characteristic polynomial.
A is invertible <=> B is invertible
If A is similar to B they have the same trace
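A numerical sanity check of these invariants, with a made-up A and an invertible P (det P = 1):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])
B = np.linalg.inv(P) @ A @ P   # B is similar to A

print(np.isclose(np.linalg.det(A), np.linalg.det(B)))   # True
print(np.isclose(np.trace(A), np.trace(B)))             # True
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(B))))       # True
```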
What is proof by contradiction
You assume the opposite of what you want to prove and show that this assumption leads to a contradiction; the assumption must therefore be false, so the original statement is true.
For example, suppose we want to prove that a matrix is not diagonalisable. We first assume it is diagonalisable, derive something contradictory from that assumption, and can then conclude that it is not diagonalisable.
Properties of triangular matrices
The determinant of a triangular matrix is the product of its diagonal entries. This implies that the eigenvalues of a triangular matrix are its diagonal entries, i.e. the roots of det(A-λI) are the diagonal entries of A. (The roots of the characteristic equation are the eigenvalues.)
Trace of a matrix
It is the sum of all the eigenvalues including multiplicities of an n by n matrix
Determinant of a matrix from eigenvalues
The product of all eigenvalues including multiplicities gives the determinant of an n by n matrix
How do row operations affect determinants
If B is obtained from A by swapping 2 rows then det(B) = -det(A)
If B is obtained from A by multiplying one row by a scalar c then det(B) = cdet(A)
If B is obtained from A by adding a multiple of one row of A to another row of A then det(B) = det(A)
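A numpy check of the three row-operation rules on an illustrative 3 by 3 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
detA = np.linalg.det(A)

B = A[[1, 0, 2]]                  # swap rows 1 and 2
print(np.isclose(np.linalg.det(B), -detA))       # True

C = A.copy(); C[0] *= 3.0         # multiply a row by c = 3
print(np.isclose(np.linalg.det(C), 3 * detA))    # True

E = A.copy(); E[2] += 2 * A[0]    # add a multiple of a row to another
print(np.isclose(np.linalg.det(E), detA))        # True
```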
Do you need to bracket when expanding coefficients for determinants
Yes. For example, if |A| = ad - bc, then 3|A| = 3(ad - bc), not 3ad - bc.
N distinct eigenvalues and diagonalisability
If an n by n matrix A has n distinct eigenvalues then A is diagonalisable. (No repeated eigenvalues.)
Proof: If A has n distinct eigenvalues then the corresponding eigenvectors are linearly independent and if A has n linearly independent eigenvectors then the matrix A is diagonalisable.
Note: A matrix can be diagonalisable even if its eigenvalues are not all distinct.
Definition of trace
It is the sum of the diagonal entries of an n by n matrix
Geometric multiplicity
The geometric multiplicity of an eigenvalue is the number of distinct parameters appearing in its eigenspace, i.e. the dimension of the eigenspace.
Invertibility and eigenvalues
An n by n matrix A is invertible if and only if 0 is not an eigenvalue of A. In other words, A is invertible only if all the eigenvalues are non-zero.
If A is invertible, then det(A - 0I) = det(A) ≠ 0, so 0 is not a solution to det(A-λI) = 0 and therefore 0 is not an eigenvalue.
Eigenvalue and eigenvector definition
Let A be an n by n matrix. A scalar λ is called an eigenvalue of A if there is a non-zero vector x so that Ax = λx. Such a vector x is called an eigenvector of A corresponding to λ
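A small numpy verification of the definition, with an illustrative matrix:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

lam, X = np.linalg.eig(A)   # eigenvalues and matching eigenvectors
for i in range(len(lam)):
    # Each column of X is a non-zero x with Ax = λx.
    print(np.allclose(A @ X[:, i], lam[i] * X[:, i]))   # True, True
```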
Properties of transpose
The transpose of a transpose gives the original matrix
The transpose of the sum of two matrices is the sum of the individual transposes of the matrices
(kA)^T = kA^T
(AB)^T = B^T A^T
(A^m)^T = (A^T)^m
det(A) = det(A^T)
(This also means that the characteristic polynomial is the same, ie they have the same eigenvalues)
Remember that each of these rules can be applied within larger, more complex matrix expressions.
Symmetric matrix
A is symmetric if A=A^T ie A equals its own transpose
Further properties of determinants
det(cA) = c^n det(A)
det(AB) = det(A)det(B)
If A is invertible, then det(A^-1) = 1/det(A)
det(A) = det(A^T)
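A numpy check of these determinant properties with made-up invertible matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 1.0]])
n, c = 2, 5.0
det = np.linalg.det

print(np.isclose(det(c * A), c**n * det(A)))          # True
print(np.isclose(det(A @ B), det(A) * det(B)))        # True
print(np.isclose(det(np.linalg.inv(A)), 1 / det(A)))  # True
print(np.isclose(det(A), det(A.T)))                   # True
```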
Invertibility equivalent theorems
A is invertible
Ax = b has a unique solution for every b ∈ Rn
Ax = 0 has the unique solution x = 0
The reduced row echelon form of A is In
A is a product of elementary matrices
What do we need to know when a proof references a unique something/solution
Reference its invertibility (main thing)
Whether Ax = b has a unique solution for all b ∈ Rn
Whether Ax = 0 has the unique solution x = 0
Whether the reduced row echelon form of A is In
Whether A is a product of elementary matrices
How to prove that something can or can’t equal 0 or when numbers are distinct or indistinct
Factor expressions like (a2-a1)(a3-a1)(a2-a3). If a1, a2, and a3 are distinct, then none of the factors is 0, so (a2-a1)(a3-a1)(a2-a3) cannot equal 0.
Methods of finding the trace
It is the sum of the diagonal entries of an n by n matrix. It is also the sum of the eigenvalues (including multiplicities) of an n by n matrix.
Diagonal entries and eigenvalues
To clear things up, the product of the eigenvalues including multiplicities of any n by n matrix is its determinant.
The diagonal entries of a triangular matrix ARE its eigenvalues.
How to find SSPV and SSV
A transition matrix for a Markov chain always has 1 as an eigenvalue, so we find the 1-eigenspace for that transition matrix and then scale a 1-eigenvector by a suitable scalar. For the SSV we need a given population: the entries of the eigenvector must sum to the population. For the SSPV we need a probability vector, so the entries of the eigenvector must sum to 1. (See the sketch below.)
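A numpy sketch of this recipe with an illustrative transition matrix (the population of 900 is made up):

```python
import numpy as np

P = np.array([[0.8, 0.4],
              [0.2, 0.6]])

eigvals, eigvecs = np.linalg.eig(P)
i = np.argmin(np.abs(eigvals - 1))   # locate the eigenvalue 1
v = np.real(eigvecs[:, i])           # a 1-eigenvector of P

sspv = v / v.sum()    # scale so the entries sum to 1
ssv = 900 * sspv      # scale so the entries sum to a population
print(sspv)           # [2/3, 1/3]
print(ssv)            # [600, 300]
```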
Consistent or inconsistent
A system is consistent if it has solutions (a unique solution or infinitely many, e.g. a whole line of them). A system is inconsistent if it has no solutions (no point of intersection, or no single point where all the lines meet).
The inverse of an elementary matrix
It comes from doing the inverse row operation on the identity matrix. It itself is an elementary matrix as it has had a row operation done to it.
Row echelon form
Any rows which consist entirely of 0s are at the bottom
In each row that isn’t all zeros, the first non-zero entry in that row is in a column to the left of any leading entries in rows further down the matrix.
Reduced row echelon form
It is in row echelon form
The leading entry in each non-zero row is 1
Each column containing a leading 1 has zeros everywhere else.
Length of vectors identities
||v|| = √(v·v)
||cv|| = |c| ||v||
Proofs may involve combining both of these identities and squaring both sides of an equation, which is valid since lengths are non-negative numbers.
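A quick numpy check of both identities with illustrative values:

```python
import numpy as np

v = np.array([3.0, 4.0])
c = -2.0

print(np.isclose(np.linalg.norm(v), np.sqrt(v @ v)))                  # True
print(np.isclose(np.linalg.norm(c * v), abs(c) * np.linalg.norm(v)))  # True
```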
Equations for planes
Normal form: n·(x - p) = 0. (All the vector letters should have tildes under them.)
General form: the normal form expanded out, in the form ax + by + cz = d.
Vector form: x = p + t(direction vector) + s(direction vector). Note that the direction vectors cannot be parallel.
Parametric form: the vector form expanded out component by component.
Coincide
Two lines coincide when they lie perfectly on top of each other, meaning the system has infinitely many solutions.
When are there no solutions to a system
When there is a zero equal to a non-zero number in one of the rows of an augmented matrix/in one of the linear equations in the system
Inverse properties
(cA)^-1 = c^-1 A^-1
(AB)^-1 = B^-1 A^-1
(ABC)^-1 = C^-1 B^-1 A^-1
A^T is invertible and (A^T)^-1 = (A^-1)^T
(A^n)^-1 = (A^-1)^n
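A numpy check of a few of these rules with made-up invertible matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])
B = np.array([[2.0, 1.0],
              [1.0, 1.0]])
inv = np.linalg.inv

print(np.allclose(inv(A @ B), inv(B) @ inv(A)))   # (AB)^-1 = B^-1 A^-1
print(np.allclose(inv(A.T), inv(A).T))            # (A^T)^-1 = (A^-1)^T
print(np.allclose(inv(3 * A), (1/3) * inv(A)))    # (cA)^-1 = c^-1 A^-1
```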
When is a set of vectors linearly dependent
A set of vectors in Rn (taken as a whole) is linearly dependent if and only if at least one of them can be expressed as a linear combination of the others.
Definition of linear independence
A set of vectors v1, v2, …, vk is linearly independent if the only solution to the equation c1v1 + c2v2 + … + ckvk = 0 is c1 = c2 = … = ck = 0. (All of the scalars are equal to 0.)
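A numerical way to test this (an equivalent criterion, not from the cards): the vectors are linearly independent exactly when the matrix with those vectors as columns has rank equal to the number of vectors.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([0.0, 0.0, 1.0])

M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M) == 3)   # True: linearly independent
```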
Definition of a span
If set S = {v1, v2, …vk} is a set of vectors (not points) in Rn, then the set of all linear combinations of v1, v2,…, vk is denoted by span(v1, v2, …, vk) or span(S)
How to prove that a point p is in a span
Ask: are there scalars c1, c2, …, cn such that p = c1v1 + c2v2 + … + cnvn? (If there is any linear combination of the vectors in the set that can represent the point, then the point is in the span.)
An alternative way of saying how two vectors are linearly independent
Two vectors v1 and v2 in Rn are linearly independent if and only if they are not scalar multiples of each other.
This relates back to the formal definition of linear independence.
e.g. suppose c1v1 = v2 for some c1 ∈ R
Then c1v1 - v2 = 0
Since the coefficient of v2 is non-zero, they are linearly dependent
(The coefficients have to be all zero)
Are the direction vectors of the vector form of a plane linearly independent
Yes, because the direction vectors are parallel to the plane but cannot be scalar multiples of each other. This means that they are linearly independent.
What is a homogeneous system and why can it never be inconsistent
A system of linear equations is homogeneous if all constant terms are 0. (In other words, all the numbers in the augmented column are zeros.)
Homogeneous systems always have at least one solution, since x1 = 0, x2 = 0, …, xn = 0 always works. So a homogeneous system has either a unique solution or infinitely many.
When are there unique, infinite, and no solutions
If each column has a leading entry then there is a unique solution
If there is a zero equal to a non zero then there are no solutions
If there is a column with no leading entries then there are infinite solutions
(There are unique solutions if the system cannot have infinite nor no solutions, process of elimination)
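An equivalent numerical restatement (a rank test rather than the leading-entry wording of the cards): compare the rank of A, the rank of the augmented matrix [A|b], and the number of unknowns n.

```python
import numpy as np

# rank(A) <  rank([A|b])      -> no solutions (a row 0 = non-zero appears)
# rank(A) == rank([A|b]) == n -> a unique solution
# rank(A) == rank([A|b]) <  n -> infinitely many solutions
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([[3.0],
              [6.0]])

rA = np.linalg.matrix_rank(A)
rAb = np.linalg.matrix_rank(np.hstack([A, b]))
n = A.shape[1]
print(rA, rAb, n)   # 1 1 2 -> infinitely many solutions
```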
If there is a zero vector in the set of vectors is the set linearly independent
No, because the zero vector can take any non-zero coefficient while all the other coefficients are zero, giving a non-trivial solution to the linear independence equation.
Row reducing when solving
Don’t forget to apply the row operation to the augmented part of the matrix
Scalars of row operations
When you multiply a row by a scalar, the scalar cannot be 0.
Matrix Properties
Associativity: A(BC) = (AB)C
Distributivity: A(B+C) = AB + AC
k(AB) = (kA)B
(A+B)C = AC + BC
What does having n distinct eigenvalues mean for an n by n matrix
It means that the corresponding eigenvectors are linearly independent.
(Compare diagonalisation theorem
A is diagonalisable <=> A has n linearly independent eigenvectors)
When does AP = PD hold
If P is invertible AP = PD <=> D = P^-1AP
Relationship between diagonalisability and similarity
A matrix A is diagonalisable if and only if it is similar to a diagonal matrix, since D = P^-1AP.
Powers of stochastic and regular matrices
If P is stochastic then P^m is stochastic
If P is positive then P^m is positive
Definition of a steady state vector
Let P be the transition matrix of a Markov chain. A steady-state vector is any vector x so that Px = x, with non-negative entries summing to the total number of objects in the Markov chain. (I.e. it can sum to 1 if its entries are probabilities, or it can sum to a population.)