Prelim 2 (2.3-5.5) Flashcards
Invertible Linear Transformation Theorem
T is invertible if and only if its standard matrix A is invertible; in that case T^-1(x) = A^-1x
3 conditions for a subspace
- Includes the zero vector
- Closed under vector addition
- Closed under scalar multiplication
Theorem for Nul A
Nul A is a subspace of R^n, where n = the number of columns of A
Nul A
set of all solutions to Ax=0
Theorem of Col A
Col A is a subspace of R^m, where m = the number of rows of A
Col A
set of all linear combinations of the columns of A
to check if u is in Nul A
Au must equal 0
to check if v is in Col A
Ax=v must be consistent
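A quick numerical version of both membership tests, as a minimal NumPy sketch (the matrix A and the vectors u, v are made-up examples):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])   # made-up rank-1 matrix
    u = np.array([2.0, -1.0])    # candidate for Nul A
    v = np.array([1.0, 2.0])     # candidate for Col A

    # u is in Nul A exactly when Au = 0
    print(np.allclose(A @ u, 0))            # True

    # v is in Col A exactly when Ax = v is consistent: least squares
    # gives some x, and a zero residual means the system is consistent
    x, *_ = np.linalg.lstsq(A, v, rcond=None)
    print(np.allclose(A @ x, v))            # True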
kernel / null space
the set of all u in the vector space V such that T(u) = 0 (the zero vector of the vector space W)
range
the set of all vectors in W of the form T(x) for some x in V
Theorem for linear dependence
an indexed set {v1,…,vp} of two or more vectors, with v1 =/= 0, is linearly dependent if and only if some vj (j > 1) is a linear combination of the preceding vectors {v1…v(j-1)}
Theorem for basis of Col A
the pivot columns of A form a basis for Col A
- row reduce to echelon form and match the pivot columns back to the ORIGINAL matrix A (see the sketch below)
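A sketch of this recipe in SymPy (the matrix is an arbitrary example): rref() returns the reduced matrix along with the pivot-column indices, and the basis vectors are taken from the original A, not from the echelon form.

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 7],
                [1, 2, 4]])
    _, pivots = A.rref()               # pivot column indices: (0, 2)
    basis = [A[:, j] for j in pivots]  # take columns of the ORIGINAL A
    print(pivots)
    print(basis)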
Spanning Set Theorem
Let S be a set of vectors {v1…vp} and H be Span{v1…vp}
- If one vector in S that is a linear combination of the others is removed, the set still spans H.
- If H =/= {0}, some subset of S forms a basis for H.
The Unique Representation Theorem
Let B={b1…bn} be a basis for a vector space V. For each x in V, there exists a unique set of scalars c1…cn such that c1b1+…+cnbn=x
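Finding those scalars is just solving a linear system: with the basis vectors as the columns of a matrix B, the coordinates satisfy Bc = x. A minimal NumPy sketch with a made-up basis of R^2:

    import numpy as np

    b1, b2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
    B = np.column_stack([b1, b2])   # basis vectors as columns
    x = np.array([3.0, 2.0])
    c = np.linalg.solve(B, x)       # the unique coordinate vector [x]b
    print(c)                        # [1. 2.]  since x = 1*b1 + 2*b2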
coordinate mapping theorem
Let B = {b1…bn} be a basis for a vector space V. The coordinate mapping x -> [x]b is a one-to-one linear transformation from V onto R^n.
Change of Basis Theorem
Let B={b1…bn} and C={c1…cn} be bases for a vector space V. There exists a unique n x n matrix P(C←B) such that [x]c = P(C←B)[x]b. The columns of P(C←B) are the C-coordinate vectors of the vectors in B: P(C←B) = [ [b1]c … [bn]c ].
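One way to compute P(C←B) numerically is to solve C·P = B column by column (equivalent to row reducing [C | B]). A NumPy sketch with made-up bases of R^2:

    import numpy as np

    B = np.column_stack([[1.0, 0.0], [1.0, 1.0]])   # basis B as columns
    C = np.column_stack([[1.0, 1.0], [0.0, 1.0]])   # basis C as columns
    P = np.linalg.solve(C, B)    # P(C<-B): its columns are [b_j]c
    x_B = np.array([2.0, 1.0])   # some B-coordinate vector
    x_C = P @ x_B                # the same vector in C-coordinates
    # both coordinate vectors should name the same vector in standard coordinates
    print(np.allclose(C @ x_C, B @ x_B))   # True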
Determinant and Invertible Matrices
if det A = 0, then A is not invertible (an echelon form of A has a 0 on its main diagonal, so A can't be row equivalent to the identity matrix)
triangular matrix
- det A is the product of the entries on the main diagonal
- eigenvalues are on the main diagonal
determinant of transpose of A
= determinant of A
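Both triangular-matrix facts and the transpose fact are easy to spot-check numerically; a NumPy sketch on a random upper triangular matrix:

    import numpy as np

    A = np.triu(np.random.rand(4, 4) + 1.0)   # random upper triangular matrix
    d = np.diag(A)
    print(np.isclose(np.linalg.det(A), d.prod()))                       # det = product of diagonal
    print(np.allclose(np.sort(np.linalg.eigvals(A).real), np.sort(d)))  # eigenvalues on the diagonal
    print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))             # det A = det A^T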
Area with Transformation
Let T be a linear transformation determined by a 2x2 matrix A. If S is a parallelogram in R2, then area of T(S) = |det A| * (area of S)
Volume with Transformation
Let T be a linear transformation determined by a 3x3 matrix A. If S is a parallelepiped in R3, then volume of T(S) = |det A| * (volume of S)
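A numerical illustration of the area version (the matrix A and the parallelogram S are arbitrary examples): the parallelogram spanned by u and w has area |det [u w]|, so the scaling factor |det A| can be checked directly.

    import numpy as np

    A = np.array([[2.0, 1.0], [0.0, 3.0]])
    u, w = np.array([1.0, 1.0]), np.array([-1.0, 2.0])
    S = np.column_stack([u, w])          # parallelogram spanned by u and w
    area_S  = abs(np.linalg.det(S))
    area_TS = abs(np.linalg.det(A @ S))  # its image under T
    print(np.isclose(area_TS, abs(np.linalg.det(A)) * area_S))   # True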
eigenvector
a NONZERO VECTOR x such that Ax=λx
Eigenvectors & linear independence
If v1…vr are eigenvectors that correspond to distinct eigenvalues, then the set {v1…vr} is linearly independent.
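A direct check of the definition and the independence claim on a made-up 2x2 example (NumPy):

    import numpy as np

    A = np.array([[4.0, 1.0], [0.0, 2.0]])
    x = np.array([1.0, 0.0])             # candidate eigenvector
    lam = 4.0                            # candidate eigenvalue
    print(np.allclose(A @ x, lam * x))   # True: Ax = 4x

    # eigenvectors for the distinct eigenvalues 4 and 2 are independent
    y = np.array([1.0, -2.0])            # Ay = 2y
    print(np.linalg.matrix_rank(np.column_stack([x, y])) == 2)   # True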
The Rank Theorem
rank A + dim Nul A = n (the number of columns of A)
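The Rank Theorem is easy to verify on any example; a SymPy sketch (arbitrary matrix):

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 6]])                   # made-up 2x3 matrix
    n = A.cols                                # number of columns
    print(A.rank() + len(A.nullspace()) == n) # True: rank A + dim Nul A = n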
rank
dimension of the column space of A
Row Space (Row A)
the set of all linear combinations of the rows of A
- reduce to echelon form and take the nonzero rows
- Row A = Col(A^T), so rank A = rank(A^T) (checked in the sketch below)
- if A and B are row equivalent, they have the same row space
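A SymPy sketch of the first two bullets (arbitrary matrix): the nonzero rows of the rref form a basis for Row A, and rank A = rank(A^T).

    from sympy import Matrix

    A = Matrix([[1, 2, 1],
                [2, 4, 3],
                [3, 6, 4]])
    R, _ = A.rref()
    # the nonzero rows of an echelon form are a basis for Row A
    print([R.row(i) for i in range(R.rows) if any(R.row(i))])
    print(A.rank() == A.T.rank())   # True: rank A = rank(A^T)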
characteristic equation
det(A-λI)=0
- forms the characteristic polynomial
- has n roots, counting complex roots and multiplicities
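SymPy computes this directly (arbitrary matrix): charpoly gives the characteristic polynomial det(A - λI), and its roots are the eigenvalues with their multiplicities.

    from sympy import Matrix, symbols, roots

    lam = symbols('lam')
    A = Matrix([[2, 1],
                [1, 2]])
    p = A.charpoly(lam).as_expr()   # det(A - lam*I)
    print(p)                        # lam**2 - 4*lam + 3
    print(roots(p, lam))            # {3: 1, 1: 1}: eigenvalues with multiplicities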
similar matrices
A is similar to B if there is an invertible matrix P such that A = PBP^-1
similar matrices and eigenvalues
If A is similar to B, then they share the same characteristic polynomial and therefore the same eigenvalues with the same multiplicity.
even if two matrices have the same eigenvalues
they are not necessarily similar
Diagonalization Theorem
An n x n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors.
For A = PDP^-1, the columns of P are n linearly independent eigenvectors of A, and the diagonal entries of D are the eigenvalues corresponding, column by column, to the eigenvectors in P.
matrix A is said to be diagonalizable if
it is similar to a diagonal matrix D
Steps to diagonalize
- Find the eigenvalues.
- Plug each eigenvalue λ into (A-λI)x=0 and solve. Pick a clean x (integer entries); these are the eigenvectors.
- Check that the eigenvectors are linearly independent. (There must be n of them.)
- The eigenvectors form the columns of P.
- Construct D with the corresponding eigenvalues on the diagonal, in the same column order as P. (A worked sketch follows below.)
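A minimal end-to-end sketch of these steps in SymPy (the matrix is an arbitrary diagonalizable example); diagonalize() carries out exactly this recipe and raises an error when there aren't n independent eigenvectors:

    from sympy import Matrix

    A = Matrix([[4, 0, 0],
                [1, 3, 0],
                [0, 0, 2]])
    P, D = A.diagonalize()        # raises an error if A is not diagonalizable
    print(D)                      # eigenvalues on the diagonal
    print(A == P * D * P.inv())   # True: A = P D P^-1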
stochastic matrix
a matrix whose columns are probability vectors
probability vector
a vector whose entries are nonnegative and add up to 1
finding steady state vector for P
- Solve (P-I)x=0.
- Choose a simple basis vector (want integers).
- To turn it into a probability vector, add up its entries and divide the vector by that sum. (See the sketch below.)
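A sketch of the steady-state recipe in SymPy (P here is a made-up 2x2 stochastic matrix):

    from sympy import Matrix, Rational, eye

    P = Matrix([[Rational(7, 10), Rational(1, 5)],
                [Rational(3, 10), Rational(4, 5)]])
    x = (P - eye(2)).nullspace()[0]   # solve (P - I)x = 0
    q = x / sum(x)                    # scale so the entries sum to 1
    print(q)                          # the steady-state (probability) vector
    print(P * q == q)                 # True: Pq = q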
a stochastic matrix is regular if
some matrix power P^k contains only positive entries
Casorati matrix
| Uk    Vk    Wk   |
| Uk+1  Vk+1  Wk+1 |
| Uk+2  Vk+2  Wk+2 |
prove solutions form a basis (signals)
- the solutions must be linearly independent (e.g., not multiples of one another)
- set up (Casorati matrix)(c1, c2, c3) = (0, 0, 0)
- plug the solutions U, V, W in, set k=0 (any convenient k works), and show the Casorati matrix is invertible (its columns are linearly independent); see the sketch below
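A sketch of that check in SymPy, assuming three made-up candidate solutions Uk = 1^k, Vk = 2^k, Wk = 3^k: build the Casorati matrix at k = 0 and show it is invertible.

    from sympy import Matrix

    # made-up candidate solutions: Uk = 1^k, Vk = 2^k, Wk = 3^k
    U, V, W = (lambda k: 1**k), (lambda k: 2**k), (lambda k: 3**k)
    k = 0   # any convenient k works
    C = Matrix([[U(k),     V(k),     W(k)],
                [U(k + 1), V(k + 1), W(k + 1)],
                [U(k + 2), V(k + 2), W(k + 2)]])
    print(C.det())   # 2 (nonzero), so C is invertible and the signals form a basis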