Midterm 2 Flashcards
invertible matrix
an nxn matrix A for which there is a matrix A^-1 with AA^-1 = A^-1A = I
inverse of matrix
A^-1 where AA^-1 = I
singular
not invertible
determinant of a 2x2 matrix
ad - bc
elementary matrix (E)
matrix obtained by performing ONE elementary row operation on an IDENTITY matrix
ALL elementary matrices are invertible because all row operations are reversible
– the inverse of an elementary matrix is another elementary matrix that’ll turn E back into I
Row Equivalent Matrices
matrices that can turn into one another through a sequence of elementary row operations
nonsingular vs singular
invertible vs not invertible
inverse of a 2x2 matrix
(1/(ad-bc)) * [ d  -b
               -c   a ]
original: [ a  b
            c  d ]
if determinant is 0, then A is NOT invertible because we can’t make an inverse!!
Ax = b can be rewritten using inverses…
IF AND ONLY IF A is invertible
Ax = b
A^-1Ax = A^-1b
x = A^-1b
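The derivation above can be checked numerically. A minimal sketch, assuming NumPy is available (the matrix is just an example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])   # det = 2*3 - 1*5 = 1, so A is invertible
b = np.array([1.0, 2.0])

x = np.linalg.inv(A) @ b      # x = A^-1 b
assert np.allclose(A @ x, b)  # Ax really equals b
```

(In practice, `np.linalg.solve(A, b)` is preferred over forming the inverse, which matches the note below about row reduction.)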
row reduction method is probably easier when it comes to finding the inverse of bigger matrices!
If A is invertible, then for each b in Rn, Ax = b has the unique solution x = A^-1b
-invertible matrices have NO free variables
-so the solution has to be unique
product of nxn invertible matrices ARE
INVERTIBLE
inverse of product is…
the product of the inverses in reverse order
one elementary row operation performed on an mxn matrix A - the resulting matrix can be written as EA - what about a sequence of elementary row operations?
Ek…E2E1A
method to find the inverse
row reduce A to the identity matrix while performing the same row operations on the identity matrix at the same time
○ [A | I] => [I | A-1]
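The [A | I] => [I | A^-1] method can be sketched in code (assuming NumPy; the partial pivot step is added only for numerical stability and is itself just a row interchange):

```python
import numpy as np

def inverse_by_row_reduction(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # form [A | I]
    for col in range(n):
        # row interchange: bring the largest-magnitude entry into pivot spot
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                    # scale pivot row to 1
        for r in range(n):                       # clear the rest of the column
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                              # right half is now A^-1

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
A_inv = inverse_by_row_reduction(A)
assert np.allclose(A_inv, np.linalg.inv(A))
```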
matrix is invertible if and only if
ROW equivalent to the identity matrix == pivots in every row and column == onto and one-to-one (remember they are square matrices)
linear transformations
mapping between two vector spaces (Rn's) that preserves vector addition and scalar multiplication
invertible linear transformations
T: Rn -> Rn (so its standard matrix is square)
if there is another linear transformation S: Rn -> Rn
WHERE
S(T(x)) = x for all x in Rn
T(S(x)) = x for all x in Rn
equivalent to saying that
A^-1Ax = Ix = x
Invertible Matrix Theorem (18) given A is a square nxn matrix, then the following statements are ALL equivalent
a) A is an invertible matrix
b) the columns of A form a linearly independent set
c) the columns of A span all of Rn
d) the transformation T: Rn -> Rn defined by T(x) = Ax OR the linear transformation x |-> Ax is one-to-one
e) the transformation T: Rn -> Rn defined by T(x) = Ax is onto
OR the linear transformation x |-> Ax maps Rn onto Rn
f) A has n pivot positions
g)A is row equivalent to the n x n identity matrix
h) Ax = 0 only has the trivial solution
i) the equation Ax = b has at least one solution for each b in Rn (in fact exactly one - an invertible A gives a unique solution for each b)
j) There is an nxn matrix C such that CA = I
k) there is an nxn matrix D such that AD = I
l) A^T is an invertible matrix
m) columns of A form a basis of Rn
n) Col A = Rn
o) dimColA = n
p) rank A = n
q) Nul A = {0}
r) dim Nul A = 0
s) the number 0 is not an eigenvalue of A
t) the determinant of A is not 0
Let A and B be square matrices: if AB = I, then…
A and B are both invertible
B = A^-1 & A = B^-1
how many inverses can a matrix have
ONE - inverses of matrices are unique
How to determine if a linear transformation is invertible?
Let a matrix A represent the linear transformation
- if A is invertible, then the linear transformation is invertible!!
reflection through the y-axis is invertible but a projection is NOT
Adding 2 partitioned matrices A and B
A and B must be the same size, partitioned in the exact same way
add block by block
scaling partitioned matrices
scale block by block
multiplying 2 partitioned matrices A and B
column partition of A must equal row partition of B
OR the number of columns in partition A = number of rows in partition B
- multiply like regular matrices
so a 2x3 matrix times a 3x1 matrix will give you a 2x1 matrix
Inverses of Partitioned Matrices
[ A  B ] [ X  Y ]   [ In  0  ]
[ 0  C ] [ Z  W ] = [ 0   In ]
AX + BZ = In
AY + BW = 0
CZ = 0
CW = In
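Solving those block equations (when A and C are invertible) gives Z = 0, X = A^-1, W = C^-1, and Y = -A^-1 B C^-1. A quick numerical check, assuming NumPy (the blocks are arbitrary examples):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])  # invertible
B = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 2.0], [1.0, 1.0]])  # invertible (det = -1)

M = np.block([[A, B],
              [np.zeros((2, 2)), C]])

Ainv = np.linalg.inv(A)
Cinv = np.linalg.inv(C)
Minv = np.block([[Ainv, -Ainv @ B @ Cinv],   # X = A^-1, Y = -A^-1 B C^-1
                 [np.zeros((2, 2)), Cinv]])  # Z = 0,    W = C^-1

assert np.allclose(M @ Minv, np.eye(4))
```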
Factorization of a matrix
expressing a matrix as the product of two or more matrices
row interchanges
swapping rows when row reducing
lower triangular matrix
entries above the main diagonal are all 0s
upper triangular matrix
entries below the main diagonal are all 0s
Algorithm for LU Factorization
- Reduce A to echelon form U by a sequence of row replacement operations, if possible
- Place entries in L such that the same sequence of row operations reduces L to I
Why do we use LU Factorization
more efficient to solve a sequence of equations with the same coefficient matrix by LU factorization rather than row reducing the equations every single time
Let A be an mxn matrix that can be row reduced to echelon form WITHOUT row interchanges - what are L and U?
L is an mxm lower triangular matrix with 1s on the main diagonal
U is an mxn matrix that is an echelon form of A
Rewriting Ax = b using A = LU
Ly = b
Ux = y
Ax = b -> L(Ux) = b
How do we get U?
Row reduce A to echelon form using only row replacements that add a multiple of one row to another BELOW it
Condition for LU Factorization
A can be row reduced to echelon form using only row replacements (no row interchanges)
How do we get L?
take the row replacement operations you did on A when reducing it to echelon form (find the elementary matrices that transform A into U)
and reverse the signs of the multipliers, placing them in their respective spots in the mxm identity matrix
– replacing the 0s of the identity matrix with the row replacement multipliers
– basically: after finding all the elementary row matrices, take the inverse of their product
Ep…E1A = U
A = (Ep….E1)^-1U = LU
L = (Ep…E1)^-1
Using LU Decomposition
- Forward solve for y in Ly = b
– modify rows below using rows above
- Backward solve for x in Ux = y
– modify rows above using rows below
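The whole LU workflow can be sketched as follows (assuming NumPy; this hand-rolled factorization assumes no row interchanges are needed, matching the condition above):

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization using only row replacements (Doolittle style)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for col in range(n):
        for row in range(col + 1, n):
            m = U[row, col] / U[col, col]  # row replacement multiplier
            U[row] -= m * U[col]           # reduce A toward echelon form U
            L[row, col] = m                # record the multiplier in L
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 5.0])

L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)

y = np.linalg.solve(L, b)   # forward solve Ly = b
x = np.linalg.solve(U, y)   # back solve Ux = y
assert np.allclose(A @ x, b)
```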
Subset of Rn
any collection of vectors that are in Rn
Subspace of Rn
A subset of Rn that has 3 properties:
- the zero vector is in H
- u + v in H (closed under addition)
- cu in H (closed under scalar multiplication)
a SUBSPACE can be written as the Span{} of some set of vectors - the spanning vectors do NOT have to be linearly independent (a spanning set that IS linearly independent is a basis)
Column Space of a Matrix A (mxn)
ColA
the subspace of Rm spanned by the columns {a1…an}
– the pivot columns of A form a basis for Col A!!
Null Space of a Matrix A (mxn)
Nul A
the subspace of Rn consisting of all vectors x that solve Ax = 0
Basis for a Subspace H of Rn
A linearly independent set in H that spans H
– DOES NOT contain the zero vector (because it is linearly independent) unlike the span
Standard Basis for Rn
{e1…en}
if v1 and v2 are in Rn and H = Span {v1, v2}…
H is a subspace of Rn
v1 and v2 must be in Rn for this relation to work
For v1 ….vp in Rn, the set of all linear combinations of v1…vp
subspace of Rn
Span{v1….vp} = subspace spanned by v1….vp
is b in the column space of A?
same as asking:
is b a linear combination of the columns of A?
is b in the Span of the columns of A?
is a set of vectors a basis for a subspace H of Rn?
Asking: are the vectors linearly independent (no free variables) AND do they span H?
Subspaces vs Bases
Subspaces => Span{v1…vn}
– includes the 0 vector
Bases => {v1…vn}
Defining a basis for column Space A
number of entries for each vector = number of rows in matrix A
number of vectors in the basis = number of pivot columns
what vectors can you include in the basis?
– nonzero scalar multiples of the pivot columns also work (any linearly independent spanning set is a basis)
– the identity matrix columns only if every column is pivotal in A
Finding the Column Space
Row reduce the matrix
– row operations don’t change the linear dependence relations among the columns!!
determine the pivot columns
create a basis/subspace using the pivot columns in the original matrix NOT THE ROW REDUCED ONE
– if every column of an nxn matrix is pivotal (the columns are linearly independent), then Col A = Rn: the elementary vectors are in the column space, and linear combinations of them give any vector!
Find the Null Space
Determine all the free variables
rewrite the system in parametric vector form
vectors created in parametric vector form generate the null space
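A concrete example (assuming NumPy): for the A below, row reduction leaves one pivot (x1) and two free variables (x2, x3), so the parametric vector form x = x2·v1 + x3·v2 gives the vectors that generate Nul A:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # row 2 = 2 * row 1, so only one pivot

# from x1 = -2*x2 - 3*x3, with x2 and x3 free:
v1 = np.array([-2.0, 1.0, 0.0])  # set x2 = 1, x3 = 0
v2 = np.array([-3.0, 0.0, 1.0])  # set x2 = 0, x3 = 1

assert np.allclose(A @ v1, 0)    # both spanning vectors solve Ax = 0
assert np.allclose(A @ v2, 0)
```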
Coordinates
weights that map our vectors to get to some point in the span of the vectors
coordinate vector
suppose the set B = {b1….bp} is a basis for the subspace H. For each x in H, the coordinates of x relative to the basis B are the weights c1 … cp such that x = c1b1 + … + cpbp, and the vector in Rp
[x]_B = [ c1
          …
          cp ]
this is the coordinate vector of x (relative to B) or the B-coordinate vector of x
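Finding [x]_B amounts to solving a linear system. A sketch assuming NumPy, with a made-up basis for R2:

```python
import numpy as np

# hypothetical basis B = {b1, b2} for R2, and a vector x
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])
x = np.array([3.0, 1.0])

PB = np.column_stack([b1, b2])  # columns are the basis vectors
c = np.linalg.solve(PB, x)      # weights c with c1*b1 + c2*b2 = x

assert np.allclose(c[0] * b1 + c[1] * b2, x)
```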
Dimensions of a Subspace
dimH: the number of vectors in a basis of H
dim{0} = 0
Rank of Matrix A
dimension of the column space of A
number of pivots in A
Why we choose to write bases
Each vector in H can be written in only one way as a linear combination of the basis vectors
A plane through 0 in R3 is
TWO-dimensional
(a 3x3 matrix whose columns span the plane has 2 pivots)
A line through 0 in R2
ONE DIMENSIONAL
- it IS important that the plane/line passes through 0: a subspace must contain the zero vector
(a 2x2 matrix A having one pivot has a one-dimensional column space)
Any two choices of bases of a non-zero subspace H have…
the same dimension
(a basis is a linearly independent set that spans H)
dim Rn = n
dim(Col A) = number of pivots
dim(Nul A) = number of free variables
dim(Col A)
rank A
Rank Theorem
If A has n columns, then
rank A + dim(NulA) = n
number of pivots + number of free variables = number of columns
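A quick numerical check of the Rank Theorem (assuming NumPy; the matrix is an arbitrary example with one dependent row):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # = 2 * row 1, so rank drops by one
              [1.0, 0.0, 1.0]])
n = A.shape[1]

rank = np.linalg.matrix_rank(A)          # number of pivots
s = np.linalg.svd(A, compute_uv=False)
dim_nul = int(np.sum(s < 1e-10))         # number of free variables

assert rank + dim_nul == n               # Rank Theorem
```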
Basis Theorem
-if H is a p-dimensional subspace of Rn, any p linearly independent vectors in H are automatically a basis for H; likewise, any p vectors that span H are automatically a basis
-any two bases for a subspace have the same number of vectors (cardinality)
-there are many choices for the basis of a subspace
Aij submatrix
delete the ith row and jth column of matrix A; the remaining entries form the submatrix
Determinant for a 2x2
| a  b |
| c  d |
ad - bc
Cofactor expansion
a method to find determinants of square matrices that are 3x3 and larger!
Signs of cofactor expansion
depends on position of element aij in the matrix
Think checkerboard!! OR if i + j is even, then it’s positive!
shortcut for finding determinants
row reducing to REF, BUT be aware of the effects of row operations on determinants!!
BECAUSE if you have a triangular matrix, you can just multiply the entries on the main diagonal
Columns Operations
same effect on determinants as row operations BECAUSE The determinant of A = determinant of A^T (transpose)
Row operation effects on determinants
row replacement: nothing
row swap: multiply the determinant by negative one
row scale: multiply the determinant by the scale factor
Summary of elementary matrices’ determinants
det EA = (detE) (detA)
det E is 1 if row replacement
-1 if row exchange
r if E is scaled by r
example: if a row is divided by k, then the determinant is multiplied by 1/k
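These effects are easy to verify numerically (assuming NumPy; the matrix is an example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
d = np.linalg.det(A)                   # det A = 2*3 - 1*5 = 1

swapped = A[[1, 0]]                    # row interchange
assert np.isclose(np.linalg.det(swapped), -d)

scaled = A.copy()
scaled[0] *= 4                         # scale a row by 4
assert np.isclose(np.linalg.det(scaled), 4 * d)

replaced = A.copy()
replaced[1] -= 2 * replaced[0]         # row replacement
assert np.isclose(np.linalg.det(replaced), d)
```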
if A is invertible (terms of determinants)
det A != 0 because every column is pivotal!!
det A = (-1)^r times the product of the pivots in U when A is invertible (r = number of row interchanges)
if A is not invertible
det A = 0
at least one entry on the main diagonal of REF is 0
det A = 0 when A is not invertible
the rows are linearly dependent!! If A is a square, then the columns are ALSO linearly dependent!!
Some properties of determinants
det A = det A^T
det AB = (detA) (detB)
det A^-1 = 1/ (detA)
Parallelepiped
the analogue of a parallelogram in Rn for n > 2; in R3, the solid determined by three vectors
if A is a 2x2 matrix (area/volume)
area of the parallelogram determined by the columns of A is absolute value of determinant of A
if A is a 3x3 matrix (area/volume)
volume of the parallelepiped determined by the columns of A is the absolute value of the determinant of A
row and column swaps and replacement DON’T
affect the absolute value of the determinant
linear transformations on a parallelepiped/parallelogram
{area/volume} of T(S) = |det A| * {area/volume of S}
S: shape or figure
T: linear transformation determined by matrix A
Probability vector
vector with NONNEGATIVE entries that sum to 1
stochastic matrix
SQUARE matrix whose columns are probability vectors
Markov chain
sequence of probability vectors x0, x1, x2, … together with a stochastic matrix P such that x1 = Px0 and, in general, xk+1 = Pxk
Steady State vector
A probability vector q such that Pq = q
EVERY stochastic matrix has a steady state vector
regular stochastic matrix
stochastic matrix where some power of it will contain only STRICTLY positive entries
P^k where all entries >0
How to find the next outcome of a Markov chain?
multiply P by xk to find xk+1
How to find a steady state vector?
Pq = q
Pq - q =0
(P- I)q = 0
After finding a basis for the null space of (P - I), remember to scale the vector so its entries sum to 1
steady state vector IS a probability vector
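A sketch of the steady-state computation (assuming NumPy; the null space of P - I is taken from the SVD rather than by hand row reduction):

```python
import numpy as np

P = np.array([[0.9, 0.5],
              [0.1, 0.5]])            # stochastic: columns are probability vectors

# basis vector for Nul(P - I): the right singular vector belonging to
# the zero singular value of (P - I)
_, _, Vt = np.linalg.svd(P - np.eye(2))
q = Vt[-1]
q = q / q.sum()                       # scale so the entries sum to 1

assert np.allclose(P @ q, q)          # Pq = q: a steady state vector
assert np.all(q >= 0) and np.isclose(q.sum(), 1.0)
```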
the initial state has what effect on the long term behavior of the Markov Chain
NO EFFECT (for a REGULAR stochastic matrix, xk converges to the unique steady state vector no matter the initial state)
Eigenvector of an nxn matrix A
nonzero vector x such that Ax = λx for some scalar λ
eigenvalue of A
A scalar λ where there is a nontrivial solution x of Ax = λx
eigenspace of an eigenvalue
contains the zero vector and all eigenvectors corresponding to λ
- relation to A: the eigenspace of λ IS Nul(A - λI), so it is a subspace of Rn
Determine if a vector x is an eigenvector
1) NONZERO
2) Ax => see if the product is a scalar multiple of x
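That two-step check in code (assuming NumPy; A and x are example values):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
x = np.array([1.0, 1.0])              # candidate: nonzero, so step 1 passes

Ax = A @ x                            # step 2: is Ax a scalar multiple of x?
lam = Ax[0] / x[0]                    # the would-be eigenvalue
assert np.allclose(Ax, lam * x)       # yes: x is an eigenvector
```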
Finding the eigenvectors for an eigenvalue (e.g. λ = 7)
Solve (A - 7I)x = 0
write the solution set in parametric vector form - the nonzero vectors in it are the eigenvectors for λ = 7
finding the eigenvalue λ
Solve the characteristic equation det(A - λI) = 0 (i.e., find the λ for which (A - λI)x = 0 has a NONTRIVIAL solution)
the eigenvectors for each λ are then the nonzero vectors in the null space of (A - λI)
Eigenvalues of a triangular matrix are….
the entries on the main diagonal!!
0 is an eigenvalue of A if and only if A is …
NOT invertible!!
Ax = 0x means Ax = 0
Ax = 0 has a nontrivial solution x exactly when A is not invertible
Eigenvectors that correspond to distinct eigenvalues are linearly independent
CONVERSE IS NOT ALWAYS TRUE!!!
e.g. the identity matrix: its eigenvectors are linearly independent but all have the same eigenvalue (1)
Characteristic Polynomial
det(A - λI)
Characteristic Equation
det(A - λI) = 0
Trace
sum of the diagonal entries in a matrix
Algebraic Multiplicity of an Eigenvalue
number of times the eigenvalue shows up as roots of the characteristic polynomial
Geometric Multiplicity of an Eigenvalue
dimension of Null (A - λI) for a given eigenvalue λ
How to find eigenvalues?
Solve the characteristic equation det(A - λI) = 0 (the λ values that make (A - λI)x = 0 have a nontrivial solution)
then find the eigenvectors as the nonzero vectors in the null space of (A - λI)
finding the characteristic polynomial using trace and determinant for a 2x2 matrix
λ^2 - (trace)λ + det A
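A check of the trace/determinant shortcut against directly computed eigenvalues (assuming NumPy; A is an example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
tr = np.trace(A)                       # 4 + 3 = 7
det = np.linalg.det(A)                 # 4*3 - 1*2 = 10

roots = np.roots([1.0, -tr, det])      # roots of λ^2 - 7λ + 10
assert np.allclose(sorted(roots), sorted(np.linalg.eigvals(A)))
```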
what do row operations do to eigenvalues? can we determine eigenvalues from a matrix's reduced forms?
THEY CHANGE THEM
WE CAN'T!!
Properties of Invertible Matrices
(A^-1)^-1 = A
(AB)^-1 = B^-1A^-1
(A^T)^-1 = (A^-1)^T
Transpose Equivalence for Determinants
If A is an nxn matrix, then det A^T = det A
Eigenvectors for Distinct Eigenvalues
If v1….vr are eigenvectors that correspond to distinct eigenvalues λ1….λr of an nxn matrix A, then the set {v1…..vr} is linearly independent