Final Exam Flashcards
equation of line through p parallel to v
x = p + tv
domain of T (based on an m × n matrix)
Rn
codomain of T (based on an m × n matrix)
Rm
range of T
all images T(x)
linear transformation of an m × n matrix
- Rn → Rm
- T(u + v) = T(u) + T(v)
- T(cu) = cT(u)
standard matrix of a linear transformation T
- T(x) = x1T(e1) + x2T(e2) + … + xnT(en)
- A = [T(e1) T(e2) … T(en)]
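The formula above says each column of the standard matrix is the image of a standard basis vector. A minimal Python sketch (the map T here is a made-up example, not from the cards):

```python
def standard_matrix(T, n):
    """Build A column by column: column j is T(ej), the image of the j-th standard basis vector."""
    cols = [T([1 if i == j else 0 for i in range(n)]) for j in range(n)]
    m = len(cols[0])
    # Transpose the list of columns into a list of rows.
    return [[cols[j][i] for j in range(n)] for i in range(m)]

# Hypothetical example map T: R2 → R2, T(x1, x2) = (x1 + 2x2, 3x2).
T = lambda x: [x[0] + 2 * x[1], 3 * x[1]]
print(standard_matrix(T, 2))  # [[1, 2], [0, 3]]
```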
one-to-one and onto
- one-to-one: mapping T: Rn → Rm if each b in Rm is the image of at most one x in Rn (T(x) = 0 has only the trivial solution)
- onto: mapping T: Rn → Rm if each b in Rm is the image of at least one x in Rn
invertible matrix
for a square matrix A, another matrix C exists for which AC = CA = I
properties of invertible matrices
- x = A-1b
- (AB)-1 = B-1A-1
algorithm for finding A-1
row reduce the augmented matrix [A | I]
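This row reduction (Gauss-Jordan elimination with partial pivoting) can be sketched in plain Python; the 2 × 2 matrix is an arbitrary example:

```python
def inverse(A):
    """Invert an n×n matrix by row-reducing the augmented matrix [A | I]."""
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivot: bring the row with the largest entry in this column up.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry is 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Row replacement: eliminate every other entry in this column.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The left half is now I; the right half is A^-1.
    return [row[n:] for row in M]

A = [[2.0, 1.0], [5.0, 3.0]]   # detA = 1
print(inverse(A))              # ≈ [[3, -1], [-5, 2]]
```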
Invertible Matrix Theorem
- A is invertible
- A is row equivalent to the n × n identity matrix
- A has n pivot positions
- Ax = 0 has only the trivial solution
- the columns of A form a linearly independent set
- x → Ax maps Rn onto Rn
- Ax = b has at least one solution for each b in Rn
- x → Ax is one-to-one
- the columns of A span Rn
- there exists an n × n matrix C such that CA = I
- there exists an n × n matrix D such that AD = I
- AT is invertible
properties of determinants
- elementary row operations
	- row replacement: detB = detA
- interchange: detB = -detA
- scalar multiplication: detB = kdetA
- a square matrix A is invertible if and only if detA does not equal 0
- for n × n matrices, detAT = detA
- detAB = (detA)(detB)
Cramer’s Rule
- for an invertible n × n matrix A and any b in Rn, the unique solution x of Ax = b has entries given by xi = detAi(b) ÷ detA, where Ai(b) is A with column i replaced by b
- replace column i with b, find the determinant of the new matrix, and divide by the determinant of the original matrix to get the solution entry xi for the variable corresponding to the replaced column
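A sketch of this recipe for a 2 × 2 system in pure Python (the system itself is a made-up example):

```python
def det2(M):
    """Determinant of a 2×2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cramer2(A, b):
    """Solve Ax = b for a 2×2 matrix A via Cramer's rule."""
    d = det2(A)
    if d == 0:
        raise ValueError("detA = 0: Cramer's rule does not apply")
    xs = []
    for i in range(2):
        # Replace column i of A with b, take the determinant, divide by detA.
        Ai = [[b[r] if c == i else A[r][c] for c in range(2)] for r in range(2)]
        xs.append(det2(Ai) / d)
    return xs

# 3x + 2y = 7, x + 4y = 9  →  x = 1, y = 2
print(cramer2([[3, 2], [1, 4]], [7, 9]))  # [1.0, 2.0]
```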
formula for A-1
A-1 = adjA ÷ detA, where adjA = transpose of the matrix of cofactors
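For a 2 × 2 matrix the adjugate can be written out directly, since adj [[a, b], [c, d]] = [[d, -b], [-c, a]]. A minimal sketch of the formula:

```python
def inverse2(A):
    """A^-1 = adjA ÷ detA for a 2×2 matrix; adj [[a,b],[c,d]] = [[d,-b],[-c,a]]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

print(inverse2([[2, 1], [5, 3]]))  # [[3.0, -1.0], [-5.0, 2.0]]
```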
calculating area/volume using matrices
- for a 2 × 2 matrix A, the area of the parallelogram determined by the columns of A is |detA|
- for a 3 × 3 matrix A, the volume of the parallelepiped determined by the columns of A is |detA|
- one vertex must be at the origin (translate the figure first if necessary)
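A quick numeric check of both facts in pure Python (the column vectors are arbitrary examples):

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):
    # Cofactor expansion along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Parallelogram with sides (3, 0) and (1, 2) as the columns of A, one vertex at the origin.
print(abs(det2([[3, 1], [0, 2]])))                   # area = 6
# Parallelepiped with edges (1,0,0), (0,2,0), (0,0,5) as columns.
print(abs(det3([[1, 0, 0], [0, 2, 0], [0, 0, 5]])))  # volume = 10
```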
subspace of vector space V
- subset of vector space V
- zero vector of V contained in H (H contains {0})
- H closed under vector addition (for every u and v in H, u + v also in H)
- H closed under multiplication by scalars (for each u in H, cu also in H)
nulA
- set of all solutions of the homogeneous equation Ax = 0
- subspace of Rn
- spanned by the vectors in the parametric vector form of the solution set of Ax = 0
colA
- set of all linear combinations of the columns of A
- subspace of Rm
linear transformations of vector spaces (and subspaces)
- linear transformation: a rule T from a vector space V to a vector space W that assigns to each vector x in V a unique vector T(x) in W and preserves the vector space operations (ie, T(u + v) = T(u) + T(v) and T(cu) = cT(u) for all u, v in V and scalars c)
kernel and range
- kernel: nulT (set of all u in V such that T(u) = 0 in W)
- range: set of all vectors in W of the form T(x) for some x in V
linear independence in vectors
- an indexed set of vectors (eg, {v1, …, vp}) in which v1 does not equal zero is linearly dependent if and only if some vj with j > 1 is a linear combination of the preceding vectors
basis
- B ({b1 … bp}) is a basis for the subspace H if
- B is linearly independent
- H = span{b1 … bp} (ie, subspace spanned by B coincides with H)
bases for nulA and colA
- nulA = set of solutions to the homogeneous equation Ax = 0
- colA = pivot columns of A (row-reduce to determine)
change-of-coordinates matrix
- x = PB[x]B
- PB-1x = [x]B
rowA
set of all linear combinations of the row vectors in A
rank
- dimension of colA
- rankA + dim nulA = n (n = number of columns of A)
change-of-basis matrix
- for bases B = {b1 … bn} and C = {c1 … cn} of a vector space V, there exists a unique matrix PB→C such that [x]C = PB→C[x]B
- to find PB→C: [c1 … cn | b1 … bn] ~ [I | PB→C]
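The row-reduction recipe above amounts to solving Cmat·x = bj for each column bj, since column j of PB→C is [bj]C. A small Python sketch with made-up bases of R2 and a hypothetical 2 × 2 solver as the helper:

```python
def solve2(C, b):
    """Solve the 2×2 system Cx = b by Cramer's rule (small hypothetical helper)."""
    d = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    return [(b[0] * C[1][1] - C[0][1] * b[1]) / d,
            (C[0][0] * b[1] - b[0] * C[1][0]) / d]

# Made-up bases of R2: columns of Cmat are c1, c2; columns of Bmat are b1, b2.
Cmat = [[1, 0], [1, 1]]   # c1 = (1, 1), c2 = (0, 1)
Bmat = [[2, 1], [1, 1]]   # b1 = (2, 1), b2 = (1, 1)
# Column j of PB→C is [bj]C, i.e. the solution of Cmat · x = bj.
cols = [solve2(Cmat, [Bmat[0][j], Bmat[1][j]]) for j in range(2)]
P = [[cols[j][i] for j in range(2)] for i in range(2)]
print(P)  # [[2.0, 1.0], [-1.0, 0.0]]
```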
eigenvectors and eigenvalues
- eigenvector - nonzero vector x for an n × n matrix A such that Ax = λx for some scalar λ
- eigenvalue - scalar λ for which a nontrivial solution x exists such that Ax = λx
determining eigenvectors/eigenvalues for a matrix A
- potential eigenvectors given: multiply matrix A by potential eigenvector; if the result is a scalar multiple of the potential eigenvector, then it is an eigenvector of A
- potential eigenvalue given: check whether (A - λI)x = 0 has a nontrivial solution (equivalently, det(A - λI) = 0)
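The first check above (potential eigenvector given) can be sketched in pure Python; the matrix and vectors are made-up examples:

```python
def is_eigenvector(A, x, tol=1e-9):
    """Return (True, λ) if Ax is a scalar multiple of the nonzero vector x."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    # Read the candidate eigenvalue off the first nonzero entry of x.
    k = next(i for i, xi in enumerate(x) if xi != 0)
    lam = Ax[k] / x[k]
    ok = all(abs(Ax[i] - lam * x[i]) < tol for i in range(n))
    return ok, lam

A = [[4, -2], [1, 1]]                # made-up example matrix
print(is_eigenvector(A, [2, 1]))     # (True, 3.0): A(2, 1) = (6, 3) = 3·(2, 1)
print(is_eigenvector(A, [1, 0])[0])  # False: A(1, 0) = (4, 1) is not a multiple of (1, 0)
```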
basis for eigenspaces of A
- compute A - λI
- row reduce the augmented matrix for (A - λI)x = 0
- write the solution set in parametric vector form, if necessary; the spanning vectors form the basis
distinct eigenvalues theorem
if v1 … vr are eigenvectors that correspond to distinct eigenvalues λ1 … λr of an n × n matrix A, then the set {v1 … vr} is linearly independent
the characteristic equation
- det(A - λI) = 0
- a scalar λ is an eigenvalue of an n × n matrix A if and only if λ satisfies the characteristic equation
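For a 2 × 2 matrix the characteristic equation is the quadratic λ2 - (trA)λ + detA = 0, which can be solved directly. A sketch (real eigenvalues only; the matrices are made-up examples):

```python
import math

def eigenvalues2(A):
    """Solve det(A - λI) = λ² - (trA)λ + detA = 0 for a 2×2 matrix A."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = tr * tr - 4 * det
    if disc < 0:
        return []  # eigenvalues are complex
    s = math.sqrt(disc)
    return sorted([(tr - s) / 2, (tr + s) / 2])

print(eigenvalues2([[4, -2], [1, 1]]))  # [2.0, 3.0]
print(eigenvalues2([[0, -1], [1, 0]]))  # [] (rotation by 90°: no real eigenvalues)
```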
similar matrices
- A and B are similar if an invertible matrix P exists such that A = PBP-1, or equivalently B = P-1AP
- similar matrices share a characteristic polynomial and hence eigenvalues (including multiplicities)
criteria for diagonalization
an n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors
diagonalization
- find the eigenvalues of A
- find n linearly independent eigenvectors of A
- construct P using the linearly independent eigenvectors
- construct D from the corresponding eigenvalues
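The steps above can be checked numerically: with P and D built from the eigenpairs, PDP-1 should reproduce A. A sketch using a made-up 2 × 2 example:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Made-up example: A = [[4, -2], [1, 1]] has eigenpairs λ=2, v=(1, 1) and λ=3, v=(2, 1).
P = [[1, 2], [1, 1]]       # step 3: eigenvectors as the columns of P
D = [[2, 0], [0, 3]]       # step 4: matching eigenvalues on the diagonal of D
Pinv = [[-1, 2], [1, -1]]  # P^-1 (here detP = -1)
print(matmul(matmul(P, D), Pinv))  # [[4, -2], [1, 1]] — recovers A = PDP^-1
```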
u º v
u1v1 + u2v2 + … + upvp
length of v (||v||)
√(v º v) = √(v1v1 + v2v2 + … + vpvp)
||cv||
|c| × ||v||
distance between u and v (dist(u,v))
||u - v|| = √((u1 - v1)2 + (u2 - v2)2 + … + (up - vp)2)
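The three formulas above (dot product, length, distance) in one short Python sketch; the vectors are arbitrary examples:

```python
import math

def dot(u, v):
    """u º v = u1v1 + u2v2 + … + upvp."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    """||v|| = √(v º v)."""
    return math.sqrt(dot(v, v))

u, v = [7, 1], [3, 4]
print(dot(u, v))                            # 25
print(norm(v))                              # 5.0
print(norm([a - b for a, b in zip(u, v)]))  # dist(u, v) = √(4² + (-3)²) = 5.0
```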
orthogonal vectors
- [dist(u, -v)]2 = ||u||2 + ||v||2 + 2uºv, so dist(u, v) = dist(u, -v) if and only if u and v are orthogonal
- uºv = 0
orthogonal complements
- x is in W┴ if and only if x is orthogonal to every vector in a set that spans W
- W┴ is a subspace of Rn
- (RowA)┴ = NulA
- (ColA)┴ = NulAT
find a unit vector u in the same direction as v
- compute ||v||
- multiply v by (1 ÷ ||v||)
- check that ||u|| = 1
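A sketch of the three steps in Python (the vector is an arbitrary example):

```python
import math

def normalize(v):
    """Unit vector in the direction of v: v scaled by 1 ÷ ||v||."""
    length = math.sqrt(sum(x * x for x in v))  # step 1: compute ||v||
    return [x / length for x in v]             # step 2: multiply v by 1/||v||

u = normalize([3, 4])
print(u)  # [0.6, 0.8]
print(math.sqrt(sum(x * x for x in u)))  # step 3: check ||u|| ≈ 1.0
```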
orthogonal set
if S = {u1, … up} contains nonzero vectors in Rn all orthogonal to one another
properties of orthogonal sets
- if S = {u1, …, up}, an orthogonal set of nonzero vectors in Rn, then S is linearly independent and hence is a basis for the subspace spanned by S
orthogonal basis
for y in a subspace W with orthogonal basis {u1, …, up}, y = c1u1 + … + cpup, where cj = (y º uj) ÷ (uj º uj)
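A sketch of computing the weights cj for a made-up orthogonal basis of R2:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Made-up orthogonal basis of R2: u1 = (1, 1), u2 = (1, -1); note u1 º u2 = 0.
basis = [[1, 1], [1, -1]]
y = [3, 1]
weights = [dot(y, u) / dot(u, u) for u in basis]
print(weights)  # [2.0, 1.0]  →  y = 2·u1 + 1·u2
```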
orthogonal projection of y onto L
projLy = [(y º u) ÷ (u º u)]u
distance from y to L
||y - (projLy)||
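Both formulas (the projection onto L and the distance from y to L) in one sketch, using y = (7, 6) and u = (4, 2) as an arbitrary example:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj_onto_line(y, u):
    """projLy = [(y º u) ÷ (u º u)]u, the orthogonal projection of y onto span{u}."""
    c = dot(y, u) / dot(u, u)
    return [c * ui for ui in u]

y, u = [7, 6], [4, 2]
p = proj_onto_line(y, u)
print(p)  # [8.0, 4.0]
residual = [a - b for a, b in zip(y, p)]  # y - projLy = (-1, 2)
print(math.sqrt(dot(residual, residual)))  # distance from y to L: √5 ≈ 2.236
```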
orthonormal sets and orthonormal bases
- orthonormal set: orthogonal set of unit vectors
- orthonormal basis: orthonormal set which spans a subspace W
express