Midterm 3 Flashcards

1
Q

Diagonal matrix

A

matrix where the only non-zero entries are on the diagonal

2
Q

similar matrices

A

a matrix A is similar to a matrix D if A = PDP^-1
P is an invertible matrix
A and D have the same eigenvalues and determinants!!!
- they have the same characteristic polynomial and therefore the same eigenvalues

3
Q

If two matrices have the same eigenvalues, does that necessarily mean they are similar to each other?

A

FALSE, only the converse is true: similar matrices always have the same eigenvalues, but matrices with the same eigenvalues need not be similar (e.g. the 2x2 identity and [[1,1],[0,1]] both have only the eigenvalue 1 but are not similar)

4
Q

Diagonalization

A

Splitting up matrix A into a diagonal matrix D and an invertible matrix P
- very useful for computing A^k with large ks
A^k = PD^kP^-1

5
Q

Algebraic multiplicity

A

the number of times an eigenvalue is repeated as a root of the characteristic polynomial

6
Q

Geometric multiplicity

A

the number of linearly independent eigenvectors for a given eigenvalue
the dimension of Nul(A - λI), the eigenspace, for a specific λ

7
Q

Singular

A

NOT INVERTIBLE
free variables, linearly dependent columns
Nonsingular = invertible!

8
Q

Diagonalization Formula

A

A = PDP^-1
P: its columns are the linearly independent eigenvectors of A
D: a diagonal matrix with the corresponding eigenvalues (in the same order)
Allows us to compute A^k for large k
A^k = PD^kP^-1

9
Q

The Diagonalization Theorem

A

An nxn matrix A is diagonalizable if and only if A has n linearly independent eigenvectors
A and P are both nxn matrices (same size)
A is diagonalizable if and only if there are enough eigenvectors to form a basis of Rn : an eigenvector basis

10
Q

Steps to Diagonalize a Matrix

A
  1. find the eigenvalues using the characteristic polynomial
    det(A - λI) = 0
  2. find the linearly independent eigenvectors of A
    (A - λI)v = 0, plug in each λ
    - solve the null space in parametric vector form
    IF the total number of linearly independent eigenvectors is NOT equal to the number of columns in A, then A is not diagonalizable
  3. Construct P from the eigenvectors
  4. Construct D using the corresponding eigenvalues (see the numpy sketch below)
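A minimal numpy sketch of these steps (numpy is an assumed tool here, and the matrix A is just a made-up example; np.linalg.eig does steps 1-2 at once):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-2: eigenvalues and eigenvectors in one call.
eigenvalues, eigenvectors = np.linalg.eig(A)

P = eigenvectors            # Step 3: eigenvectors as the columns of P
D = np.diag(eigenvalues)    # Step 4: matching eigenvalues on the diagonal of D

# Diagonalizable only if there are n linearly independent eigenvectors (P invertible).
assert np.linalg.matrix_rank(P) == A.shape[0]

# Check A = P D P^-1 and use it for a large power: A^k = P D^k P^-1.
P_inv = np.linalg.inv(P)
assert np.allclose(A, P @ D @ P_inv)
k = 10
assert np.allclose(P @ np.diag(eigenvalues**k) @ P_inv, np.linalg.matrix_power(A, k))
```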
11
Q

Theorem - Eigenvalues and Diagonalizable

A

An nxn matrix with n distinct eigenvalues is diagonalizable
- if v1 … vn are eigenvectors corresponding to n distinct eigenvalues of a matrix A, then {v1 … vn} is linearly independent, therefore A is diagonalizable
BUT it is not necessary for an nxn matrix to have n distinct eigenvalues to be diagonalizable

12
Q

Theorem - Matrices whose Eigenvalues are Not Distinct

A

The geometric multiplicity of an eigenvalue λ must be less than or equal to its algebraic multiplicity
A matrix is diagonalizable IF AND ONLY IF the sum of the dimensions of the eigenspaces (Nul(A - λI)) equals n (the number of columns)
In that case the total geometric multiplicity equals n, which forces the geometric multiplicity of each eigenvalue to equal its algebraic multiplicity
and the characteristic polynomial of A factors completely into linear factors - the roots can be real or complex

13
Q

DIAGONALIZABILITY AND INVERTIBILITY

A

they have no direct relationship with each other
- a matrix can be diagonalizable but not invertible because it can have an eigenvalue of 0
- a matrix can be invertible but not diagonalizable, for example
[1, 1]
[0, 1]

14
Q

Complex number

A

a + bi
i = sqrt(-1)

15
Q

Complex eigenvalue

A

eigenvalue that is a complex number a + bi
if b = 0, then λ is a real eigenvalue

16
Q

Complex eigenvector

A

an eigenvector with complex entries, corresponding to a complex eigenvalue

17
Q

Complex number Space ℂn

A

the space of all vectors with n complex entries

18
Q

ℂ2

A

complex vector space whose vectors have 2 entries
the entries are complex numbers (a real entry is allowed, since a real number is a complex number with b = 0)

19
Q

Conjugate of a complex number

A

the conjugate for (a+bi) is (a-bi)

20
Q

Complex conjugate of a vector x

A

written xbar (x with a bar on top of it); its entries are the complex conjugates of the entries of x

21
Q

Re x

A

the real parts of a complex vector x
an entry CAN be 0

22
Q

Im x

A

the imaginary parts of a complex vector x
an entry can be 0

23
Q

We can identify ℂ with R2

A

a + bi <-> (a,b)

24
Q

we can add and multiply complex numbers

A

Add: like normal, (2-3i)+(-1+i) = 1-2i, similar to matrix addition
Multiply: FOIL, using i^2 = -1 (no matrix multiplication): (2-3i)(-1+i) = -2+2i+3i-3i^2 = 1+5i

25
Q

absolute value of a complex number a + bi

A

sqrt(a^2 + b^2)

26
Q

we can write complex numbers in polar form

A

(a,b) = a + ib = r(cosφ + isinφ)
a is the real part and b is the imaginary part; r = sqrt(a^2 + b^2) and φ is the argument

27
Q

Argument of lambda = a + bi

A

the angle φ between the positive Re x axis and the point (a, b), where a is plotted on the Re x axis and b on the Im x axis

28
Q

Finding complex eigenvalues and complex eigenvectors

A
  1. det(A - λI) = 0 to get the eigenvalues λ; the complex roots are the complex eigenvalues
  2. Solve (A-λI)x = 0 for x to get an eigenvector
    should get one “free variable”
  3. Find the other eigenvalue/eigenvector pair
    - it is the conjugate pair: λbar with eigenvector xbar (see the numpy sketch below)
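A minimal numpy sketch (assumed example matrix; numpy returns the complex eigenvalues and eigenvectors directly instead of hand-solving):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [2.0,  1.0]])          # rotation-dilation form [[a, -b], [b, a]] with a = 1, b = 2

# Step 1: the complex roots of det(A - λI) = 0.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                   # approximately [1+2j, 1-2j]: a conjugate pair

# Steps 2-3: the eigenvectors come as a conjugate pair matching the eigenvalues.
v1, v2 = eigenvectors[:, 0], eigenvectors[:, 1]
print(np.allclose(v2, np.conj(v1)))              # True: second is the conjugate of the first
print(np.allclose(A @ v1, eigenvalues[0] * v1))  # True: Av = λv
```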
29
Q

Re x and Im x

A

xbar = vector whose entries are the complex conjugates of the entries in x
for example:
x = (3-i, i, 2) => (3,0,2) + i(-1,1,0)
Re x is the first, Im x is the second
xbar = (3+i, -i, 2)

30
Q

Properties of Complex Conjugate Matrixes

A

you can take the conjugates first and then multiply, for products like rx, Bx, BC, rB
(r being a scalar, uppercase being matrices, x and y being vectors)
conjugate of (x + y) = xbar + ybar
if A is a real matrix, the conjugate of Av is equal to Avbar
Im(x·xbar) = 0 (a number times its conjugate is real)
(xy)bar = xbar ybar

31
Q

Complex Eigenvalues and Complex Eigenvector Come in Pairs!!!

A

for a real matrix, there is no such thing as an odd number of complex (non-real) eigenvalues - they come in conjugate pairs λ and λbar, with eigenvectors v and vbar

32
Q

Rotation Dilation Matrix

A

matrix in the form of
[a, -b]
[b, a]
the eigenvalues are: a + bi, a - bi
the length (modulus) of the eigenvalue (r) is sqrt(a^2+b^2)
the angle (argument) of the eigenvalue is tan^-1(b/a)

33
Q

Euler’s Formula

A

e^(iφ) = cosφ + isinφ
multiplying two complex numbers multiplies the lengths and adds the angles:
(r1e^(iφ1))(r2e^(iφ2)) = r1*r2e^(i(φ1+φ2))
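A quick numerical check of both facts with numpy (the angles and radii are assumed example values):

```python
import numpy as np

phi = 0.7
print(np.allclose(np.exp(1j * phi), np.cos(phi) + 1j * np.sin(phi)))   # True: e^(iφ) = cosφ + isinφ

# Multiplying complex numbers multiplies the radii and adds the angles.
r1, phi1 = 2.0, 0.5
r2, phi2 = 3.0, 1.1
z1, z2 = r1 * np.exp(1j * phi1), r2 * np.exp(1j * phi2)
print(np.allclose(z1 * z2, (r1 * r2) * np.exp(1j * (phi1 + phi2))))    # True
```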

34
Q

Complex numbers and Polynomials

A

if lambda is a complex root of the characteristic polynomial (a real polynomial), then lambda bar is also a root of that real polynomial
lambda bar is an eigenvalue of A as well, with an eigenvector of vbar

35
Q

Inner Product or Dot Product

A

a scalar
u·v = u^Tv
= u1v1 + u2v2 + … + unvn

36
Q

Vector Length

A

||v|| = sqrt(v*v) = sqrt(v1^2 + v2^2 +…+vn^2)

37
Q

Unit Vector

A

vector whose length is 1

38
Q

Vector normalization

A

Dividing a nonzero vector by its length to make it a unit vector
(1/||v||)*v

39
Q

Distance between two vectors

A

dist(u,v) = || u - v||

40
Q

Orthogonal vectors

A

Two vectors are orthogonal if their dot product equals 0
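A small numpy sketch covering the last few cards (length, normalization, distance, orthogonality); the vectors u and v are assumed examples:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([-4.0, 3.0])

print(np.linalg.norm(u))          # 5.0 = sqrt(u . u), the length of u
print(np.sqrt(u @ u))             # same value

u_hat = u / np.linalg.norm(u)     # normalization: (1/||u||) u
print(np.linalg.norm(u_hat))      # 1.0, a unit vector in the same direction

print(np.linalg.norm(u - v))      # dist(u, v) = ||u - v||

print(u @ v)                      # 0.0, so u and v are orthogonal
```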

41
Q

Orthogonal complements

A

the set of all vectors that are orthogonal to every vector in a subspace W
whether it looks like a line, a plane, etc. depends on the dimension of W; if W is the row space of a matrix, its orthogonal complement is that matrix's null space

42
Q

For a subspace to be in Rn

A

a subspace (contains the zero vector and is closed under addition and scalar multiplication) whose vectors each have n entries
R1 means that the vectors have one entry
e.g. the span of just [1]

43
Q

Dot Product vs Cross Product

A

dot product gives you a number while cross product gives you a vector

44
Q

Theorem: Dot Product Properties

A

u·v = v·u (Symmetry)
(u + v)·w = u·w + v·w (linearity), and likewise w·(u + v) = w·u + w·v
c is a scalar:
(cu)·v = c(u·v) = u·(cv)
- can find the dot product of the two vectors first and then multiply by the scalar
u·u >= 0 (Positivity!)
u·u = 0 only if u = 0

45
Q

Vector Length Properties

A

vector length is always nonnegative (and is 0 only for the zero vector)
||cv|| = |c| ||v||
||cv||^2 = c^2 ||v||^2

46
Q

Normalizing a Vector

A

v(1/||v||) gives u, a unit vector
u is in the same direction as v BUT its magnitude is now 1 (the length changed, the direction did not)

47
Q

Finding the Distance between Two Vectors

A
  1. subtract the two vectors (u-v)
  2. find the length of the resultant vector
    || u - v ||
48
Q

Orthogonality Basics

A

Two vectors are orthogonal = two vectors are perpendicular to each other
||u - (- v) || = || u - v ||
u * v = 0
Zero vector is orthogonal to every vector in Rn

49
Q

The Pythagorean theorem

A

two vectors are orthogonal if and only if || u + v||^2 = ||u||^2 + ||v||^2

50
Q

Orthogonal Complements Basics

A

set of vectors where each vector is orthogonal to a subspace W
Orthogonal Complement of W = W⊥

51
Q

W⊥

A

a vector x is in W⊥ if and only if x is orthogonal to every vector in a set that spans W
- so check the dot product of x with each spanning vector to prove orthogonality
W⊥ is a subspace of Rn just like W
- vectors in both have n entries
- they do not necessarily have the same dimension: dim(W⊥) = n - dim(W)
for example, (Row A)⊥ = Nul A, so dim(Row A) + dim(Nul A) = n

52
Q

Theorem: Perps of SubSpaces

A

Let A be an mxn matrix:
(Row A)⊥ = Null A
(Col A) ⊥ = Null A^T

Proof:
Av = 0 means the dot product of every row of A with v equals 0, so v is orthogonal to every row of A; therefore v is in (Row A)⊥ exactly when v is in Nul A (the solution set of Av = 0)

53
Q

Rank Theorem Expanded and Row A

A

Row A: the space spanned by the rows of matrix A
- the nonzero (pivot) rows of an echelon form of A form a basis for it
dim(Row A) = dim(Col A)
- the number of pivot rows equals the number of pivot columns
Row A^T = Col A
THEREFORE, with n = the number of columns of A:
dim(Col A) + dim(Nul A) = n
dim(Row A) + dim(Nul A) = n

54
Q

Orthogonal Set

A

a set of vectors in Rn where each pair of distinct vectors from the set is orthogonal
ui·uj = 0 whenever i and j don’t equal each other

55
Q

Orthogonal Basis

A

basis for a subspace W that is also an orthogonal set

56
Q

Orthogonal Projection

A

projecting a vector y onto a line L to get the closest vector to y in L
yhat = proj(_L)y =
(y·u/u·u)u
With L being the subspace spanned by u
(subspaces must include the 0 vector)

57
Q

Orthonormal set

A

orthogonal set where every vector is a unit vector

58
Q

Orthonormal basis

A

basis for a subspace W that is also an orthonormal set

59
Q

Orthogonal Matrix

A

SQUARE matrix whose columns form an orthonormal set

60
Q

Theorem: Orthogonal Sets and Linear Independence

A

if S = {u1…up} is an orthogonal set of nonzero vectors in Rn, then S is linearly independent and is a basis for the subspace spanned by S

61
Q

All Orthogonal Sets are Linearly Independent Sets

A

TRUE only if there are no zero vectors
BUT not all linearly independent sets are orthogonal
REMEMBER to omit the zero vector for an orthogonal set!

62
Q

Theorem: Finding the weights for a linear combination of an orthogonal basis

A

Let {u1…up} be an orthogonal basis for a subspace W of Rn
for every y in W, the weights in the linear combination
y = c1u1+…+cpup
are given by
cj = (y·uj/uj·uj)
FOR ORTHOGONAL BASES
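A minimal numpy sketch of this formula, using an assumed orthogonal basis of R^2 so that y can be fully reconstructed:

```python
import numpy as np

u1 = np.array([2.0, 1.0])          # assumed orthogonal basis: u1 . u2 = 0
u2 = np.array([-1.0, 2.0])
y  = np.array([7.0, 4.0])

c1 = (y @ u1) / (u1 @ u1)          # cj = (y . uj) / (uj . uj)
c2 = (y @ u2) / (u2 @ u2)

print(np.allclose(y, c1 * u1 + c2 * u2))   # True: y = c1 u1 + c2 u2
```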

63
Q

How to find an Orthogonal Projection

A

yhat = proj(L)y = (yu/uu)u
y = yhat + z (z is the component of y orthogonal to u)

64
Q

Orthogonal Projections can be written as a Linear Combination of a Vector’s Components

A

yhat = proj_W y = (y·u1/u1·u1)u1 + (y·u2/u2·u2)u2, where W is spanned by the orthogonal set {u1, u2}

65
Q

Orthonormal Sets vs Orthogonal

A

all orthonormal sets are orthogonal while not all orthogonal sets are orthonormal

66
Q

Theorem: Transpose of matrix with Orthonormal Columns

A

An mxn matrix U has orthonormal columns if and only if U^TU = I
-the transpose of a matrix with orthonormal columns multiplied by the original matrix ALWAYS results in the identity matrix (even if U is NOT square!)
Proof: the dot product of a unit column with itself is its length squared, which is 1, and the dot product of two distinct orthonormal columns is 0 (see the numpy check below)
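A quick numpy check with an assumed 3x2 matrix U whose columns are orthonormal (so U is not square, yet U^TU = I):

```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # a rotation: its columns are orthonormal
U = R[:, :2]                                           # keep two orthonormal columns (3x2)

print(np.allclose(U.T @ U, np.eye(2)))   # True:  U^T U = I even though U is not square
print(np.allclose(U @ U.T, np.eye(3)))   # False: U U^T is not I when U is not square
```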

67
Q

A^TA where A is a matrix with orthogonal columns

A

produces a diagonal matrix whose diagonal entries are each column's length squared

68
Q

Theorem: Properties of a Matrix with orthonormal columns

A

||Ux|| = ||x||
- the linear mapping x -> Ux preserves length
(Ux)·(Uy) = x·y
(Ux)·(Uy) = 0 if and only if x and y are orthogonal to each other
- preserves orthogonality

69
Q

Difference between Orthogonal Matrix and a Matrix with Orthonormal Columns

A

Orthogonal Matrices must be square

70
Q

U^-1 = U^T

A

The inverse of an orthogonal matrix is its transpose
Orthogonal matrices have linearly independent columns

71
Q

Determinant of an Orthogonal Matrix

A

if A is an orthogonal matrix, then detA is equal to 1 or -1
converse is NOT TRUE

72
Q

Orthogonal Projection vs orthogonal component of y onto W

A

yhat vs z
z = y - yhat

73
Q

Best Approximation

A

||y-yhat|| < ||y - v||
the perpendicular distance between a vector y and the subspace it is projected onto is the shortest distance from y to that subspace
ANY distance between a vector and a subspace that is not perpendicular to the subspace is automatically not the shortest distance

74
Q

Properties of an orthogonal projection onto Rn

A

Given a vector y and a subspace W in Rn, there is a vector yhat in W that is the UNIQUE vector in W for which y-yhat is orthogonal to W
yhat is the unique vector in W closest to y

75
Q

Theorem: Orthogonal Decomposition Theorem

A

Let W be a subspace of Rn; each y in Rn can be written uniquely in the form y = yhat + z where yhat is in W and z is in W⊥

76
Q

if {u1..up} is any orthogonal basis of W then yhat is

A

the usual projection formula: yhat = (y·u1/u1·u1)u1 + … + (y·up/up·up)up
we assume that W is not the zero subspace because everything projected onto the zero subspace is just the zero vector

77
Q

Properties of Orthogonal Projections

A

if y is in W = Span {u1…up} then projwy = y
if y is already in the subspace then projecting it onto the same subspace does nothing

78
Q

The Best Approximation Theorem

A

||y-yhat|| < ||y-v||
yhat is the closest point in W to y

79
Q

Theorem: Orthonormal Basis and Projections

A

if {u1….up} is an orthonormal basis for a subspace W in Rn, then projwy = (y·u1)u1 + (y·u2)u2 + … + (y·up)up
projwy = UU^Ty for all y, where U = [u1 … up]
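A minimal numpy sketch of projwy = UU^Ty, using an assumed orthonormal basis for a plane W in R^3:

```python
import numpy as np

u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # assumed orthonormal basis for W
u2 = np.array([0.0, 0.0, 1.0])
U = np.column_stack([u1, u2])
y = np.array([3.0, 1.0, 2.0])

y_hat = (y @ u1) * u1 + (y @ u2) * u2         # (y.u1)u1 + (y.u2)u2
print(np.allclose(y_hat, U @ U.T @ y))        # True: same as U U^T y

z = y - y_hat                                 # the component of y orthogonal to W
print(np.allclose([z @ u1, z @ u2], 0.0))     # True: z is orthogonal to W
```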

80
Q

Theorem: Matrix with orthonormal columns vs orthogonal matrix

A

if U is an nxp matrix with orthonormal columns and W is the column space of U, then U^TUx = I_p x = x for all x in Rp
UU^Ty = projwy for all y in Rn
if U is an nxn matrix with orthonormal columns, then U is an orthogonal matrix
UU^Ty = Iy = y for all y in Rn

81
Q

Gram-Schmidt

A

producing an orthogonal/orthonormal basis for any nonzero subspace of Rn

82
Q

The actual algorithm for Gram-Schmidt

A

v1 = x1
v2 = x2 - (x2 *v1/v1 *v1)v1
v3 = x3 - (x3 *v1/v1 *v1)v1 - (x3 *v2/v2 *v2)v2
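A minimal numpy implementation of these formulas (the input columns x1, x2, x3 are an assumed linearly independent example):

```python
import numpy as np

def gram_schmidt(X):
    """Columns of X are linearly independent; returns an orthogonal basis as columns."""
    V = np.zeros_like(X, dtype=float)
    for j in range(X.shape[1]):
        v = X[:, j].astype(float)
        for i in range(j):                      # subtract the projection onto each earlier v_i
            vi = V[:, i]
            v = v - (X[:, j] @ vi) / (vi @ vi) * vi
        V[:, j] = v
    return V

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
V = gram_schmidt(X)
print(np.round(V.T @ V, 10))    # off-diagonal entries are 0: the columns are orthogonal
```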

83
Q

{v1…vp} is an orthogonal basis for W… what about its Span in relation to original matrix

A

the spans are the same
span{v1..vp} = span{x1..xp}

84
Q

what is required for gram-schmidt

A

a linearly independent set to start from (a basis {x1..xp} for the subspace)
any nonzero subspace has an orthogonal basis because an ordinary basis {x1..xp} is always available to run Gram-Schmidt on

85
Q

Orthonormal Bases

A

normalize all vectors in the orthogonal basis

86
Q

QR Factorization

A

If A is an mxn matrix with linearly independent columns, then A can be factored as A = QR
Q: an mxn matrix whose columns form an orthonormal basis for Col A
R: an nxn upper triangular invertible matrix with positive entries on its diagonal

87
Q

How to QR Factorize

A

Use Gram-Schmidt to find the columns of Q and, if needed, normalize them to make them orthonormal
THEN solve A = QR, i.e. R = Q^TA

if the columns of A were linearly dependent, then R would not be invertible
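A minimal numpy sketch; here numpy's built-in QR routine stands in for the Gram-Schmidt work (A is an assumed example with linearly independent columns):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

Q, R = np.linalg.qr(A)                    # Q: 3x2 orthonormal columns, R: 2x2 upper triangular

print(np.allclose(A, Q @ R))              # True: A = QR
print(np.allclose(Q.T @ Q, np.eye(2)))    # True: columns of Q are orthonormal
print(np.allclose(R, Q.T @ A))            # True: R = Q^T A
# Note: numpy does not promise positive diagonal entries in R; if a diagonal entry is
# negative, flip the sign of that row of R and of the corresponding column of Q.
```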

88
Q

General least-squares problem

A

Finding the x that makes ||b-Ax|| as small as possible

89
Q

Normal Equations

A

A^TAx = A^Tb

90
Q

Difference between x and xhat

A

x just refers to a general vector of unknowns in Ax = b, while xhat is the specific solution that solves the least-squares problem/normal equations

91
Q

least squares error

A

distance from b to Axhat where xhat is the least-squares solution to b
||b-Axhat||

92
Q

why do we solve least-square problems

A

want to find a close enough solution to Ax = b when it is an inconsistent system

93
Q

if A is mxn and b is in Rm, a least-squares solution of Ax=b is an xhat in Rn such that

A

||b-Axhat|| <= ||b-Ax|| for all x in Rn
there can be more than one xhat achieving this minimum when the columns of A are linearly dependent
if Ax = b is already consistent, then ||b-Axhat|| = 0

94
Q

Solution of the General Least-Squares Problem

A

Use normal equations!!

95
Q

Theorem: Least Square Solutions and Normal EQ

A

the set of least-squares solutions of Ax=b coincides with the nonempty set of solutions of the normal equations A^TAx = A^Tb
POSSIBLE TO HAVE more than one least-squares solution
- when there is a free variable, aka the columns of A are linearly dependent

96
Q

Theorem: Logically equivalent statements

A
  1. the equation Ax = b has a unique least-squares solution for each b in Rn
  2. the columns of A are linearly independent
  3. the matrix A^TA is invertible

When these statements are true, the least-squares solution xhat is given by xhat = (A^TA)^-1 A^Tb
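A minimal numpy sketch of the unique-solution case (A and b are assumed examples, and the columns of A are linearly independent):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)     # solve the normal equations A^T A x = A^T b
print(x_hat)

print(np.allclose(x_hat, np.linalg.inv(A.T @ A) @ A.T @ b))      # same as (A^T A)^-1 A^T b
print(np.allclose(x_hat, np.linalg.lstsq(A, b, rcond=None)[0]))  # matches numpy's least squares
print(np.linalg.norm(b - A @ x_hat))                             # the least-squares error
```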

97
Q

Least Squares Error

A

||b-Axhat||

98
Q

Theorem: Finding the Least-Squares Solution using QR Factorization

A

given an mxn matrix A with linearly independent columns, let A = QR be a QR factorization
the equation Ax = b has a unique least-squares solution, given by
xhat = R^-1Q^Tb
Rxhat = Q^Tb
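A minimal numpy sketch, reusing an assumed A and b; solving Rxhat = Q^Tb avoids forming A^TA:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

Q, R = np.linalg.qr(A)
x_hat = np.linalg.solve(R, Q.T @ b)       # R is square, upper triangular, and invertible

print(np.allclose(x_hat, np.linalg.solve(A.T @ A, A.T @ b)))   # True: matches the normal equations
```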

99
Q

if b is orthogonal to the columns of A, what can we say about the least-squares solution

A

if b is orthogonal to the columns of A, then the projection of b onto Col A is 0
so a least-squares solution xhat of Ax=b satisfies Axhat = 0

100
Q

Least-Squares Lines

A

y = B0 + B1x

101
Q

Residual

A

the difference between the actual y-value and the predicted y-value

102
Q

Least-Squares Line

A

line of best fit for a set of data
minimizes the sum of the squares of the residuals; its coefficients come from the least-squares solution

103
Q

Objective in Least-Squares Lines

A

finding the B0 and B1 that define the least-squares line, plugging in the x-values from the data points
using the Betas as your variables
can use the normal equations to solve (see the numpy sketch below)
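A minimal numpy sketch of fitting y = B0 + B1x to assumed data points via the normal equations:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])            # assumed data points (x_i, y_i)
y = np.array([1.0, 2.0, 2.0, 4.0])

X = np.column_stack([np.ones_like(x), x])     # design matrix: column of 1s (B0), column of x (B1)

beta = np.linalg.solve(X.T @ X, X.T @ y)      # normal equations X^T X B = X^T y
b0, b1 = beta
print(b0, b1)                                 # intercept and slope of the least-squares line

print(y - X @ beta)                           # residuals: actual y minus predicted y
```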

104
Q

Mean-Deviation Procedure

A
  1. find the average of all the x-values, xbar
  2. calculate x* = x - xbar for each data point
  3. then solve XB = y but use the x* values
105
Q

General Linear Model

A

y = XB + epsilon
solve the normal equations
X^TXB = X^Ty