test 2 vocab Flashcards
closure property for vector addition
x+y in V for all x,y in V
closure property for scalar multiplication
ax in V for all a in F and x in V
subspace
Let S be a nonempty subset of a vector space V over F. If S is itself a vector space over F using the same addition and scalar multiplication operations, then S is said to be a subspace of V.
trivial subspace
Given a vector space V, the set Z = {0} containing only the zero vector is a subspace of V because (A1) and (M1) are trivially satisfied.
parallelogram law
Vector addition in R2 and R3 is easily visualized using the parallelogram law, which states that for two vectors u and v, the sum u+v is the vector defined by the diagonal of the parallelogram they determine.
space spanned by S
For a set of vectors S = {v1, v2, …, vr}, the subspace span(S) = {a1v1 + a2v2 + … + arvr} is generated by forming all linear combinations of vectors from S.
If V is a vector space such that V = span(S), we say S is a spanning set for V. In other words, S spans V whenever each vector in V is a linear combination of vectors from S.
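A quick numpy sketch of the spanning idea (the vectors and the in_span helper are examples of mine, not from the card): b is in span(S) exactly when the least-squares residual of Sa = b is zero.

```python
import numpy as np

def in_span(S, b, tol=1e-10):
    """True if b is a linear combination of the columns of S.

    Solves min ||S a - b|| by least squares and checks whether the
    residual is (numerically) zero."""
    a, *_ = np.linalg.lstsq(S, b, rcond=None)
    return bool(np.linalg.norm(S @ a - b) < tol)

# v1, v2 span the xy-plane inside R^3 (example vectors)
S = np.column_stack([[1, 0, 0], [0, 1, 0]]).astype(float)
print(in_span(S, np.array([2.0, 3.0, 0.0])))  # True: b = 2*v1 + 3*v2
print(in_span(S, np.array([0.0, 0.0, 1.0])))  # False: e3 is not in the plane
```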
Sum of subspaces
If X and Y are subspaces of a vector space V, then the sum of X and Y is defined to be the set of all possible sums of vectors from X with vectors from Y. The sum X+Y is again a subspace of V, and if SX and SY span X and Y, then SX U SY spans X+Y.
Range
For a linear function f mapping Rn into Rm, let R(f) denote the range of f. That is, R(f) = {f(x) | x in Rn} is the subset of Rm consisting of all "images" f(x) as x varies freely over Rn.
linear spaces
subspaces of Rm
range of a matrix
For A in Rmxn, the range of A is defined to be the subspace R(A) of Rm that is the range of the linear function f(x) = Ax; i.e., R(A) = {Ax | x in Rn}.
image space of A
Because R(A) is the set of all "images" Ax of vectors x in Rn under transformation by A, some people call R(A) the image space of A.
nullspace
N(f) - the set of all vectors x that are mapped to 0, i.e., {x | f(x) = 0}.
nullspace of A
N(A)- the set of all solutions to the homogeneous system Ax=0
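A numpy sketch of N(A) (the matrix and the SVD-based helper are my own illustration, not from the card): the right singular vectors belonging to zero singular values span the nullspace.

```python
import numpy as np

def nullspace(A, tol=1e-12):
    """Orthonormal basis for N(A) = {x : Ax = 0}, taken from the SVD:
    right singular vectors whose singular values are (numerically) zero."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T  # columns span the nullspace

# A has rank 1, so its nullspace in R^3 is 2-dimensional (example matrix)
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
N = nullspace(A)
print(N.shape[1])               # 2
print(np.allclose(A @ N, 0.0))  # True: every basis vector solves Ax = 0
```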
left-hand nullspace
N(A^T) - the set of all solutions to the left-hand homogeneous system y^T A = 0^T.
linearly independent set
A set of vectors {v1, v2, …, vn} is said to be a linearly independent set whenever the only solution for the scalars ai in the homogeneous equation a1v1 + a2v2 + … + anvn = 0 is the trivial solution a1 = a2 = … = an = 0. These are the sets that contain no dependency relationships. The empty set is always linearly independent.
linearly dependent set
A set S is linearly dependent whenever there is a nontrivial solution for the ai's (at least one ai is not 0). These are the sets in which at least one vector is a linear combination of the others.
diagonally dominant
The magnitude of each diagonal entry exceeds the sum of the magnitudes of the off-diagonal entries in the corresponding row.
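A short checker for this row condition (the function name and example matrices are mine, assuming numpy):

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, in every row."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag))

# Example matrices: each row of the first has 4>2, 5>3, 3>1
print(is_diagonally_dominant([[4, 1, 1], [1, 5, 2], [0, 1, 3]]))  # True
print(is_diagonally_dominant([[1, 2], [3, 4]]))                   # False
```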
maximal linearly independent subset of columns
a linearly independent set containing as many columns from A as possible. The basic columns in A always constitute one solution
basis
A linearly independent spanning set for a vector space V.
least squares solution
any vector that provides a minimum value for (Ax-b)^T(Ax-b)
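A numpy sketch of the least-squares card (the data is an example of mine): np.linalg.lstsq minimizes (Ax−b)^T(Ax−b), and the same x solves the normal equations A^T A x = A^T b.

```python
import numpy as np

# Overdetermined, inconsistent system Ax = b (example data)
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# lstsq minimizes (Ax - b)^T (Ax - b) directly
x, *_ = np.linalg.lstsq(A, b, rcond=None)

# Equivalent route: solve the normal equations A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(x, x_normal))  # True
```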
euclidean norm
||x|| = (x^T x)^(1/2) for real vectors and ||x|| = (x*x)^(1/2) for complex vectors
normalize
we normalize x by setting u=x/||x||
distance
The distance between vectors u and v is ||u - v||; it can be visualized with the aid of the parallelogram law.
triangle inequality
||x+y|| is less than or equal to ||x|| + ||y||
unit spheres
S1 is an octahedron, S2 is a sphere, and S-infinity is a cube.
norm
for a real or complex vector space V, a function ||*|| mapping V into R that satisfies
||x|| >= 0, and ||x|| = 0 if and only if x = 0
||ax|| = |a| ||x|| for all scalars a
||x+y|| <= ||x|| + ||y||
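The three axioms above can be spot-checked numerically for the 1-, 2-, and infinity-norms (random test vectors are my own example, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5)
a = -3.7  # an arbitrary scalar

for p in (1, 2, np.inf):
    norm = lambda v: np.linalg.norm(v, p)
    assert norm(x) >= 0                                # nonnegativity
    assert np.isclose(norm(a * x), abs(a) * norm(x))   # homogeneity
    assert norm(x + y) <= norm(x) + norm(y) + 1e-12    # triangle inequality
print("all three axioms hold for p = 1, 2, inf")
```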
frobenius matrix norm
||A||F^2 = trace(A*A)
matrix norm
a function ||*|| from the set of all complex matrices into R that satisfies properties similar to those of a vector norm, and also ||AB|| <= ||A|| ||B|| for all compatible matrices
induces
A vector norm ||*|| that is defined on Cp for p = m, n induces a matrix norm on Cmxn by setting ||A|| = max over ||x|| = 1 of ||Ax||.
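For the Euclidean vector norm, the induced matrix 2-norm is the largest singular value, which is what np.linalg.norm(A, 2) computes. A sketch with my own random example, checking ||Ax|| <= ||A|| ||x|| on unit vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))  # example matrix

# Induced 2-norm = largest singular value of A
induced = np.linalg.norm(A, 2)
print(np.isclose(induced, np.linalg.svd(A, compute_uv=False)[0]))  # True

# No unit vector x can make ||Ax|| exceed the induced norm
xs = rng.standard_normal((4, 1000))
xs /= np.linalg.norm(xs, axis=0)  # normalize each column
print(bool(np.all(np.linalg.norm(A @ xs, axis=0) <= induced + 1e-12)))  # True
```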
A- inner product
If Anxn is a nonsingular matrix, then <x y> = x*A*Ay is an inner product for Cnx1.
standard inner product for matrices
<A B> = trace(A^T B) for real matrices and <A B> = trace(A*B) for complex matrices
A-norm
The norm generated by the A-inner product on Cnx1 is ||x||A = <x x>^(1/2) = ||Ax||2; the norm generated by the standard inner product for matrices is <A A>^(1/2) = trace(A*A)^(1/2) = ||A||F.
parallelogram identity
||x+y||^2 +||x-y||^2 = 2(||x||^2 +||y||^2)
orthogonal
In an inner product space V, two vectors x and y are orthogonal whenever <x y> = 0. For real vectors, x^T y = 0, and for complex vectors, x*y = 0.
angle
In a real inner product space, the radian measure of the angle between nonzero vectors x and y is defined by cos(theta) = <x y>/(||x|| ||y||).
orthonormal set
A set {u1, u2, …, un} is orthonormal whenever ||ui|| = 1 for each i and ui is perpendicular to uj for all i not equal to j. Every orthogonal set of nonzero vectors is linearly independent.
Fourier expansions
If B = {u1, u2, …, un} is an orthonormal basis for an inner product space V, then each x in V can be expressed as
x = <u1 x>u1 + <u2 x>u2 + … + <un x>un. The scalars <ui x> are the coordinates of x with respect to B, and they are called the Fourier coefficients.
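A small numpy illustration of the Fourier expansion (the rotated basis and the vector x are examples of mine): the coefficients <ui x> reconstruct x exactly.

```python
import numpy as np

# An orthonormal basis for R^2: the standard basis rotated by t radians
t = 0.3
u1 = np.array([np.cos(t), np.sin(t)])
u2 = np.array([-np.sin(t), np.cos(t)])

x = np.array([2.0, -1.0])  # example vector

# Fourier coefficients <u_i, x> = u_i^T x
c1, c2 = u1 @ x, u2 @ x

# The expansion x = <u1 x>u1 + <u2 x>u2 recovers x
print(np.allclose(c1 * u1 + c2 * u2, x))  # True
```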
Gram-Schmidt sequence
If B = {x1, x2, …, xn} is a basis for a general inner product space S, then the Gram-Schmidt sequence defined by
u1 = x1/||x1|| and, for k = 2, …, n, uk = (xk - sum over i<k of <ui xk>ui), normalized to unit length, is an orthonormal basis for S.
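The Gram-Schmidt sequence translates directly into code (this implementation and the example basis are mine, assuming numpy and the standard inner product):

```python
import numpy as np

def gram_schmidt(X):
    """Apply the Gram-Schmidt sequence to the columns of X.

    u1 = x1/||x1||; uk = xk minus its projections onto u1..u_{k-1},
    then normalized.  Returns U with orthonormal columns."""
    U = np.zeros_like(X, dtype=float)
    for k in range(X.shape[1]):
        v = X[:, k].astype(float)
        for i in range(k):
            v = v - (U[:, i] @ X[:, k]) * U[:, i]  # subtract projection
        U[:, k] = v / np.linalg.norm(v)
    return U

# Linearly independent columns (example basis for R^3)
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
U = gram_schmidt(X)
print(np.allclose(U.T @ U, np.eye(3)))  # True: columns are orthonormal
```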
QR factorization
Every matrix Amxn with linearly independent columns can be uniquely factored as A = QR, in which the columns of Qmxn are an orthonormal basis for R(A) and Rnxn is an upper triangular matrix with positive diagonal entries. The factorization is the complete road map of the Gram-Schmidt process, because the columns of Q = (q1|q2|…|qn) are the result of applying the GS procedure to the columns of A = (a1|a2|…|an), and R is given by rij = <qi aj> for i <= j (and rij = 0 for i > j).
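In practice np.linalg.qr produces this factorization (the example matrix is mine); numpy may return negative diagonal entries in R, so a sign flip recovers the unique positive-diagonal form described on the card:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])  # linearly independent columns (example)

# Reduced QR: Q is 3x2 with orthonormal columns, R is 2x2 upper triangular
Q, R = np.linalg.qr(A)

# Flip signs so the diagonal of R is positive (the unique factorization)
signs = np.sign(np.diag(R))
Q, R = Q * signs, signs[:, None] * R

print(np.allclose(Q @ R, A))            # True: A is recovered
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: columns of Q orthonormal
print(bool(np.all(np.diag(R) > 0)))     # True: positive diagonal
```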
permutation
A permutation p = (p1,p2,…,pn) of the numbers (1,2,…,n) is simply any rearrangement.
Parity
The parity of a permutation is unique; i.e., if a permutation p can be restored to natural order by an even (odd) number of interchanges, then every other sequence of interchanges that restores p to natural order must also be even (odd).
sign of a permutation
+1: if p can be restored to natural order by an even number of interchanges,
-1: if p can be restored to natural order by an odd number of interchanges.
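The sign can be computed by counting inversions, since each interchange of adjacent elements changes the inversion count by exactly one, so the parities agree (this helper is my own sketch, using 0-based positions):

```python
def permutation_sign(p):
    """Sign of a permutation p of 0..n-1: +1 if restorable to natural
    order by an even number of interchanges, -1 if odd.  Counting
    inversions (pairs appearing out of order) gives the same parity."""
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p))
                       if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

print(permutation_sign([0, 1, 2]))  # +1 (already in natural order)
print(permutation_sign([1, 0, 2]))  # -1 (one interchange restores order)
print(permutation_sign([2, 0, 1]))  # +1 (two interchanges restore order)
```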
minor determinant
of Am×n is defined to be the determinant of any k × k submatrix of A
Schur complements
The matrices D − CA^(−1)B and A − BD^(−1)C
cofactor
of An×n associated with the (i, j)-position is defined as Åij = (−1)^(i+j) Mij,
where Mij is the (n−1) × (n−1) minor obtained by deleting the ith row and jth column of A. The matrix of cofactors is denoted by Å.
adjugate
of An×n is defined to be adj(A) = Å^T, the transpose of the matrix of cofactors. If A is nonsingular, then
A^−1 = Å^T/det(A) = adj(A)/det(A).
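A numpy sketch of the cofactor/adjugate construction (the helper and the example matrix are mine), verifying the inverse formula above:

```python
import numpy as np

def adjugate(A):
    """adj(A): transpose of the matrix of cofactors C_ij = (-1)^(i+j) M_ij,
    where M_ij is the minor from deleting row i and column j."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)  # minor
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])  # example matrix, det(A) = 1
print(adjugate(A))          # [[3, -1], [-5, 2]] (as floats)
print(np.allclose(np.linalg.inv(A), adjugate(A) / np.linalg.det(A)))  # True
```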
unitary matrix
a complex matrix U(nxn) whose columns (or rows) constitute an orthonormal basis for C^n.
orthogonal matrix
a real matrix P(nxn) whose columns (or rows) constitute an orthonormal basis for R^n.