Definitions And Basic Rules Flashcards
Stuff to memorise
Vector Space
A set V equipped with the following data:
D1: Addition operation (the sum of two vectors in V is also in V)
D2: Contains a zero vector
D3: Scalar multiplication (k·v must be in V for every scalar k)
8 vector space rules
- v+w = w+v (COMMUTATIVE)
- (v+w)+u = v+(w+u) (ASSOCIATIVE)
- 0+v = v+0 = v (ZERO VECTOR IS ADDITIVE IDENTITY)
- k·(v+w) = k·v + k·w, k a scalar (DISTRIBUTIVE)
- (k+l)·v = k·v + l·v, k and l scalars (DISTRIBUTIVE)
- k·(l·v) = (kl)·v, k, l scalars (ASSOCIATIVE)
- 1·v = v (1 IS MULTIPLICATIVE IDENTITY)
- 0·v = 0 (the scalar 0 times any vector is the zero vector)
Additive inverse
(-v) := (-1)v
Subspace
A subset of a vector space V that satisfies all the vector space rules (see vector space card), with the operations inherited from V
Degree of a polynomial
The highest power of x appearing with a non-zero coefficient
Degree of a trig polynomial
The largest n such that cos(nx) or sin(nx) appears with a non-zero coefficient
Linear combination
A linear combination of a finite collection v1,v2,…,vn of vectors in a vector space V is a vector of the form a1v1+a2v2+…+anvn where a1, a2, …, an are scalars
Span
A list of vectors spans a vector space if every vector in the vector space can be written as a linear combination of the vectors in the list
Linearly independent
v1, …, vn is linearly independent if a1v1 + … + anvn = 0 has only the trivial solution a1 = a2 = … = an = 0
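The trivial-solution test can be checked numerically. A sketch (not part of the cards, assuming vectors in R^n as NumPy arrays): the list is independent exactly when the matrix with the vectors as columns has full column rank.

```python
import numpy as np

# Independence test: a1*v1 + ... + an*vn = 0 has only the trivial solution
# iff the matrix whose columns are the vi has rank n (full column rank).
def linearly_independent(vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

print(linearly_independent([np.array([1, 0]), np.array([0, 1])]))  # True
print(linearly_independent([np.array([1, 2]), np.array([2, 4])]))  # False
```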
list is linearly independent iff …
every vector in the list is non-zero and none can be expressed as a linear combination of the others
Linear combination of preceding vectors proposition
v1, v2, …, vn is a list of vectors in V. The following are equivalent:
- The list is linearly dependent
- Either v1 = 0, or for some r in {2, 3, …, n}, vr is a linear combination of the vectors v1, v2, …, v(r-1)
Bumping off proposition
Suppose l1, l2, …, lm is a linearly independent list of vectors in V and s1, s2, …, sn spans V. Then m ≤ n
Basis
A list of vectors that is linearly independent and spans V
Finite-dimensional
A vector space is finite dimensional if it has a basis e1, e2, …, en (you can count the basis vectors)
Dimension of V
dim V := the number of vectors in a basis for V
Invariance of dimension theorem
If e1, e2, …, en and f1, f2, …, fm are bases of a vector space V, then m=n
Bases give coordinates proposition
A list of vectors e1, e2, …, en is a basis for V iff every vector in V can be written as a linear combination v = a1e1+a2e2+…+anen in precisely one way
Sifting algorithm
- Start with a list of vectors v1, v2, …,vn in a vector space V
- Consider each vector vi consecutively. If vi=0, or if it is a linear combination of the preceding vectors, remove it
- At the end, the resulting list will be linearly independent, by the linear combination of preceding vectors proposition
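The sifting steps above can be sketched in NumPy (an illustration for vectors in R^n, not part of the cards): appending a vector keeps the candidate list independent, and so keeps it, exactly when the rank of the column matrix equals the list length — a zero vector or a combination of earlier kept vectors fails this check.

```python
import numpy as np

# Sifting: walk the list, dropping each vector that is zero or a linear
# combination of the vectors already kept.
def sift(vectors):
    kept = []
    for v in vectors:
        candidate = kept + [v]
        A = np.column_stack(candidate)
        if np.linalg.matrix_rank(A) == len(candidate):
            kept.append(v)  # v is independent of the kept vectors
    return kept

vs = [np.array([1, 0, 0]), np.array([2, 0, 0]),
      np.array([0, 1, 0]), np.array([1, 1, 0])]
print(len(sift(vs)))  # 2: the 2nd and 4th vectors are combinations of earlier ones
```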
Coordinate vector of v with respect to the basis B
v = a1b1 + a2b2 + … + anbn
[v]_B := [a1, a2, …, an]^T in Col(n)
Change of basis matrix from B to C
B and C are both bases for V. The change of basis matrix from B to C is the matrix whose columns are the coordinate vectors of the B basis with respect to the C basis
P(C←B)
Change of basis theorem
B and C are bases of V.
[v]_C = P(C←B)[v]_B
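A numerical sketch of the change of basis theorem (an assumed example in R^2, not from the cards): with the B and C basis vectors as columns of MB and MC in standard coordinates, P(C<-B) solves MC @ P = MB, and applying it converts B-coordinates to C-coordinates.

```python
import numpy as np

MB = np.array([[1.0, 1.0],
               [0.0, 1.0]])  # basis B: b1 = (1,0), b2 = (1,1)
MC = np.array([[2.0, 0.0],
               [0.0, 1.0]])  # basis C: c1 = (2,0), c2 = (0,1)

# Columns of P(C<-B) are the C-coordinates of the B basis vectors.
P = np.linalg.solve(MC, MB)

v_B = np.array([3.0, 4.0])           # coordinates of v with respect to B
v_std = MB @ v_B                     # v in standard coordinates
v_C = P @ v_B                        # theorem: [v]_C = P(C<-B)[v]_B
assert np.allclose(MC @ v_C, v_std)  # both describe the same vector
print(v_C)
```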
Linear map
A function T: V –> W satisfying:
T(v+v’) = T(v) + T(v’)
T(kv) = kT(v)
where v, v' are vectors in V and k is a scalar
Sufficient to define a linear map on a basis proposition
Suppose B = {e1, e2, …, en} is a basis of V and let W be a vector space with w1, w2, …, wn in W. There exists a unique linear map T: V –> W such that T(ei) = wi, i = 1, …, n
T after S
The composite of T with S.
If S: U –> V and T: V –> W, then the composite T after S is the function u |–> T(S(u))
Isomorphism
A linear map T: V–>W is an isomorphism of vector spaces if there exists a linear map S:W–>V such that
T after S = id(W) and S after T = id(V)
We say that S is the inverse of T
Uniqueness of inverse
If T:V–>W is a linear map and S,S’:W–>V are inverses of T, then S=S’
Isomorphic
Two vector spaces V and W are isomorphic if there exists an isomorphism T: V –> W
Theorem: V and W are isomorphic iff…
dim V = dim W (V and W finite-dimensional)
The matrix of T with respect to the bases B and C
The matrix whose columns are the coordinate vectors of T(bi) with respect to the basis C
[T](C←B)
[T(v)]_C = …
[T](C←B)[v]_B
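The matrix-of-a-linear-map construction can be sketched numerically (an assumed example in R^2, not from the cards): each column of [T](C<-B) is the C-coordinate vector of T applied to a B basis vector, and multiplying it by [v]_B gives [T(v)]_C.

```python
import numpy as np

M  = np.array([[1.0, 2.0],
               [3.0, 4.0]])           # T in standard coordinates (assumed example)
MB = np.eye(2)                        # basis B: the standard basis
MC = np.array([[1.0, 1.0],
               [0.0, 1.0]])           # basis C: c1 = (1,0), c2 = (1,1)

# Column j of [T](C<-B) is the C-coordinate vector of T(bj):
T_CB = np.linalg.solve(MC, M @ MB)

v_B = np.array([1.0, 1.0])
lhs = T_CB @ v_B                          # [T](C<-B)[v]_B
rhs = np.linalg.solve(MC, M @ (MB @ v_B)) # [T(v)]_C computed directly
assert np.allclose(lhs, rhs)
print(lhs)
```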
Functoriality of a matrix on a linear map theorem
Let S:U–>V and T:V–>W be linear maps between finite dimensional vector spaces. Let B, C, D be bases for U, V and W respectively.
[T after S](D←B) = [T](D←C)[S](C←B)
Corollary of Functoriality of a matrix on a linear map theorem
T is invertible iff [T](C←B) is invertible, in which case [T^(-1)](B←C) = ([T](C←B))^(-1)
Kernel
T:V –> W
Ker(T) := {v in V : T(v) = the zero vector in W}
(null space)
Image
T:V –> W
Im(T) := {w in W : w=T(v) for some v in V}
(image/ column space)
Nullity
Nullity(T) := dim(Ker(T))
Rank
Rank(T) := dim(Im(T))
Rank-nullity theorem
Nullity(T) + Rank(T) = dim(V), for T: V –> W
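The rank-nullity theorem can be illustrated numerically (a sketch with an assumed matrix, not a proof): for T given by a 3×4 matrix A, the domain is R^4, rank is the dimension of the column space, and nullity can be read off from the singular values of A.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],   # multiple of row 1, so rank drops
              [0.0, 1.0, 0.0, 1.0]])
rank = np.linalg.matrix_rank(A)

# Nullity = dim of null space = (number of columns) - (number of
# non-negligible singular values):
s = np.linalg.svd(A, compute_uv=False)
nullity = A.shape[1] - int((s > 1e-10).sum())

print(rank, nullity, rank + nullity)  # rank + nullity = 4 = dim of the domain
```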
Injective
f: X –> Y is injective if f(x) = f(x') ==> x = x'
Surjective
For every y in Y, there is an x in X such that f(x) = y
T:V –> W then T injective iff
Ker(T) = {the zero vector in V}
T:V –> V where V is finite dimensional
T injective iff
T surjective
T: V–>W is an isomorphism iff
T is injective and surjective
Eigenvalue
T: V–>V
λ is an eigenvalue of T if there exists a non-zero vector v such that T(v) = λv
λ is an eigenvalue of T iff… (3 statements)
T − λ·id is not injective
T − λ·id is not surjective
T − λ·id is not an isomorphism
Eigenvector
A non-zero vector v such that T(v) = λv for some λ in the reals
Determinant of a linear operator
det(T) := det([T](B←B)) for any basis B of V (independent of the choice of B)
Characteristic polynomial
det(λ·id − T)
Eigenspace
E(λ) := {v in V : T(v) = λv} ⊆ V
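The eigenvalue cards can be checked numerically. A sketch (assumed example matrix, not from the cards): for T given by a matrix A, each eigenpair satisfies T(v) = λv, and each eigenvalue is a root of the characteristic polynomial det(λ·id − T).

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
evals, evecs = np.linalg.eig(A)

# Each column of evecs is a non-zero vector with A v = lambda v:
for lam, v in zip(evals, evecs.T):
    assert np.allclose(A @ v, lam * v)

# The characteristic polynomial det(lambda*I - A) vanishes at each eigenvalue:
for lam in evals:
    assert abs(np.linalg.det(lam * np.eye(2) - A)) < 1e-9

print(np.sort(evals.real))
```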