Chapter 4: Real vector spaces Flashcards
Vector space
A set of objects, together with two operations, addition and scalar multiplication, that satisfies the ten vector space axioms; in particular, the set is closed under both operations.
Closure under addition
If u and v are objects in V, then u + v is in V.
Closure under scalar multiplication
If k is any scalar and u is any object in V, then ku is in V.
Subspace
A subset of a vector space V that is itself a vector space under the addition and scalar multiplication defined on V.
If W is a set of one or more vectors in a vector space V, then W is a subspace of V iff:
- If u and v are vectors in W, then u + v is in W.
- If k is any scalar and u is any vector in W, then ku is in W.
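A minimal numpy spot-check of these two closure conditions; the plane W and the sample vectors are made up for illustration (passing checks suggest, but do not prove, closure):

```python
import numpy as np

# Hypothetical subspace W = {x in R^3 : Ax = 0}, here the plane x - 2y + z = 0.
A = np.array([[1.0, -2.0, 1.0]])

def in_W(x, tol=1e-10):
    # x belongs to W when it satisfies the homogeneous equation.
    return np.all(np.abs(A @ x) < tol)

u = np.array([2.0, 1.0, 0.0])   # in W: 2 - 2(1) + 0 = 0
v = np.array([0.0, 1.0, 2.0])   # in W: 0 - 2(1) + 2 = 0
assert in_W(u) and in_W(v)
assert in_W(u + v)               # closure under addition
assert in_W(3.5 * u)             # closure under scalar multiplication
```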
Create a new subspace from known subspaces:
If W1, W2, … Wr are subspaces of a vector space V, then the intersection of these subspaces is also a subspace of V.
Span of S
The subspace of a vector space V consisting of all possible linear combinations of the vectors in a nonempty set S.
We say that S spans that subspace.
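Testing whether a vector b belongs to span(S) amounts to asking whether the system S c = b is consistent, where the vectors of S are the columns of a matrix. A numpy sketch with made-up vectors:

```python
import numpy as np

# Sample spanning set S = {s1, s2} in R^3.
s1 = np.array([1.0, 0.0, 1.0])
s2 = np.array([0.0, 1.0, 1.0])
S = np.column_stack([s1, s2])

b = np.array([2.0, 3.0, 5.0])              # equals 2*s1 + 3*s2
c, *_ = np.linalg.lstsq(S, b, rcond=None)  # least-squares coefficients
print(np.allclose(S @ c, b), c)            # True [2. 3.] -> b is in span(S)
```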
A subspace of Rn obtained from a matrix A
The solution set of the homogeneous system Ax = 0 in n unknowns is a subspace of Rn.
Theorem 4.2.5 on the equality of the spans of two sets of vectors.
If S = {v1, v2, …, vr} and S’ = {w1, w2, …, wk} are nonempty sets of vectors in a vector space V, then span{v1, v2, …, vr} = span{w1, w2, …, wk} iff each vector in S is a linear combination of those in S’, and each vector in S’ is a linear combination of those in S.
Definition of linear independence:
If S = {v1, v2, …, vr} is a nonempty set of vectors in a vector space V, then the vector equation k1v1 + k2v2 + … + krvr = 0 has at least one solution, namely, k1 = k2 = … = kr = 0. We call this the trivial solution. If this is the only solution, then S is said to be a linearly independent set.
Interpretation of linear independence:
A set S with two or more vectors is linearly independent iff no vector in S is expressible as a linear combination of the other vectors in S.
Basic theorems concerning linear independence (sets of 1 or 2 vectors):
- A finite set that contains the zero vector is linearly dependent.
- A set with exactly one vector is linearly independent iff that vector is not 0.
- A set with exactly 2 vectors is linearly independent iff neither vector is a scalar multiple of the other.
Let S = {v1, v2, …, vr} be a set of vectors in Rn. When is the set linearly dependent?
If r > n; a set of more than n vectors in Rn is always linearly dependent.
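One way to check independence in Rn: form the matrix whose columns are the vectors and compare its rank with the number of vectors. A numpy sketch with sample vectors (v3 is deliberately chosen as v1 + v2):

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2                       # forces a dependence relation

M = np.column_stack([v1, v2, v3])
rank = np.linalg.matrix_rank(M)
print(rank == M.shape[1])          # False: rank 2 < 3 vectors -> dependent
```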
Wronskian of f1, f2, …, fn
If f1 = f1(x), f2 = f2(x), …, fn = fn(x) are functions that are n-1 times differentiable on the interval (-∞, ∞), then the Wronskian is the determinant

W(x) = | f1(x)         f2(x)         …   fn(x)         |
       | f1′(x)        f2′(x)        …   fn′(x)        |
       | ⋮             ⋮                  ⋮             |
       | f1^(n-1)(x)   f2^(n-1)(x)   …   fn^(n-1)(x)   |
Theorem 4.3.4 using the Wronskian to determine linear independence of functions
If the functions f1, f2, …, fn have n-1 continuous derivatives on the interval (-∞, ∞),
and if the Wronskian of these functions is not identically zero on (-∞, ∞), then these functions form a linearly independent set of vectors in C^(n-1)(-∞, ∞).
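A quick symbolic check of this criterion using sympy's built-in wronskian, on the example pair sin x and cos x (chosen here for illustration):

```python
from sympy import symbols, sin, cos, wronskian, simplify

x = symbols('x')
W = wronskian([sin(x), cos(x)], x)  # det [[sin, cos], [cos, -sin]]
print(simplify(W))                  # -1: not identically zero -> independent
```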
Define a Basis
If V is any vector space and S = {v1, v2, …, vn} is a finite set of vectors in V, then S is called a basis for V if the following 2 conditions hold:
- S is linearly independent
- S spans V.
Theorem 4.4.1: Uniqueness of Basis Representation
If S = {v1, v2, … vn} is a basis for a vector space V, then every vector v in V can be expressed in the form v = c1v1 + c2v2 + … + cnvn in exactly one way.
Definition for a coordinate vector of v relative to S.
If S = {v1, v2,… vn} is a basis for a vector space V, and v = c1v1+ c2v2 + … + cnvn
is the expression for a vector v in terms of the basis S, then the scalars c1, c2, …, cn are called the coordinates of v relative to the basis S. The vector (c1, c2, …, cn) in Rn constructed from these coordinates is called the coordinate vector of v relative to S; it is denoted by
(v)S = (c1, c2, …, cn)
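Computing (v)S amounts to solving the linear system B c = v, where B has the basis vectors of S as its columns; Theorem 4.4.1 guarantees a unique solution. A numpy sketch with a made-up basis for R2:

```python
import numpy as np

# Sample basis S = {v1, v2} for R^2, stored as columns of B.
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])
B = np.column_stack([v1, v2])

v = np.array([3.0, 2.0])
c = np.linalg.solve(B, v)   # coordinates of v relative to S
print(c)                    # [1. 2.] since v = 1*v1 + 2*v2
```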
Theorem 4.5.1 on the number of vectors in a basis.
All bases for a finite-dimensional vector space have the same number of vectors.
Theorem 4.5.2 on bases and linear independence
Let V be a finite-dimensional vector space, and let {v1, v2, …, vn} be any basis.
- If a set has more than n vectors, then it is linearly dependent.
- If a set has fewer than n vectors, then it does not span V.
Dimension of a finite-dimensional vector space V.
The number of vectors in a basis for V.
The zero vector space has a dimension of zero.
Dimension of the zero vector space {0}
0
Plus/Minus Theorem
Let S be a nonempty set of vectors in a vector space V.
- If S is a linearly independent set and if v is a vector in V that is outside of span(S), then the set S ⋃ {v} that results by inserting v into S is still linearly independent.
- If v is a vector in S that is expressible as a linear combination of other vectors in S, and if S - {v} denotes the set obtained by removing v from S, then S and S - {v} span the same space; that is,
span(S) = span(S - {v})
Theorem 4.5.4 on conditions for a basis.
Let V be an n-dimensional vector space, and let S be a set in V with exactly n vectors.
Then S is a basis for V if and only if S spans V or S is linearly independent.
Theorem 4.5.6 that relates the dimension of a vector space to the dimensions of its subspaces.
If W is a subspace of a finite-dimensional vector space V, then:
- W is finite-dimensional
- dim(W) ≤ dim(V)
- W = V if and only if dim(W) = dim(V).
The columns of a transition matrix
Are the coordinate vectors of the old basis, relative to the new basis.
Theorem 4.6.1 on the invertibility of transition matrices
If P is the transition matrix from a basis B’ to a basis B for a finite-dimensional vector space V, then P is invertible
and P⁻¹ is the transition matrix from B to B’.
The procedure for computing P_B→B’
Form the matrix [B’ | B]
Use elementary row operations to reduce the matrix to reduced row echelon form.
The resulting matrix will be [I | P_B→B’]
[new basis | old basis] → [I | transition from old to new]
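A sympy sketch of this procedure for two made-up bases of R2; rref() performs the row reduction:

```python
from sympy import Matrix

Bp = Matrix([[1, 1], [0, 1]])   # new basis B' = {(1,0), (1,1)} as columns
Bm = Matrix([[1, 2], [2, 1]])   # old basis B  = {(1,2), (2,1)} as columns

aug = Bp.row_join(Bm)           # form [B' | B]
R, _ = aug.rref()               # reduce to [I | P_B->B']
P = R[:, 2:]
print(P)                        # Matrix([[-1, 1], [2, 1]])
print(Bp * P == Bm)             # True: P converts B-coordinates to B'-coordinates
```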
Theorem 4.6.2 on the transition to the Standard Basis
Let B’ = {u1, u2, …, un} be any basis for the vector space Rn and let S = {e1, e2, …, en} be the standard basis for Rn. If the vectors in these bases are written in column form, then:
P_B’→S = [u1 | u2 | … | un]
Row space of A.
The subspace spanned by the row vectors of A.
Column space of A
The subspace spanned by the column vectors of A.
Null space of A
Solution space of the homogeneous system of equations Ax = 0.
Theorem 4.7.2 on the null space and elementary row operations
Elementary row operations do not change the null space of a matrix
Theorem 4.7.4 on the row space and elementary row operations
Elementary row operations do not change the row space of a matrix.
Theorem 4.8.1 on the dimension of the row and column space of a matrix
The row space and column space of a matrix A have the same dimension.
Rank of matrix A
The common dimension of the row space and column space of A.
Nullity of A
The dimension of the null space of A.
Theorem 4.8.2: Dimension Theorem for Matrices
If matrix A has n columns:
rank(A) + nullity(A) = n
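A numerical check of the dimension theorem on a sample matrix, using scipy's null_space to obtain a basis for the null space:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # second row = 2 * first row, so rank 1

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]  # number of basis vectors for null(A)
print(rank, nullity, rank + nullity == A.shape[1])   # 1 2 True
```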
Theorem 4.8.3 on rank & nullity
If A is an m x n matrix, then:
- rank(A) = the number of leading variables in the general solution of Ax = 0
- nullity(A) = the number of parameters in the general solution of Ax = 0
Number of parameters in the general solution of a consistent linear system Ax = b of m equations in n unknowns, where A has rank r
n - r parameters.
Orthogonal complement of W
The set of all vectors that are orthogonal to every vector in W.
Theorem 4.8.9 on orthogonal complements & null, row, column space
If A is an m x n matrix:
- The null space of A and the row space of A are orthogonal complements.
- The null space of AT and the column space of A are orthogonal complements.
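The first statement can be checked numerically: if the columns of N form a basis for null(A), then A N = 0, which says exactly that every row of A is orthogonal to every null space vector. A sketch with a sample matrix:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])

N = null_space(A)             # columns form an orthonormal basis of null(A)
print(np.allclose(A @ N, 0))  # True: row space and null space are orthogonal
```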
Theorem 4.9.1
Properties of matrix transformations:
- TA(0) = 0
- TA(ku) = kTA(u)
- TA(u + v) = TA(u) + TA(v)
- TA(u - v) = TA(u) - TA(v)
Steps for finding the standard matrix for a matrix transformation
- Find the images of the standard basis vectors e1, e2, … en for Rn in column form.
- Construct the matrix that has the images obtained in Step 1 as its successive columns.
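A numpy sketch of these two steps for a hypothetical transformation T: R2 → R3 (the formula for T is invented for the example):

```python
import numpy as np

def T(x):
    # Sample linear transformation T(x, y) = (x + y, x - y, 2y).
    return np.array([x[0] + x[1], x[0] - x[1], 2 * x[1]])

E = np.eye(2)                                        # e1, e2 as columns
A = np.column_stack([T(E[:, j]) for j in range(2)])  # images as successive columns
print(A)                                             # [[1. 1.] [1. -1.] [0. 2.]]

u = np.array([3.0, 4.0])
print(np.allclose(A @ u, T(u)))                      # True: A represents T
```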
4.10.1: Equivalent statements for an n × n matrix A and its matrix operator TA: Rn → Rn:
- A is invertible
- The range of TA is Rn
- TA is one-to-one
4.10.2: T: Rn → Rm is a matrix transformation iff the following hold for all vectors u, v and every scalar k:
- T(u + v) = T(u) + T(v)
- T(k u) = k T(u)