Chapter 4: Basis, dimension and rank Flashcards
Definition A subspace of R^n is a set U of vectors in R^n such that:
(1) The zero vector 0 ∈ U.
(2) If x and y are in U then x + y is in U.
(3) If x is in U and c ∈ R is a scalar, then cx ∈ U.
Definition Let A be an m × n matrix.
a) The null space of A, written null(A), is
the subspace of solutions to the homogeneous equation Ax = 0. Thus
null(A) = {x ∈ R^n: Ax = 0}.
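A null space can be checked concretely with a computer algebra system. A minimal sketch in Python using sympy; the matrix A below is an illustrative example, not one from the text:

    from sympy import Matrix

    # A is 2 x 3, so null(A) is a subspace of R^3.
    A = Matrix([[1, 2, 3],
                [2, 4, 6]])              # second row is twice the first

    basis = A.nullspace()                # basis vectors of null(A)
    print(len(basis))                    # 2, so dim null(A) = 2
    for v in basis:
        assert A * v == Matrix([0, 0])   # each basis vector solves Av = 0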
Definition Let A be an m × n matrix.
b) The image space of A, written im(A), is
the set of all vectors y ∈ R^m such that Ax = y has a solution. Thus
im(A) = {Ax : x ∈ R^n}.
Definition Let A be an n × n matrix and let λ be an eigenvalue of A. The λ-eigenspace of A, denoted E_λ, is
the collection of all eigenvectors corresponding to λ together with the zero vector.
Equivalently,
E_λ = {x ∈ R^n: Ax = λx}.
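As a hedged illustration, sympy's eigenvects returns each eigenvalue of a matrix together with a basis of the corresponding eigenspace E_λ; the matrix here is made up for the example:

    from sympy import Matrix

    A = Matrix([[2, 1],
                [0, 2]])                 # only eigenvalue is lambda = 2

    for lam, alg_mult, vectors in A.eigenvects():
        for v in vectors:                # vectors is a basis of E_lambda
            assert A * v == lam * v      # v satisfies Av = lambda v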
Definition If V = {v1, v2, · · · , vk} is a set of vectors in R^n, then the span of V is
the set of all linear combinations of v1, v2, · · · , vk. Symbolically,
span V = {c1v1 + c2v2 + · · · + ckvk : c1, c2, · · · , ck ∈ R}.
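Deciding whether a vector b lies in span V amounts to solving the linear system whose coefficient columns are the vectors of V. A sketch under that reading, with illustrative vectors:

    from sympy import Matrix, linsolve, symbols

    v1, v2 = Matrix([1, 0, 1]), Matrix([0, 1, 1])
    b = Matrix([2, 3, 5])                # b = 2*v1 + 3*v2, so b is in the span

    c1, c2 = symbols('c1 c2')
    V = Matrix.hstack(v1, v2)            # columns are the spanning vectors
    print(linsolve((V, b), (c1, c2)))    # {(2, 3)}; an empty set would mean
                                         # b is not in span{v1, v2}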
Theorem 5.1.1 A span is a subspace Let v1, . . . , vk be vectors in R^n. Then
a) U = span{v1, . . . , vk} is a subspace of R^n.
b) If W is a subspace of R^n and each vi ∈ W, then U ⊆ W.
Definition A set {v1, v2, · · · , vk} of vectors in R^n
is linearly independent if and only if
the only solution to the equation
c1v1 + c2v2 + · · · + ckvk = 0, ci ∈ R,
is c1 = c2 = · · · = ck = 0.
Theorem 5.2.2 If A is an m × n matrix, let a1, · · · , an denote the columns of A.
a) {a1, · · · , an} is independent in R^m if and only if Ax = 0, x ∈ R^n, implies x = 0.
b) R^m = span{a1, · · · , an} if and only if Ax = b has a solution x for every vector b ∈ R^m.
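Part (a) gives a mechanical independence test: the columns of A are independent exactly when null(A) is trivial, equivalently when rank(A) equals the number of columns. A sketch with made-up columns:

    from sympy import Matrix

    A = Matrix([[1, 0, 1],
                [0, 1, 1],
                [1, 1, 2]])              # third column = first + second

    print(A.nullspace())                 # a nonzero solution of Ax = 0 exists,
                                         # so the columns are dependent
    print(A.rank())                      # 2 < 3 columns confirms this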
Theorem 5.2.3 The following are equivalent for an n × n matrix A.
a) A is invertible.
b) The columns of A are linearly independent.
c) The columns of A span R^n.
d) The rows of A are linearly independent.
e) The rows of A span the set of all 1 × n rows.
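These equivalences can be spot-checked on any concrete square matrix; a minimal sketch (the matrix is arbitrary):

    from sympy import Matrix

    A = Matrix([[1, 2],
                [3, 4]])

    print(A.det())                       # -2, nonzero, so A is invertible
    print(A.rank())                      # 2 = n: the columns span R^2
    print(A.nullspace())                 # []: columns linearly independent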
Definition A basis for a subspace S of R^n is a set of vectors in S that
a) spans S, and
b) is linearly independent.
Theorem 5.2.5 The basis theorem
Let S be a subspace of Rn.
Then any two bases for S have the same number of vectors.
Dimension
If S is a nontrivial subspace of R^n, then the number of vectors in a basis for S is called the dimension of S, denoted dim(S). If S = {0} then we define dim(S) = 0.
Theorem 5.2.6 Let U ≠ {0} be a subspace of R^n. Then
a) U has a basis and dim U ≤ n.
b) Any independent set in U can be enlarged (by adding vectors
from any fixed basis of U) to a basis of U, if not already so.
c) Any spanning set for U can be cut down (by deleting vectors)
to a basis of U, if not already so.
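Part (c) is algorithmic: place the spanning vectors as columns, row-reduce, and keep the vectors that land in pivot columns. A sketch with illustrative vectors:

    from sympy import Matrix

    vectors = [Matrix([1, 0, 1]),
               Matrix([0, 1, 1]),
               Matrix([1, 1, 2])]        # third = first + second

    A = Matrix.hstack(*vectors)
    _, pivots = A.rref()                 # indices of the pivot columns
    basis = [vectors[j] for j in pivots]
    print(pivots)                        # (0, 1): the first two vectors
                                         # form a basis of the span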
Row space of an m × n matrix A:
written row(A), the subspace of R^n spanned by the rows of A.
Column space of an m × n matrix A:
written col(A), the subspace of R^m spanned by the columns of A.
Lemma 5.4.1 Let A and B denote m × n matrices.
a) If A → B by elementary row operations, then row(A) = row(B).
b) If A → B by elementary column operations, then col(A) = col(B).
Lemma 5.4.2 If R is a row-echelon matrix, then
a) The nonzero rows of R are a basis of row(R).
b) The columns of R containing leading ones are a basis of col(R).
rank(A) =
dim(row(A)) = dim(col(A)).
Theorem 5.4.1 Rank Theorem Let A be any m × n matrix of
rank r. Then
dim(row(A))= dim(col(A)).
Moreover, if A is carried to a row-echelon matrix R by row operations, then
a) The r nonzero rows of R are a basis of row(A).
b) If the leading 1s lie in columns j1, j2, . . . , jr of R, then columns
j1, j2, . . . , jr of A are a basis of col(A).
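This is exactly what sympy's rowspace and columnspace methods implement: the nonzero rows of the echelon form for row(A), and the original columns of A in pivot positions for col(A). An illustrative check:

    from sympy import Matrix

    A = Matrix([[1, 2, 0],
                [2, 4, 1],
                [3, 6, 1]])

    print(A.rowspace())                  # basis of row(A)
    print(A.columnspace())               # basis of col(A): pivot columns of A
    assert len(A.rowspace()) == len(A.columnspace()) == A.rank()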
Rank of a matrix A:
the common dimension of its row and column spaces, denoted rank(A).
Corollary 5.4.1 For any matrix A, rank(A) =
rank(A^T).
Definition The nullity of a matrix A is
the dimension of its null space null(A), and is denoted nullity(A).
Theorem 5.4.2 The rank and nullity theorem If A is an m × n matrix then
rank(A) + nullity(A) = n.
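The identity is easy to verify on any example, since nullity(A) is the number of basis vectors of null(A). A minimal sketch with an arbitrary 3 × 4 matrix:

    from sympy import Matrix

    A = Matrix([[1, 2, 3, 4],
                [2, 4, 6, 8],
                [0, 0, 1, 1]])           # m = 3, n = 4

    rank = A.rank()
    nullity = len(A.nullspace())
    assert rank + nullity == A.cols      # rank + nullity = n
    print(rank, nullity)                 # 2 + 2 = 4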
Definition Let A and B be n × n matrices. We say that A is similar to B if
there exists an invertible n × n matrix P such that P^−1AP = B. If A is similar to B we write A ∼ B; similarity ∼ is an equivalence relation.
Theorem 5.5.1 Let A and B be n × n matrices with A ∼ B. Then:
(a) det(A) = det(B);
(b) A is invertible if and only if B is invertible;
(c) A and B have the same rank;
(d) A and B have the same characteristic polynomial;
(e) A and B have the same eigenvalues;
(f) A and B have the same trace.
Definition An n × n matrix A is diagonalisable if
it is similar to a diagonal matrix.
Theorem 5.5.3 Let A be an n × n matrix. Then A is diagonalisable if and only if it
has n linearly independent eigenvectors.
Moreover, there exists an invertible matrix P and a diagonal matrix D such that P^−1AP = D if and only if:
* The columns of P are n linearly independent eigenvectors of A;
* The diagonal entries of D are the eigenvalues of A corresponding to the eigenvectors in P (in the same order).
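sympy packages this factorisation directly: diagonalize returns P with independent eigenvectors as columns and D with the matching eigenvalues on the diagonal. A sketch with an arbitrary diagonalisable matrix:

    from sympy import Matrix

    A = Matrix([[4, 1],
                [2, 3]])                 # eigenvalues 2 and 5

    P, D = A.diagonalize()               # raises if A is not diagonalisable
    assert P.inv() * A * P == D          # P^-1 A P = D
    print(P, D)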
Theorem 5.5.4 and 5.5.5 Let A be an n × n matrix with n distinct eigenvalues and corresponding eigenvectors x1, . . . , xn.
Then
{x1, . . . , xn} is a linearly independent set and A is diagonalisable.
Algebraic multiplicity of an eigenvalue:
its multiplicity as a root of the characteristic equation. The geometric
multiplicity of an eigenvalue λ is the dimension of its eigenspace E_λ.
Theorem 5.5.6 (The diagonalisation theorem) Let A be an n × n matrix whose distinct eigenvalues are λ1, . . . , λk. The following are equivalent:
(a) A is diagonalisable;
(b) The union of bases for the eigenspaces of A contains n vectors;
(c) The algebraic multiplicity of each eigenvalue is equal to its geometric multiplicity, and the sum of these multiplicities across all eigenvalues is n.
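Comparing the two multiplicities detects a failure of diagonalisability. A sketch on the standard non-diagonalisable example, a 2 × 2 Jordan block:

    from sympy import Matrix

    A = Matrix([[1, 1],
                [0, 1]])                 # eigenvalue 1, algebraic mult. 2

    for lam, alg_mult, vectors in A.eigenvects():
        geo_mult = len(vectors)          # geometric mult. = dim E_lambda
        print(lam, alg_mult, geo_mult)   # 1, 2, 1: the multiplicities differ

    print(A.is_diagonalizable())         # False, as the theorem predicts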
Theorem 9.2.2 Let B = {u1, . . . , un} and C = {v1, . . . , vn} be ordered bases for R^n and let P_{C←B} be the change of basis matrix from B to C. Then for all x ∈ R^n:
a) P_{C←B}[x]_B = [x]_C;
b) P_{C←B} is the unique n × n matrix P such that P[x]_B = [x]_C;
c) P_{C←B} is invertible and (P_{C←B})^−1 = P_{B←C}.
Theorem Gauss–Jordan method for computing a change of basis matrix
If B and C also denote the n × n matrices whose columns are the respective basis vectors, then row reduction carries
[C | B] → [I_n | P_{C←B}].
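A minimal sketch of the method in Python with sympy; the two bases of R^2 are made up for the example:

    from sympy import Matrix, eye

    B = Matrix([[1, 1],
                [0, 1]])                 # columns: the basis B
    C = Matrix([[1, 0],
                [1, 1]])                 # columns: the basis C

    n = 2
    R, _ = Matrix.hstack(C, B).rref()    # [C | B] -> [I_n | P_{C<-B}]
    assert R[:, :n] == eye(n)
    P = R[:, n:]                         # the change of basis matrix
    assert C * P == B                    # columns of P are the C-coordinates
    print(P)                             # of the basis vectors in B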