Linear Algebra 1 Flashcards
Given m, n ≥ 1, what is an m × n matrix?
a rectangular array with m
rows and n columns.
What is a row vector?
A 1 x n matrix
What is a column vector?
An m x 1 matrix
What is a square matrix?
An n x n matrix
What is a diagonal matrix?
If A = (aᵢⱼ) is a
square matrix and aᵢⱼ = 0 whenever i ≠ j, then we say that A is a diagonal
matrix
What is F ? (Fancy F)
The field from which the entries (scalars) of a matrix come
Usually F = ℝ (the reals) or ℂ (the complex numbers)
What does Mₘₓₙ(F) mean?
Mₘₓₙ(F) = {A : A is an m × n matrix with entries from F}
What does Fⁿ mean? (Fancy F)
We write Fⁿ for M₁ₓₙ(F) (row vectors with n entries)
Similarly Fᵐ for M₁ₓₘ(F)
Is matrix addition associative and commutative?
Yes to both
What is the formula for entry (i, j) with matrix multiplication?
Entry (i, j) of AB is Σₖ₌₁ⁿ aᵢₖbₖⱼ, where A = (aᵢⱼ) is m × n and B = (bᵢⱼ) is n × p
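A quick way to internalise the entry formula is to compute one entry by hand and compare with the full product. A minimal Python/NumPy sketch (the matrices are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # 2 x 3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])       # 3 x 2

# Entry (i, j) of AB is the sum over k of a_ik * b_kj.
i, j = 0, 1
entry = sum(A[i, k] * B[k, j] for k in range(A.shape[1]))
assert entry == (A @ B)[i, j]  # 64
```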
Is matrix multiplication associative?
Yes
Is matrix multiplication distributive?
Yes
When do two matrices commute?
If AB=BA
Not true for most A and B
What is an upper triangular matrix?
Let A = (aᵢⱼ) ∈ Mₙₓₙ(F)
If aᵢⱼ = 0 whenever i > j
What is a lower triangular matrix?
Let A = (aᵢⱼ) ∈ Mₙₓₙ(F)
If aᵢⱼ = 0 whenever i < j
We say that A ∈ Mₙₓₙ(F) is invertible if …..
there exists B ∈ Mₙₓₙ(F) such that AB = Iₙ = BA.
If A ∈ Mₙₓₙ(F) is invertible, is the inverse unique?
Prove it
Yes.
Proof: Suppose that B, C ∈ Mₙₓₙ(F) are both inverses for A. Then AB = BA = Iₙ and AC = CA = Iₙ, so B = BIₙ = B(AC) = (BA)C = IₙC = C.
Let A, B be invertible n×n matrices. Is AB invertible?
Yes
Let A, B be invertible n×n matrices.
What is (AB)⁻¹ ??
Prove it
(AB)⁻¹ = B⁻¹A⁻¹
Proof: (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIₙA⁻¹ = AA⁻¹ = Iₙ, and similarly (B⁻¹A⁻¹)(AB) = Iₙ, so B⁻¹A⁻¹ is the inverse of AB.
What is the transpose of A = (aᵢⱼ) ∈ Mₘₓₙ(F)?
the n × m matrix Aᵀ with (i, j) entry aⱼᵢ
What is an orthogonal matrix?
We say that A ∈ Mₙₓₙ(R) is orthogonal if AAᵀ = Iₙ = AᵀA
Equivalently, A is invertible and Aᵀ = A⁻¹
What is a unitary matrix?
We say that A ∈ Mₙₓₙ(C) is unitary if AĀᵀ = Iₙ = ĀᵀA
By Ā (A bar) we mean the matrix obtained from A by replacing each entry by its complex conjugate.
What is the general strategy for solving a system of m equations in variables x1, …, xn by Gaussian elimination?
Swap equations if necessary to make the coefficient of x1 in the first
equation nonzero.
Divide through the first equation by the coefficient of x1
Subtract appropriate multiples of the first equation from all other equations to eliminate x1 from all but the first equation.
Now the first equation will tell us the value of x1 once we have determined the values of x2, . . . , xn, and we have m − 1 other equations in
n − 1 variables.
Use the same strategy to solve these m−1 equations in n−1 variables
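The strategy above translates almost line for line into code. A minimal sketch in Python/NumPy, assuming a square invertible system (this sketch eliminates below the pivot only and then back-substitutes; partial pivoting is added for numerical safety and is not part of the strategy as stated):

```python
import numpy as np

def gaussian_solve(A, b):
    """Solve Ax = b for an invertible square A by the strategy above."""
    A, b, n = A.astype(float).copy(), b.astype(float).copy(), len(b)
    for col in range(n):
        # 1. Swap rows so the pivot is nonzero (take the largest, for stability).
        p = col + np.argmax(np.abs(A[col:, col]))
        A[[col, p]] = A[[p, col]]
        b[[col, p]] = b[[p, col]]
        # 2. Divide the pivot row through by its leading coefficient.
        b[col] /= A[col, col]
        A[col] /= A[col, col]
        # 3. Subtract multiples of the pivot row to eliminate the variable below.
        for r in range(col + 1, n):
            b[r] -= A[r, col] * b[col]
            A[r] -= A[r, col] * A[col]
    # 4. Back substitution: each equation now determines one variable.
    x = np.zeros(n)
    for r in range(n - 1, -1, -1):
        x[r] = b[r] - A[r, r + 1:] @ x[r + 1:]
    return x

A = np.array([[2., 1., -1.], [-3., -1., 2.], [-2., 1., 2.]])
b = np.array([8., -11., -3.])
assert np.allclose(A @ gaussian_solve(A, b), b)   # x = (2, 3, -1)
```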
What are the 3 elementary row operations on the augmented matrix A|b
• for some 1 ≤ r < s ≤ m, interchange rows r and s;
• for some 1 ≤ r ≤ m and λ ≠ 0, multiply (every entry of) row r by λ;
• for some 1 ≤ r, s ≤ m with r ≠ s and λ ∈ F, add λ times row r to row s.
Are the EROs invertible?
Yes
We say that an m × n matrix E is in echelon form if…
(i) if row r of E has any nonzero entries, then the first of these is 1;
(ii) if 1 ≤ r < s ≤ m and rows r, s of E contain nonzero entries, the first of which are eᵣⱼ and eₛₖ respectively, then j < k (the leading entries of
lower rows occur to the right of those in higher rows);
(iii) if row r of E contains nonzero entries and row s does not (that is,
eₛⱼ = 0 for 1 ≤ j ≤ n), then r < s (zero rows, if any exist, appear
below all nonzero rows).
Let E | d be the m × (n + 1) augmented matrix of a system of equations, where E is in echelon form. We say that variable xⱼ is determined if …..
What is the alternative to being determined?
if there is i such that eᵢⱼ is the leading entry of row i of E (so eᵢⱼ = 1)
Otherwise we say that xⱼ is free
Gaussian Elimination:
What shows that the equations are inconsistent?
When the final row reads 0=1
What is reduced row echelon form?
We say that an m × n matrix is in reduced row echelon form
(RRE form) if it is in echelon form and if each column containing the leading
entry of a row has all other entries 0.
Can all matrices in echelon form be reduced to RRE form?
Yes
An invertible nxn matrix can be reduced to Iₙ using [ ]
Prove it
EROs
Proof:
Take A ∈ Mₙₓₙ(F) with A invertible.
Let E be an RRE form of A.
We can obtain E from A by EROs, and EROs do not change the solution
set of the system of equations Ax = 0. If Ax = 0, then x = Iₙ x = (A⁻¹A)x = A⁻¹(Ax) = A⁻¹0 = 0, so the only n × 1 column vector x with Ax = 0 is
x = 0. (Here 0 is the n × 1 column vector of zeros.) So the only solution of
Ex = 0 is x = 0.
We can read off solutions to Ex = 0. We could choose arbitrary values
for the free variables—but the only solution is x = 0, so there are no free
variables. So all the variables are determined, so each column must contain
the leading entry of a row (which must be 1). Since the leading entry of a
row comes to the right of leading entries of rows above, it must be the case
that E = Iₙ
What is an elementary matrix?
For an ERO on an m × n matrix, we define the corresponding
elementary matrix to be the result of applying that ERO to Iₘ
Is the inverse of an ERO an ERO?
Yes
Is the inverse of an elementary matrix an elementary matrix?
Yes
Let A be an m × n matrix, let B be obtained from A by applying
an ERO. Then B = EA, where E is …..
Prove it
E is the elementary matrix for that ERO
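A numeric illustration of B = EA (a minimal sketch; the matrix and the particular ERO are arbitrary choices):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])          # 3 x 2

# The ERO "add 2 times row 0 to row 2" applied directly to A...
B = A.copy()
B[2] += 2 * A[0]

# ...equals left-multiplication by the elementary matrix E,
# obtained by applying the same ERO to I_3.
E = np.eye(3)
E[2] += 2 * E[0]

assert np.allclose(B, E @ A)      # B = EA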
Let A be an invertible n × n matrix. Let X1, X2, . . . , Xk be
a sequence of EROs that take A to Iₙ. Let B be the matrix obtained from In
by this same sequence of EROs. Then B = ??
prove it
B = A⁻¹
Proof:
Let Eᵢ be the elementary matrix corresponding to ERO Xᵢ. Then applying X₁, X₂, …, Xₖ to A gives the matrix Eₖ…E₂E₁A = Iₙ, and applying X₁, X₂, …, Xₖ to Iₙ gives the matrix Eₖ…E₂E₁ = B.
So BA = Iₙ, so B = A⁻¹
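This theorem is the usual recipe for inverting a matrix by hand: row-reduce the augmented matrix [A | Iₙ] until the left block is Iₙ; the right block is then A⁻¹. A minimal SymPy sketch (exact arithmetic; A is an arbitrary invertible example):

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [5, 3]])
n = A.rows

# Row-reduce [A | I_n]; the EROs taking A to I_n take I_n to A^{-1}.
aug, _ = A.row_join(eye(n)).rref()
B = aug[:, n:]
assert B == A.inv()   # B = A^{-1} = Matrix([[3, -1], [-5, 2]])
```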
The sequence of EROs X1, X2, . . . , Xk that take A to Iₙ exists.
Prove it
By Theorem 6: an invertible n × n matrix can be reduced to Iₙ using EROs.
What is a vector space?
Let F be a field. A vector space over F is a non-empty set V together with a map V × V → V given by (v, v′) ↦ v + v′
(called addition) and a map F × V → V given by (λ, v) ↦ λv (called scalar multiplication)
that satisfy the vector space axioms
What are the vector space axioms?
Addition ones
• u + v = v + u for all u, v ∈ V (addition is commutative);
• u + (v + w) = (u + v) + w for all u, v, w ∈ V (addition is associative);
• there is 0ᵥ ∈ V such that v + 0ᵥ = v = 0ᵥ + v for all v ∈ V (existence of additive identity);
• for all v ∈ V there exists w ∈ V such that v+w = 0ᵥ = w + v (existence
of additive inverses);
What are the vector space axioms?
Multiplication ones
• λ(u + v) = λu + λv for all u, v ∈ V , λ ∈ F (distributivity of scalar multiplication over vector addition);
• (λ + µ)v = λv + µv for all v ∈ V , λ, µ ∈ F (distributivity of scalar multiplication over field addition);
• (λµ)v = λ(µv) for all v ∈ V , λ, µ ∈ F (scalar multiplication interacts
well with field multiplication);
• 1v = v for all v ∈ V (identity for scalar multiplication).
For m, n ≥ 1, is the set Mₘₓₙ(R) a real vector space?
Yes
Elements of V are called [ ]
Elements of F are called [ ]
If V is a vector space over R, then we say that V is a [ ] vector space
If V is a vector space over C, then we say that V is a [ ] vector space
If V is a vector space over F, then we say that V is an [ ] vector space
vectors
scalars
real
complex
F
Let V be a vector space over F
Then there is a [ ] additive identity element 0ᵥ
unique
Prove that:
Let V be a vector space over F. Take v ∈ V . Then there
is a unique additive inverse for v.
Proof: Suppose that w, w′ ∈ V are both additive inverses for v. Then
w = w + 0ᵥ = w + (v + w′) = (w + v) + w′ = 0ᵥ + w′ = w′.
Let V be a vector space over F. Take v ∈ V .
What is the unique additive inverse of v?
-v
Let V be a vector space over a field F. Take v ∈ V , λ ∈ F.
Then
λ0ᵥ = 0ᵥ
Prove it
We have
λ0ᵥ = λ(0ᵥ + 0ᵥ) (definition of additive identity)
= λ0ᵥ + λ0ᵥ (distributivity of scalar · over vector +).
Adding -(λ0ᵥ) to both sides, we have
λ0ᵥ + (-(λ0ᵥ)) = (λ0ᵥ + λ0ᵥ) + (-(λ0ᵥ))
so 0ᵥ = λ0ᵥ (using definition of additive inverse, associativity of addition, definition of additive identity).
Let V be a vector space over a field F. Take v ∈ V. Then 0v = 0ᵥ (here the first 0 is the zero scalar of F, and 0ᵥ is the zero vector). Prove it
Proof:
We have 0v = (0 + 0)v = 0v + 0v (distributivity of scalar multiplication over field addition). Adding −(0v) to both sides gives 0ᵥ = 0v.
Let V be a vector space over a field F. Take v ∈ V , λ ∈ F.
Then
(−λ)v = −(λv) = λ(−v)
Prove it
We have
λv + λ(−v) = λ(v + (−v)) (distributivity of scalar · over vector +)
= λ0ᵥ (definition of additive inverse)
= 0ᵥ
So λ(−v) is the additive inverse of λv (by uniqueness), so λ(−v) =
−(λv).
Similarly, we see that λv + (−λ)v = 0ᵥ and so (−λ)v = −(λv).
Let V be a vector space over a field F. Take v ∈ V , λ ∈ F.
Then
If λv = 0ᵥ then λ = 0 or v = 0ᵥ
Prove it
Suppose that λv = 0ᵥ , and that λ ≠ 0
Then λ⁻¹ exists in F, and
λ⁻¹(λv) = λ⁻¹ 0ᵥ
so (λ⁻¹ λ)v = 0ᵥ (scalar · interacts well with field ·, and by (i))
so
1v = 0ᵥ
so v = 0ᵥ (identity for scalar multiplication)
What is a subspace?
Let V be a vector space over F. A subspace of V is a non-empty subset of V that is closed under addition and scalar multiplication, that is,
a subset U ⊆ V such that
(i) U ≠ ∅ (U is non-empty);
(ii) u₁ + u₂ ∈ U for all u₁, u₂ ∈ U (U is closed under addition);
(iii) λu ∈ U for all u ∈ U, λ ∈ F (U is closed under scalar multiplication).
Is {0ᵥ} a subspace of V?
Always
The zero/trivial subspace
Is V a subspace of V?
Always
What do we call a subspace of V other than V itself?
A proper subspace. (A subspace other than the trivial subspace {0ᵥ} is called non-trivial or non-zero.)
What is the subspace test?
Let V be a vector space over F, let U be a subset of V . Then U is a subspace if and only if
(i) 0ᵥ ∈ U; and
(ii) λu₁ + u₂ ∈ U for all u₁, u₂ ∈ U and λ ∈ F
Prove the subspace test
Assume that U is a subspace of V .
• 0ᵥ ∈ U: Since U is a subspace, it is non-empty, so there exists u₀ ∈ U
Since U is closed under scalar multiplication, 0u₀ = 0ᵥ ∈ U
• λu₁ + u₂ ∈ U for all u₁, u₂ ∈ U and λ ∈ F: λu₁ ∈ U because U is closed under scalar multiplication, so λu₁ + u₂ ∈ U because U is closed under addition
(Prove other direction)
Assume that 0ᵥ ∈ U and that λu₁ + u₂ ∈ U for all u₁, u₂ ∈ U, and λ ∈ F.
• U is non-empty: have 0ᵥ ∈ U
• U is closed under addition: for u₁, u₂ ∈ U we have u₁ + u₂ = 1u₁ + u₂ ∈ U
• U is closed under scalar multiplication: for u ∈ U and λ ∈ F, have λu = λu + 0ᵥ ∈ U
So U is a subspace of V
What does the notation U ≤ V mean? What is the difference between that and U ⊆ V?
If U is a subspace of the vector space V , then we write U ≤ V . (Compare with U ⊆ V , which means that U is a subset of V but we do not
know whether it is a subspace.)
Let V be a vector space over F, and let U ≤ V . Then
(i) U is a vector space over F;
Prove it
We need to check the vector space axioms, but first we need to
check that we have legitimate operations.
Since U is closed under addition, the operation + restricted to U gives
a map U × U → U.
Since U is closed under scalar multiplication, that operation restricted
to U gives a map F × U → U.
Now for the axioms.
Commutativity and associativity of addition are inherited from V .
There is an additive identity (by the Subspace Test).
There are additive inverses: if u ∈ U then multiplying by −1 ∈ F and
applying [(−λ)v = −(λv) = λ(−v)] shows that −u ∈ U.
The other four properties are all inherited from V .
Let V be a vector space over F, and let U ≤ V . Then
(ii) if W ≤ U then W ≤ V (“a subspace of a subspace is a subspace”).
Prove it
This is immediate from the definition of a subspace
Let V be a vector space over F. Take A, B ⊆ V and take λ ∈ F
Define A+B and λA
A + B := {a + b : a ∈ A, b ∈ B}
λA := {λa : a ∈ A}.
Let V be a vector space. Take U, W ≤ V
Is U+W a subspace of V?
Is U ∩ W a subspace of V?
Prove it
Yes to both: U + W ≤ V and U ∩ W ≤ V.
Proof (subspace test). For U + W: 0ᵥ = 0ᵥ + 0ᵥ ∈ U + W, and for u₁ + w₁, u₂ + w₂ ∈ U + W and λ ∈ F we have λ(u₁ + w₁) + (u₂ + w₂) = (λu₁ + u₂) + (λw₁ + w₂) ∈ U + W.
For U ∩ W: 0ᵥ lies in both U and W, and for u₁, u₂ ∈ U ∩ W and λ ∈ F, λu₁ + u₂ lies in both U and W since each is a subspace.
Does R, the reals have any proper subspaces, and if so what are they?
No non-zero ones: the only subspaces of R are {0} and R itself.
Proof: Let V = R, let U be a non-zero subspace of V.
Then there exists u₀ ∈ U with u₀ ≠ 0. Take x ∈ R and let λ = x/u₀. Then x = λu₀ ∈ U, because U is closed under scalar multiplication. So U = V. So R has no non-zero proper subspaces.
Let V be a vector space over F, take u₁, u₂, …, uₘ∈ V .
Define U := {α₁u₁ + … + αₘuₘ : α₁, …, αₘ ∈ F}. Then U ≤ V .
Prove it
Subspace test
pg29
Let V be a vector space over F, take u₁, u₂, …, uₘ∈ V. What is a linear combination of u₁, u₂, …, uₘ
a vector α₁u₁ + … + αₘuₘ for some α₁, …, αₘ ∈ F
Define the span of u₁, u₂, …, uₘ
Span(u₁, u₂, …, uₘ) := {α₁u₁ + … + αₘuₘ : α₁, …, αₘ ∈ F}.
The smallest subspace of V that contains u₁, u₂, …, uₘ
What are the different notations for the span of u₁, u₂, …, uₘ
Span(u₁, u₂, …, uₘ)
Sp(u₁, u₂, …, uₘ)
⟨u₁, u₂, …, uₘ⟩
Define the span of a set S ⊆ V (even a potentially infinite set S)
Span(S) := {α₁s₁ + … + αₘsₘ : m ≥ 0,s₁, …, sₘ ∈ S, α₁, …, αₘ ∈ F}
Can a linear combination involve infinitely many elements? Say if S is infinite
No
a linear combination only ever involves finitely many
elements of S, even if S is infinite.
What is the empty sum?
And what is the span of the empty set?
Σᵢ∈∅ αᵢuᵢ is 0ᵥ (the ‘empty sum’), so
Span ∅ = {0ᵥ}
For any S ⊆ V, what is the relationship between Span(S) and V
Span(S) ≤ V
What is a spanning set?
Let V be a vector space over F. If S ⊆ V is such that V =
Span(S), then we say that S spans V , and that S is a spanning set for V .
Define linear dependence
Let V be a vector space over F. We say that v₁, …, vₘ ∈ V
are linearly dependent if there are α₁, …, αₘ ∈ F, not all 0, such that α₁v₁ + … + αₘvₘ = 0.
Define linear independence
If v₁, …, vₘ ∈ V are not linearly dependent, then we say that they are linearly independent.
When is S ⊆ V linearly independent?
We say that S ⊆ V is linearly independent if every finite subset of S is
linearly independent
So v₁, …, vₘ ∈ Vare linearly independent if and only if …
So v₁, …, vₘ ∈ V are linearly independent if and only if the only linear combination of them that gives 0ᵥ is the trivial combination, that is,
if and only if α₁v₁ + … + αₘvₘ = 0 implies α₁ = … = αₘ = 0
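In coordinates this criterion is a computation: put the vectors as the columns of a matrix and check that the homogeneous system has only the trivial solution. A SymPy sketch (illustrative vectors; here v₃ = v₁ + v₂, so they are dependent):

```python
from sympy import Matrix

v1, v2 = Matrix([1, 0, 2, 1]), Matrix([0, 1, 1, 3])
v3 = v1 + v2                      # deliberately dependent

M = Matrix.hstack(v1, v2, v3)     # columns are the v_i
# alpha solves M * alpha = 0 iff alpha_1 v1 + alpha_2 v2 + alpha_3 v3 = 0.
print(M.nullspace())              # nontrivial: [Matrix([[-1], [-1], [1]])]
print(M.rank() == M.cols)         # False, so v1, v2, v3 are dependent
```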
Let v₁, …, vₘ be linearly independent in an F-vector space V . Let vₘ₊₁∈ V be such that vₘ₊₁ ∉ Span(v₁, …, vₘ). Then v₁, …, vₘ, vₘ₊₁ are linearly [ ]
Prove it
Independent
Proof:
Take α₁, …, αₘ₊₁ ∈ F such that α₁v₁ + … + αₘ₊₁vₘ₊₁ = 0
If αₘ₊₁ ≠ 0, then we have
vₘ₊₁ = - (1/ αₘ₊₁)(α₁v₁ + … + αₘvₘ) ∈ Span(v₁, …, vₘ) which is a contradiction.
So αₘ₊₁ = 0, so α₁v₁ + … + αₘvₘ = 0
But v₁, …, vₘ are linearly independent, so this means that α₁ = … = αₘ = 0
Let V be a vector space.
What is a basis of V
A basis of V is a linearly independent
spanning set.
Define a finite dimensional vector space
A vector space with a finite basis
What is the standard basis of Rⁿ?
For 1 ≤ i ≤ n, let eᵢ be the row vector with coordinate 1 in the ith entry and 0 elsewhere.
Then e₁, …, eₙ are linearly independent: if α₁e₁ + … + αₙeₙ = 0 then by looking at the ith entry we see that αᵢ = 0 for all i
Also, e₁, …, eₙ span Rⁿ, because (α₁, …, αₙ) = α₁e₁ + … + αₙeₙ
So e₁, …, eₙ is a basis for Rⁿ, called the standard basis
What is the standard basis of Mₘₓₙ?
Consider V = Mₘₓₙ(R). For 1 ≤ i ≤ m and 1 ≤ j ≤ n, let Eᵢⱼ be the matrix with a 1 in entry (i, j) and 0 elsewhere. Then {Eᵢⱼ : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for V , called the standard basis of Mₘₓₙ(R)
Let V be a vector space over F, let S = {v₁, …, vₙ} ⊆ V .
Then S is a basis of V if and only if every vector in V has a unique expression
as a linear combination of elements of S.
Prove it
Proof pg32
Prop 17
Let V be a vector space over F. Suppose that V has a finite
spanning set S.
Then S contains a linearly independent [ ]
Spanning set
if V has a finite spanning set, then V has a [ ]
Prove it
basis
Proof:
Let S be a finite spanning set for V .
Take T ⊆ S such that T is linearly independent, and T is a largest such
set (any linearly independent subset of S has size ≤ |T|).
Suppose, for a contradiction, that Span(T) ≠ V .
Then, since Span(S) = V , there must exist v ∈ S \ Span(T).
Now by Lemma 16 we see that T ∪ {v} is linearly independent, and
T ∪ {v} ⊆ S, and |T ∪ {v}| > |T|, which contradicts our choice of T.
So T spans V , and by our choice is linearly independent.
What is Steinitz Exchange Lemma?
Let V be a vector space over
F. Take X ⊆ V . Suppose that u ∈ Span(X) but that u ∉ Span(X \ {v})
for some v ∈ X. Let Y = (X \ {v}) ∪ {u} (“exchange u for v”). Then
Span(Y ) = Span(X).
Prove Steinitz Exchange Lemma
pg34
Let V be a vector space. Let S, T be finite subsets of V . If S is linearly independent and T spans V , then |S| …..
Let V be a vector space. Let S, T be finite subsets of V . If S is linearly independent and T spans V , then |S| ≤ |T|. “linearly independent
sets are at most as big as spanning sets”
Let V be a vector space. Let S, T be finite subsets of V . If S is linearly independent and T spans V , then |S| ≤ |T|. “linearly independent
sets are at most as big as spanning sets”
Prove it
Proof pg 35 (top)
Let V be a finite-dimensional vector space. Let S, T be bases
of V . Then S and T are finite, and |S| = ????
Then S and T are finite, and |S| = |T|
Let V be a finite-dimensional vector space. Let S, T be bases of V . Then S and T are finite, and |S| = |T|
Prove it
Since V is finite-dimensional, it has a finite basis B. Say |B| = n.
Now B is a spanning set and |B| = n, so by Theorem 20 any finite linearly
independent subset of V has size at most n.
Since S is a basis of V , it is linearly independent, so every finite subset
of S is linearly independent.
So in fact S must be finite, and |S| ≤ n. Similarly, T is finite and |T| ≤ n.
Now S is linearly independent and T is spanning, so by Theorem 20
|S| ≤ |T|.
Applying Theorem 20 with the roles of S and T reversed shows that
|S| ≥ |T|.
So |S| = |T|
What is the dimension of a finite-dimensional vector space?
Let V be a finite-dimensional vector space. The dimension of V , written dim V , is the size of any basis of V
What is the dimension of Rⁿ?
n: the standard basis e₁, e₂, …, eₙ has n elements, so dim Rⁿ = n
What is the dimension of the vector space Mₘₓₙ?
mn
What is row space?
Let A be an m×n matrix over F. We define the row space of A to be the span of the subset of Fⁿ consisting of the rows of A, and we denote it by
rowsp(A).
What is row rank?
We define the row rank of A to be rowrank(A) := dim rowsp(A)
Let A be an m ×n matrix, and let B be a matrix obtained from A by a finite sequence of EROs.
Then rowsp(A) = ???
Rowrank(A) = ???
Then rowsp(A) = rowsp(B). In particular, rowrank(A) = rowrank(B).
Let U be a subspace of a finite-dimensional vector space V . Then
(a) U is finite-dimensional, and dim U …. ; and
(b) if dim U = dim V , then …
(a) U is finite-dimensional, and dim U ≤ dim V ; and
(b) if dim U = dim V , then U = V
Let U be a subspace of a finite-dimensional vector space V
(a) U is finite-dimensional, and dim U ≤ dim V ;
prove it
Let n = dim V. By Theorem 20, every linearly independent subset of V has size at most n.
Let S be a largest linearly independent set contained in U, so |S| ≤ n.
[Secret aim: S spans U.]
Suppose, for a contradiction, that Span(S) ≠ U.
Then there exists u ∈ U \ Span(S).
Now by Lemma 16 S ∪ {u} is linearly independent, and |S ∪ {u}| > |S|,
which contradicts our choice of S.
So U = Span(S) and S is linearly independent, so S is a basis of U,
and as we noted earlier |S| ≤ n.
Let U be a subspace of a finite-dimensional vector space V
(b) if dim U = dim V , then U = V
prove it
If dim U = dim V , then there is a basis S of U with dim U elements.
Then S is a linearly independent subset of V with size dim V . Now
adding any vector to S must give a linearly dependent set as every linearly independent subset of V has size at most n, so S must span
V . So V = Span(S) = U
In an n-dimensional
vector space, any linearly independent set of size n is a [ ]. Similarly, any
spanning set of size n is a [ ] .
basis
basis
Let U be a subspace of a finite-dimensional vector space V
Can a basis of U be extended to a basis of V?
Explain
Then every basis of U can be extended to a basis of V
That is, if u₁, …, uₘ is a basis of U, then there are vₘ₊₁, …, vₙ ∈ V such that u₁, …, uₘ, vₘ₊₁, …, vₙ is a basis of V
This does not say that if U ≤ V and if we have a
basis of V then there is a subset that is a basis of U. The reason it does not
say this is that in general this is false.
Let U be a subspace of a finite-dimensional vector space V .
Then every basis of U can be extended to a basis of V
prove it
Proof pg 38
Idea: start with a basis of U and add vectors until we reach a basis of V
Let S be a finite set of vectors in Rⁿ. How can we (efficiently) find a basis of Span(S)?
Let m = |S|. Write the m elements of S as the rows of
an m × n matrix A.
Use EROs to reduce A to matrix E in echelon form. Then rowsp(E) =
rowsp(A) = Span(S), by Lemma 22.
The nonzero rows of E are certainly linearly independent. So the nonzero
rows of E give a basis for Span(S)
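A SymPy sketch of this recipe (rref() goes all the way to reduced row echelon form, which also works, since EROs preserve the row space; the vectors are illustrative, with the second row twice the first):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],     # 2 * row 1
            [0, 1, 1]])

E, _ = A.rref()            # EROs preserve the row space (Lemma 22)
basis = [E.row(i) for i in range(E.rows) if any(E.row(i))]
print(basis)               # [Matrix([[1, 0, 1]]), Matrix([[0, 1, 1]])]
```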
What is the dimension formula?
Let U, W be subspaces of a finite-dimensional vector space V over F. Then dim(U + W) + dim(U ∩ W) = dim U + dim W
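The formula can be checked numerically: writing spanning vectors of U and W as columns, dim(U + W) is the rank of the stacked matrix, and the formula then yields dim(U ∩ W). A sketch (illustrative choice: U = Span(e₁, e₂) and W = Span(e₂, e₃) in R⁴, so U ∩ W = Span(e₂)):

```python
import numpy as np

rank = np.linalg.matrix_rank
BU = np.eye(4)[:, [0, 1]]             # columns span U
BW = np.eye(4)[:, [1, 2]]             # columns span W

dim_sum = rank(np.hstack([BU, BW]))   # dim(U + W) = 3
dim_cap = rank(BU) + rank(BW) - dim_sum
print(dim_cap)                        # 1 = dim(U ∩ W) = dim Span(e2)
```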
Prove the dimension formula
Take a basis v₁, …, vₘ of U ∩ W
Now U ∩ W ≤ U and U ∩ W ≤ W, so by Theorem 24 we can extend this basis to a basis v₁, …, vₘ, u₁, …, uₚ of U, and a basis v₁, …, vₘ, w₁, …, wᵩ
of W.
With this notation, we see that dim(U ∩ W) = m, dim U = m + p and dim W = m + q
Let U, W be subspaces of a finitedimensional vector space V over F
Claim. v₁, …, vₘ, u₁, …, uₚ, w₁, …, wᵩ is a basis of U + W
Prove it
Call this collection of vectors S.
Note that all these vectors really are in U+W (eg, u₁ = u₁ + 0ᵥ ∈ U + W).
spanning: Take x ∈ U + W. Then x = u + w for some u ∈ U, w ∈ W.
Since v₁, …, vₘ, u₁, …, uₚ span U, there are α₁, …, αₘ, α’₁, …, α’ₚ ∈ F such that u = α₁v₁ + … + αₘvₘ + α’₁u₁ + … + α’ₚuₚ
Similarly, there are β₁, …, βₘ, β’₁, …, β’ᵩ ∈ F such that w = β₁v₁ + … + βₘvₘ + β’₁w₁ + … + β’ᵩwᵩ
Then x = u + w = (α₁+β₁)v₁ + … + (αₘ+βₘ)vₘ + α’₁u₁ + … + α’ₚuₚ + β’₁w₁ + … + β’ᵩwᵩ ∈ Span(S).
And certainly Span(S) ⊆ U + W.
So Span(S) = U + W
Proof continues pg41
What is a direct sum of two subspaces?
Let U, W be subspaces of a vector space V . If U ∩ W = {0ᵥ} and U + W = V , then we say that V is the direct sum of U and W, and we
write V = U ⊕ W
What is a direct complement?
Let U, W be subspaces of a vector space V . If U ∩ W = {0ᵥ} and U + W = V , then we say that V is the direct sum of U and W, and we
write V = U ⊕ W
In this case, we say that W is a direct complement of U in V (and vice
versa).
Let U, W be subspaces of a finite-dimensional vector space V . The following are equivalent:
(i) V = U ⊕ W;
(ii) every v ∈ V has a unique expression as u+w where u ∈ U and w ∈ W;
(iii) dim V = dim U + dim W and V = [ ] ;
(iv) dim V = dim U + dim W and [ ] = {0ᵥ};
(v) if u₁, …, uₘ is a basis for U and w₁, …, wₙ is a basis for W, then [ ] is a basis for V
(i) V = U ⊕ W;
(ii) every v ∈ V has a unique expression as u+w where u ∈ U and w ∈ W;
(iii) dim V = dim U + dim W and V = U + W;
(iv) dim V = dim U + dim W and U ∩ W = {0ᵥ}
(v) if u₁, …, uₘ is a basis for U and w₁, …, wₙ is a basis for W, then u₁, …, uₘ, w₁, …, wₙ is a basis for V
What is a linear map/transformation?
Let V , W be vector spaces over F. We say that a map T : V → W is linear if
(i) T(v₁ + v₂) = T(v₁) + T(v₂) for all v₁, v₂ ∈ V (preserves additive structure); and
(ii) T(λv) = λT(v) for all v ∈ V and λ ∈ F (respects scalar multiplication).
We call T a linear transformation or a linear map.
- Let V , W be vector spaces over F, let T : V → W be linear. Then T(0ᵥ) = ??
T(0ᵥ) = 0𝓌
If T : V → W and T(0ᵥ) ≠ 0𝓌, then can T ever be linear?
No
That is, if T is any map that preserves additive structure then T(0ᵥ) = 0𝓌,
and if T is any map that respects scalar multiplication then T(0ᵥ) = 0𝓌
Prove it
Let z = T(0ᵥ) ∈ W
Then z + z = T(0ᵥ) + T(0ᵥ) = T(0ᵥ + 0ᵥ) = T(0ᵥ) = z (using the assumption to see that T(0ᵥ) + T(0ᵥ) = T(0ᵥ + 0ᵥ)),
so z = 0𝓌
(If instead T respects scalar multiplication: T(0ᵥ) = T(0 · 0ᵥ) = 0 · T(0ᵥ) = 0𝓌.)
Let V , W be vector spaces over F, let T : V → W. The
following are equivalent:
(i) T is linear;
(ii) T(αv₁ + βv₂) = [ ] for all v₁, v₂ ∈ V and α, β ∈ F;
(iii) for any n ≥ 1, if v₁, …, vₙ ∈ V and α₁, …, αₙ∈ F then [ ] = α₁T( v₁) + … + αₙT(vₙ)
(ii) T(αv₁ + βv₂) = αT(v₁) + βT(v₂) for all v₁, v₂ ∈ V and α, β ∈ F;
(iii) for any n ≥ 1, if v₁, …, vₙ ∈ V and α₁, …, αₙ∈ F then T(α₁v₁ + …+ αₙvₙ) = α₁T( v₁) + … + αₙT(vₙ)
What is the identity map?
Let V be a vector space. Then the identity map idᵥ : V → V given by idᵥ (v) = v for all v ∈ V is a linear map
What is the zero map
Let V , W be vector spaces. The zero map 0 : V → W that sends every v ∈ V to 0𝓌 is a linear map. (In particular, there is at least one linear
map between any pair of vector spaces.)
Do linear transformations themselves form a vector space?
Yes: with addition and scalar multiplication of maps defined pointwise (see the next card), the set of linear maps V → W is itself a vector space over F, with the zero map as its additive identity
Let V , W be vector spaces over F. For S, T : V → W and
λ ∈ F, define S + T : V → W by [ ] for v ∈ V , and define λS : V → W by [ ] for v ∈ V
Let V , W be vector spaces over F. For S, T : V → W and
λ ∈ F, define S + T : V → W by (S + T)(v) = S(v) + T(v) for v ∈ V , and
define λS : V → W by (λS)(v) = λS(v) for v ∈ V
Let U, V, W be vector spaces over F. Let S : U → V and T : V → W be linear. Then is T ∘ S : U → W linear?
Prove or disprove it
Yes
proof top of page 46
Let V , W be vector spaces, let T : V → W be linear. We say that T is invertible if????
Let V , W be vector spaces, let T : V → W be linear. We say that T is invertible if there is a linear transformation S : W → V such that
ST = idᵥ and T S = id𝓌 (where idᵥ and id𝓌 are the identity maps on V and W respectively). In this case, we call S the inverse of T, and write it as T⁻¹
Let V , W be vector spaces. Let T : V → W be linear.
Then T is invertible if and only if T is injective/surjective/bijective
Bijective
Let V , W be vector spaces. Let T : V → W be linear.
Then T is invertible if and only if T is bijective
Prove it
Proof bottom of pg 46
Let U, V , W be vector spaces. Let S : U → V and
T : V → W be invertible linear transformations. Then T S : U → W is [ ] , and (TS)⁻¹ =
Prove it
invertible
(TS)⁻¹ = S⁻¹T⁻¹
Let V , W be vector spaces. Let T : V → W be linear
Define the kernel (or null space) of T
ker T := {v ∈ V : T(v) = 0𝓌}
Let V , W be vector spaces. Let T : V → W be linear
Define the image of T
Im T := {T(v) : v ∈ V }
Let V , W be vector spaces. Let T : V → W be linear. For v₁, v₂ ∈ V , T(v₁) = T(v₂) iff [ ]
v₁ - v₂ ∈ ker T
Let V , W be vector spaces. Let T : V → W be linear. For v₁, v₂ ∈ V , T(v₁) = T(v₂) iff v₁ - v₂ ∈ ker T
Prove it
For v₁, v₂ ∈ V, we have
T(v₁) = T(v₂) ⇔ T(v₁) - T(v₂) = 0𝓌 ⇔ T(v₁ - v₂) = 0𝓌 ⇔ v₁ - v₂ ∈ ker T
Let V , W be vector spaces. Let T : V → W be linear. Then
T is injective if and only if [ ]
kerT = {0ᵥ}
Let V , W be vector spaces. Let T : V → W be linear. Then
T is injective if and only if kerT = {0ᵥ}
Prove it
Proof. (⇐) Assume that ker T = {0ᵥ}
Take v₁, v₂ ∈ V with T(v₁) = T(v₂).
Then v₁ - v₂ ∈ ker T (previously proved), and ker T = {0ᵥ}, so v₁ - v₂ = 0ᵥ, so v₁ = v₂
So T is injective.
(⇒) Assume that ker T ≠ {0ᵥ} Then there is v ∈ ker T with v ≠ 0ᵥ
Then T(v) = T(0ᵥ), so T is not injective.
Let V , W be vector spaces over F. Let T : V → W be
linear. Then
(i) ker T is a subspace of [ ] and Im T is a subspace of [ ];
(ii) if A is a spanning set for V, then T(A) is a spanning set for [ ]; and
(iii) if V is finite-dimensional, then ker T and [ ] are finite-dimensional.
Let V , W be vector spaces over F. Let T : V → W be
linear. Then
(i) ker T is a subspace of [V] and Im T is a subspace of [W];
(ii) if A is a spanning set for V , then T(A) is a spanning set for [ImT]; and
(iii) if V is finite-dimensional, then ker T and [ImT] are finite-dimensional.
Define nullity
Let V , W be vector spaces with V finite-dimensional. Let T : V → W be linear. We define the nullity of T to be null(T) := dim(ker T)
Define Rank
Let V , W be vector spaces with V finite-dimensional. Let T : V → W be linear.
the rank of T to be rank(T) := dim(Im T)
What is the rank-nullity theorem?
Let V , W be vector spaces with V finite-dimensional. Let T : V → W be linear. Then dim V = rank(T) + null(T).
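For the map v ↦ Av given by a matrix A, rank and nullity are computable, and the theorem says they sum to the number of columns (the dimension of the domain). A SymPy sketch (illustrative matrix whose third row is the sum of the first two):

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 3, 1, 1]])       # row 3 = row 1 + row 2

rank = A.rank()                  # dim Im T
null = len(A.nullspace())        # dim ker T
assert rank + null == A.cols     # 2 + 2 = 4 = dim V
```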
Prove the rank-nullity theorem
Take a basis v₁, …, vₙ for ker T, where n = null(T).
Since ker T ≤ V, by Theorem 24 this can be extended to a basis v₁, …, vₙ, v′₁, …, v′ᵣ of V
Then dim(V) = n + r.
For 1 ≤ i ≤ r, let wᵢ = T(v′ᵢ)
One then checks that w₁, …, wᵣ is a basis of Im T, so rank(T) = r and dim V = n + r = null(T) + rank(T).
Let V be a finite-dimensional vector space. Let T : V → V be linear. The following are equivalent: (i) T is invertible; (ii) rank T = [ ] ; (iii) null T = [ ]
(i) T is invertible;
(ii) rank T = dim V ;
(iii) null T = 0
Let V be a finite-dimensional vector space. Let T : V → V
be linear.
Are any one-sided inverses two-sided?
Prove it
Then any one-sided inverse of T is a two-sided inverse, and so is unique.
Proof pg50
Let V and W be vector spaces, with V finite-dimensional. Let
T : V → W be linear. Let U ≤ V . Then dim U−null T ≤ [ ] ≤ dim U.
In particular, if T is [ ] then dim T(U) = dim U
prove it
dim T(U)
injective
proof end pg 50
Let V be an n-dimensional vector space over F, let v₁, …, vₙ be a basis of V . Let W be an m-dimensional vector space over F, let w₁, …, wₘ be a basis of W. Let T : V → W be a linear transformation. We define an m × n matrix for T as follows…..
(basis form)
For 1 ≤ j ≤ n, T(vⱼ) ∈ W, so T(vⱼ) is uniquely expressible as a linear combination of w₁, …, wₘ: there are unique aᵢⱼ (for 1 ≤ i ≤ m) such that T(vⱼ) = a₁ⱼw₁ + … + aₘⱼwₘ.
That is,
T(v₁) = a₁₁w₁ + … + aₘ₁wₘ
T(v₂) = a₁₂w₁ + … + aₘ₂wₘ
…
T(vₙ) = a₁ₙw₁ + … + aₘₙwₘ
We say that M(T) = (aᵢⱼ) is the matrix for T with respect to these ordered bases for V and W
Let V be an n-dimensional vector space over F, let Bᵥ be
an ordered basis for V . Let W be an m-dimensional vector space over F, let B𝓌 be an ordered basis for W. Then
(i) the matrix of 0 : V → W is [ ]
(ii) the matrix of idᵥ : V → V is [ ]
(iii) if S : V → W, T : V → W are linear and α, β ∈ F, then M(αS+βT) = [ ]
Moreover, let T : V → W be linear, with matrix A with respect to Bᵥ and B𝓌. Take v ∈ V with coordinate vector x = (x₁, …, xₙ)ᵀ with respect to Bᵥ.
Then Ax is the [ ] of T(v) with respect to [ ]
(i) the matrix of 0 : V → W is 0ₘₓₙ
(ii) the matrix of idᵥ : V → V is Iₙ
(iii) if S : V → W, T : V → W are linear and α, β ∈ F, then M(αS+βT) = αM(S) + βM(T)
Then Ax is the coordinate vector of T(v) with respect to B𝓌
Let U, V , W be finite-dimensional vector spaces over F, with ordered bases Bᵤ , Bᵥ , B𝓌 respectively. Say Bᵤ has size m, Bᵥ has
size n, B𝓌 has size p. Let S : U → V and T : V → W be linear. Let A be
the matrix of S with respect to Bᵤ and Bᵥ. Let B be the matrix of T with
respect to Bᵥ and B𝓌 . Then the matrix of T ◦ S with respect to Bᵤ and B𝓌
is [ ]
BA
Prove that matrix multiplication is associative
Proof, end of pg 54
Let V be a finite-dimensional vector space. Let T : V → V
be an invertible linear transformation. Let v₁, …, vₙ be a basis of V . Let A be the matrix of T with respect to this basis (for both domain and codomain).
Is A invertible? If so, what does the inverse represent?
Then A is invertible, and A⁻¹
is the matrix of T⁻¹ with respect to this basis
What is the change of basis theorem?
Let V , W be finite-dimensional
vector spaces over F. Let T : V → W be linear. Let v₁, …, vₙ and v′₁, …, v′ₙ be bases for V. Let w₁, …, wₘ and w′₁, …, w′ₘ be bases for W. Let A = (aᵢⱼ) ∈ Mₘₓₙ(F) be the matrix for T with respect to v₁, …, vₙ and w₁, …, wₘ, and let B be the matrix for T with respect to v′₁, …, v′ₙ and w′₁, …, w′ₘ.
Take pᵢⱼ, qᵢⱼ ∈ F such that v′ᵢ = Σⱼ₌₁ⁿ pᵢⱼvⱼ and w′ᵢ = Σⱼ₌₁ᵐ qᵢⱼwⱼ
Let P = (pᵢⱼ) ∈ Mₙₓₙ(F) and Q = (qᵢⱼ) ∈ Mₘₓₘ(F)
Then B = Q⁻¹AP
Let V be a finite dimensional vector space. Let T : V → V be linear.
What is the second version of change of basis theorem (only one vector space)?
Let V be a finite-dimensional vector space. Let T : V → V be linear. Let v₁, …, vₙ and v′₁, …, v′ₙ be bases for V. Let A be the matrix of T with respect to v₁, …, vₙ. Let B be the matrix of T with respect to v′₁, …, v′ₙ.
Let P be the change
of basis matrix, that is, the n × n matrix (pᵢⱼ) such that v′ᵢ = Σⱼ₌₁ⁿ pᵢⱼvⱼ.
Then
B = P⁻¹AP
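A numeric sanity check (a sketch under the column-coordinate convention, where the columns of P hold the coordinates of the new basis vectors with respect to the old basis; conventions for writing P down vary, and A and P here are illustrative choices):

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])              # matrix of T in the old basis
P = np.array([[1., 1.],
              [0., 1.]])              # invertible change of basis matrix

B = np.linalg.inv(P) @ A @ P          # matrix of T in the new basis

x_new = np.array([1., 2.])            # coordinates in the new basis
x_old = P @ x_new                     # same vector, old coordinates
assert np.allclose(P @ (B @ x_new), A @ x_old)   # T computed either way agrees
```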
change of basis theorem:
The change of basis matrix P is the matrix of the identity map
idᵥ : V → V with respect to the basis [ ] for V as domain and the basis [ ] as codomain
v’₁, …, v’ₙ domain
v₁, …, vₙ codomain
When are two matrices similar?
Take A, B ∈ Mₙₓₙ(F). If there is an invertible n × n matrix P
such that P⁻¹AP = B, then we say that A and B are similar
rowsp(Aᵀ) =
rowrank(Aᵀ) =
colsp(Aᵀ) =
colrank(Aᵀ) =
colsp(A)
colrank(A)
rowsp(A)
rowrank(A)
Take A ∈ Mₘₓₙ (F), let r = colrank(A). Then there are
invertible matrices P ∈ Mₙₓₙ (F) and Q ∈ Mₘₓₘ (F) such that Q⁻¹AP has the block form
( Iᵣ    0ᵣₓₛ )
( 0ₜₓᵣ  0ₜₓₛ )
where s = n − r and t = m − r
Prove it
Proof pg59
Take A ∈ Mₘₓₙ (F). Let R be an invertible m × m matrix, let
P be an invertible n × n matrix. Then
(i) rowsp(RA) = [ ] and so rowrank(RA) = [ ];
(ii) colrank(RA) = [ ];
(iii) colsp(AP) = [ ] and so colrank(AP) = colrank([ ]);
(iv) rowrank(AP) = rowrank([ ]).
(i) rowsp(RA) = rowsp(A) and so rowrank(RA) = rowrank(A);
(ii) colrank(RA) = colrank(A);
(iii) colsp(AP) = colsp(A) and so colrank(AP) = colrank(A);
(iv) rowrank(AP) = rowrank(A).
Let A be an m × n matrix.
Then what is colrank(A) = ??
colrank(A) = rowrank(A)
What is the rank of a matrix?
Let A be an m × n matrix. The rank of A, written rank(A), is
the row rank of A (which we have just seen is also the column rank of A).
If A is the matrix of a linear map T with respect to some choice of bases, then rank(A) = rank(T).
Let A be an m×n matrix. Let x be the n×1 column vector
of variables x₁, …, xₙ. Let S be the solution space of the system Ax = 0 of m
homogeneous linear equations in x₁, …, xₙ, that is, S = {v ∈ Fⁿcol : Av = 0}.
Then dim S = [ ]
dim S = n − colrank A
Let V be a vector space over F
What is a bilinear form on V?
A bilinear form on V is a function of two variables from V taking values in F, often written ⟨−, −⟩ : V × V → F, such that
(i) ⟨α₁v₁ + α₂v₂, v₃⟩ = α₁⟨v₁, v₃⟩ + α₂⟨v₂, v₃⟩ for all v₁, v₂, v₃ ∈ V and α₁, α₂ ∈ F; and
(ii) ⟨v₁, α₂v₂ + α₃v₃⟩ = α₂⟨v₁, v₂⟩ + α₃⟨v₁, v₃⟩ for all v₁, v₂, v₃ ∈ V and α₂, α₃ ∈ F
What is a Gram matrix?
Let V be a vector space over F. Let ⟨−, −⟩ be a bilinear form on V. Take v₁, …, vₙ ∈ V. The Gram matrix of v₁, …, vₙ with respect to ⟨−, −⟩ is the n × n matrix (⟨vᵢ, vⱼ⟩) ∈ Mₙₓₙ(F)
Let V be a finite-dimensional vector space over F. Let ⟨−, −⟩ be a bilinear form on V. Let v₁, …, vₙ be a basis for V. Let
A ∈ Mₙₓₙ (F) be the associated Gram matrix. For u, v ∈ V , let x = (x₁, …, xₙ)∈ Fⁿ and y = (y₁, …, yₙ)∈ Fⁿ be the unique coordinate vectors such that u = x₁v₁ + … + xₙvₙ and v = y₁v₁ + … + yₙvₙ
Then ⟨u, v⟩ = ???
⟨u, v⟩ = xAyᵀ
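A numeric check of the formula, taking V = R³ with the usual dot product as the bilinear form and an arbitrary basis stored as the rows of a matrix (all choices here are illustrative):

```python
import numpy as np

V = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 0.]])     # rows are the basis vectors v1, v2, v3

G = V @ V.T                      # Gram matrix: G[i, j] = <v_i, v_j>

x = np.array([1., 2., 0.])       # coordinates of u
y = np.array([0., 1., 3.])       # coordinates of v
u, v = x @ V, y @ V              # u = x1 v1 + x2 v2 + x3 v3, similarly v

assert np.isclose(u @ v, x @ G @ y)   # <u, v> = x A y^T
```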
What is a symmetric bilinear form?
We say that a bilinear form ⟨−, −⟩ : V × V → F is symmetric if
⟨v₁, v₂⟩ = ⟨v₂, v₁⟩ for all v₁, v₂ ∈ V
What is a positive definite bilinear form?
Let V be a real vector space. We say that a bilinear form
⟨−, −⟩ : V × V → R is positive definite if ⟨v, v⟩ ≥ 0 for all v ∈ V, with ⟨v, v⟩ = 0 if and only if v = 0ᵥ.
What is an inner product on a real vector space?
An inner product on a real vector space V is a positive definite symmetric bilinear form on V
What is an inner product space?
We say that a real vector space is an inner product space if it is equipped
with an inner product. Unless otherwise specified, we write the inner product
as ⟨−, −⟩
Let V be a real inner product space
What is the norm/magnitude/length of v for v ∈ V?
||v|| := √⟨v, v⟩
Define the angle between any two vectors for any inner product space
the angle between nonzero vectors x, y ∈ V to be cos⁻¹(⟨x, y⟩ / (||x|| ||y||)), where this is taken to lie in the interval [0, π]
Let V be a finite-dimensional real inner product space. Take u ∈ V \ {0}. Define u⊥ ("u perp"). What is dim(u⊥)? How does V decompose in terms of u⊥?
u⊥ := {v ∈ V : ⟨u, v⟩ = 0}
Then u⊥ is a subspace of V
dim(u⊥) = dim V − 1, and V = Span(u) ⊕ u⊥
Let V be an inner product space. We say that {v₁, . . . , vₙ} ⊆ V is an orthonormal set if …
We say that {v₁, . . . , vₙ} ⊆ V is an orthonormal set if for all i, j we have
⟨vᵢ, vⱼ⟩ = δᵢⱼ (that is, 1 if i = j, and 0 if i ≠ j)
Let {v₁, . . . , vₙ} be an orthonormal set in an inner product space V. Are v₁, . . . , vₙ linearly dependent or independent?
linearly independent
So a set of n orthonormal vectors in an n-dimensional vector space is a [ ]
basis
Let V be an n-dimensional real inner product space.
Is there an orthonormal basis of V?
Yes: there is an orthonormal basis v₁, …, vₙ (one can be constructed, for example, by the Gram–Schmidt process).
Take X ∈ Mₙₓₙ(R). Consider Rⁿ equipped with the usual
inner product ⟨x, y⟩ = x · y. The following are equivalent:
(i) XXᵀ = [ ]
(ii) [ ] X = Iₙ
(iii) the [ ] of X form an orthonormal basis of Rⁿ;
(iv) the [ ] of X form an orthonormal basis of Rⁿcol;
(v) for all x, y ∈ Rⁿ, we have xX · yX = [ ]
(i) XXᵀ = Iₙ
(ii) XᵀX = Iₙ
(iii) the rows of X form an orthonormal basis of Rⁿ;
(iv) the columns of X form an orthonormal basis of Rⁿcol;
(v) for all x, y ∈ Rⁿ, we have xX · yX = x · y
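A rotation matrix gives a concrete instance of these conditions. A quick NumPy check (the angle and vectors are arbitrary illustrative choices):

```python
import numpy as np

t = 0.3
X = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # a rotation, hence orthogonal

I = np.eye(2)
assert np.allclose(X @ X.T, I) and np.allclose(X.T @ X, I)   # (i) and (ii)

x, y = np.array([1., 2.]), np.array([3., -1.])
assert np.isclose((x @ X) @ (y @ X), x @ y)                  # (v): xX · yX = x · y
```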
X is orthogonal iff the map Rₓ is an [ ]
Isometry
What is the Cauchy-Schwarz Inequality?
Let V be a real inner product
space. Take v₁, v₂ ∈ V. Then |⟨v₁, v₂⟩| ≤ ||v₁|| ||v₂||, with equality if and only if v₁, v₂ are linearly dependent.
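A quick numeric sanity check of the inequality with the usual dot product on R⁵ (random illustrative vectors; the second assert exhibits equality for a linearly dependent pair):

```python
import numpy as np

rng = np.random.default_rng(0)
v1, v2 = rng.standard_normal(5), rng.standard_normal(5)

assert abs(v1 @ v2) <= np.linalg.norm(v1) * np.linalg.norm(v2)
# Equality when the vectors are linearly dependent, e.g. v2 = 2 * v1:
assert np.isclose(abs(v1 @ (2 * v1)),
                  np.linalg.norm(v1) * np.linalg.norm(2 * v1))
```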
What is a complex inner product space?
A complex inner product space is a complex vector space equipped with
a positive definite sesquilinear form
What are Hermitian forms and spaces?
Hermitian form = Positive definite sesquilinear form
Hermitian Space = complex inner product space
Let V be a complex vector space
What is a sesquilinear form?
Let V be a complex vector space. A function ⟨−, −⟩ : V × V → C is a sesquilinear form if
(i) ⟨α₁v₁ + α₂v₂, v₃⟩ = α₁⟨v₁, v₃⟩ + α₂⟨v₂, v₃⟩ for all v₁, v₂, v₃ ∈ V and α₁, α₂ ∈ C; and
(ii) ⟨v₂, v₁⟩ is the complex conjugate of ⟨v₁, v₂⟩ for all v₁, v₂ ∈ V (conjugate symmetry).