Linear Algebra 1 Flashcards
Given m, n ≥ 1, what is an m × n matrix?
a rectangular array with m
rows and n columns.
What is a row vector?
A 1 × n matrix
What is a column vector?
An m × 1 matrix
What is a square matrix?
An n × n matrix
What is a diagonal matrix?
If A = (aᵢⱼ) is a
square matrix and aᵢⱼ = 0 whenever i ≠ j, then we say that A is a diagonal
matrix
What is F ? (Fancy F)
The field from which the entries (scalars) of a matrix come
Usually F = the reals, or the complex
What does Mₘₓₙ(F) mean?
Mₘₓₙ(F) = {A : A is an m × n matrix with entries from F}
What does Fⁿ mean? (Fancy F)
We write Fⁿ for M₁ₓₙ(F)
Similarly for Fᵐ
Is matrix addition associative and commutative?
Yes to both
What is the formula for entry (i, j) with matrix multiplication?
(AB)ᵢⱼ = Σₖ₌₁ⁿ aᵢₖbₖⱼ (sum over k from 1 to n, where A is m × n and B is n × p)
Is matrix multiplication associative?
Yes
Is matrix multiplication distributive?
Yes
When do two matrices commute?
If AB=BA
Not true for most A and B
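The entry formula and the fact that AB ≠ BA in general can be checked with a short sketch in Python (the function name `matmul` is my own, not from the notes; indices here are 0-based):

```python
# Minimal sketch of matrix multiplication via the entry formula
# (AB)_ij = sum over k of a_ik * b_kj (0-based indices here).

def matmul(A, B):
    m, n = len(A), len(A[0])
    assert len(B) == n, "inner dimensions must agree"
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]] -- so this A and B do not commute
```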
What is an upper triangular matrix?
Let A = (aᵢⱼ) ∈ Mₙₓₙ(F)
If aᵢⱼ = 0 whenever i > j
What is a lower triangular matrix?
Let A = (aᵢⱼ) ∈ Mₙₓₙ(F)
If aᵢⱼ = 0 whenever i < j
We say that A ∈ Mₙₓₙ(F) is invertible if …..
there exists B ∈ Mₙₓₙ(F) such that AB = Iₙ = BA.
If A ∈ Mₙₓₙ(F) is invertible, is the inverse unique?
Prove it
Yes. Proof: Suppose that B, C ∈ Mₙₓₙ(F) are both inverses for A. Then AB = BA = Iₙ and AC = CA = Iₙ, so B = BIₙ = B(AC) = (BA)C = IₙC = C.
Let A, B be invertible n×n matrices. Is AB invertible?
Yes
Let A, B be invertible n×n matrices.
What is (AB)⁻¹ ??
Prove it
(AB)⁻¹ = B⁻¹A⁻¹
Proof: (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIₙA⁻¹ = AA⁻¹ = Iₙ, and similarly (B⁻¹A⁻¹)(AB) = Iₙ. So AB is invertible with inverse B⁻¹A⁻¹.
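One can sanity-check (AB)⁻¹ = B⁻¹A⁻¹ on a concrete example; the sketch below uses the explicit 2 × 2 inverse formula and exact rational arithmetic (the helper names `inv2` and `mul2` are my own):

```python
# Check (AB)^{-1} = B^{-1} A^{-1} for 2x2 matrices, using the explicit
# 2x2 inverse formula and exact arithmetic with Fraction.
from fractions import Fraction

def inv2(M):
    a, b = M[0]; c, d = M[1]
    det = Fraction(a * d - b * c)
    assert det != 0, "matrix must be invertible"
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]   # det = -1, so invertible
B = [[2, 1], [1, 1]]   # det = 1, so invertible
lhs = inv2(mul2(A, B))
rhs = mul2(inv2(B), inv2(A))
print(lhs == rhs)  # True
```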
What is the transpose of A = (aᵢⱼ) ∈ Mₘₓₙ(F)?
the n × m matrix Aᵀ with (i, j) entry aⱼᵢ
What is an orthogonal matrix?
We say that A ∈ Mₙₓₙ(R) is orthogonal if AAᵀ = Iₙ = AᵀA
Equivalently, A is invertible and Aᵀ = A⁻¹
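A standard example (my own illustration, not from the notes): the 2 × 2 rotation matrix is orthogonal, since AAᵀ = I₂ up to floating-point error:

```python
# The rotation matrix [[cos t, -sin t], [sin t, cos t]] satisfies A A^T = I.
import math

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(X, Y):
    n = len(Y)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(len(Y[0]))]
            for i in range(len(X))]

t = 0.7
A = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
P = matmul(A, transpose(A))
I2 = [[1.0, 0.0], [0.0, 1.0]]
ok = all(math.isclose(P[i][j], I2[i][j], abs_tol=1e-12)
         for i in range(2) for j in range(2))
print(ok)  # True
```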
What is a unitary matrix?
We say that A ∈ Mₙₓₙ(C) is unitary if AĀᵀ = Iₙ = ĀᵀA
By Ā (A bar) we mean the matrix obtained from A by replacing each entry by its complex conjugate.
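For the complex case, here is a hedged sketch (the example matrix is a standard 2 × 2 unitary, chosen by me as an illustration): the condition AĀᵀ = Iₙ is checked with the conjugate transpose.

```python
# Check the unitary condition A * (Abar)^T = I for a standard example:
# U = (1/sqrt(2)) * [[1, i], [i, 1]].

def conj_transpose(M):
    return [[M[j][i].conjugate() for j in range(len(M))]
            for i in range(len(M[0]))]

def matmul(X, Y):
    n = len(Y)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(len(Y[0]))]
            for i in range(len(X))]

s = 1 / 2 ** 0.5
U = [[s + 0j, s * 1j], [s * 1j, s + 0j]]
P = matmul(U, conj_transpose(U))
ok = all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True
```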
What is the general strategy for solving a system of m equations in variables x1, …, xn by Gaussian elimination?
Swap equations if necessary to make the coefficient of x1 in the first
equation nonzero.
Divide through the first equation by the coefficient of x1
Subtract appropriate multiples of the first equation from all other equations to eliminate x1 from all but the first equation.
Now the first equation will tell us the value of x1 once we have determined the values of x2, . . . , xn, and we have m − 1 other equations in
n − 1 variables.
Use the same strategy to solve these m−1 equations in n−1 variables
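The strategy above can be sketched in Python for a square system with a unique solution. This version eliminates each variable from all other rows (Gauss–Jordan style) rather than recursing, and uses `fractions.Fraction` for exact arithmetic; the function name `solve` is my own:

```python
# Sketch of Gaussian elimination for a square system with unique solution:
# swap for a nonzero pivot, divide through, eliminate from the other rows.
from fractions import Fraction

def solve(A, b):
    n = len(A)
    # augmented matrix A | b with exact rational entries
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # swap equations if necessary to get a nonzero coefficient
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # divide through by that coefficient
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # subtract multiples to eliminate the variable from all other rows
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# x + y = 3, x - y = 1  =>  x = 2, y = 1
print(solve([[1, 1], [1, -1]], [3, 1]))
```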
What are the 3 elementary row operations on the augmented matrix A | b?
• for some 1 ≤ r < s ≤ m, interchange rows r and s;
• for some 1 ≤ r ≤ m and λ ≠ 0, multiply (every entry of) row r by λ;
• for some 1 ≤ r, s ≤ m with r ≠ s and λ ∈ F, add λ times row r to row s.
Are the EROs invertible?
Yes
We say that an m × n matrix E is in echelon form if…
(i) if row r of E has any nonzero entries, then the first of these is 1;
(ii) if 1 ≤ r < s ≤ m and rows r, s of E contain nonzero entries, the first of which are eᵣⱼ and eₛₖ respectively, then j < k (the leading entries of
lower rows occur to the right of those in higher rows);
(iii) if row r of E contains nonzero entries and row s does not (that is,
eₛⱼ = 0 for 1 ≤ j ≤ n), then r < s (zero rows, if any exist, appear
below all nonzero rows).
Let E | d be the m × (n + 1) augmented matrix of a system of equations, where E is in echelon form. We say that variable xⱼ is determined if …..
What is the alternative to being determined?
if there is i such that eᵢⱼ is the leading entry of row i of E (so eᵢⱼ = 1)
Otherwise we say that xⱼ is free
Gaussian Elimination:
What shows that the equations are inconsistent?
When the final row reads 0=1
What is reduced row echelon form?
We say that an m × n matrix is in reduced row echelon form
(RRE form) if it is in echelon form and if each column containing the leading
entry of a row has all other entries 0.
Can all matrices in echelon form be reduced to RRE form?
Yes
An invertible n × n matrix can be reduced to Iₙ using [ ]
Prove it
EROs
Proof:
Take A ∈ Mₙₓₙ(F) with A invertible.
Let E be an RRE form of A.
We can obtain E from A by EROs, and EROs do not change the solution
set of the system of equations Ax = 0. If Ax = 0, then x = Iₙ x = (A⁻¹A)x = A⁻¹(Ax) = A⁻¹0 = 0, so the only n × 1 column vector x with Ax = 0 is
x = 0. (Here 0 is the n × 1 column vector of zeros.) So the only solution of
Ex = 0 is x = 0.
We can read off solutions to Ex = 0. We could choose arbitrary values
for the free variables—but the only solution is x = 0, so there are no free
variables. So all the variables are determined, so each column must contain
the leading entry of a row (which must be 1). Since the leading entry of a
row comes to the right of leading entries of rows above, it must be the case
that E = Iₙ
What is an elementary matrix?
For an ERO on an m × n matrix, we define the corresponding
elementary matrix to be the result of applying that ERO to Iₘ
Is the inverse of an ERO an ERO?
Yes
Is the inverse of an elementary matrix an elementary matrix?
Yes
Let A be an m × n matrix, let B be obtained from A by applying
an ERO. Then B = EA, where E is …..
Prove it
E is the elementary matrix for that ERO.
Proof idea: for each of the three types of ERO, compute the entries of EA using the product formula and check that they agree with applying the ERO to A directly.
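The claim B = EA can be illustrated concretely. In this sketch (my own example, 0-based indices), E is built by applying the ERO "add 2 × row 0 to row 2" to I₃, and EA agrees with applying the same ERO to A:

```python
# B = E A, where E is the result of applying the ERO to the identity.

def matmul(X, Y):
    n = len(Y)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[1, 2], [3, 4], [5, 6]]
# ERO: add 2 * (row 0) to row 2; the same ERO applied to I_3 gives E:
E = [[1, 0, 0], [0, 1, 0], [2, 0, 1]]
B = [row[:] for row in A]
B[2] = [x + 2 * y for x, y in zip(B[2], B[0])]  # apply the ERO to A directly
print(matmul(E, A) == B)  # True
```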
Let A be an invertible n × n matrix. Let X1, X2, . . . , Xk be
a sequence of EROs that take A to Iₙ. Let B be the matrix obtained from In
by this same sequence of EROs. Then B = ??
prove it
B = A⁻¹
Proof:
Let Eᵢ be the elementary matrix corresponding to ERO Xᵢ. Then applying X1, X2, . . . , Xk to A gives the matrix Eₖ…E₂E₁A = Iₙ, and applying X1, X2, . . . , Xk to Iₙ gives the matrix Eₖ…E₂E₁ = B.
So BA = Iₙ, so B = A⁻¹
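This result gives the standard algorithm for computing A⁻¹: row-reduce the augmented block [A | Iₙ] until the left half is Iₙ; the right half is then A⁻¹. A hedged sketch with exact arithmetic (the function name `inverse` is my own):

```python
# Compute A^{-1} by row-reducing [A | I_n]: the EROs that take A to I_n
# take I_n to A^{-1}. Exact arithmetic via Fraction.
from fractions import Fraction

def inverse(A):
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]  # the right half is A^{-1}

print(inverse([[2, 1], [1, 1]]))  # the inverse [[1, -1], [-1, 2]], as Fractions
```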
The sequence of EROs X1, X2, . . . , Xk that take A to Iₙ exists.
Prove it
Proof: this is exactly the earlier result that an invertible n × n matrix can be reduced to Iₙ using EROs.
What is a vector space?
Let F be a field. A vector space over F is a non-empty set V together with a map V × V → V given by (v, v′) ↦ v + v′
(called addition) and a map F × V → V given by (λ, v) ↦ λv (called scalar multiplication)
that satisfy the vector space axioms
What are the vector space axioms?
Addition ones
• u + v = v + u for all u, v ∈ V (addition is commutative);
• u + (v + w) = (u + v) + w for all u, v, w ∈ V (addition is associative);
• there is 0ᵥ ∈ V such that v + 0ᵥ = v = 0ᵥ + v for all v ∈ V (existence of additive identity);
• for all v ∈ V there exists w ∈ V such that v+w = 0ᵥ = w + v (existence
of additive inverses);
What are the vector space axioms?
Multiplication ones
• λ(u + v) = λu + λv for all u, v ∈ V , λ ∈ F (distributivity of scalar multiplication over vector addition);
• (λ + µ)v = λv + µv for all v ∈ V , λ, µ ∈ F (distributivity of scalar multiplication over field addition);
• (λµ)v = λ(µv) for all v ∈ V , λ, µ ∈ F (scalar multiplication interacts
well with field multiplication);
• 1v = v for all v ∈ V (identity for scalar multiplication).
For m, n ≥ 1, is the set Mₘₓₙ(R) a real vector space?
Yes
Elements of V are called [ ]
Elements of F are called [ ]
If V is a vector space over R, then we say that V is a [ ] vector space
If V is a vector space over C, then we say that V is a [ ] vector space
If V is a vector space over F, then we say that V is an [ ] vector space
Vectors; scalars; real; complex; F (in that order)
Let V be a vector space over F
Then there is a [ ] additive identity element 0ᵥ
unique
Prove that:
Let V be a vector space over F. Take v ∈ V . Then there
is a unique additive inverse for v.
Proof:
Suppose that w, w′ ∈ V are both additive inverses for v, so v + w = 0ᵥ = w + v and v + w′ = 0ᵥ = w′ + v. Then w = w + 0ᵥ = w + (v + w′) = (w + v) + w′ = 0ᵥ + w′ = w′.
What is the unique additive inverse of v?
-v
Let V be a vector space over a field F. Take v ∈ V , λ ∈ F.
Then
λ0ᵥ = 0ᵥ
Prove it
We have
λ0ᵥ = λ(0ᵥ + 0ᵥ) (definition of additive identity)
= λ0ᵥ + λ0ᵥ (distributivity of scalar · over vector +).
Adding -(λ0ᵥ) to both sides, we have
λ0ᵥ + (-(λ0ᵥ)) = (λ0ᵥ + λ0ᵥ) + (-(λ0ᵥ))
so 0ᵥ = λ0ᵥ (using definition of additive inverse, associativity of addition, definition of additive identity).
Let V be a vector space over a field F. Take v ∈ V. Then 0v = 0ᵥ (the first 0 is the zero scalar of F, the second is the zero vector of V). Prove it
Proof:
We have 0v = (0 + 0)v = 0v + 0v (distributivity of scalar multiplication over field addition). Adding −(0v) to both sides gives 0ᵥ = 0v.
Let V be a vector space over a field F. Take v ∈ V , λ ∈ F.
Then
(−λ)v = −(λv) = λ(−v)
Prove it
We have
λv + λ(−v) = λ(v + (−v)) (distributivity of scalar · over vector +)
= λ0ᵥ (definition of additive inverse)
= 0ᵥ
So λ(−v) is the additive inverse of λv (by uniqueness), so λ(−v) =
−(λv).
Similarly, we see that λv + (−λ)v = 0ᵥ and so (−λ)v = −(λv).
Let V be a vector space over a field F. Take v ∈ V , λ ∈ F.
Then
If λv = 0ᵥ then λ = 0 or v = 0ᵥ
Prove it
Suppose that λv = 0ᵥ , and that λ ≠ 0
Then λ⁻¹ exists in F, and
λ⁻¹(λv) = λ⁻¹ 0ᵥ
so (λ⁻¹λ)v = 0ᵥ (scalar · interacts well with field ·, and λ⁻¹0ᵥ = 0ᵥ by the earlier result)
so
1v = 0ᵥ
so v = 0ᵥ (identity for scalar multiplication)
What is a subspace?
Let V be a vector space over F. A subspace of V is a non-empty subset of V that is closed under addition and scalar multiplication, that is,
a subset U ⊆ V such that
(i) U ≠ ∅ (U is non-empty);
(ii) u₁ + u₂ ∈ U for all u₁, u₂ ∈ U (U is closed under addition);
(iii) λu ∈ U for all u ∈ U, λ ∈ F (U is closed under scalar multiplication).
Is {0ᵥ} a subspace of V?
Always
The zero/trivial subspace
Is V a subspace of V?
Always
What do we call a subspace of V that is not V itself?
A proper subspace
What is the subspace test?
Let V be a vector space over F, let U be a subset of V . Then U is a subspace if and only if
(i) 0ᵥ ∈ U; and
(ii) λu₁ + u₂ ∈ U for all u₁, u₂ ∈ U and λ ∈ F
Prove the subspace test
Assume that U is a subspace of V .
• 0ᵥ ∈ U: Since U is a subspace, it is non-empty, so there exists u₀ ∈ U.
Since U is closed under scalar multiplication, 0u₀ = 0ᵥ ∈ U.
• λu₁ + u₂ ∈ U for all u₁, u₂ ∈ U and λ ∈ F: take u₁, u₂ ∈ U and λ ∈ F. Then λu₁ ∈ U because U is closed under scalar multiplication, so λu₁ + u₂ ∈ U because U is closed under addition.
(Prove other direction)
Assume that 0ᵥ ∈ U and that λu₁ + u₂ ∈ U for all u₁, u₂ ∈ U, and λ ∈ F.
• U is non-empty: have 0ᵥ ∈ U
• U is closed under addition: for u₁, u₂ ∈ U we have u₁ + u₂ = 1u₁ + u₂ ∈ U
• U is closed under scalar multiplication: for u ∈ U and λ ∈ F, have λu = λu + 0ᵥ ∈ U
So U is a subspace of V
What does the notation U ≤ V mean? What is the difference between that and U ⊆ V?
If U is a subspace of the vector space V , then we write U ≤ V . (Compare with U ⊆ V , which means that U is a subset of V but we do not
know whether it is a subspace.)
Let V be a vector space over F, and let U ≤ V . Then
(i) U is a vector space over F;
Prove it
We need to check the vector space axioms, but first we need to
check that we have legitimate operations.
Since U is closed under addition, the operation + restricted to U gives
a map U × U → U.
Since U is closed under scalar multiplication, that operation restricted
to U gives a map F × U → U.
Now for the axioms.
Commutativity and associativity of addition are inherited from V .
There is an additive identity (by the Subspace Test).
There are additive inverses: if u ∈ U then multiplying by −1 ∈ F and
applying [(−λ)v = −(λv) = λ(−v)] shows that −u ∈ U.
The other four properties are all inherited from V .
Let V be a vector space over F, and let U ≤ V . Then
(ii) if W ≤ U then W ≤ V (“a subspace of a subspace is a subspace”).
Prove it
This is immediate from the definition of a subspace
Let V be a vector space over F. Take A, B ⊆ V and take λ ∈ F
Define A+B and λA
A + B := {a + b : a ∈ A, b ∈ B}
λA := {λa : a ∈ A}.
Let V be a vector space. Take U, W ≤ V
Is U+W a subspace of V?
Is U ∩ W a subspace of V?
Prove it
Yes to both: U + W ≤ V and U ∩ W ≤ V.
Proof (Subspace Test): 0ᵥ = 0ᵥ + 0ᵥ ∈ U + W, and 0ᵥ ∈ U ∩ W. Given λ ∈ F and u₁ + w₁, u₂ + w₂ ∈ U + W, we have λ(u₁ + w₁) + (u₂ + w₂) = (λu₁ + u₂) + (λw₁ + w₂) ∈ U + W. Given λ ∈ F and x₁, x₂ ∈ U ∩ W, we have λx₁ + x₂ ∈ U and λx₁ + x₂ ∈ W, so λx₁ + x₂ ∈ U ∩ W.
Does R, the reals, have any non-zero proper subspaces, and if so what are they?
No
Let V = R, let U be a non-trivial subspace of V
Then there exists u₀ ∈ U with u₀ ≠ 0. Take x ∈ R. Let λ = x/u₀. Then x = λu₀ ∈ U, because U is closed under scalar multiplication. So U = V. So R has no non-zero proper subspaces.
Let V be a vector space over F, take u₁, u₂, …, uₘ∈ V .
Define U := {α₁u₁ + … + αₘuₘ : α₁, …, αₘ ∈ F}. Then U ≤ V .
Prove it
By the Subspace Test: 0ᵥ = 0u₁ + … + 0uₘ ∈ U; and for λ ∈ F and α₁u₁ + … + αₘuₘ, β₁u₁ + … + βₘuₘ ∈ U we have
λ(α₁u₁ + … + αₘuₘ) + (β₁u₁ + … + βₘuₘ) = (λα₁ + β₁)u₁ + … + (λαₘ + βₘ)uₘ ∈ U.
Let V be a vector space over F, take u₁, u₂, …, uₘ∈ V. What is a linear combination of u₁, u₂, …, uₘ
a vector α₁u₁ + … + αₘuₘ for some α₁, …, αₘ ∈ F
Define the span of u₁, u₂, …, uₘ
Span(u₁, u₂, …, uₘ) := {α₁u₁ + … + αₘuₘ : α₁, …, αₘ ∈ F}.
The smallest subspace of V that contains u₁, u₂, …, uₘ
What are the different notations for the span of u₁, u₂, …, uₘ
Span(u₁, u₂, …, uₘ)
Sp(u₁, u₂, …, uₘ)
⟨u₁, u₂, …, uₘ⟩
Define the span of a set S ⊆ V (even a potentially infinite set S)
Span(S) := {α₁s₁ + … + αₘsₘ : m ≥ 0,s₁, …, sₘ ∈ S, α₁, …, αₘ ∈ F}
Can a linear combination involve infinitely many elements? Say if S is infinite
No
a linear combination only ever involves finitely many
elements of S, even if S is infinite.
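Deciding whether a given vector is a linear combination of u₁, …, uₘ amounts to solving a linear system, and the system is inconsistent exactly when elimination produces a row reading 0 = 1. A sketch over F = Q (the function name `in_span` is my own):

```python
# Is target in Span(vectors)? Row-reduce the augmented matrix whose columns
# are the spanning vectors, with target as the last column; inconsistency
# shows up as a row with all-zero coefficients but nonzero right-hand side.
from fractions import Fraction

def in_span(vectors, target):
    n = len(target)
    M = [[Fraction(v[i]) for v in vectors] + [Fraction(target[i])]
         for i in range(n)]
    rows, cols = n, len(vectors)
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue  # no pivot in this column: the variable is free
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r:
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    # consistent iff no row reads 0 = (nonzero)
    return not any(all(x == 0 for x in row[:-1]) and row[-1] != 0 for row in M)

print(in_span([[1, 0, 1], [0, 1, 1]], [2, 3, 5]))  # True: 2u1 + 3u2
print(in_span([[1, 0, 1], [0, 1, 1]], [1, 1, 1]))  # False
```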
What is the empty sum?
And what is the span of the empty set?
Σᵢ∈∅ αᵢuᵢ = 0ᵥ (the ‘empty sum’), so
Span(∅) = {0ᵥ}