Linear Algebra 1 Flashcards

1
Q

Given m, n ≥ 1, what is an m × n matrix?

A

a rectangular array with m

rows and n columns.

2
Q

What is a row vector?

A

A 1 x n matrix

3
Q

What is a column vector?

A

An m x 1 matrix

4
Q

What is a square matrix?

A

An n x n matrix

5
Q

What is a diagonal matrix?

A

If A = (aᵢⱼ) is a
square matrix and aᵢⱼ = 0 whenever i ≠ j, then we say that A is a diagonal
matrix

6
Q

What is F? (Fancy F)

A

The field from which the entries (scalars) of a matrix come

Usually F = R (the reals) or C (the complex numbers)

7
Q

What does Mₘₓₙ(F) mean?

A

Mₘₓₙ(F) = {A : A is an m × n matrix with entries from F}

8
Q

What does Fⁿ mean? (Fancy F)

A

We write Fⁿ for M₁ₓₙ(F)

Similarly for Fᵐ

9
Q

Is matrix addition associative and commutative?

A

Yes to both

10
Q

What is the formula for entry (i, j) with matrix multiplication?

A

(AB)ᵢⱼ = Σₖ₌₁ⁿ aᵢₖbₖⱼ (where A is m × n and B is n × p)
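The entry formula can be checked with a minimal sketch (pure Python, 0-based indices, matrices chosen by hand for illustration):

```python
# Naive matrix multiplication implementing the entry formula
# (AB)_ij = sum over k of a_ik * b_kj.
def matmul(A, B):
    m, n = len(A), len(A[0])        # A is m x n
    assert len(B) == n              # B must be n x p for AB to exist
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```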

11
Q

Is matrix multiplication associative?

A

Yes

12
Q

Is matrix multiplication distributive?

A

Yes

13
Q

When do two matrices commute?

A

If AB=BA

Not true for most A and B
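A quick counterexample (my own choice of 2 × 2 matrices) showing that most pairs fail to commute:

```python
# Two 2x2 matrices that do not commute: AB differs from BA.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

AB = matmul(A, B)  # [[2, 1], [1, 1]]
BA = matmul(B, A)  # [[1, 1], [1, 2]]
assert AB != BA
```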

14
Q

What is an upper triangular matrix?

A

Let A = (aᵢⱼ) ∈ Mₙₓₙ(F). We say that A is upper triangular if aᵢⱼ = 0 whenever i > j.

15
Q

What is a lower triangular matrix?

A

Let A = (aᵢⱼ) ∈ Mₙₓₙ(F). We say that A is lower triangular if aᵢⱼ = 0 whenever i < j.

16
Q

We say that A ∈ Mₙₓₙ(F) is invertible if …..

A

there exists B ∈ Mₙₓₙ(F) such that AB = Iₙ = BA.

17
Q

If A ∈ Mₙₓₙ(F) is invertible, is the inverse unique?

Prove it

A
Yes
Proof:
Suppose that B, C ∈ Mₙₓₙ(F) are both inverses for A
Then AB = BA = Iₙ and AC = CA = Iₙ
so B = BIₙ = B(AC) = (BA)C = IₙC = C.
18
Q

Let A, B be invertible n×n matrices. Is AB invertible?

A

Yes

19
Q

Let A, B be invertible n×n matrices.
What is (AB)⁻¹ ??
Prove it

A

(AB)⁻¹ = B⁻¹A⁻¹
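A numerical spot-check of the identity (not a proof), with invertible matrices chosen arbitrarily:

```python
import numpy as np

# Check that (AB)^(-1) equals B^(-1) A^(-1) for a sample pair.
A = np.array([[2.0, 1.0], [1.0, 1.0]])   # det = 1, invertible
B = np.array([[1.0, 3.0], [0.0, 1.0]])   # det = 1, invertible

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
assert np.allclose(lhs, rhs)
```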

20
Q

What is the transpose of A = (aᵢⱼ) ∈ Mₘₓₙ(F)?

A

the n × m matrix Aᵀ with (i, j) entry aⱼᵢ

21
Q

What is an orthogonal matrix?

A

We say that A ∈ Mₙₓₙ(R) is orthogonal if AAᵀ = Iₙ = AᵀA

Equivalently, A is invertible and Aᵀ = A⁻¹
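A standard example (a 2 × 2 rotation matrix, angle chosen arbitrarily) illustrating both characterisations:

```python
import numpy as np

# A rotation matrix is orthogonal: A A^T = I = A^T A,
# so its inverse is simply its transpose.
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(A @ A.T, np.eye(2))
assert np.allclose(A.T, np.linalg.inv(A))
```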

22
Q

What is a unitary matrix?

A

We say that A ∈ Mₙₓₙ(C) is unitary if AĀᵀ = Iₙ = ĀᵀA.
By Ā (A bar) we mean the matrix obtained from A by replacing each entry by its complex conjugate.
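A small unitary example (my own choice of matrix), checked numerically:

```python
import numpy as np

# A unitary matrix: multiplying by its conjugate transpose gives I.
A = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])

assert np.allclose(A @ A.conj().T, np.eye(2))
assert np.allclose(A.conj().T @ A, np.eye(2))
```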

23
Q

What is the general strategy for solving a system of m equations in variables x1, …, xn by Gaussian elimination?

A

Swap equations if necessary to make the coefficient of x1 in the first
equation nonzero.
Divide through the first equation by the coefficient of x1
Subtract appropriate multiples of the first equation from all other equations to eliminate x1 from all but the first equation.
Now the first equation will tell us the value of x1 once we have determined the values of x2, . . . , xn, and we have m − 1 other equations in
n − 1 variables.
Use the same strategy to solve these m−1 equations in n−1 variables
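The steps above can be sketched for a square system (a minimal illustration with a made-up system; no attention is paid to numerical stability):

```python
# Gaussian elimination following the strategy above:
# swap for a nonzero pivot, normalise, eliminate the variable from
# all other equations, repeat on the remaining columns.
def solve(aug):
    """Solve a square system given as an augmented matrix [A | b]."""
    n = len(aug)
    for col in range(n):
        # Swap equations so the pivot coefficient is nonzero.
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Divide through by the pivot coefficient.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Subtract multiples to eliminate the variable elsewhere.
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[-1] for row in aug]

# x + y = 3, 2x - y = 0  =>  x = 1, y = 2
print(solve([[1.0, 1.0, 3.0], [2.0, -1.0, 0.0]]))  # [1.0, 2.0]
```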

24
Q

What are the 3 elementary row operations on the augmented matrix A|b

A

• for some 1 ≤ r < s ≤ m, interchange rows r and s;
• for some 1 ≤ r ≤ m and λ ≠ 0, multiply (every entry of) row r by λ;
• for some 1 ≤ r, s ≤ m with r ≠ s and λ ∈ F, add λ times row r to row s.

25
Q

Are the EROs invertible?

A

Yes

26
Q

We say that an m × n matrix E is in echelon form if…

A

(i) if row r of E has any nonzero entries, then the first of these is 1;
(ii) if 1 ≤ r < s ≤ m and rows r, s of E contain nonzero entries, the first of which are eᵣⱼ and eₛₖ respectively, then j < k (the leading entries of
lower rows occur to the right of those in higher rows);
(iii) if row r of E contains nonzero entries and row s does not (that is,
eₛⱼ = 0 for 1 ≤ j ≤ n), then r < s (zero rows, if any exist, appear
below all nonzero rows).

27
Q

Let E | d be the m × (n + 1) augmented matrix of a system of equations, where E is in echelon form. We say that variable xⱼ is determined if …..
What is the alternative to being determined?

A

if there is i such that eᵢⱼ is the leading entry of row i of E (so eᵢⱼ = 1)
Otherwise we say that xⱼ is free

28
Q

Gaussian Elimination:

What shows that the equations are inconsistent?

A

When a row of the reduced system reads 0 = 1 (all coefficients zero but a nonzero constant)

29
Q

What is reduced row echelon form?

A

We say that an m × n matrix is in reduced row echelon form
(RRE form) if it is in echelon form and if each column containing the leading
entry of a row has all other entries 0.

30
Q

Can all matrices in echelon form be reduced to RRE form?

A

Yes

31
Q

An invertible nxn matrix can be reduced to Iₙ using [ ]

Prove it

A

EROs
Proof:
Take A ∈ Mₙₓₙ(F) with A invertible.
Let E be an RRE form of A.
We can obtain E from A by EROs, and EROs do not change the solution
set of the system of equations Ax = 0. If Ax = 0, then x = Iₙ x = (A⁻¹A)x = A⁻¹(Ax) = A⁻¹0 = 0, so the only n × 1 column vector x with Ax = 0 is
x = 0. (Here 0 is the n × 1 column vector of zeros.) So the only solution of
Ex = 0 is x = 0.
We can read off solutions to Ex = 0. We could choose arbitrary values
for the free variables—but the only solution is x = 0, so there are no free
variables. So all the variables are determined, so each column must contain
the leading entry of a row (which must be 1). Since the leading entry of a
row comes to the right of leading entries of rows above, it must be the case
that E = Iₙ

32
Q

What is an elementary matrix?

A

For an ERO on an m × n matrix, we define the corresponding

elementary matrix to be the result of applying that ERO to Iₘ

33
Q

Is the inverse of an ERO an ERO?

A

Yes

34
Q

Is the inverse of an elementary matrix an elementary matrix?

A

Yes

35
Q

Let A be an m × n matrix, let B be obtained from A by applying
an ERO. Then B = EA, where E is …..
Prove it

A

E is the elementary matrix for that ERO

36
Q

Let A be an invertible n × n matrix. Let X1, X2, . . . , Xk be
a sequence of EROs that take A to Iₙ. Let B be the matrix obtained from In
by this same sequence of EROs. Then B = ??

prove it

A

B = A⁻¹

Proof:
Let Eᵢ be the elementary matrix corresponding to ERO Xᵢ. Then applying X₁, X₂, …, Xₖ to A gives the matrix Eₖ…E₂E₁A = Iₙ, and applying X₁, X₂, …, Xₖ to Iₙ gives the matrix Eₖ…E₂E₁ = B.
So BA = Iₙ, so B = A⁻¹
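This theorem is the basis of the familiar recipe: row-reduce the augmented block [A | Iₙ] and read A⁻¹ off the right-hand half. A sketch with exact arithmetic (example matrix is my own):

```python
# Apply the EROs that reduce A to I_n to a copy of I_n as well,
# by row-reducing the block [A | I]. Fractions keep things exact.
from fractions import Fraction

def inverse(A):
    n = len(A)
    aug = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]   # right-hand block is A^(-1)

Ainv = inverse([[2, 1], [1, 1]])
assert Ainv == [[1, -1], [-1, 2]]    # Fractions compare equal to ints
```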

37
Q

The sequence of EROs X1, X2, . . . , Xk that take A to Iₙ exists.
Prove it

A

By Theorem 6: an invertible n × n matrix can be reduced to Iₙ using EROs.

38
Q

What is a vector space?

A

Let F be a field. A vector space over F is a non-empty set V together with a map V × V → V given by (v, v′) |→ v + v′
(called addition) and a map F × V → V given by (λ, v) |→ λv (called scalar multiplication)
that satisfy the vector space axioms

39
Q

What are the vector space axioms?

Addition ones

A

• u + v = v + u for all u, v ∈ V (addition is commutative);
• u + (v + w) = (u + v) + w for all u, v, w ∈ V (addition is associative);
• there is 0ᵥ ∈ V such that v + 0ᵥ = v = 0ᵥ + v for all v ∈ V (existence of additive identity);
• for all v ∈ V there exists w ∈ V such that v+w = 0ᵥ = w + v (existence
of additive inverses);

40
Q

What are the vector space axioms?

Multiplication ones)

A

• λ(u + v) = λu + λv for all u, v ∈ V , λ ∈ F (distributivity of scalar multiplication over vector addition);
• (λ + µ)v = λv + µv for all v ∈ V , λ, µ ∈ F (distributivity of scalar multiplication over field addition);
• (λµ)v = λ(µv) for all v ∈ V , λ, µ ∈ F (scalar multiplication interacts
well with field multiplication);
• 1v = v for all v ∈ V (identity for scalar multiplication).

41
Q

For m, n ≥ 1, is the set Mₘₓₙ(R) a real vector space?

A

Yes

42
Q

Elements of V are called [ ]
Elements of F are called [ ]
If V is a vector space over R, then we say that V is a [ ] vector space
If V is a vector space over C, then we say that V is a [ ] vector space
If V is a vector space over F, then we say that V is an [ ] vector space

A
Vectors
Scalars
Real
Complex
F
43
Q

Let V be a vector space over F

Then there is a [ ] additive identity element 0ᵥ

A

unique

44
Q

Prove that:
Let V be a vector space over F. Take v ∈ V . Then there
is a unique additive inverse for v.

A

Proof:
Suppose that w, w′ ∈ V are both additive inverses for v.
Then w = w + 0ᵥ = w + (v + w′) = (w + v) + w′ = 0ᵥ + w′ = w′.

45
Q

Let V be a vector space over F. Take v ∈ V .

What is the unique additive inverse of v?

A

-v

46
Q

Let V be a vector space over a field F. Take v ∈ V , λ ∈ F.
Then
λ0ᵥ = 0ᵥ
Prove it

A

We have
λ0ᵥ = λ(0ᵥ + 0ᵥ) (definition of additive identity)
= λ0ᵥ + λ0ᵥ (distributivity of scalar · over vector +).
Adding -(λ0ᵥ) to both sides, we have
λ0ᵥ + (-(λ0ᵥ)) = (λ0ᵥ + λ0ᵥ) + (-(λ0ᵥ))
so 0ᵥ = λ0ᵥ (using definition of additive inverse, associativity of addition, definition of additive identity).

47
Q
Let V be a vector space over a field F. Take v ∈ V , λ ∈ F.
Then
0v = 0ᵥ
(the first 0 is the scalar zero of F; 0ᵥ is the zero vector of V)
Prove it
A

Proof:
0v = (0 + 0)v = 0v + 0v (distributivity of scalar multiplication over field addition).
Adding −(0v) to both sides gives 0ᵥ = 0v.

48
Q

Let V be a vector space over a field F. Take v ∈ V , λ ∈ F.
Then
(−λ)v = −(λv) = λ(−v)
Prove it

A

We have
λv + λ(−v) = λ(v + (−v)) (distributivity of scalar · over vector +)
= λ0ᵥ (definition of additive inverse)
= 0ᵥ
So λ(−v) is the additive inverse of λv (by uniqueness), so λ(−v) =
−(λv).
Similarly, we see that λv + (−λ)v = 0ᵥ and so (−λ)v = −(λv).

49
Q

Let V be a vector space over a field F. Take v ∈ V , λ ∈ F.
Then
If λv = 0ᵥ then λ = 0 or v = 0ᵥ
Prove it

A

Suppose that λv = 0ᵥ , and that λ ≠ 0
Then λ⁻¹ exists in F, and
λ⁻¹(λv) = λ⁻¹ 0ᵥ
so (λ⁻¹ λ)v = 0ᵥ (scalar · interacts well with field ·, and by (i))
so
1v = 0ᵥ
so v = 0ᵥ (identity for scalar multiplication)

50
Q

What is a subspace?

A

Let V be a vector space over F. A subspace of V is a non-empty subset of V that is closed under addition and scalar multiplication, that is,
a subset U ⊆ V such that
(i) U ≠ ∅ (U is non-empty);
(ii) u₁ + u₂ ∈ U for all u₁, u₂ ∈ U (U is closed under addition);
(iii) λu ∈ U for all u ∈ U, λ ∈ F (U is closed under scalar multiplication).

51
Q

Is {0ᵥ} a subspace of V?

A

Always

The zero/trivial subspace

52
Q

Is V a subspace of V?

A

Always

53
Q

What do we call a subspace of V that is not equal to V?

A

A proper subspace

54
Q

What is the subspace test?

A

Let V be a vector space over F, let U be a subset of V . Then U is a subspace if and only if

(i) 0ᵥ ∈ U; and
(ii) λu₁ + u₂ ∈ U for all u₁, u₂ ∈ U and λ ∈ F

55
Q

Prove the subspace test

A

Assume that U is a subspace of V .
• 0ᵥ ∈ U: since U is a subspace, it is non-empty, so there exists u₀ ∈ U.
Since U is closed under scalar multiplication, 0u₀ = 0ᵥ ∈ U.
• λu₁ + u₂ ∈ U: take u₁, u₂ ∈ U and λ ∈ F. Then λu₁ ∈ U because U is closed under scalar multiplication, so λu₁ + u₂ ∈ U because U is closed under addition.

(Prove other direction)
Assume that 0ᵥ ∈ U and that λu₁ + u₂ ∈ U for all u₁, u₂ ∈ U, and λ ∈ F.
• U is non-empty: have 0ᵥ ∈ U
• U is closed under addition: for u₁, u₂ ∈ U we have u₁ + u₂ = 1u₁ + u₂ ∈ U
• U is closed under scalar multiplication: for u ∈ U and λ ∈ F, have λu = λu + 0ᵥ ∈ U
So U is a subspace of V

56
Q

What does the notation U ≤ V mean? What is the difference between that and U ⊆ V?

A

If U is a subspace of the vector space V , then we write U ≤ V . (Compare with U ⊆ V , which means that U is a subset of V but we do not
know whether it is a subspace.)

57
Q

Let V be a vector space over F, and let U ≤ V . Then
(i) U is a vector space over F;
Prove it

A

We need to check the vector space axioms, but first we need to
check that we have legitimate operations.
Since U is closed under addition, the operation + restricted to U gives
a map U × U → U.
Since U is closed under scalar multiplication, that operation restricted
to U gives a map F × U → U.
Now for the axioms.
Commutativity and associativity of addition are inherited from V .
There is an additive identity (by the Subspace Test).
There are additive inverses: if u ∈ U then multiplying by −1 ∈ F and
applying [(−λ)v = −(λv) = λ(−v)] shows that −u ∈ U.
The other four properties are all inherited from V .

58
Q

Let V be a vector space over F, and let U ≤ V . Then
(ii) if W ≤ U then W ≤ V (“a subspace of a subspace is a subspace”).
Prove it

A

This is immediate from the definition of a subspace

59
Q

Let V be a vector space over F. Take A, B ⊆ V and take λ ∈ F

Define A+B and λA

A

A + B := {a + b : a ∈ A, b ∈ B}

λA := {λa : a ∈ A}.

60
Q

Let V be a vector space. Take U, W ≤ V
Is U+W a subspace of V?
Is U ∩ W a subspace of V?
Prove it

A

Yes to both:
U + W ≤ V and U ∩ W ≤ V.

61
Q

Does R, the reals have any proper subspaces, and if so what are they?

A

Only the zero subspace {0ᵥ}.
Let V = R and let U be a non-zero subspace of V.
Then there exists u₀ ∈ U with u₀ ≠ 0. Take x ∈ R and let λ = x/u₀. Then x = λu₀ ∈ U, because U is closed under scalar multiplication. So U = V. So R has no non-zero proper subspaces.

62
Q

Let V be a vector space over F, take u₁, u₂, …, uₘ∈ V .
Define U := {α₁u₁ + … + αₘuₘ : α₁, …, αₘ ∈ F}. Then U ≤ V .
Prove it

A

Subspace test

pg29

63
Q

Let V be a vector space over F, take u₁, u₂, …, uₘ∈ V. What is a linear combination of u₁, u₂, …, uₘ

A

a vector α₁u₁ + … + αₘuₘ for some α₁, …, αₘ ∈ F

64
Q

Define the span of u₁, u₂, …, uₘ

A

Span(u₁, u₂, …, uₘ) := {α₁u₁ + … + αₘuₘ : α₁, …, αₘ ∈ F}.
The smallest subspace of V that contains u₁, u₂, …, uₘ

65
Q

What are the different notations for the span of u₁, u₂, …, uₘ

A

Span(u₁, u₂, …, uₘ)
Sp(u₁, u₂, …, uₘ)
⟨u₁, u₂, …, uₘ⟩

66
Q

Define the span of a set S ⊆ V (even a potentially infinite set S)

A

Span(S) := {α₁s₁ + … + αₘsₘ : m ≥ 0,s₁, …, sₘ ∈ S, α₁, …, αₘ ∈ F}

67
Q

Can a linear combination involve infinitely many elements? Say if S is infinite

A

No
a linear combination only ever involves finitely many
elements of S, even if S is infinite.

68
Q

What is the empty sum?

And what is the span of the empty set?

A

Σᵢ∈∅ αᵢuᵢ = 0ᵥ (the ‘empty sum’), so

Span ∅ = {0ᵥ}

69
Q

For any S ⊆ V, what is the relationship between Span(S) and V

A

Span(S) ≤ V

70
Q

What is a spanning set?

A

Let V be a vector space over F. If S ⊆ V is such that V = Span(S), then we say that S spans V, and that S is a spanning set for V.

71
Q

Define linear dependence

A

Let V be a vector space over F. We say that v₁, …, vₘ ∈ V are linearly dependent if there are α₁, …, αₘ ∈ F, not all 0, such that α₁v₁ + … + αₘvₘ = 0ᵥ.

72
Q

Define linear independence

A

If v₁, …, vₘ ∈ V are not linearly dependent, then we say that they are linearly independent.

73
Q

When is S ⊆ V linearly independent?

A

We say that S ⊆ V is linearly independent if every finite subset of S is
linearly independent

74
Q

So v₁, …, vₘ ∈ Vare linearly independent if and only if …

A

So v₁, …, vₘ ∈ V are linearly independent if and only if the only linear combination of them that gives 0ᵥ is the trivial combination, that is,
if and only if α₁v₁ + … + αₘvₘ = 0 implies α₁ = … = αₘ = 0

75
Q

Let v₁, …, vₘ be linearly independent in an F-vector space V . Let vₘ₊₁∈ V be such that vₘ₊₁ ∉ Span(v₁, …, vₘ). Then v₁, …, vₘ, vₘ₊₁ are linearly [ ]
Prove it

A

Independent
Proof:
Take α₁, …, αₘ₊₁ ∈ F such that α₁v₁ + … + αₘ₊₁vₘ₊₁ = 0
If αₘ₊₁ ≠ 0, then we have
vₘ₊₁ = - (1/ αₘ₊₁)(α₁v₁ + … + αₘvₘ) ∈ Span(v₁, …, vₘ) which is a contradiction.
So αₘ₊₁ = 0, so α₁v₁ + … + αₘvₘ = 0
But v₁, …, vₘ are linearly independent, so this means that α₁ = … = αₘ = 0

76
Q

Let V be a vector space.

What is a basis of V

A

A basis of V is a linearly independent spanning set.

77
Q

Define a finite dimensional vector space

A

A vector space with a finite basis

78
Q

What is the standard basis of Rⁿ?

A

For 1 ≤ i ≤ n, let eᵢ be the row vector with coordinate 1 in the ith entry and 0 elsewhere.
Then e₁, …, eₙ are linearly independent: if α₁e₁ + … + αₙeₙ = 0 then by looking at the ith entry we see that αᵢ = 0 for all i.
Also, e₁, …, eₙ span Rⁿ, because (α₁, …, αₙ) = α₁e₁ + … + αₙeₙ.
So e₁, …, eₙ is a basis for Rⁿ, called the standard basis.

79
Q

What is the standard basis of Mₘₓₙ?

A

Consider V = Mₘₓₙ(R). For 1 ≤ i ≤ m and 1 ≤ j ≤ n, let Eᵢⱼ be the matrix with a 1 in entry (i, j) and 0 elsewhere. Then {Eᵢⱼ : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for V , called the standard basis of Mₘₓₙ(R)

80
Q

Let V be a vector space over F, let S = {v₁, …, vₙ} ⊆ V .
Then S is a basis of V if and only if every vector in V has a unique expression
as a linear combination of elements of S.
Prove it

A

Proof pg32

Prop 17

81
Q

Let V be a vector space over F. Suppose that V has a finite
spanning set S.
Then S contains a linearly independent [ ]

A

Spanning set

82
Q

if V has a finite spanning set, then V has a [ ]

Prove it

A

basis
Proof:
Let S be a finite spanning set for V .
Take T ⊆ S such that T is linearly independent, and T is a largest such
set (any linearly independent subset of S has size ≤ |T|).
Suppose, for a contradiction, that Span(T) ≠ V .
Then, since Span(S) = V , there must exist v ∈ S \ Span(T).
Now by Lemma 16 we see that T ∪ {v} is linearly independent, and
T ∪ {v} ⊆ S, and |T ∪ {v}| > |T|, which contradicts our choice of T.
So T spans V , and by our choice is linearly independent.

83
Q

What is Steinitz Exchange Lemma?

A

Let V be a vector space over
F. Take X ⊆ V . Suppose that u ∈ Span(X) but that u ∉ Span(X \ {v})
for some v ∈ X. Let Y = (X \ {v}) ∪ {u} (“exchange u for v”). Then
Span(Y ) = Span(X).

84
Q

Prove Steinitz Exchange Lemma

A

pg34

85
Q

Let V be a vector space. Let S, T be finite subsets of V . If S is linearly independent and T spans V , then |S| …..

A

Let V be a vector space. Let S, T be finite subsets of V . If S is linearly independent and T spans V , then |S| ≤ |T|. “linearly independent
sets are at most as big as spanning sets”

86
Q

Let V be a vector space. Let S, T be finite subsets of V . If S is linearly independent and T spans V , then |S| ≤ |T|. “linearly independent
sets are at most as big as spanning sets”

Prove it

A

Proof pg 35 (top)

87
Q

Let V be a finite-dimensional vector space. Let S, T be bases
of V . Then S and T are finite, and |S| = ????

A

Then S and T are finite, and |S| = |T|

88
Q

Let V be a finite-dimensional vector space. Let S, T be bases of V . Then S and T are finite, and |S| = |T|

Prove it

A

Since V is finite-dimensional, it has a finite basis B. Say |B| = n.
Now B is a spanning set and |B| = n, so by Theorem 20 any finite linearly
independent subset of V has size at most n.
Since S is a basis of V , it is linearly independent, so every finite subset
of S is linearly independent.
So in fact S must be finite, and |S| ≤ n. Similarly, T is finite and |T| ≤ n.
Now S is linearly independent and T is spanning, so by Theorem 20
|S| ≤ |T|.
Applying Theorem 20 with the roles of S and T reversed shows that
|S| ≥ |T|.
So |S| = |T|

89
Q

What is the dimension of a finite-dimensional vector space?

A

Let V be a finite-dimensional vector space. The dimension of V , written dim V , is the size of any basis of V

90
Q

What is the dimension of Rⁿ?

A

n. The standard basis e₁, e₂, …, eₙ has n elements, so dim Rⁿ = n.

91
Q

What is the dimension of the vector space Mₘₓₙ?

A

mn

92
Q

What is row space?

A

Let A be an m×n matrix over F. We define the row space of A to be the span of the subset of Fⁿ consisting of the rows of A, and we denote it by
rowsp(A).

93
Q

What is row rank?

A

We define the row rank of A to be rowrank(A) := dim rowsp(A)

94
Q

Let A be an m ×n matrix, and let B be a matrix obtained from A by a finite sequence of EROs.
Then rowsp(A) = ???
Rowrank(A) = ???

A
Then rowsp(A) = rowsp(B). In particular,
rowrank(A) = rowrank(B).
95
Q

Let U be a subspace of a finite-dimensional vector space V . Then

(a) U is finite-dimensional, and dim U …. ; and
(b) if dim U = dim V , then …

A

(a) U is finite-dimensional, and dim U ≤ dim V ; and

(b) if dim U = dim V , then U = V

96
Q

Let U be a subspace of a finite-dimensional vector space V
(a) U is finite-dimensional, and dim U ≤ dim V ;
prove it

A

Let n = dim V. By Theorem 20, every linearly independent subset of V has size at most n.
Let S be a largest linearly independent set contained in U, so |S| ≤ n.
[Secret aim: S spans U.]
Suppose, for a contradiction, that Span(S) ≠ U.
Then there exists u ∈ U \ Span(S).
Now by Lemma 16 S ∪ {u} is linearly independent, and |S ∪ {u}| > |S|,
which contradicts our choice of S.
So U = Span(S) and S is linearly independent, so S is a basis of U,
and as we noted earlier |S| ≤ n.

97
Q

Let U be a subspace of a finite-dimensional vector space V
(b) if dim U = dim V , then U = V
prove it

A

If dim U = dim V , then there is a basis S of U with dim U elements.
Then S is a linearly independent subset of V with size dim V . Now
adding any vector to S must give a linearly dependent set as every linearly independent subset of V has size at most n, so S must span
V . So V = Span(S) = U

98
Q

In an n-dimensional
vector space, any linearly independent set of size n is a [ ]. Similarly, any
spanning set of size n is a [ ] .

A

basis

basis

99
Q

Let U be a subspace of a finite-dimensional vector space V
Can a basis of U be extended to a basis of V?
Explain

A

Then every basis of U can be extended to a basis of V
That is, if u₁, …, uₘ is a basis of U, then there are vₘ₊₁, …, vₙ ∈ V such that u₁, …, uₘ, vₘ₊₁, …, vₙ is a basis of V

This does not say that if U ≤ V and if we have a
basis of V then there is a subset that is a basis of U. The reason it does not
say this is that in general this is false.

100
Q

Let U be a subspace of a finite-dimensional vector space V .
Then every basis of U can be extended to a basis of V
prove it

A

Proof pg 38

Idea: start with a basis of U and add vectors until we reach a basis of V.

101
Q

Let S be a finite set of vectors in Rⁿ. How can we (efficiently) find a basis of Span(S)?

A

Let m = |S|. Write the m elements of S as the rows of
an m × n matrix A.
Use EROs to reduce A to matrix E in echelon form. Then rowsp(E) =
rowsp(A) = Span(S), by Lemma 22.
The nonzero rows of E are certainly linearly independent. So the nonzero
rows of E give a basis for Span(S)
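The recipe above, sketched in pure Python with exact arithmetic (the set S is a made-up example with one redundant vector):

```python
# Write the vectors of S as rows, reduce with EROs, and keep the
# nonzero rows of the echelon form as a basis of Span(S).
from fractions import Fraction

def row_reduce(rows):
    rows = [[Fraction(x) for x in r] for r in rows]
    pivot_row = 0
    for col in range(len(rows[0])):
        for r in range(pivot_row, len(rows)):
            if rows[r][col] != 0:
                rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
                p = rows[pivot_row][col]
                rows[pivot_row] = [x / p for x in rows[pivot_row]]
                for s in range(len(rows)):
                    if s != pivot_row and rows[s][col] != 0:
                        f = rows[s][col]
                        rows[s] = [x - f * y
                                   for x, y in zip(rows[s], rows[pivot_row])]
                pivot_row += 1
                break
    return rows

S = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # second vector is 2 * first
E = row_reduce(S)
basis = [r for r in E if any(r)]        # nonzero rows
assert len(basis) == 2                  # so dim Span(S) = 2
```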

102
Q

What is the dimension formula?

A

Let U, W be subspaces of a finite-dimensional vector space V over F. Then dim(U + W) + dim(U ∩ W) = dim U + dim W
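A numerical check of the formula in R³ (subspaces chosen by hand; the nullity trick for dim(U ∩ W) assumes each basis matrix has independent columns):

```python
import numpy as np

# U = xy-plane, W = yz-plane in R^3, bases stored as columns.
Bu = np.array([[1, 0], [0, 1], [0, 0]])
Bw = np.array([[0, 0], [1, 0], [0, 1]])

dim_U = np.linalg.matrix_rank(Bu)                      # 2
dim_W = np.linalg.matrix_rank(Bw)                      # 2
# dim(U + W) is the rank of the combined spanning set.
dim_sum = np.linalg.matrix_rank(np.hstack([Bu, Bw]))   # 3
# dim(U ∩ W) computed independently: solutions of Bu a = Bw b
# correspond to the nullspace of [Bu | -Bw].
stacked = np.hstack([Bu, -Bw])
dim_cap = stacked.shape[1] - np.linalg.matrix_rank(stacked)  # 1

assert dim_sum + dim_cap == dim_U + dim_W
```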

103
Q

Prove the dimension formula

A

Take a basis v₁, …, vₘ of U ∩ W
Now U ∩ W ≤ U and U ∩ W ≤ W, so by Theorem 24 we can extend this basis to a basis v₁, …, vₘ, u₁, …, uₚ of U, and a basis v₁, …, vₘ, w₁, …, wᵩ
of W.
With this notation, we see that dim(U ∩ W) = m, dim U = m + p and dim W = m + q

104
Q

Let U, W be subspaces of a finitedimensional vector space V over F
Claim. v₁, …, vₘ, u₁, …, uₚ, w₁, …, wᵩ is a basis of U + W
Prove it

A

Call this collection of vectors S.
Note that all these vectors really are in U+W (eg, u₁ = u₁ + 0ᵥ ∈ U + W).
spanning: Take x ∈ U + W. Then x = u + w for some u ∈ U, w ∈ W.
Since v₁, …, vₘ, u₁, …, uₚ span U, there are α₁, …, αₘ, α’₁, …, α’ₚ ∈ F such that u = α₁v₁ + … + αₘvₘ + α’₁u₁ + … + α’ₚuₚ
Similarly, there are β₁, …, βₘ, β’₁, …, β’ᵩ ∈ F such that w = β₁v₁ + … + βₘvₘ + β’₁w₁ + … + β’ᵩwᵩ
Then x = u + w = (α₁+β₁)v₁ + … + (αₘ+βₘ)vₘ + α’₁u₁ + … + α’ₚuₚ + β’₁w₁ + … + β’ᵩwᵩ ∈ Span(S).
And certainly Span(S) ⊆ U + W.
So Span(S) = U + W
Proof continues pg41

105
Q

What is a direct sum of two subspaces?

A

Let U, W be subspaces of a vector space V . If U ∩ W = {0ᵥ} and U + W = V , then we say that V is the direct sum of U and W, and we
write V = U ⊕ W

106
Q

What is a direct complement?

A

Let U, W be subspaces of a vector space V . If U ∩ W = {0ᵥ} and U + W = V , then we say that V is the direct sum of U and W, and we
write V = U ⊕ W
In this case, we say that W is a direct complement of U in V (and vice
versa).

107
Q

Let U, W be subspaces of a finite-dimensional vector space V . The following are equivalent:

(i) V = U ⊕ W;
(ii) every v ∈ V has a unique expression as u+w where u ∈ U and w ∈ W;
(iii) dim V = dim U + dim W and V = [ ] ;
(iv) dim V = dim U + dim W and [ ] = {0ᵥ};
(v) if u₁, …, uₘ is a basis for U and w₁, …, wₙ is a basis for W, then [ ] is a basis for V

A

(i) V = U ⊕ W;
(ii) every v ∈ V has a unique expression as u+w where u ∈ U and w ∈ W;
(iii) dim V = dim U + dim W and V = U + W;
(iv) dim V = dim U + dim W and U ∩ W = {0ᵥ}
(v) if u₁, …, uₘ is a basis for U and w₁, …, wₙ is a basis for W, then u₁, …, uₘ, w₁, …, wₙ is a basis for V

108
Q

What is a linear map/transformation?

A

Let V , W be vector spaces over F. We say that a map T : V → W is linear if
(i) T(v₁ + v₂) = T(v₁) + T(v₂) for all v₁, v₂ ∈ V (preserves additive structure); and
(ii) T(λv) = λT(v) for all v ∈ V and λ ∈ F (respects scalar multiplication).
We call T a linear transformation or a linear map.

109
Q
Let V, W be vector spaces over F, let T : V → W be linear. Then T(0ᵥ) = ??
A

T(0ᵥ) = 0𝓌

110
Q

If T : V → W and T(0ᵥ) ≠ 0𝓌, then can T ever be linear?

A

No

111
Q

That is, if T is any map that preserves additive structure then T(0ᵥ) = 0𝓌,
and if T is any map that respects scalar multiplication then T(0ᵥ) = 0𝓌

Prove it

A

Let z = T(0ᵥ) ∈ W.
Then z + z = T(0ᵥ) + T(0ᵥ) = T(0ᵥ + 0ᵥ) = T(0ᵥ) = z (using the assumption to see that T(0ᵥ) + T(0ᵥ) = T(0ᵥ + 0ᵥ)),
so z = 0𝓌.

112
Q

Let V , W be vector spaces over F, let T : V → W. The
following are equivalent:
(i) T is linear;
(ii) T(αv₁ + βv₂) = [ ] for all v₁, v₂ ∈ V and α, β ∈ F;
(iii) for any n ≥ 1, if v₁, …, vₙ ∈ V and α₁, …, αₙ∈ F then [ ] = α₁T( v₁) + … + αₙT(vₙ)

A

(ii) T(αv₁ + βv₂) = αT(v₁) + βT(v₂) for all v₁, v₂ ∈ V and α, β ∈ F;
(iii) for any n ≥ 1, if v₁, …, vₙ ∈ V and α₁, …, αₙ∈ F then T(α₁v₁ + …+ αₙvₙ) = α₁T( v₁) + … + αₙT(vₙ)

113
Q

What is the identity map?

A

Let V be a vector space. Then the identity map idᵥ : V → V given by idᵥ (v) = v for all v ∈ V is a linear map

114
Q

What is the zero map

A

Let V , W be vector spaces. The zero map 0 : V → W that sends every v ∈ V to 0𝓌 is a linear map. (In particular, there is at least one linear
map between any pair of vector spaces.)

115
Q

Do linear transformations themselves form a vector space?

A

Yes. With pointwise addition and scalar multiplication (and the zero map as additive identity), the linear maps V → W form a vector space.

116
Q

Let V , W be vector spaces over F. For S, T : V → W and

λ ∈ F, define S + T : V → W by [ ] for v ∈ V , and define λS : V → W by [ ] for v ∈ V

A

Let V , W be vector spaces over F. For S, T : V → W and
λ ∈ F, define S + T : V → W by (S + T)(v) = S(v) + T(v) for v ∈ V , and
define λS : V → W by (λS)(v) = λS(v) for v ∈ V

117
Q

Let U, V , W be vector spaces over F. Let S : U → V and T : V → W be linear. Then is T ◦ S : U → W linear?
Prove or disprove it

A

Yes

proof top of page 46

118
Q

Let V , W be vector spaces, let T : V → W be linear. We say that T is invertible if????

A

Let V , W be vector spaces, let T : V → W be linear. We say that T is invertible if there is a linear transformation S : W → V such that
ST = idᵥ and T S = id𝓌 (where idᵥ and id𝓌 are the identity maps on V and W respectively). In this case, we call S the inverse of T, and write it as T⁻¹

119
Q

Let V , W be vector spaces. Let T : V → W be linear.

Then T is invertible if and only if T is injective/surjective/bijective

A

Bijective

120
Q

Let V , W be vector spaces. Let T : V → W be linear.
Then T is invertible if and only if T is bijective
Prove it

A

Proof bottom of pg 46

121
Q

Let U, V , W be vector spaces. Let S : U → V and
T : V → W be invertible linear transformations. Then T S : U → W is [ ] , and (TS)⁻¹ =
Prove it

A

invertible

(TS)⁻¹ = S⁻¹T⁻¹

122
Q

Let V , W be vector spaces. Let T : V → W be linear

Define the kernel (or null space) of T

A

ker T := {v ∈ V : T(v) = 0𝓌}

123
Q

Let V , W be vector spaces. Let T : V → W be linear

Define the image of T

A

Im T := {T(v) : v ∈ V }

124
Q

Let V , W be vector spaces. Let T : V → W be linear. For v₁, v₂ ∈ V , T(v₁) = T(v₂) iff [ ]

A

v₁ - v₂ ∈ ker T

125
Q

Let V , W be vector spaces. Let T : V → W be linear. For v₁, v₂ ∈ V , T(v₁) = T(v₂) iff v₁ - v₂ ∈ ker T

Prove it

A

For v₁, v₂ ∈ V, we have

T(v₁) = T(v₂) ⇔ T(v₁) - T(v₂) = 0𝓌 ⇔ T(v₁ - v₂) = 0𝓌 ⇔ v₁ - v₂ ∈ ker T

126
Q

Let V , W be vector spaces. Let T : V → W be linear. Then

T is injective if and only if [ ]

A

kerT = {0ᵥ}

127
Q

Let V , W be vector spaces. Let T : V → W be linear. Then
T is injective if and only if kerT = {0ᵥ}
Prove it

A

Proof. (⇐) Assume that ker T = {0ᵥ}
Take v₁, v₂ ∈ V with T(v₁) = T(v₂).
Then v₁ - v₂ ∈ ker T (previously proved), so v₁ - v₂ = 0ᵥ, so v₁ = v₂.
So T is injective.
(⇒) Assume that ker T ≠ {0ᵥ} Then there is v ∈ ker T with v ≠ 0ᵥ
Then T(v) = T(0ᵥ), so T is not injective.

128
Q

Let V , W be vector spaces over F. Let T : V → W be

linear. Then
(i) ker T is a subspace of [ ] and Im T is a subspace of [ ];
(ii) if A is a spanning set for V, then T(A) is a spanning set for [ ]; and
(iii) if V is finite-dimensional, then ker T and [ ] are finite-dimensional.

A

Let V , W be vector spaces over F. Let T : V → W be

linear. Then
(i) ker T is a subspace of [V] and Im T is a subspace of [W];
(ii) if A is a spanning set for V , then T(A) is a spanning set for [ImT]; and
(iii) if V is finite-dimensional, then ker T and [ImT] are finite-dimensional.

129
Q

Define nullity

A

Let V , W be vector spaces with V finite-dimensional. Let T : V → W be linear. We define the nullity of T to be null(T) := dim(ker T)

130
Q

Define Rank

A

Let V , W be vector spaces with V finite-dimensional. Let T : V → W be linear.
We define the rank of T to be rank(T) := dim(Im T)

131
Q

What is the rank-nullity theorem?

A

Let V , W be vector spaces with V finite-dimensional. Let T : V → W be linear. Then dim V = rank(T) + null(T).
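A quick numerical sketch of the theorem (the matrix A below is an illustrative choice standing for a map T : R⁴ → R³, not an example from the notes):

```python
import numpy as np

# Illustrative example: T : R^4 -> R^3 given by v -> Av
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # third row = first row + second row

dim_V = A.shape[1]                 # dimension of the domain, here 4
rank = np.linalg.matrix_rank(A)    # dim(Im T)
nullity = dim_V - rank             # dim(ker T), by rank-nullity
print(rank, nullity)               # 2 2, and 2 + 2 = 4 = dim V
```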

132
Q

Prove the rank-nullity theorem

A

Take a basis v₁, …, vₙ for ker T, where n = null(T).
Since ker T ≤ V , by Theorem 24 this can be extended to a basis v₁, …, vₙ, v’₁, …, v’ᵣ of V
Then dim(V ) = n + r.
For 1 ≤ i ≤ r, let wᵢ = T(v’ᵢ). One then checks that w₁, …, wᵣ is a basis for Im T, so rank(T) = r and dim V = n + r = null(T) + rank(T).

133
Q
Let V be a finite-dimensional vector space. Let T : V → V
be linear. The following are equivalent:
(i) T is invertible;
(ii) rank T = [ ] ;
(iii) null T = [ ]
A

(i) T is invertible;
(ii) rank T = dim V ;
(iii) null T = 0

134
Q

Let V be a finite-dimensional vector space. Let T : V → V
be linear.
Are any one-sided inverses two-sided?
Prove it

A

Then any one-sided inverse of T is a two-sided inverse, and so is unique.
Proof pg50

135
Q

Let V and W be vector spaces, with V finite-dimensional. Let
T : V → W be linear. Let U ≤ V . Then dim U−null T ≤ [ ] ≤ dim U.
In particular, if T is [ ] then dim T(U) = dim U
prove it

A

dim T(U)

injective

proof end pg 50

136
Q

Let V be an n-dimensional vector space over F, let v₁, …, vₙ be a basis of V . Let W be an m-dimensional vector space over F, let w₁, …, wₘ be a basis of W. Let T : V → W be a linear transformation. We define an m × n matrix for T as follows…..
(basis form)

A

For 1 ≤ j ≤ n, T(vⱼ) ∈ W so T(vⱼ) is uniquely expressible as a linear combination of w₁, …, wₘ : there are unique aᵢⱼ (for 1 ≤ i ≤ m) such that T(vⱼ) = a₁ⱼw₁ + … + aₘⱼwₘ.
That is,
T(v₁) = a₁₁w₁ + … + aₘ₁wₘ
T(v₂) = a₁₂w₁ + … + aₘ₂wₘ
⋮
T(vₙ) = a₁ₙw₁ + … + aₘₙwₘ
We say that M(T) = (aᵢⱼ) is the matrix for T with respect to these ordered bases for V and W
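As a concrete sketch (this example is mine, not from the notes): take T = differentiation from polynomials of degree ≤ 2 to degree ≤ 1, with ordered bases 1, x, x² and 1, x. Column j of M(T) holds the coordinates of T(vⱼ):

```python
import numpy as np

# T(1) = 0, T(x) = 1, T(x^2) = 2x, so the columns record these coordinates
M = np.array([[0., 1., 0.],    # coefficients of 1
              [0., 0., 2.]])   # coefficients of x

p = np.array([5., 3., 4.])     # coordinates of 5 + 3x + 4x^2
print(M @ p)                   # [3. 8.], i.e. the derivative 3 + 8x
```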

137
Q

Let V be an n-dimensional vector space over F, let Bᵥ be
an ordered basis for V . Let W be an m-dimensional vector space over F, let B𝓌 be an ordered basis for W. Then
(i) the matrix of 0 : V → W is [ ]
(ii) the matrix of idᵥ : V → V is [ ]
(iii) if S : V → W, T : V → W are linear and α, β ∈ F, then M(αS+βT) = [ ]

Moreover, let T : V → W be linear, with matrix A with respect to Bᵥ and B𝓌. Take v ∈ V with coordinate vector x = (x₁, …, xₙ)ᵀ with respect to Bᵥ.
Then Ax is the [ ] of T(v) with respect to [ ]

A

(i) the matrix of 0 : V → W is 0ₘₓₙ
(ii) the matrix of idᵥ : V → V is Iₙ
(iii) if S : V → W, T : V → W are linear and α, β ∈ F, then M(αS+βT) = αM(S) + βM(T)

Then Ax is the coordinate vector of T(v) with respect to B𝓌

138
Q

Let U, V , W be finite-dimensional vector spaces over F, with ordered bases Bᵤ , Bᵥ , B𝓌 respectively. Say Bᵤ has size m, Bᵥ has
size n, B𝓌 has size p. Let S : U → V and T : V → W be linear. Let A be
the matrix of S with respect to Bᵤ and Bᵥ. Let B be the matrix of T with
respect to Bᵥ and B𝓌 . Then the matrix of T ◦ S with respect to Bᵤ and B𝓌
is [ ]

A

BA
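A numeric sanity check (the matrices below are arbitrary illustrations, written in column-vector coordinates): applying S and then T on coordinates agrees with the single matrix BA.

```python
import numpy as np

A = np.array([[1., 2.],
              [0., 1.],
              [3., 0.]])       # matrix of S : U -> V  (dim U = 2, dim V = 3)
B = np.array([[1., 0., 1.],
              [2., 1., 0.]])   # matrix of T : V -> W  (dim W = 2)

u = np.array([1., 4.])         # coordinate vector of some u in U
lhs = B @ (A @ u)              # apply S, then T
rhs = (B @ A) @ u              # apply the composite's matrix BA
assert np.allclose(lhs, rhs)
```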

139
Q

Prove that matrix multiplication is associative

A

Proof, end of pg 54

140
Q

Let V be a finite-dimensional vector space. Let T : V → V
be an invertible linear transformation. Let v₁, …, vₙ be a basis of V . Let A be the matrix of T with respect to this basis (for both domain and codomain).
Is A invertible? If so, what does the inverse represent?

A

Then A is invertible, and A⁻¹

is the matrix of T⁻¹ with respect to this basis

141
Q

What is the change of basis theorem?

A

Let V , W be finite-dimensional
vector spaces over F. Let T : V → W be linear. Let v₁, …, vₙ and v’₁, …, v’ₙ be bases for V. Let w₁, …, wₘ and w’₁, …, w’ₘ be bases for W. Let A = (aᵢⱼ) ∈ Mₘₓₙ (F) be the matrix for T with respect to v₁, …, vₙ and w₁, …, wₘ, and let B be the matrix for T with respect to v’₁, …, v’ₙ and w’₁, …, w’ₘ.
Take pᵢⱼ, qᵢⱼ ∈ F such that v’ᵢ = ⱼ₌₁Σⁿ pᵢⱼvⱼ and w’ᵢ = ⱼ₌₁Σᵐ qᵢⱼwⱼ
Let P = (pᵢⱼ) ∈ Mₙₓₙ (F) and Q = (qᵢⱼ) ∈ Mₘₓₘ (F)
Then B = Q⁻¹AP

142
Q

Let V be a finite dimensional vector space. Let T : V → V be linear.
What is the second version of change of basis theorem (only one vector space)?

A

Let V be a finite-dimensional vector space. Let T : V → V be linear. Let v₁, …, vₙ and v’₁, …, v’ₙ be bases for V. Let A be the matrix of T with respect to v₁, …, vₙ. Let B be the matrix of T with respect to v’₁, …, v’ₙ.
Let P be the change
of basis matrix, that is, the n × n matrix (pᵢⱼ) such that v’ᵢ = ⱼ₌₁Σⁿ pᵢⱼvⱼ
Then
B = P⁻¹AP
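A numeric sketch (here I use the column convention, where column j of P holds the old-basis coordinates of the new basis vector v’ⱼ; the matrices are illustrative):

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])       # matrix of T in the basis v1, v2
P = np.array([[1., 1.],
              [0., 1.]])       # column j = coordinates of v'_j in terms of v1, v2

B = np.linalg.inv(P) @ A @ P   # matrix of T in the basis v'_1, v'_2

# Both matrices represent the same map: converting coordinates with P
# before and after must give the same vector.
x_new = np.array([1., 2.])     # coordinates of some v with respect to v'_1, v'_2
x_old = P @ x_new              # the same v with respect to v1, v2
assert np.allclose(P @ (B @ x_new), A @ x_old)
```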

143
Q

change of basis theorem:
The change of basis matrix P is the matrix of the identity map
idᵥ : V → V with respect to the basis [ ] for V as domain and the basis [ ] as codomain

A

v’₁, …, v’ₙ domain

v₁, …, vₙ codomain

144
Q

When are two matrices similar?

A

Take A, B ∈ Mₙₓₙ (F). If there is an invertible n × n matrix P
such that P⁻¹AP = B, then we say that A and B are similar

145
Q

rowsp(Aᵀ) =
rowrank(Aᵀ) =
colsp(Aᵀ) =
colrank(Aᵀ) =

A

colsp(A)
colrank(A)
rowsp(A)
rowrank(A)

146
Q

Take A ∈ Mₘₓₙ (F), let r = colrank(A). Then there are
invertible matrices P ∈ Mₙₓₙ (F) and Q ∈ Mₘₓₘ (F) such that Q⁻¹AP has the block form
( Iᵣ    0ᵣₓₛ )
( 0ₜₓᵣ  0ₜₓₛ )
where s = n − r and t = m − r

Prove it

A

Proof pg59

147
Q

Take A ∈ Mₘₓₙ (F). Let R be an invertible m × m matrix, let
P be an invertible n × n matrix. Then
(i) rowsp(RA) = [ ] and so rowrank(RA) = [ ];
(ii) colrank(RA) = [ ];
(iii) colsp(AP) = [ ] and so colrank(AP) = colrank([ ]);
(iv) rowrank(AP) = rowrank([ ]).

A

(i) rowsp(RA) = rowsp(A) and so rowrank(RA) = rowrank(A);
(ii) colrank(RA) = colrank(A);
(iii) colsp(AP) = colsp(A) and so colrank(AP) = colrank(A);
(iv) rowrank(AP) = rowrank(A).

148
Q

Let A be an m × n matrix.

Then what is colrank(A) = ??

A

colrank(A) = rowrank(A)
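A quick check with a random matrix (NumPy's matrix_rank computes this common value, and transposing swaps row and column spaces):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)  # arbitrary 4 x 6 matrix

# row rank of A = column rank of A = rank of A^T
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
```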

149
Q

What is the rank of a matrix?

A

Let A be an m × n matrix. The rank of A, written rank(A), is

the row rank of A (which we have just seen is also the column rank of A).

150
Q

Let T : V → W be linear. Let Bᵥ, B𝓌 be ordered bases of V , W respectively. Let A be the matrix for T with respect to Bᵥ and B𝓌 . Then
rank(A) = .

A

rank(A) = rank(T)

151
Q

Let A be an m×n matrix. Let x be the n×1 column vector
of variables x₁, …, xₙ. Let S be the solution space of the system Ax = 0 of m
homogeneous linear equations in x₁, …, xₙ, that is, S = {v ∈ Fⁿcol : Av = 0}.
Then dim S = [ ]

A

dim S = n − colrank A
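One way to see this numerically (an SVD-based sketch, my illustrative choice): the right-singular vectors belonging to zero singular values span the solution space.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])    # rank 1 (second row is twice the first), n = 3
n = A.shape[1]
rank = np.linalg.matrix_rank(A)

# Rows of Vt beyond the rank span {v : Av = 0}
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[np.sum(s > 1e-10):]   # basis vectors of the solution space S
assert len(null_basis) == n - rank    # dim S = n - rank(A) = 2
assert np.allclose(A @ null_basis.T, 0.0)
```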

152
Q

Let V be a vector space over F

What is a bilinear form on V?

A

A bilinear form on V is a
function of two variables from V taking values in F, often written ⟨−, −⟩ :
V × V → F, such that
(i) ⟨α₁v₁ + α₂v₂, v₃⟩ = α₁⟨v₁, v₃⟩ + α₂⟨v₂, v₃⟩ for all v₁, v₂, v₃ ∈ V and α₁,α₂ ∈ F; and
(ii) ⟨v₁, α₂v₂ + α₃v₃⟩ = α₂⟨v₁, v₂⟩ + α₃⟨v₁, v₃⟩ for all v₁, v₂, v₃ ∈ V and α₂,α₃ ∈ F

153
Q

What is a Gram matrix?

A

Let V be a vector space over F. Let ⟨−, −⟩ be a bilinear form on V . Take v₁, …, vₙ ∈ V . The Gram matrix of v₁, …, vₙ with respect to ⟨−, −⟩ is the n × n matrix (⟨vᵢ, vⱼ⟩) ∈ Mₙₓₙ (F)

154
Q

Let V be a finite-dimensional vector space over F. Let ⟨−, −⟩ be a bilinear form on V. Let v₁, …, vₙ be a basis for V . Let
A ∈ Mₙₓₙ (F) be the associated Gram matrix. For u, v ∈ V , let x = (x₁, …, xₙ) ∈ Fⁿ and y = (y₁, …, yₙ) ∈ Fⁿ be the unique coordinate vectors such that u = x₁v₁ + … + xₙvₙ and v = y₁v₁ + … + yₙvₙ
Then ⟨u, v⟩ = ???

A

⟨u, v⟩ = xAyᵀ
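A sketch with the usual dot product on R³ (the basis below is an illustrative choice, stored as rows):

```python
import numpy as np

V = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.]])   # rows are the basis vectors v1, v2, v3
A = V @ V.T                    # Gram matrix: A[i, j] = <v_i, v_j> (dot product)

x = np.array([1., 2., 0.])     # coordinates of u = v1 + 2 v2
y = np.array([0., 1., 1.])     # coordinates of v = v2 + v3
u = x @ V                      # u and v in standard coordinates
v = y @ V
assert np.isclose(x @ A @ y, u @ v)   # <u, v> = x A y^T
```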

155
Q

What is a symmetric bilinear form?

A

We say that a bilinear form ⟨−, −⟩ : V × V → F is symmetric if
⟨v₁, v₂⟩ = ⟨v₂, v₁⟩ for all v₁, v₂ ∈ V

156
Q

What is a positive definite bilinear form?

A

Let V be a real vector space. We say that a bilinear form
⟨−, −⟩ : V × V → R is positive definite if ⟨v, v⟩ ≥ 0 for all v ∈ V, with ⟨v, v⟩ = 0 if and only if v = 0.

157
Q

What is an inner product on a real vector space?

A

An inner product on a real vector space V is a positive definite symmetric bilinear form on V

158
Q

What is an inner product space?

A

We say that a real vector space is an inner product space if it is equipped
with an inner product. Unless otherwise specified, we write the inner product
as ⟨−, −⟩

159
Q

Let V be a real inner product space

What is the norm/magnitude/length of v for v ∈ V?

A

||v|| := √⟨v, v⟩

160
Q

Define the angle between any two vectors for any inner product space

A

the angle between nonzero vectors x, y ∈ V to be cos⁻¹( ⟨x, y⟩ / (||x|| ||y||) ) where this is taken to lie in the interval [0, π]
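A small numeric sketch with the dot product on R² (vectors chosen for illustration):

```python
import numpy as np

x = np.array([1., 0.])
y = np.array([1., 1.])
cos_angle = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))   # clip guards against rounding
print(angle)   # pi/4, i.e. 45 degrees between (1, 0) and (1, 1)
```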

161
Q
Let V be a finite-dimensional real inner product space.
Take u ∈ V \ {0}. 
Define u⊥
(The ⊥ should be superscript)
dim(u⊥) = ?
V = (in terms of u⊥) ??
A

u⊥ := { v ∈ V : ⟨u, v⟩ = 0}
Then u⊥ is a subspace of V

dim(u⊥) = dim V - 1
V = Span(u) ⊕ u⊥
162
Q

Let V be an inner product space. We say that {v₁, . . . , vₙ} ⊆ V is an orthonormal set if …

A

We say that {v₁, . . . , vₙ} ⊆ V is an orthonormal set if for all i, j we have
⟨vᵢ, vⱼ⟩ = δᵢⱼ = { 1 if i = j
{ 0 if i ≠ j
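A quick check (vectors chosen for illustration): for vectors stored as the rows of S, the condition ⟨vᵢ, vⱼ⟩ = δᵢⱼ says exactly that S Sᵀ is the identity.

```python
import numpy as np

v1 = np.array([1., 1.]) / np.sqrt(2)
v2 = np.array([1., -1.]) / np.sqrt(2)
S = np.array([v1, v2])          # rows form an orthonormal set in R^2

# Gram matrix of an orthonormal set is the identity
assert np.allclose(S @ S.T, np.eye(2))
```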

163
Q

Let {v₁, . . . , vₙ} be an orthonormal set in an inner product space V. Are v₁, . . . , vₙ linearly dependent or independent?

A

linearly independent

164
Q

So a set of n orthonormal vectors in an n-dimensional vector space is a [ ]

A

basis

165
Q

Let V be an n-dimensional real inner product space.

Is there an orthonormal basis of V?

A

Yes: every n-dimensional real inner product space has an orthonormal basis v₁, . . . , vₙ (e.g. by applying the Gram–Schmidt process to any basis).

166
Q

Take X ∈ Mₙₓₙ (R). Consider Rⁿ equipped with the usual
inner product ⟨x, y⟩ = x · y. The following are equivalent:
(i) XXᵀ = [ ]
(ii) [ ] X = Iₙ
(iii) the [ ] of X form an orthonormal basis of Rⁿ;
(iv) the [ ] of X form an orthonormal basis of Rⁿcol;
(v) for all x, y ∈ Rⁿ, we have xX · yX = [ ]

A

(i) XXᵀ = Iₙ
(ii) XᵀX = Iₙ
(iii) the rows of X form an orthonormal basis of Rⁿ;
(iv) the columns of X form an orthonormal basis of Rⁿcol;
(v) for all x, y ∈ Rⁿ, we have xX · yX = x · y
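A numeric sketch with a rotation matrix (an illustrative orthogonal X), checking (i), (ii) and (v) for row vectors:

```python
import numpy as np

t = 0.3
X = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # a rotation matrix, hence orthogonal

assert np.allclose(X @ X.T, np.eye(2))    # (i)  X X^T = I
assert np.allclose(X.T @ X, np.eye(2))    # (ii) X^T X = I
x = np.array([1., 2.])
y = np.array([3., -1.])
assert np.isclose((x @ X) @ (y @ X), x @ y)   # (v) xX . yX = x . y
```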

167
Q

X is orthogonal iff the map Rₓ : Rⁿ → Rⁿ, v ↦ vX, is an [ ]

A

Isometry

168
Q

What is the Cauchy-Schwarz Inequality?

A

Let V be a real inner product
space. Take v₁, v₂ ∈ V . Then |⟨v₁, v₂⟩| ≤ ||v₁|| ||v₂||, with equality if and only if v₁, v₂ are linearly dependent.
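A numeric sketch with the dot product on R⁵ (random vectors, so the inequality is strict almost surely; scaling one vector gives the equality case):

```python
import numpy as np

rng = np.random.default_rng(1)
v1 = rng.normal(size=5)
v2 = rng.normal(size=5)

# |<v1, v2>| <= ||v1|| ||v2||, strict here since v1, v2 are independent
assert abs(v1 @ v2) < np.linalg.norm(v1) * np.linalg.norm(v2)

# equality when the vectors are linearly dependent, e.g. w = 2 v1
w = 2 * v1
assert np.isclose(abs(v1 @ w), np.linalg.norm(v1) * np.linalg.norm(w))
```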

169
Q

What is a complex inner product space?

A

A complex inner product space is a complex vector space equipped with
a positive definite sesquilinear form

170
Q

What are Hermitian forms and spaces?

A

Hermitian form = Positive definite sesquilinear form

Hermitian Space = complex inner product space

171
Q

Let V be a complex vector space

What is a sesquilinear form?

A

Let V be a complex vector space. A function ⟨−, −⟩ : V × V → C is a sesquilinear form if
(i) ⟨α₁v₁ + α₂v₂, v₃⟩ = α₁⟨v₁, v₃⟩ + α₂⟨v₂, v₃⟩ for all v₁, v₂, v₃ ∈ V and α₁,α₂ ∈ C; and
(ii) ⟨v₂, v₁⟩ = the complex conjugate of ⟨v₁, v₂⟩, for all v₁, v₂ ∈ V