Chapter 1: Geometry of the Euclidean space Flashcards

1
Q

Rⁿ is

A

the set of all n-tuples of real numbers,
Rⁿ = {(x₁, …, x_n) : x₁, …, x_n ∈ R}.

The components x_i are the coordinates.

2
Q

natural dot product/inner product/scalar
product

A

x · y = Σᵢ₌₁ⁿ xᵢyᵢ,

where x = (x₁, …, x_n), y = (y₁, …, y_n) ∈ Rⁿ.

Properties:
(i) (α₁x₁ + α₂x₂) · y = α₁(x₁ · y) + α₂(x₂ · y)
for any x₁, x₂, y ∈ Rⁿ and any real numbers α₁, α₂ ∈ R (linearity in the first argument);

(ii) x · y = y · x for any x, y ∈ Rⁿ (symmetry);

(iii) x · x ≥ 0 for any x ∈ Rⁿ, and
x · x = 0 if and only if x = 0
(positive definiteness).

These properties reflect the vector space structure of Rⁿ.
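A quick numeric sanity check of the three properties (a minimal Python/numpy sketch; the vectors and scalars are arbitrary illustrative values):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([0.0, -1.0, 4.0])
y = np.array([2.0, 2.0, -1.0])
a1, a2 = 0.5, -3.0

# (i) linearity in the first argument
assert np.isclose((a1 * x1 + a2 * x2) @ y, a1 * (x1 @ y) + a2 * (x2 @ y))
# (ii) symmetry
assert np.isclose(x1 @ y, y @ x1)
# (iii) positive definiteness: x·x >= 0, with 0 only for the zero vector
assert x1 @ x1 > 0 and np.zeros(3) @ np.zeros(3) == 0
```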

3
Q

Proposition I.1
Cauchy-Schwarz inequality

A

For any vectors x, y ∈ Rⁿ the following inequality holds:
|x · y| ≤ |x| |y|,

where |x| = √(x · x) and |y| = √(y · y) are the lengths of vectors in Rⁿ.

Equality occurs IFF x = 0 or y = λx for some λ ∈ R.

4
Q

Cauchy-Schwarz inequality alt ways of writing

A

|x · y| ≤ |x| |y|, i.e.

|Σᵢ₌₁ⁿ xᵢyᵢ| ≤ (Σᵢ₌₁ⁿ xᵢ²)^(1/2) (Σᵢ₌₁ⁿ yᵢ²)^(1/2).

When n = 1:
|x₁y₁| ≤ (x₁²)^(1/2) (y₁²)^(1/2) = |x₁||y₁|, with equality;
the absolute value equals the length here.

It also yields the triangle inequality |x + y| ≤ |x| + |y|.

5
Q

Proposition I.1
Cauchy-Schwarz inequality
PROOF L5

A

x = 0: TRIVIAL CASE, the inequality holds (both sides are 0).

Assume x ≠ 0. Consider the polynomial
P(λ) = |λx − y|² = (λx − y) · (λx − y).
Using the properties of the dot product (linearity, symmetry):
P(λ) = λ²|x|² − 2λ(x · y) + |y|² ≥ 0.

This is a parabola in λ with at most one root,
thus its DISCRIMINANT is ≤ 0,
iff
4(x · y)² − 4|x|²|y|² ≤ 0,
iff
(x · y)² ≤ |x|²|y|².

Thus the inequality is proved.
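A numeric check of the inequality and of the discriminant step (a minimal sketch; the random vectors are illustrative and a small tolerance guards against rounding):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    # Cauchy-Schwarz: |x·y| <= |x||y|
    assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
    # discriminant of P(λ) = λ²|x|² − 2λ(x·y) + |y|² is ≤ 0
    assert 4 * (x @ y) ** 2 - 4 * (x @ x) * (y @ y) <= 1e-12
```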

6
Q

Proposition I.1
Cauchy-Schwarz inequality
PROOF L5 CASE OF EQUALITY

A

Looking at the case of equality.

IF case: x = 0 or y = λx (collinear):
for x = 0: |0 · y| = 0 = |0||y|;
for y = λx: |x · (λx)| = |λ||x|² = |x||λx|, so equality holds.

ONLY IF case:
suppose that |x · y| = |x||y| for some x and y.
If x ≠ 0, then we need to show that y = λx.
Considering the polynomial P(λ) above: the discriminant equals 0,
thus there is exactly one real root λ₀, that is
0 = P(λ₀) = |λ₀x − y|².
The length of a vector is 0 only for the zero vector (positive definiteness of the dot product),
iff
λ₀x − y = 0,
iff
y = λ₀x.

7
Q

L5 remark cauchy schwarz

A

Note that the argument used in the proof above is rather general and works for any (not necessarily finite-dimensional) vector space V equipped with a scalar product <·, ·>.

8
Q

L5:
scalar product <·, ·>

A

A bilinear map V × V → R that satisfies the following properties, which mimic the properties of the dot product:

(i) <α₁u₁ + α₂u₂, v> = α₁<u₁, v> + α₂<u₂, v> for all u₁, u₂, v ∈ V and all reals α₁, α₂ ∈ R (LINEAR IN THE FIRST ARGUMENT);
(ii) <u, v> = <v, u> for all u, v ∈ V (SYMMETRY);
(iii) <u, u> ≥ 0 for any u ∈ V, and <u, u> = 0 if and only if u = 0
(POSITIVE DEFINITENESS).

9
Q

Example:
Let V=C[a,b] be the space formed by all continuous vector functions f:[a,b] to R^n
is this a vector space

A

Yes: it is closed under linear operations (af + bg is continuous for continuous f, g and a, b ∈ R), and it contains the zero function f ≡ 0.

10
Q

Scalar product on C[a,b]
<f,g>

A

<f, g> = ∫ₐᵇ f(t) · g(t) dt

Here our vector space is infinite-dimensional; one can check that this formula satisfies the properties of a scalar product.

11
Q

Lemma I.2 (Integral Cauchy-Schwarz inequality)

A

For any continuous vector functions f, g : [a, b] → Rⁿ the following inequality holds:

|∫ₐᵇ (f · g)(t) dt| ≤ (∫ₐᵇ |f(t)|² dt)^(1/2) (∫ₐᵇ |g(t)|² dt)^(1/2).

The equality occurs if and only if f ≡ 0 or g ≡ λf for some λ ∈ R (identically proportional).

In scalar-product notation: |<f, g>| ≤ √(<f, f><g, g>) = |f||g|.

12
Q

Proof: Lemma I.2 (Integral Cauchy-Schwarz inequality)

A

Let V be the vector space of all continuous vector functions f : [a, b] → Rⁿ.
On this vector space we consider the scalar product
<f, g> =def= ∫ₐᵇ (f · g)(t) dt, where f, g ∈ V,
and the function under the integral is obtained by taking the dot product of the values f(t) and g(t). It is straightforward to check that this formula indeed defines a scalar product on V, that is, it satisfies properties (i)-(iii) above. Now the proof of the lemma follows the same line of argument as the proof of Proposition I.1, by considering the polynomial
P(λ) = <λf − g, λf − g>.

13
Q

Corollary I.2 (Triangle inequality, a consequence of Cauchy-Schwarz)

A

For any vectors x, y ∈ Rⁿ the following inequality holds:
|x + y|≤|x|+|y|.

Equality occurs IFF x = 0 or y = λx for some λ ≥ 0.

(proportional/collinear with a non-negative coefficient for equality)

14
Q

Corollary I.2 (Triangle inequality consequence of C-S).
PROOF

For any vectors x, y ∈ Rⁿ the following inequality holds:
|x + y|≤|x|+|y|.

Equality occurs IFF x = 0 or y = λx for some λ ≥ 0.

A

Proof. By linearity and symmetry of the dot product:
|x + y|²
= (x + y) · (x + y)
= |x|² + x · y + y · x + |y|²
= |x|² + 2x · y + |y|²
≤ |x|² + 2|x · y| + |y|²
≤ |x|² + 2|x||y| + |y|²
= (|x| + |y|)²,

where the second inequality used the Cauchy-Schwarz inequality.

Equality in the triangle inequality implies equality in the Cauchy-Schwarz inequality, and hence implies that x = 0 or y = λx for some λ ≥ 0.
Shown in lecture:
if x = 0, then |0 + y| = |y| = |0| + |y|;
if y = λx with λ ≥ 0, then LHS: |x + λx| = |(1 + λ)x| = (1 + λ)|x|,
RHS: |x| + |λx| = (1 + λ)|x|, showing equality.

Conversely, if x = 0 or y = λx, where λ ≥ 0, then the triangle inequality becomes an equality.
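A small numeric illustration of both cases (a sketch; the vectors are arbitrary illustrative values):

```python
import numpy as np

x = np.array([3.0, -1.0, 2.0])

y = 2.5 * x  # y = λx with λ ≥ 0: the equality case
assert np.isclose(np.linalg.norm(x + y), np.linalg.norm(x) + np.linalg.norm(y))

y = np.array([0.0, 1.0, 1.0])  # not a non-negative multiple: strict inequality
assert np.linalg.norm(x + y) < np.linalg.norm(x) + np.linalg.norm(y)
```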

15
Q

Suppose |x+y|² =( |x|+|y|)²

Use modulus and dot product relationship

A

(x + y) · (x + y) = (|x| + |y|)²
expands to |x|² + 2x · y + |y|² = |x|² + 2|x||y| + |y|², so x · y = |x||y|;
by Cauchy-Schwarz conclude x = 0 or y = λx.

If x ≠ 0 and y = λx with λ < 0, then |x + y|² = (1 + λ)²|x|² while (|x| + |y|)² = (1 + |λ|)²|x|² = (1 − λ)²|x|², and these differ.
So plugging y = λx into the equality |x + y| = |x| + |y| rules out the case λ < 0.

16
Q

distances in R^n

A

dist(x, y) = |x − y| = √((x − y) · (x − y))

From MATH2051 we also know that the dot product allows us to compute lengths of curves in R^n

17
Q

Distances in R^n

A

dist(x, y) = |x − y| = √((x − y) · (x − y)).

From MATH2051 we also know that the dot product allows us to compute lengths of curves in R^n

18
Q

Triangle inequality in form for distances

A

In particular, the triangle inequality can be written in the form

dist(x, y) ≤ dist(x, z) + dist(z, y)

for any vectors x, y, and z ∈ Rⁿ. (Make sure that you can explain why.)

19
Q

|x-y| inequality for triangle

A

|x − y| ≤ |x − z| + |z − y|,

which is the triangle inequality
|u + v| ≤ |u| + |v|
with u = x − z and v = z − y.

20
Q

x·y / (|x| |y|)

A

x·y / (|x| |y|) ∈ [−1, 1] (by Cauchy-Schwarz),
hence we may define θ by cos θ = this quotient,
choosing θ ∈ [0, π],
assuming x, y are non-zero vectors.

21
Q

to compute the cosine of angles
θ

between non-zero vectors x and y

A

by the formula
cos θ = x·y / (|x| |y|).

That the right-hand side equals cos θ for some θ is a consequence of the Cauchy-Schwarz inequality: it guarantees that the quotient above takes values in the interval [−1, 1].

The equality case in the Cauchy-Schwarz inequality says that the angle θ between non-zero vectors x and y equals πk, where k ∈ Z, if and only if the vectors are collinear.

Recall that vectors x and y are called orthogonal if x · y = 0, i.e. the angle between them equals π/2 + πk, k ∈ Z.

22
Q

Pythagorean theorem:

A

vectors x and y are orthogonal if and only if |x + y|² = |x|² + |y|².

This follows by expanding |x + y|² = |x|² + 2x · y + |y|²: the cross term vanishes exactly when x · y = 0.

23
Q

to compute volumes of parallelotopes (not covered in lecture but was in an exercise!)

A

In more detail, for a system e₁, …, e_k of k linearly independent vectors, the k-dimensional parallelotope P_k spanned by them is defined as
P_k = {t₁e₁ + … + t_k e_k : tᵢ ∈ [0, 1]}.
Its k-dimensional volume is given by the formula

Vol_k(P_k) =
√ det
[ e₁·e₁ e₁·e₂ … e₁·e_k ]
[ …                    ]
[ e_k·e₁ e_k·e₂ … e_k·e_k ]

(the Gram matrix of pairwise dot products).

In particular, if the eᵢ's are pairwise orthogonal (that is, P_k is an orthotope), then we obtain
Vol_k(P_k) = √((e₁·e₁) ⋯ (e_k·e_k)) = |e₁| ⋯ |e_k|.

24
Q

def I.2 vector subspace

linear vector space

A

A subset X ⊂ Rⁿ s.t. for any x, y ∈ X and any a, b ∈ R we have ax + by ∈ X.

Closed under linear combinations;
subspaces commonly arise as solution sets of linear equations.
0 is always an element of a vector subspace.

We use the properties of the dot product and the vector space structure.

A subspace can be described as the span of a collection of vectors; a minimal such collection gives a basis.

25
Q

Which ones are vector subspaces?

Example I.3. Let X₁, X₂, X₃ ⊂ R² be subsets defined as:

X₁ = {(x₁, x₂) ∈ R²: x₁ = x₂},

X₂ = {(x₁, x₂) ∈ R²: x₁= x₂ + 1},

X₃ = {(x₁, x₂)∈ R² : x₁=x₂²}

A

X1 is a vector subspace, but X2 and X3 are not.

1) check that if x = (x₁, x₂)
and y = (y₁, y₂) are vectors from X1, then any linear combination ax + by also lies in X1.
ax + by = a(x₁, x₂) + b(y₁, y₂)
= (ax₁+ by₁ , ax₂ + by₂)
Since x₁= x₂ and y₁=y₂ we conclude that ax₁ + by₁ = ax₂ + by₂ , that is ax + by lies in X1.

2) The set X2 is not a vector space, since it does not contain the zero vector 0 = (0, 0); by Definition I.2 any vector space should do so.

3) X3 is not a vector space, note that the vectors (1, 1) and (1, −1) lie in X3, but the sum (2, 0) = (1, 1) + (1, −1) does not.

26
Q

orthogonal

A

Vectors x and y are orthogonal when x · y = 0,
i.e. the angle between them is π/2 + πk, k ∈ Z.

27
Q

pythagorean thm

A

Applies when the sides x and y are orthogonal:

x and y are orthogonal IFF |x+y|² = |x|² + |y|².

28
Q

def 1.4 linearly dependent

A

A collection of non-zero vectors x₁, . . . , xₖ ∈ Rⁿ is called linearly dependent if there
exists a collection of real numbers α₁, . . . , αₖ ∈ R such that not all of the αᵢ’s are equal to zero and
α₁x₁ + α₂x₂ + · · · + αₖxₖ = 0.

A collection of vectors that are not linearly dependent is called linearly independent.

29
Q

Self-Check Question I.4. Let x₁, . . . , xₖ ∈ Rⁿ be a collection of non-zero vectors that are pair-wise orthogonal, that is
x_i·x_j = 0 for all i not equal to j.

Can you show that these vectors are linearly independent?

A

Suppose, for contradiction, that they are linearly dependent:
a₁x₁ + … + a_k x_k = 0
with not all aᵢ equal to zero.
Taking the dot product with xᵢ:
xᵢ · (a₁x₁ + … + a_k x_k) = 0.
By pairwise orthogonality the only surviving term is
aᵢ (xᵢ · xᵢ) = 0.
We have xᵢ · xᵢ = |xᵢ|² ≠ 0 as these are non-zero vectors,
thus aᵢ = 0.
Repeat the argument for all aᵢ:
all coefficients vanish, contradiction;
thus the vectors are linearly independent.

30
Q

x, y ∈ Rⁿ are linearly dependent if

A

Two vectors are linearly dependent IFF x = 0 or y = kx for some k ∈ R (scalar multiples):
if a₁x + a₂y = 0 with, say, a₂ ≠ 0, then y = −(a₁/a₂)x.

31
Q

spans

A

Let x₁, …, x_k ∈ Rⁿ be a collection of non-zero vectors.
The collection spans a vector subspace X ⊂ Rⁿ if for any x ∈ X there exist k real numbers αᵢ ∈ R, i = 1, …, k, such that
x = α₁x₁ + α₂x₂ + ⋯ + α_k x_k.

32
Q

Let x₁, . . . , xₖ ∈ Rⁿ be a collection of non-zero vectors that lie in a vector subspace X ⊂ R^n. Then EQUIVALENT STATEMENTS

A

(i) x1, . . . , xk is a maximal system of linearly independent vectors in X;
(ii) x1, . . . , xk is a minimal system of vectors that span X;
(iii) x1, . . . , xk is a system of linearly independent vectors that span X;
(iv) for any x ∈ X there exists a unique collection of real numbers α₁, …, α_k such that
x = α₁x₁ + α₂x₂ + ⋯ + α_k x_k.

33
Q

def 1.5 BASIS

A

Let X ⊂ R^n be a vector subspace. A collection of non-zero vectors x1, . . . , xk ∈ X that satisfies one of the equivalent statements (i)-(iv) in Proposition I.3 is called a basis of X.

A basis consists of at most n vectors in Rⁿ;
we prefer an orthonormal basis.

34
Q

prefer an orthonormal basis

A

If it is a basis, then any vector x ∈ V can be expressed as a unique linear combination
x = a₁x₁ + … + a_k x_k with aᵢ ∈ R.
If the basis is orthonormal, the coefficients are easy to compute:

aᵢ = x · xᵢ, because the dot products xᵢ · xⱼ equal 1 for i = j and 0 otherwise.
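A sketch of this coefficient formula in numpy (the orthonormal basis of the plane z = 0 and the vector x are illustrative choices):

```python
import numpy as np

x1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # orthonormal basis of the
x2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)  # plane z = 0 inside R^3
x = np.array([3.0, 5.0, 0.0])                 # a vector in that plane

a1, a2 = x @ x1, x @ x2     # the coefficients are just dot products
assert np.allclose(a1 * x1 + a2 * x2, x)
```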

35
Q

DIMENSION of a vector space X

A

BASIS VECTORS:
if x₁, …, x_k and y₁, …, y_m are two different bases of X, then k = m;
the number of vectors in a basis of a fixed vector space X is always the same.
This integer is the dimension of the vector space X.

36
Q

E.g. Let X, Z be two vector subspaces in Rⁿ.

Show that if X ⊆ Z and dim Z = dim X,
then X = Z.

A

Since dim X = dim Z = k, take a basis {x₁, …, x_k} of X.
As X ⊆ Z, these are k linearly independent vectors in Z, and since dim Z = k they form a maximal linearly independent system in Z, hence a basis of Z.
Now take z ∈ Z and write it in this basis:
z = c₁x₁ + … + c_k x_k.
Each xᵢ ∈ X and X is a subspace, so z ∈ X.
Hence Z ⊆ X,
and with X ⊆ Z we get X = Z.

37
Q

ONE DIMENSIONAL SUBSPACE

A

In Rⁿ:
ℓ_v = {tv : t ∈ R},
where v ∈ Rⁿ is a fixed non-zero DIRECTION vector.
These are lines through O;
a basis consists of 1 vector.
38
Q

Plane through 0∈R^n

A

vector subspace dim 2

39
Q

vector subspace dim 2

A

A plane through 0 ∈ Rⁿ, e.g. ℓ_v⊥ in R³, or
{(x, y, 0, …, 0) : x, y ∈ R} ⊂ Rⁿ.

40
Q

L5: theory existence of orthonormal basis

DEFN ORTHOGONAL collection

A

Existence of an orthonormal basis. Let V⊂R^n be a subspace.

A collection of vectors x₁, …, x_k ∈ Rⁿ is called ORTHOGONAL if the vectors are pairwise orthogonal:
xᵢ · xⱼ = 0 for all i ≠ j.

Note that if a collection of non-zero vectors is orthogonal, then the vectors are linearly independent.

41
Q

ORTHONORMAL collection

A

A collection of vectors x₁, …, x_k ∈ Rⁿ is called ORTHONORMAL if
xᵢ · xⱼ =
{1 if i = j
{0 if i ≠ j

PAIRWISE ORTHOGONAL,
UNIT LENGTH.

42
Q

L5:
EXISTENCE OF A BASIS
Theorem I.3.

there may be a question on the exam about this :)

A

Let V ⊂ Rⁿ be a vector subspace. Then there exists an orthonormal basis in V .

43
Q

ORTHOGONAL COMPLEMENT OF X

A

X⊥ = {y ∈ Rⁿ : y · x = 0 for all x ∈ X} ⊂ Rⁿ,

where X is a vector subspace:
all vectors orthogonal to every x ∈ X.

44
Q

dimensions of X and X⊥ in R^n

A

Complementary:
dim X = k
dim X⊥ = n − k

45
Q

For ℓ_v through O in R^n
ℓ_v ⊥ is

A

the HYPERPLANE orthogonal to ℓ_v (a plane when n = 3)

46
Q

is
X⊥ a VECTOR SUBSPACE?

A

Yes:
0 ∈ X⊥;
for u, v ∈ X⊥, a, b ∈ R and any x ∈ X,
(au + bv) · x = a(u · x) + b(v · x) = 0, so X⊥ is closed under linear combinations.
(Checking orthogonality against every x ∈ X is exactly checking membership of the set.)

47
Q

V = span(e₁, e₂) ⊂ R³
then V⊥

A

It is the line formed by the e₃-axis:
e₃ is orthogonal to e₁ and e₂,
thus orthogonal to all their linear combinations (the plane).

48
Q

(X⊥)⊥

A

= X.
Complementary dimensions:
y ∈ (X⊥)⊥ is by definition orthogonal to all vectors that are orthogonal to everything in X.
dim X = k,
dim X⊥ = n − k,
thus
dim (X⊥)⊥ = n − (n − k) = k.

Every x ∈ X is orthogonal to all y ∈ X⊥, so X ⊆ (X⊥)⊥; equal dimensions then force equality.

49
Q

LEMMA I.4
two lines coincide as sets IFF

A

Two lines
ℓ_{p,v}
ℓ_{q,w}
coincide as sets
ℓ_{p,v}=ℓ_{q,w}

IFF
the direction vectors v, w are linearly dependent and p − q = t₀v for some t₀ ∈ R.

50
Q

PROOF
Two lines
ℓ_{p,v}
ℓ_{q,w}
coincide as sets
ℓ_{p,v}=ℓ_{q,w}

IFF
the direction vectors v,w are linearly dependent and p-q=t_o v for some t_0 in R

A

Suppose the lines coincide. Then q ∈ ℓ_{p,v},
hence there exists t₀ ∈ R s.t.
q = p + t₀v.
Also, as the lines coincide,
q + w ∈ ℓ_{p,v},
so there exists t s.t.
q + w = p + tv.
Subtracting,
(t − t₀)v = w,
thus v and w are linearly dependent (and p − q = −t₀v).

Conversely, if
q = p + t₀v for some t₀ ∈ R,
then q ∈ ℓ_{p,v}.
Since the non-zero vectors v and w are linearly dependent, we can write
w = αv for some non-zero α,
concluding

ℓ_{q,w} =
{q + sw : s ∈ R} =
{p + (t₀ + sα)v : s ∈ R} =
ℓ_{p,v}.

51
Q

Proposition I.5.

The lines, viewed as subsets of R^2 of form
ℓ_{p,v}
={p+tv: t ∈ R} ⊂ R^2

satisfy Euclid’s axioms A1-A5.

A
52
Q

PROOF

A
53
Q

EUCLIDS AXIOMS

A

A1. For any two points there exists a line that goes through both of them.

A2. There exists at most one line that passes through two distinct points.
A3. Every line contains at least two distinct points.
A4. There exist three points that do not lie on a straight line.
A5. (Parallel axiom). Let ℓ be a line and p a point that does not lie on ℓ. Then there exists a unique line that contains p and does not intersect the line ℓ.

54
Q

LINEAR MAP/OPERATOR
defn

A

A map L : V → W is called linear if
L(α_1u_1 + α_2u_2) =
α_1L(u_1) + α_2L(u_2)
for all α_1, α_2 ∈ R, u_1, u_2 ∈ V.

55
Q

isomorphism

A

A linear map L : V → W is called an isomorphism if it is bijective, that is, both injective and surjective.

we can check these looking at rank and nullity

56
Q

C ≅ R²

A

The Euclidean plane as the set of complex numbers: C = span{1, i}, dim 2,
isomorphic to R² via
z ↦ (Re(z), Im(z)), a bijective linear map.

57
Q

PROJECTION ONTO LINE
Is it a linear map

A

Orthogonal projection onto the line
ℓ_v = {tv : t ∈ R}
through 0, with |v| = 1:
the map
L : Rⁿ → ℓ_v,
L(u) = (u · v)v.

L is a linear map: using the dot product properties,
L(au₁ + bu₂)
= ((au₁ + bu₂) · v)v
= a(u₁ · v)v + b(u₂ · v)v
= aL(u₁) + bL(u₂).

For any w ∈ ℓ_v, L(w) = w.
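A minimal sketch of this projection (the direction vector is an illustrative choice, normalised so |v| = 1):

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
v = v / np.linalg.norm(v)        # unit direction vector of the line ℓ_v

def L(u):
    # orthogonal projection onto ℓ_v: L(u) = (u · v) v
    return (u @ v) * v

u1, u2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0])
a, b = 2.0, -1.0
assert np.allclose(L(a * u1 + b * u2), a * L(u1) + b * L(u2))  # linearity
w = 3.0 * v
assert np.allclose(L(w), w)      # vectors already on the line are fixed
```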

58
Q

PROJECTION ONTO LINE PROPERTY
for any w in ℓ_v
L(w)

What happens to a w on the line?
(v is a unit vector)

A

L(w) = (w · v)v = ((tv) · v)v = t|v|²v = tv = w
(|v|² = 1 as v is a unit vector).

59
Q

PROJECTION ONTO LINE
PROPERTY
image of L

A

coincides with ℓ_v
rank L = dim (ℓ_v)= 1

60
Q

PROJECTION ONTO LINE
PROPERTY
Ker L

A

Ker L = {u ∈ Rⁿ : L(u) = 0}
= {u ∈ Rⁿ : u · v = 0}
= ℓ_v⊥,
the vectors orthogonal to v.

61
Q

ORTHOGONAL PROJECTION ONTO X
Π
linear map

A

This is a linear map. For an orthonormal basis
u₁, …, u_k of X:

Π : Rⁿ → X,
u ↦ v = Σᵢ₌₁ᵏ (u · uᵢ) uᵢ.

(diagram: a vector and its orthogonal projection onto the subspace)

im Π = X
ker Π = X⊥
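A sketch of Π for a concrete subspace (X is the illustrative xy-plane in R³, with its standard orthonormal basis as the rows of U):

```python
import numpy as np

U = np.array([[1.0, 0.0, 0.0],    # orthonormal basis u_1, u_2 of
              [0.0, 1.0, 0.0]])   # X = the xy-plane in R^3

def proj(u):
    # Π(u) = Σ (u · u_i) u_i over the orthonormal basis
    return sum((u @ ui) * ui for ui in U)

assert np.allclose(proj(np.array([3.0, -2.0, 7.0])), [3.0, -2.0, 0.0])  # im Π = X
assert np.allclose(proj(np.array([0.0, 0.0, 5.0])), 0.0)                # ker Π = X⊥
```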

62
Q

KERNEL for linear map
L:U to V

A

Ker L =
{u ∈ U : L(u) = 0} ⊂ U,
the PREIMAGE of 0:
the vectors mapped onto 0.

NULLITY is its dimension.

63
Q

KERNEL for the PROJECTION ONTO A LINE
map L

A

{u ∈ Rⁿ : u · v = 0} = ℓ_v⊥

64
Q

IMAGE

A

{w ∈ W : there exists u ∈ V s.t. L(u) = w} ⊂ W

RANK is its dimension.

65
Q

IMAGE
for the ORTHOGONAL PROJECTION Π ONTO X

A

For the orthogonal projection onto X with orthonormal basis
u₁, …, u_k:

Π : Rⁿ → X,
u ↦ Σᵢ₌₁ᵏ (u · uᵢ) uᵢ.

im Π = X

66
Q

prop 1.6
A linear map is INJECTIVE

A

A linear map L : V → W is
INJECTIVE
IFF
its KERNEL is TRIVIAL:
nul L = 0

67
Q

PROPN 1.6
A linear map is SURJECTIVE

A

IFF
rank L = dim W

68
Q

PROOF
A linear map L : V → W is
INJECTIVE
IFF
KERNEL TRIVIAL
nul L =0

A

Suppose L is injective but the kernel is non-trivial: there exists v ≠ 0 s.t. L(v) = 0. Since also L(0) = 0, injectivity is contradicted.

Conversely, suppose the kernel is trivial but L is not injective: there exist v₁ ≠ v₂ in V s.t.
L(v₁) = L(v₂). Then by linearity
L(v₁ − v₂) = L(v₁) − L(v₂) = 0,
so the non-zero vector v₁ − v₂ lies in the kernel, contradicting triviality.

69
Q

PROOF
A linear map L : V → W is
SURJECTIVE

IFF
rank L = dim W

A

If L is surjective, then im L = W, so rank L = dim im L = dim W.

Conversely, suppose rank L = dim W.
Then im L ⊆ W is a subspace with dim im L = dim W,
so im L = W (a subspace of the same dimension as the ambient space coincides with it);
that is, for every w ∈ W there exists u ∈ V s.t. L(u) = w.

70
Q

PROJECTION ONTO LINE
surjective?
rank and kernel

A

for any w = tv ∈ ℓ_v, we have
L(w) = (w · v)v =
t|v|^2v = tv = w,
where we used that |v| = 1.

Thus, the image of L coincides with ℓ_v, and hence, rank L = dim ℓ_v = 1.

KerL =
{u ∈ R^n: L(u) = 0}
= {u ∈ R^n: u · v = 0}
= ℓ_v⊥

where we used that v ≠ 0 in the second equality, and Definition I.7 in the last.
nul L = dim Ker L = dim ℓ_v⊥ = n − 1.

Surjective IFF rank L = dim W, so the map L : Rⁿ → ℓ_v is surjective (rank L = 1 = dim ℓ_v).

71
Q

ROTATION angle θ around axis
is it a linear map

A

LINEAR MAP (identify R² with C):
R_θ(z) = exp(iθ) z.

R_θ(a₁z₁ + a₂z₂)
= exp(iθ)(a₁z₁ + a₂z₂)
= a₁R_θ(z₁) + a₂R_θ(z₂),
by distributivity of complex multiplication.

Rotations preserve lengths and angles of vectors;
R_θ(e₁), R_θ(e₂) is another orthonormal basis.

72
Q

ROTATION angle θ around axis
formula

A

R_θ(v) =
(R_θ(v) · R_θ(e₁)) R_θ(e₁) + (R_θ(v) · R_θ(e₂)) R_θ(e₂),

since
R_θ(v) · R_θ(e₁)
= |R_θ(v)||R_θ(e₁)| cos φ
= |v||e₁| cos φ
= v · e₁
(rotations preserve lengths and angles).

73
Q

REFLECTION THROUGH H
LINEAR MAP

A

H ⊂ Rⁿ,
H a HYPERPLANE: a vector subspace with dim H = n − 1.
Let v ∈ Rⁿ be a vector orthogonal to H:
v · y = 0 for all y ∈ H.

R_H : Rⁿ → Rⁿ with
R_H(v) = −v for vectors orthogonal to H,
R_H(y) = y for all y ∈ H.

Vectors in H are invariant;
orthogonal vectors change sign.

74
Q

example
example: R(z) = z̄ (complex conjugation)

A

Conjugation changes only the sign of the imaginary part, so it is the REFLECTION in the Re z axis (the line spanned by e₁).

Its kernel has dim 0, so rank = 2 − 0 = 2:
in the Euclidean plane the reflection has rank 2,
surjective and injective.

75
Q

PROPN
RANK-NULLITY THM

A

Let L : V → W be a linear map. Then
dim V = rankL + nulL

In particular, rankL <= dim V

76
Q

for any linear map
rankL ≤

A

rank L ≤min{dimV, dim W}

77
Q

How to represent v using standard basis e_1 and e_2

A

v =
(v · e₁)e₁ + (v · e₂)e₂,
using the standard basis in R².

78
Q

corollary of rank nullity

A

Let L : V → W be a linear operator. Then:
(i) if dim V ≤ dim W, the operator L has maximal rank if and only if it is injective (nul L = 0);

(ii) if dim V ≥ dim W, the operator L has maximal rank if and only if it is surjective (rank L = dim W).

If dim V = dim W, then a linear operator L has maximal rank IFF it is an isomorphism.

Proof of (i): rank L = dim V IFF nul L = 0 (rank-nullity) IFF injective, by the previous proposition.

79
Q

example: applying Rank nullity thm to orthogonal complement/PROJECTION
Π:R^n to X

A

dim X = dim(im Π) = rank Π = k
dim X⊥ = dim ker Π = nul Π = n − k

Rⁿ = X ⊕ X⊥:
X ∩ X⊥ = {0},
and every v ∈ Rⁿ decomposes uniquely as
v = x + y with x ∈ X, y ∈ X⊥;
the projection of v is Π(v) = x.

80
Q

example
V the space of m × n matrices,
W = R^{mn}.

Consider a basis of V:

A

Basis of matrices e_{ji} (all components 0 except the entry in the j-th row, i-th column).

Let e_l be the standard basis in W (l-th component 1, all others 0).

Consider the linear operator
I_{m,n} : V → W
that sends the matrix e_{ji} to the vector e_{n(j−1)+i},
j = 1, …, m and i = 1, …, n.
This linear operator is a linear isomorphism.

81
Q

C[a,b]

A

Continuous functions f : [a, b] → Rⁿ
form a vector space:
f(x) ≡ 0 is continuous,
and it is closed, as af + bg is also a continuous function.

82
Q

Let
{v_1,..,v_n} and{w_1,..,1_n} be bases in V and W
then
any linear operator L:V to W can assign m x n matrixA_L

A

with components (a_{ji}) determined by

L(vᵢ) = Σⱼ₌₁ᵐ a_{ji} wⱼ,
i = 1, …, n.

83
Q

The assignment L ↦ A_L for a linear operator has the following properties

A

L ↦ A_L:
(i) A_L = 0 if and only if L = 0;
(ii) A_{λL} = λA_L for any λ ∈ R and L : V → W;
(iii) A_{L+S} = A_L + A_S for any linear operators L, S : V → W;

(iv) A_{L∘S} = A_L A_S for any linear operators S : Z → V and L : V → W;
(v) Ker L = {Σ xᵢvᵢ}, where (x₁, …, x_n) is a solution to the linear system
Σᵢ₌₁ⁿ a_{ji}xᵢ = 0 for any j = 1, …, m;
if n = m, then nul L = 0 if and only if det A_L ≠ 0 (square matrix);

(vi) rank L = rank A_L, the rank of the matrix A_L; if n = m, we see that L has maximal rank if and only if det A_L ≠ 0.

84
Q

when is L an isomorphism
L:R^n to R^n

A

nul L = 0 ⇔ rank L = n ⇔ det A_L ≠ 0 ⇔ L is an isomorphism.

If L satisfies these, it is invertible, and the inverse L⁻¹ also satisfies them.

LINEAR ISOMORPHISMS ~ INVERTIBLE LINEAR OPERATORS

85
Q

if L and S are two isomorphisms then

A

their composition L ∘ S is also an isomorphism.

86
Q

General linear group GL(n)

A

The collection of all linear operators that satisfy any of the hypotheses is
called the general linear group and is denoted GL(n).

nul L = 0 ⇔ rank L = n ⇔ det A_L ≠ 0 ⇔ L is an isomorphism

87
Q

matrix of operator

A

A_L
DEPENDS ON THE CHOICE OF BASIS:
[a₁₁ … a₁ₙ]
[…]
[aₘ₁ … aₘₙ]
an m × n matrix.

The first column is found from L(v₁),
the last column from L(vₙ), for the vectors of the basis.

88
Q

Gram-Schmidt process.

exam q?

A

The process of transforming any basis into an orthogonal basis that was described in the proof of Theorem I.3 is called the Gram-Schmidt process

89
Q

what do I need to find A_L?

GL(n)

A

Given a basis {e₁, …, eₙ} in Rⁿ, we may identify linear operators L with n × n matrices A_L, and
GL(n) =
{n × n matrices A : det A ≠ 0}.

90
Q

two matrices of a linear operator?

A

If {ē₁, …, ēₙ} is another basis, and the linear operator has matrix Ā_L in it, then

Ā_L = C⁻¹ A_L C,
where C = (c_{ji}),
e_j = Σᵢ₌₁ⁿ c_{ji} ēᵢ.

Here C is the matrix of the transition map, which maps one basis to the other. In particular, by the multiplicative property of the determinant we conclude that
det Ā_L = det(C⁻¹ A_L C)
= det C⁻¹ · det A_L · det C = det A_L,

as det C⁻¹ = 1/det C.
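A numeric illustration of the determinant being basis-independent (a sketch; A and C are random illustrative matrices, and a random C is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))           # matrix of L in one basis
C = rng.normal(size=(3, 3))           # transition matrix between bases
A_bar = np.linalg.inv(C) @ A @ C      # matrix of L in the other basis
assert np.isclose(np.linalg.det(A_bar), np.linalg.det(A))
```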

91
Q

RANK of a matrix

A

maximal number of linearly independent columns of A,
or
maximal number of linearly independent rows.

If A is in REF (row echelon form), then rank = number of non-zero rows.

92
Q

PROJECTION ONTO LINE example of matrix

A

Linearly independent columns ~ rank of the matrix = 1.

L(u) = (u · v)v; the matrix is found column by column:
L(eᵢ) = (eᵢ · v)v
= vᵢ v.

The matrix is
[v₁v, v₂v, …, vₙv]

(each column ~ the projection of eᵢ onto v: v scaled by the component vᵢ).

v spans a 1-dimensional subspace: nullity = n − 1.
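As a sketch, this matrix is the outer product v vᵀ (the unit vector v is an illustrative choice):

```python
import numpy as np

v = np.array([0.6, 0.8])              # unit vector spanning the line
A = np.outer(v, v)                    # columns v_1 v, v_2 v, i.e. A = v vᵀ
e1 = np.array([1.0, 0.0])
assert np.allclose(A @ e1, (e1 @ v) * v)   # matches L(e_1) = (e_1 · v) v
assert np.linalg.matrix_rank(A) == 1       # rank 1, so nullity = n − 1
```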

93
Q

example rotation matrix

A

R_θ(e₁) = (cos θ, sin θ)
R_θ(e₂) = (−sin θ, cos θ)
Taking these as columns:

A_{R_θ} =
[cos θ −sin θ]
[sin θ  cos θ]

The columns are linearly independent, so
rank = 2,
nullity = 0.

94
Q

composition of rotations

A

The composition of rotations is a rotation:
R_θ ∘ R_α = R_{θ+α};

in matrices,
A_{R_θ} A_{R_α} = A_{R_{θ+α}}.
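A sketch of the rotation matrix and the composition law (the angles are arbitrary illustrative values):

```python
import numpy as np

def rot(theta):
    # columns are R_θ(e_1) = (cos θ, sin θ) and R_θ(e_2) = (−sin θ, cos θ)
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

a, b = 0.7, 1.1
assert np.allclose(rot(a) @ rot(b), rot(a + b))   # R_θ ∘ R_α = R_{θ+α}
assert np.isclose(np.linalg.det(rot(a)), 1.0)     # det > 0: orientation preserving
```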

95
Q

ROTATIONS
preserve

A

ROTATIONS are LINEAR MAPS that PRESERVE the DOT PRODUCT:
L(u) · L(v) = u · v for all u, v ∈ Rⁿ.

96
Q

matrix for R(z)= z conjugate

A

R(e₁) = e₁
R(e₂) = −e₂

CONJUGATION ~ REFLECTION in the real axis;
in the basis e₁, e₂:
[1 0]
[0 −1]

det = −1, so it is an orthogonal operator that reverses orientation (a reflection, not a rotation).

97
Q

ORTHGONAL LINEAR OPERATORS

A

A linear operator L : Rⁿ → Rⁿ is called orthogonal (or an orthogonal transformation) if it preserves the dot product:

L(u) · L(v) = u · v
for all u, v ∈ Rⁿ.

In particular
|L(u)||L(v)| cos φ = u · v:
orthogonal operators PRESERVE LENGTHS and angles.

98
Q

REFLECTIONS/ROTATIONS

ORTHOGONAL LINEAR OPERATORS

A

reflections, rotations
are contained in
orthogonal linear operators,

which are contained in
Euclidean isometries.

99
Q

ORIENTATION PRESERVING

A

An isomorphism (invertible operator) L : Rⁿ → Rⁿ is called orientation preserving
if its matrix A_L with respect to some (and hence any) basis satisfies det A_L > 0. Otherwise, the
isomorphism L is called orientation reversing.

100
Q

example
reflection through coordinate hyperplane

H =
{(x_1, . . . , x_{n−1}, 0) : x_1, . . . , x_{n−1} ∈ R} ⊂ R^n

formula
matrix (standard basis)

A

M(x₁, …, x_{n−1}, x_n) =
(x₁, …, x_{n−1}, −x_n)

matrix
[1 0 … 0]
[0 1 0 … 0]
[…]
[0 … 0 −1]

det = −1,
concluding M is orientation reversing.

101
Q

Write down the matrix of the reflection through the coordinate hyperplane
H˜ =
{(0, x_2, . . . , x_n) : x_2, . . . , x_n ∈ R} ⊂ R^n
.

A

M(x₁, …, x_n) = (−x₁, x₂, …, x_n)
A_M =
[−1 0 … 0]
[0 1 0 … 0]
[…]
[0 … 0 1]

102
Q

when do we have a reflection through H

A

Let H ⊂ Rⁿ be a vector subspace of dim n − 1.
A linear operator M_H is the reflection through H if

M_H(v) = v for any v ∈ H,
M_H(v) = −v for any v orthogonal to H.

103
Q

PROPN
basic properties for orthogonal operator

A

Let L : Rⁿ → Rⁿ be an orthogonal operator. Then:

(i) it is invertible, and the inverse operator L⁻¹ is also orthogonal;
(ii) if S : Rⁿ → Rⁿ is another orthogonal operator, then the composition L ∘ S is also an orthogonal operator;

(iii) the operator L preserves lengths of vectors, that is |L(u)| = |u| for any u ∈ Rⁿ.

104
Q

proof PROPN for orthogonal op properties

A

Proof. We first check property (iii):
|L(u)|² = L(u) · L(u) = u · u = |u|²
for any u ∈ Rⁿ.

Now property (iii) implies that nul L = 0. Hence any of the hypotheses in (I.5) is satisfied, and we conclude that L is invertible (rank-nullity shows it is injective and surjective, hence an isomorphism). For the inverse:
L⁻¹(u) · L⁻¹(v) =
L(L⁻¹(u)) · L(L⁻¹(v)) = u · v
for any u and v ∈ Rⁿ, using that L preserves the dot product. Thus property (i) is verified.

Property (ii) follows by repeated application of the relation
(L ∘ S)(u) · (L ∘ S)(v) = L(S(u)) · L(S(v)) = S(u) · S(v) = u · v for any u and v ∈ Rⁿ.

105
Q

orthogonal group

A

The collection of all orthogonal operators L : Rⁿ → Rⁿ forms the orthogonal group O(n):
{A ∈ GL(n) : AᵀA = E}.

106
Q

EUCLIDEAN ISOMETRY

A

A (not necessarily linear!) map T : R^n → R^n is called a Euclidean isometry, if it preserves distances between points, that is
dist(u, v) = dist(T(u), T(v)) for all u, v ∈ R^n

107
Q

is a translation a euclidean isometry

A

P : Rⁿ → Rⁿ,
u ↦ u + p.

dist(P(u), P(v))
= |P(u) − P(v)|
= |(u + p) − (v + p)|
= |u − v|
= dist(u, v)
for all u, v ∈ Rⁿ; so yes, a translation is a Euclidean isometry.

108
Q

COROLLARY
euclidean isometries and orthogonal operators

A

Any orthogonal operator L : R^n → R^n is a Euclidean isometry.

Proof. Since L is a linear operator, then using property (iii) in Proposition I.9, we obtain
dist(u, v) = |u − v| = |L(u − v)| = |L(u) − L(v)| = dist(L(u), L(v))
for any u and v ∈ R^n

109
Q

Euclidean isometry such that T(0)=0

A

Theorem I.11. Let T : Rⁿ → Rⁿ be a Euclidean isometry that fixes the origin, that is T(0) = 0.
Then T is an orthogonal linear operator.

Proof. For MATH5113M only

T fixes origin THEN it has to be linear

110
Q

PROOF L5

A
111
Q

Corollary i.12 an euclidean isometry is

A

Corollary I.12. Let T : Rⁿ → Rⁿ be a Euclidean isometry. Then it is the composition of an orthogonal transformation T₀ followed by a translation P, that is T = P ∘ T₀.

112
Q

proof

A

Let T(0) = p; the case p = 0 is trivial, so suppose p ≠ 0.
The translation P(x) = x + p is bijective, with inverse P⁻¹(x) = x − p, and both are Euclidean isometries.

Consider the composition P⁻¹ ∘ T:
as a composition of EIs it is an EI, and
(P⁻¹ ∘ T)(0) = P⁻¹(p) = 0, so by the theorem
P⁻¹ ∘ T = T₀
is an orthogonal transformation.

Thus T = P ∘ T₀: an orthogonal transformation followed by a translation, with the translation uniquely determined by the value T(0).

113
Q

scalar product on C[a,b]

A

<f, g> = ∫ₐᵇ f(t)g(t) dt

1) <a₁f₁ + a₂f₂, g> = a₁<f₁, g> + a₂<f₂, g> (linearity in the first argument)

2) <f, g> = <g, f> (symmetry)

3) <f, f> ≥ 0, and <f, f> = 0 iff f ≡ 0 (this uses the continuity of f)

114
Q

INTEGRAL CAUCHY SCWARTZ LEMMA

A

For any functions f, g ∈ C[a, b]:
|∫ₐᵇ (f · g)(t) dt| ≤
(∫ₐᵇ |f|² dt)^(1/2) (∫ₐᵇ |g|² dt)^(1/2)

Equality IFF f ≡ 0 or g ≡ λf for some λ ∈ R (identically proportional).

Equivalently: |<f, g>| ≤ √(<f, f><g, g>).

115
Q

integral Cauchy schwarz lemma
PROOF

|<f,g> |<= SQRT[<f,f><g,g>]

equality IFF f=0 or g = λf for λinR (identically proportional)

A

Let V be the vector space of all continuous vector functions f : [a, b] → Rⁿ with the scalar product above.
Consider the polynomial P(λ) = <λf − g, λf − g> ≥ 0.

The properties of <·, ·> make P a quadratic in λ with at most one real root, so its discriminant is ≤ 0, which gives the inequality.

The equality case follows as in the Cauchy-Schwarz lemma.

116
Q

might be exam q on this
THM
when does an orthonormal basis exist

A

Let V⊂R^n be a vector subspace. Then there exists an ORTHONORMAL BASIS IN V

117
Q

PROOF

A

Let P : Rⁿ → Rⁿ be the translation by p = T(0). Then the inverse map P⁻¹ is also a translation
(by −p), and an isometry. Since the composition of two isometries is an isometry, we see that the
map P⁻¹ ∘ T is an isometry. Besides, we have (P⁻¹ ∘ T)(0) = P⁻¹(p) = p − p = 0.
Thus, by Theorem I.11 the map P⁻¹ ∘ T is an orthogonal transformation. Setting T₀ = P⁻¹ ∘ T, we obtain that T = P ∘ T₀.

118
Q

PROP i.13
How can we check a linear operator is orthogonal

A

Let (e_i) be an orthonormal basis in R^n.
Then a linear operator L : R^n → R^n is orthogonal
IFF
its matrix A wrt basis (e_i) satisfies A^TA = E,
(identity matrix)

119
Q

PROOF
Let (e_i) be an orthonormal basis in R^n.
Then a linear operator L : R^n → R^n is orthogonal
IFF
its matrix A wrt basis (e_i) satisfies A^TA = E,
(identity matrix)

A
120
Q

L is orthogonal ⇒ A^TA = E.

A

true

121
Q

e.g is this an orthogonal operator matrix
[0.5 √3/2]
[- √3/2 0.5]

A

By the proposition we check AᵀA = E: it holds, so this is an orthogonal operator.

We can also check directly that the columns form an orthonormal basis.
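Checking both example matrices from these cards numerically (a minimal sketch):

```python
import numpy as np

A = np.array([[0.5, np.sqrt(3) / 2],
              [-np.sqrt(3) / 2, 0.5]])
assert np.allclose(A.T @ A, np.eye(2))      # orthogonal: AᵀA = E

B = np.array([[0.5, np.sqrt(3) / 2],
              [np.sqrt(3) / 2, 0.5]])
assert not np.allclose(B.T @ B, np.eye(2))  # columns not orthogonal to each other
```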

122
Q

e.g is this an orthogonal operator matrix

[0.5 √3/2]
[√3/2 0.5]

A

Not orthogonal: the columns each have unit length, but they are not orthogonal to each other (their dot product is √3/2 ≠ 0), so AᵀA ≠ E.

123
Q

Corollary I.14. Let e1, e2 be an orthonormal basis in a Euclidean plane R^2. Then the matrix A of
any orthogonal operator L : R^2 → R^2 has one of the following forms

A

A_L =
[cos θ −sin θ]
[sin θ  cos θ]
(det > 0, orientation preserving)

or

A_L =
[cos θ  sin θ]
[sin θ −cos θ]
(det < 0, orientation reversing),

for some 0 ≤ θ < 2π.

124
Q

PROOF
A_L =
[cos θ −sin θ]
[sin θ cos θ]
det>0 orientation preserving

or
A_l=
[cos θ sin θ]
[sin θ − cos θ]

det<0 orientation reversing
0 <=θ < 2π.

A

Suppose that the matrix A of L is
[a c]
[b d].

Then the relation AᵀA = E gives
a² + b² = 1
ac + bd = 0
c² + d² = 1.
Set
a = cos θ,
b = sin θ,
for some
0 ≤ θ < 2π.
Then ac + bd = 0 forces
c = −t sin θ,
d = t cos θ,
for some t ∈ R,
and c² + d² = 1 gives
t² = 1,
thus t = ±1,
corresponding to the sign of det A.

125
Q

summary of matrices for orthogonal operators

A

Orientation preserving orthogonal operators R² → R² are precisely
rotations (through an angle θ in the anti-clockwise direction), and orientation reversing orthogonal
operators R² → R² are precisely rotations (through an angle θ in the clockwise direction) followed by
a reflection (mirror symmetry) in e₁.

IF L IS A LINEAR OPERATOR with matrix A, its columns are the images of the basis:

L(v) = Av,
L(e₁) = (a, b)ᵀ,
L(e₂) = (c, d)ᵀ.

126
Q

L5 PROOF
Let V ⊂ Rⁿ be a vector subspace. Then there exists an orthonormal basis in V .

there maybe a question on the exam

A

Given an orthogonal basis → orthonormal: by normalising the vectors (GRAM-SCHMIDT PROCESS, exam).

If e₁, …, eₖ ∈ Rⁿ is an ORTHOGONAL BASIS in V, then the vectors
ẽ₁ = e₁/|e₁|,
…,
ẽₖ = eₖ/|eₖ|
form an orthonormal basis.
So it is SUFFICIENT TO PROVE THAT AN ORTHOGONAL BASIS EXISTS.

Let u₁, …, uₖ be a basis of V, where k ≤ n is the dimension of V.
We construct an orthogonal basis e₁, …, eₖ s.t.

span{u₁, …, uᵢ} = span{e₁, …, eᵢ} for all i = 1, …, k.

FIRSTLY
e₁ = u₁.
Look for a vector e₂ in the form e₂ = u₂ + a₂,₁e₁;
the condition e₁ · e₂ = 0 gives, by linearity,
a₂,₁ = −(u₂ · e₁)/(e₁ · e₁).
Check the span condition:
span{u₁, u₂} = span{e₁, e₂}, i.e. all possible linear combinations agree.

Now assume we have constructed the set so that orthogonality and the span condition hold:
suppose the non-zero orthogonal vectors e₁, …, eₗ₋₁ s.t. the relation holds for i = 1, …, l−1 are found. Then we set
eₗ = uₗ + aₗ,₁e₁ + … + aₗ,ₗ₋₁eₗ₋₁,
where the coefficients are determined by the conditions
eₗ · e₁ = eₗ · e₂ = … = eₗ · eₗ₋₁ = 0,
i.e. we find
aₗ,₁ = −(uₗ · e₁)/(e₁ · e₁)
…
aₗ,ₗ₋₁ = −(uₗ · eₗ₋₁)/(eₗ₋₁ · eₗ₋₁).
Thus we find a vector eₗ s.t. the span condition holds for i = 1, …, l.

The vector eₗ is non-zero: otherwise uₗ would be a linear combination of e₁, …, eₗ₋₁, i.e. already in the span of u₁, …, uₗ₋₁, and the vectors u₁, …, uₗ would be linearly dependent, contradicting linear independence.

Continuing, we find an orthogonal collection of non-zero vectors e₁, …, eₖ satisfying the spanning condition; in particular

V = span{u₁, …, uₖ} = span{e₁, …, eₖ},
and we conclude that e₁, …, eₖ is indeed an (orthogonal) basis of V.
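A compact implementation of the process described above (a sketch; the input rows are an illustrative basis of R³, and the normalisation step is folded into each iteration):

```python
import numpy as np

def gram_schmidt(U):
    """Orthonormalise the rows of U (assumed linearly independent)."""
    basis = []
    for u in U:
        e = u - sum((u @ b) * b for b in basis)  # subtract projections on earlier e_i
        basis.append(e / np.linalg.norm(e))      # normalise (e is non-zero)
    return np.array(basis)

U = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
E = gram_schmidt(U)
assert np.allclose(E @ E.T, np.eye(3))   # rows form an orthonormal collection
```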

127
Q

We may be given a subspace in exam…

A

Any subspace of Rⁿ has an orthonormal basis (Gram-Schmidt: subtract projections, then normalise).

128
Q

may be given subspace in exam
Given the subspace V ⊂ R³,
V = {(x, y, z) ∈ R³ : x + y + z = 0},

find an orthonormal basis of this subspace.

A

The solution set of one linear equation has dim 2.
Find any two solutions that are linearly independent,
e.g.
u₁ = (1, −1, 0),
u₂ = (0, 1, −1) ∈ V;
they form a basis as they are linearly independent.
Note |u₁| = √2 = |u₂|.
Using Gram-Schmidt:
e₁ = (1, −1, 0)
e₂ = u₂ + a e₁,
with a = −(u₂ · e₁)/(e₁ · e₁) = −(−1)/2 = 1/2,
so e₂ = u₂ + 0.5 e₁ = (0.5, 0.5, −1).

Checking: e₁ · e₂ = 0. ✓

Orthonormal:
ê₁ = e₁/|e₁| = (1/√2)(1, −1, 0)
ê₂ = e₂/|e₂| = √(2/3) (0.5, 0.5, −1)

Note the general step:
eₗ = uₗ + a_{l,1}e₁ + … + a_{l,l−1}e_{l−1},

a_{l,1} = −(uₗ · e₁)/(e₁ · e₁),
…,
a_{l,l−1} = −(uₗ · e_{l−1})/(e_{l−1} · e_{l−1}).
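The same computation verified numerically (a sketch following the steps above):

```python
import numpy as np

u1, u2 = np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, -1.0])  # basis of V
e1 = u1
e2 = u2 - ((u2 @ e1) / (e1 @ e1)) * e1      # Gram-Schmidt step: (0.5, 0.5, -1)
e1, e2 = e1 / np.linalg.norm(e1), e2 / np.linalg.norm(e2)

assert np.isclose(e1 @ e2, 0)                               # orthogonal
assert np.isclose(e1 @ e1, 1) and np.isclose(e2 @ e2, 1)    # unit length
assert np.isclose(e1.sum(), 0) and np.isclose(e2.sum(), 0)  # both solve x+y+z=0
```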

129
Q

notes

A

Gram-Schmidt lets us construct an orthonormal basis in any dimension, for any vector subspace.

The distance between a point and a subspace is realised along the vector orthogonal to the subspace.

(diagram: a plane with its orthogonal/normal vector w)

130
Q

COROLLARY I.4 for a vector subspace:
the nearest vector w

A

Let V ⊂ Rⁿ be a vector subspace.
Then for x ∈ Rⁿ
there exists a unique vector w ∈ V
s.t.
|x − y| > |x − w| for all y ∈ V,
y ≠ w.

Moreover,
if e₁, …, e_k is an orthonormal basis of V, then
w = Σᵢ₌₁ᵏ (x · eᵢ) eᵢ.

This defines the projection; choosing a different basis does not change the unique vector.

131
Q

PROOF
Let V ⊂ Rⁿ be a vector subspace.
Then for x ∈ Rⁿ
there exists a unique vector w ∈ V
s.t.
|x − y| > |x − w| for all y ∈ V,
y ≠ w.

Moreover,
if e₁, …, e_k is an orthonormal basis of V, then
w = Σᵢ₌₁ᵏ (x · eᵢ) eᵢ.

A

Show the vector x − w is orthogonal to the subspace V:

we claim (x − w) · y = 0 for all y ∈ V,

IFF
(x − w) · eᵢ = 0 for all i = 1, …, k, since every y ∈ V decomposes in the basis.

Checking, by linearity and orthonormality:
(x − w) · eᵢ = x · eᵢ − (x · eᵢ) = 0.

Then for y ∈ V, y ≠ w, by the Pythagorean theorem (x − w ⊥ w − y, with w − y ∈ V):
|x − y|² = |x − w|² + |w − y|² > |x − w|²,
since |w − y|² is strictly positive.
This is exactly the statement of Corollary I.4, with strict inequality for y ≠ w, and it also gives uniqueness.

132
Q

e.g space of polynomials and construct an orthonormal basis

A

For example, construct an orthonormal basis for the space of polynomials of degree at most 2,
P₂ = {p(t) = a₂t² + a₁t + a₀ : aᵢ ∈ R}.

We use the standard integral inner product <f, g> = ∫₀¹ f(t)g(t) dt (here over [0, 1]).
BASIS POLYNOMIALS:
p₁(x) = 1
p₂(x) = x
p₃(x) = x²

ê₁ = p₁(x)/|p₁(x)| = 1/√(<1, 1>) = 1

e₂ = x − 0.5·1 = x − 0.5, and normalising,

ê₂ = √3 (2x − 1)
e₃ = x² − x + 1/6; normalise by dividing by √(<e₃, e₃>).
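A numeric check of the orthogonality and normalisation (a sketch, assuming the interval [0, 1] as above; the inner product is approximated by the trapezoid rule):

```python
import numpy as np

def ip(f, g, n=100000):
    # <f, g> = ∫₀¹ f(t) g(t) dt, trapezoid-rule approximation
    ts = np.linspace(0.0, 1.0, n + 1)
    vals = f(ts) * g(ts)
    return float(np.sum((vals[:-1] + vals[1:]) / 2) / n)

e1 = lambda x: np.ones_like(x)            # ê₁ = 1
e2 = lambda x: np.sqrt(3) * (2 * x - 1)   # ê₂ = √3 (2x − 1)
e3 = lambda x: x**2 - x + 1.0 / 6.0       # e₃, not yet normalised

assert abs(ip(e1, e2)) < 1e-6 and abs(ip(e1, e3)) < 1e-6 and abs(ip(e2, e3)) < 1e-6
assert abs(ip(e2, e2) - 1.0) < 1e-6       # ê₂ has unit length
```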

133
Q

translations x to x+a example

A

Translations preserve distances, hence are Euclidean isometries, but they are not linear maps (for a ≠ 0 they do not fix 0).
By contrast: if T : Rⁿ → Rⁿ is a Euclidean isometry with
T(0) = 0,
then T is an orthogonal transformation (Theorem I.11).

134
Q

sketch proof that EI preserve distance

A

|T(u) − T(v)|² = |u − v|² for all u, v ∈ Rⁿ.

Expanding both sides (linearity, symmetry of the dot product):
|u − v|² = |u|² − 2 u · v + |v|²,
(T(u) − T(v)) · (T(u) − T(v)) = |T(u)|² − 2 T(u) · T(v) + |T(v)|².
Setting v = 0 (and then u = 0) gives |T(u)| = |u| and |T(v)| = |v|, so the squared terms cancel and
T(u) · T(v) = u · v for all u, v.

NOW SHOW IT IS A LINEAR MAP: take an orthonormal basis e₁, …, e_n;
then
T(e₁), …, T(e_n) is also an orthonormal basis
… (rest of the argument missing)
