Algebra 2 Flashcards
What is a Vector Space?
A vector space over a field F is an abelian group (V, +) together with a map F × V → V, called scalar multiplication (α, v) → αv, satisfying the following axioms for all α, β ∈ F and u, v ∈ V:
1. α(u + v) = αu + αv,
2. (α + β)v = αv + βv,
3. (αβ)v = α(βv),
4. 1v = v.
What are the ‘easy’ properties of a vector space?
- α0 = 0 for all α ∈ F,
- 0v = 0 and (−1)v = −v for all v ∈ V.
- −(αv) = (−α)v = α(−v), for all α ∈ F and v ∈ V.
What is a subspace?
A subspace of V is a non-empty subset W ⊆ V such that
u, v ∈ W ⇒ u + v ∈ W and v ∈ W, α ∈ F ⇒ αv ∈ W.
These two conditions can be replaced with a single condition
u, v ∈ W, α, β ∈ F ⇒ αu + βv ∈ W.
Is the Intersection of two subspaces a subspace?
If W1 and W2 are subspaces of V then so is W1 ∩ W2.
Proof. Let u, v ∈ W1 ∩ W2 and α ∈ F. Then u + v ∈ W1 (because W1 is a subspace) and u + v ∈ W2 (because W2 is a subspace). Hence u + v ∈ W1 ∩ W2. Similarly, we get αv ∈ W1 ∩ W2, so W1 ∩ W2 is a subspace of V.
What is the sum of two subspaces?
Let W1, W2 be subspaces of the vector space V. Then W1 + W2 is defined to be the set of vectors v ∈ V such that v = w1 + w2 for some w1 ∈ W1, w2 ∈ W2. Or, if you prefer,
W1 + W2 = {w1 + w2 | w1 ∈ W1, w2 ∈ W2}.
Is W1 + W2 a subspace?
If W1, W2 are subspaces of V then so is W1 + W2. In fact, it is the smallest (with respect to the order ⊆) subspace that contains both W1 and W2.
Proof. Let u, v ∈ W1 + W2. Then u = u1 + u2 for some u1 ∈ W1, u2 ∈ W2 and v = v1 + v2 for some v1 ∈ W1, v2 ∈ W2. Then u + v = (u1 + v1) + (u2 + v2) ∈ W1 + W2. Similarly, if α ∈ F
then αv = αv1 + αv2 ∈ W1 + W2. Thus W1 + W2 is a subspace of V.
Any subspace of V that contains both W1 and W2 must contain W1 + W2, so it is the smallest such subspace.
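Concretely, if W1 and W2 are given by spanning vectors in R^n, then W1 + W2 is spanned by all those vectors together. A minimal numpy sketch (the matrix names B1, B2 are our own):

```python
import numpy as np

# Columns of B1 span W1, columns of B2 span W2 (here in R^3).
B1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])          # W1 = the xy-plane
B2 = np.array([[0.0],
               [1.0],
               [1.0]])               # W2 = the line spanned by (0, 1, 1)

# W1 + W2 is spanned by the columns of [B1 | B2];
# its dimension is the rank of the concatenated matrix.
dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))
print(dim_sum)                       # 3: W1 + W2 is all of R^3
```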
What is a Vector Sequence and Linear Combination of vectors?
By a vector sequence we understand a finite sequence v1, v2, . . . , vn of elements of a vector space V.
Vectors of the form α1v1 + α2v2 + · · · + αnvn for α1, α2, . . . , αn ∈ F are called linear combinations of v1, v2, . . . , vn.
When is a vector sequence linearly dependent?
Let V be a vector space over the field F. The vector sequence v1, v2, . . . , vn is called linearly dependent if there exist scalars α1, α2, . . . , αn ∈ F, not all zero, such that
α1v1 + α2v2 + · · · + αnvn = 0.
The sequence v1, v2, . . . , vn is called linearly independent if it is not linearly dependent. In other words, it is linearly independent if the only scalars α1, α2, . . . , αn ∈ F that satisfy the above equation are α1 = 0, α2 = 0, . . . , αn = 0.
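Numerically, a sequence of vectors in R^m is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors. A small numpy sketch (the helper name is ours):

```python
import numpy as np

def is_linearly_independent(vectors):
    """Stack the vectors as columns; independent iff rank equals their count."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(is_linearly_independent([np.array([1.0, 0.0, 1.0]),
                               np.array([0.0, 1.0, 1.0])]))   # True
print(is_linearly_independent([np.array([1.0, 2.0, 0.0]),
                               np.array([2.0, 4.0, 0.0])]))   # False: v2 = 2*v1
```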
What is the check for linear dependence?
The vector sequence v1, . . . , vn ∈ V is linearly dependent if and only if either v1 = 0 or, for some r, vr is a linear combination of v1, . . . , vr−1.
Proof. If v1 = 0 then by putting α1 = 1 and αi = 0 for i > 1 we get α1v1 + · · · + αnvn = 0, so v1, v2, . . . , vn ∈ V is linearly dependent.
If vr is a linear combination of v1, . . . , vr−1, then vr = α1v1 + · · · + αr−1vr−1 for some α1, . . . , αr−1 ∈ F, and so we get α1v1 + · · · + αr−1vr−1 − 1 · vr = 0; again v1, v2, . . . , vn is linearly dependent.
Conversely, suppose that v1, v2, . . . , vn ∈ V is linearly dependent, and αi are scalars, not all zero, satisfying α1v1 + α2v2 + · · · + αnvn = 0. Let r be maximal with αr ≠ 0; then α1v1 + α2v2 + · · · + αrvr = 0. If r = 1 then α1v1 = 0 which is only possible if v1 = 0. Otherwise, we get
vr = −(α1/αr)v1 − · · · − (αr−1/αr)vr−1.
In other words, vr is a linear combination of v1, . . . , vr−1.
Is the set of all linear combinations a subspace?
Let v1, . . . , vn be a vector sequence. Then the set of all linear combinations α1v1 + α2v2 + · · · + αnvn of v1, . . . , vn forms a subspace of V.
What is the span of a vector sequence and when does a sequence span the vector space?
The span of a vector sequence is the set of all linear combinations of that sequence.
The sequence v1, . . . , vn spans V if the span of the sequence is V. In other words, this means that every vector v ∈ V is a linear combination α1v1 + α2v2 + · · · + αnvn of v1, . . . , vn.
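To test whether a vector lies in a span in R^m, one can compare ranks: v is a linear combination of v1, . . . , vn exactly when appending v does not increase the rank. A numpy sketch (in_span is our own helper):

```python
import numpy as np

def in_span(v, vectors):
    """True if v is a linear combination of the given vectors."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(in_span(np.array([3.0, -2.0, 0.0]), [v1, v2]))  # True: 3*v1 - 2*v2
print(in_span(np.array([0.0, 0.0, 1.0]), [v1, v2]))   # False
```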
What is a Basis?
The vector sequence v1, . . . , vn in V forms a basis of V if it is linearly independent and spans V.
How can every vector in a vector space be written as a linear combination of a basis?
The vector sequence v1, . . . , vn forms a basis of V if and only if every v ∈ V can be written uniquely as v = α1v1 + α2v2 + · · · + αnvn; that is, the coefficients α1, . . . , αn are uniquely determined by the vector v.
For the proof, suppose v = α1v1 + α2v2 + · · · + αnvn = β1v1 + β2v2 + · · · + βnvn. Subtracting gives (α1 − β1)v1 + · · · + (αn − βn)vn = 0, and linear independence forces αi = βi for all i, so the coefficients are equal.
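In coordinates: if the basis vectors form the columns of an invertible matrix B, the unique coefficients of v solve the linear system Bα = v. A numpy sketch:

```python
import numpy as np

# Columns of B form a basis of R^3.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 1.0])

alpha = np.linalg.solve(B, v)     # unique coordinates of v in this basis
print(alpha)
print(np.allclose(B @ alpha, v))  # True: v = alpha_1 b_1 + alpha_2 b_2 + alpha_3 b_3
```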
What is the Basis Theorem?
Suppose that v1, . . . , vm and w1, . . . , wn are both finite bases of the vector space V. Then m = n. In other words, all finite bases of V contain the same number of vectors.
What is the Dimension of a vector space?
The number n of vectors in a basis of the finite-dimensional vector space V is called the dimension of V and we write dim(V) = n.
What is Sifting?
There is an important process, which we shall call sifting, that can be applied to any sequence of vectors v1, v2, . . . , vn in a vector space V. We consider each vector vi in turn. If it is zero, or a linear combination of the preceding vectors v1, . . . , vi−1, then we remove it from the list. The output of sifting is a new linearly independent vector sequence with the same span as the original one.
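Sifting is straightforward to carry out numerically: keep a vector only if it increases the rank of the vectors kept so far. A sketch for vectors in R^m (sift is our own helper):

```python
import numpy as np

def sift(vectors):
    """Drop each vector that is zero or a linear combination of those kept so far."""
    kept = []
    for v in vectors:
        candidate = np.column_stack(kept + [v]) if kept else v.reshape(-1, 1)
        if np.linalg.matrix_rank(candidate) == len(kept) + 1:
            kept.append(v)          # v is independent of the kept vectors
    return kept

vs = [np.array([1.0, 0.0]), np.array([2.0, 0.0]),   # 2nd is 2 * 1st: dropped
      np.array([0.0, 0.0]),                         # zero vector: dropped
      np.array([1.0, 1.0])]
print(sift(vs))                                     # [(1, 0), (1, 1)] as arrays
```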
How do the lengths of a sequence that spans V and a linearly independent sequence compare?
Suppose that vector sequence v1, . . . , vn spans V and that the vector sequence w1, . . . , wm ∈ V is linearly independent. Then m ≤ n.
Proof. The idea is to place the wi one by one in front of the sequence v1, . . . , vn, sifting each time.
Since v1, . . . , vn spans V, w1, v1, . . . , vn is linearly dependent, so when we sift, at least one vj is
deleted. We then place w2 in front of the resulting sequence and sift again. Then we put w3 in front of the result, and sift again, and carry on doing this for each wi in turn. Since w1, . . . , wm are linearly independent, none of them is ever deleted. Each time we place a vector in front of a sequence that spans V, the extended sequence is linearly dependent, and hence at least one vj is eliminated. In total we prepend m vectors wi, and each time at least one vj is eliminated, so we must have m ≤ n.
How can a basis be extracted from a vector sequence that spans V?
Suppose that the vector sequence v1, . . . , vr spans the vector space V. Then there is a subsequence of v1, . . . , vr which forms a basis of V.
Proof. We sift the vectors v1, . . . , vr. The vectors that we remove are linear combinations of the preceding vectors, and so by Lemma 1.4.9, the remaining vectors still span V. After sifting, no vector is zero or a linear combination of the preceding vectors (or it would have been removed), so by Lemma 1.4.2, the remaining vector sequence is linearly independent. Hence, it is a basis of V.
If a vector space can be spanned by a finite sequence, is there a basis?
If a vector space V is spanned by a finite sequence, then it admits a basis.
How is the length of a basis related to the dimension of the vector space?
Let V be a vector space of dimension n over F. Then any sequence of n vectors which spans V is a basis of V, and no n − 1 vectors can span V.
How can a linearly independent sequence of vectors be extended to a basis?
Let V be a finite-dimensional vector space over F, and suppose that the vector sequence v1, . . . , vr is linearly independent in V. Then we can extend the sequence to a basis v1, . . . , vn of V, where n ≥ r.
Proof. Suppose that dim(V) = n and let w1, . . . , wn be any basis of V. We sift the combined sequence v1, . . . , vr, w1, . . . , wn.
Since w1, . . . , wn spans V, the result is a basis of V by Theorem 1.4.11. Since v1, . . . , vr is linearly independent, none of them can be a linear combination of the preceding vectors, and hence none of the vi are deleted in the sifting process. Thus the resulting basis contains v1, . . . , vr.
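In coordinates, such an extension can be computed by sifting the standard basis onto the end of the given sequence. A self-contained numpy sketch (extend_to_basis is our own helper):

```python
import numpy as np

def extend_to_basis(independent, dim):
    """Extend a linearly independent list in R^dim to a basis of R^dim by
    sifting the standard basis vectors onto the end of the list."""
    basis = list(independent)
    for e in np.eye(dim):                   # candidate standard basis vectors
        if np.linalg.matrix_rank(np.column_stack(basis + [e])) == len(basis) + 1:
            basis.append(e)                 # e is not a combination of the kept vectors
    return basis

vs = [np.array([1.0, 1.0, 0.0])]            # linearly independent sequence
for b in extend_to_basis(vs, 3):
    print(b)                                # three vectors forming a basis of R^3
```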
How many vectors are needed to form a basis?
Let V be a vector space of dimension n over F. Then any n linearly independent vectors form a basis of V and no n + 1 vectors can be linearly independent.
What is a Linear Transformation?
Let U, V be two vector spaces over the same field F. A linear transformation or linear map T from U to V is a function T : U → V such that
(i) T(u1 + u2) = T(u1) + T(u2) for all u1, u2 ∈ U;
(ii) T(αu) = αT(u) for all α ∈ F and u ∈ U.
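Every matrix A ∈ F^m,n gives a linear map u → Au from F^n to F^m, and the two axioms can be checked numerically. A quick numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))          # T : R^4 -> R^3, T(u) = A u
u1, u2 = rng.standard_normal(4), rng.standard_normal(4)
alpha = 2.5

print(np.allclose(A @ (u1 + u2), A @ u1 + A @ u2))      # axiom (i)
print(np.allclose(A @ (alpha * u1), alpha * (A @ u1)))  # axiom (ii)
```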
What are the known properties of linear maps?
Let T : U → V be a linear map. Then
(i) T(0U) = 0V;
(ii) T(−u) = −T(u) for all u ∈ U.
Proof.
(i) T(0U) = T(0U + 0U) = T(0U) + T(0U), so T(0U) = 0V.
(ii) Just put α = −1 in the definition of linear map.
How is a linear map uniquely determined by its action on a basis?
Let U, V be vector spaces over F, let u1, . . . , un be a basis of U and let v1, . . . , vn be any sequence of n vectors in V. Then there is a unique linear map T : U → V with T(ui) = vi for 1 ≤ i ≤ n.
Proof. Let u ∈ U. Then, since u1, . . . , un is a basis of U, by Proposition 1.4.6, there exist uniquely determined α1, . . . , αn ∈ F with u = α1u1 + · · · + αnun. Hence, if T exists at all, then we must have
T(u) = T(α1u1 + · · · + αnun) = α1v1 + · · · + αnvn,
and so T is uniquely determined.
On the other hand, it is routine to check that the map T : U → V defined by the above equation is indeed a linear map, so T does exist and is unique.
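In matrix terms: if the basis u1, . . . , un forms the columns of an invertible matrix and the prescribed images vi the columns of another, the unique map is given by the matrix V_mat U_mat^−1. A numpy sketch (the matrix names are ours):

```python
import numpy as np

U_mat = np.array([[1.0, 1.0],           # columns: basis u1, u2 of R^2
                  [0.0, 1.0]])
V_mat = np.array([[2.0, 0.0],           # columns: prescribed images v1, v2
                  [0.0, 3.0],
                  [1.0, 1.0]])

A = V_mat @ np.linalg.inv(U_mat)        # matrix of the unique linear map T
print(np.allclose(A @ U_mat[:, 0], V_mat[:, 0]))  # T(u1) = v1
print(np.allclose(A @ U_mat[:, 1], V_mat[:, 1]))  # T(u2) = v2
```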
What are the Operations on Linear Maps?
We define the operations of addition, scalar multiplication, and composition on linear maps.
Let T1 : U → V and T2 : U → V be two linear maps, and let α ∈ F be a scalar.
Addition: We define a map T1 + T2 : U → V by the rule (T1 + T2)(u) = T1(u) + T2(u) for u ∈ U.
Scalar multiplication: We define a map αT1 : U → V by the rule (αT1)(u) = αT1(u) for u ∈ U.
Now let T1 : U → V and T2 : V → W be two linear maps.
Composition: We define a map T2T1 : U → W by (T2T1)(u) = T2(T1(u)) for u ∈ U. In particular, we define T^2 = TT and T^(i+1) = T^i T for i ≥ 2.
What are the Image and Kernel of a linear map?
Let T : U → V be a linear map. The image of T, written as im(T) is defined to be the set of vectors v ∈ V such that v = T(u) for some u ∈ U.
The kernel of T, written as ker(T) is defined to be the set of vectors u ∈ U such that T(u) = 0V.
Or, if you prefer:
im(T) = {T(u)| u ∈ U}; ker(T) = {u ∈ U | T(u) = 0V}
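For a matrix map both sets can be computed: the image is the column space, and a kernel basis can be read off from the singular value decomposition. A numpy sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # T : R^3 -> R^2, rank 1

rank = np.linalg.matrix_rank(A)          # dim(im(T))
_, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[rank:].T               # columns spanning ker(T)

print(rank)                              # 1
print(np.allclose(A @ kernel_basis, 0))  # True: these columns map to 0
```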
How do the Image and Kernel relate to subspaces?
(i) im(T) is a subspace of V;
(ii) ker(T) is a subspace of U;
(iii) T is injective if and only if ker(T) = {0}.
Proof.
(i) We must show that im(T) is closed under addition and scalar multiplication. Let v1, v2 ∈ im(T). Then v1 = T(u1), v2 = T(u2) for some u1, u2 ∈ U. Then
v1 + v2 = T(u1) + T(u2) = T(u1 + u2) ∈ im(T); αv1 = αT(u1) = T(αu1) ∈ im(T), so im(T) is a subspace of V.
(ii) Similarly, we must show that ker(T) is closed under addition and scalar multiplication. Let u1, u2 ∈ ker(T). Then
T(u1 + u2) = T(u1) + T(u2) = 0V + 0V = 0V; T(αu1) = αT(u1) = α0V = 0V, so u1 + u2, αu1 ∈ ker(T) and ker(T) is a subspace of U.
(iii) If T is injective then, since T(0U) = 0V, no other vector can map to 0V, so ker(T) = {0}. Conversely, suppose ker(T) = {0} and T(u) = T(v). Then T(u − v) = T(u) − T(v) = 0, so u − v ∈ ker(T), hence u − v = 0 and u = v.
What is the Dimension Formula?
Let V be a finite-dimensional vector space, and let W1, W2 be subspaces of V. Then
dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2).
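The formula can be verified numerically: dim(W1 + W2) is the rank of the concatenated spanning matrices, and dim(W1 ∩ W2) is the nullity of [B1 | −B2], since a null vector (x, y) encodes B1x = B2y, an element of the intersection. A sketch (assuming the columns of B1 and B2 are each linearly independent):

```python
import numpy as np

B1 = np.array([[1.0, 0.0],               # independent columns span W1
               [0.0, 1.0],
               [0.0, 0.0]])
B2 = np.array([[0.0, 0.0],               # independent columns span W2
               [1.0, 0.0],
               [0.0, 1.0]])

dim_W1, dim_W2 = 2, 2
dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))
# A null vector (x, y) of [B1 | -B2] satisfies B1 x = B2 y, a vector of W1 ∩ W2.
M = np.hstack([B1, -B2])
dim_int = M.shape[1] - np.linalg.matrix_rank(M)   # nullity of M

print(dim_sum, dim_int)                           # 3 and 1
print(dim_sum == dim_W1 + dim_W2 - dim_int)       # True
```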
When are two subspaces complementary?
Two subspaces W1, W2 of V are called complementary if W1 ∩ W2 = {0} and W1 + W2 = V. In this case, we say that V is a direct sum of the subspaces W1 and W2 and we denote it V = W1 ⊕ W2.
Also,
If V = W1 ⊕ W2 is a finite-dimensional vector space, then dim(V) = dim(W1) + dim(W2).
What is the Rank and Nullity of a linear map?
(i) dim(im(T)) is called the rank of T;
(ii) dim(ker(T)) is called the nullity of T.
What is the Dimension Theorem?
Let U, V be vector spaces over F with U finite-dimensional,
and let T : U → V be a linear map. Then
dim(im(T)) + dim(ker(T)) = dim(U),
i.e., rank(T) + nullity(T) = dim(U).
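For a matrix map T(u) = Au with A ∈ R^m,n the theorem says rank(A) + nullity(A) = n. A quick numpy check on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))              # T : R^5 -> R^3, dim(U) = 5

_, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s.max()
rank = int(np.sum(s > tol))                  # dim(im(T))
kernel_basis = Vt[rank:]                     # rows span ker(T)
nullity = kernel_basis.shape[0]              # dim(ker(T))

print(np.allclose(A @ kernel_basis.T, 0))    # True: kernel rows map to 0
print(rank, nullity, rank + nullity == 5)    # 3 2 True
```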
If T is a linear map between vector spaces with the same dimension, then what are the equivalent properties and what does it mean for T to be singular?
Let T : U → V be a linear map, where dim(U) = dim(V) = n. Then the following properties of T are equivalent:
(i) T is surjective;
(ii) rank(T) = n;
(iii) nullity(T) = 0;
(iv) T is injective;
(v) T is bijective.
Proof. T is surjective ⇔ im(T) = V, so clearly (i) ⇒ (ii). But if rank(T) = n, then dim(im(T)) = dim(V) so (by Corollary 1.4.15) a basis of im(T) is a basis of V, and hence im(T) = V.
Thus (i) ⇔ (ii).
(ii) ⇔ (iii) follows directly from Theorem 2.2.7.
(iii) ⇔ (iv) is part (iii) of Proposition 2.2.2.
Finally, (v) holds if and only if both (i) and (iv) hold, and these have been shown to be equivalent to each other.
If the conditions in the above are met, then T is called a non-singular linear map. Otherwise, T is called singular.
What is the Change of Basis Matrix?
Let f1, . . . , fn be a basis of F^n. Write each element of the standard basis e1, . . . , en in this basis. The change of basis matrix is the matrix P ∈ F^n,n whose i-th column consists of the coordinates of ei with respect to f1, . . . , fn.
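Numerically: if the columns of a matrix F_mat are f1, . . . , fn, the coordinates of each ei in the basis f1, . . . , fn solve F_mat x = ei, so the matrix P above is F_mat^−1. A sketch (F_mat is our own name):

```python
import numpy as np

F_mat = np.array([[1.0, 1.0],            # columns: basis f1, f2 of R^2
                  [0.0, 1.0]])

P = np.linalg.inv(F_mat)                 # i-th column: coordinates of e_i in the f-basis
v = np.array([3.0, 2.0])
coords = P @ v                           # coordinates of v in the basis f1, f2
print(np.allclose(coords[0] * F_mat[:, 0] + coords[1] * F_mat[:, 1], v))  # True
```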
What is a Euclidean Form?
A euclidean form on V is a map τ : V × V → R such that
(i) τ(α1v1 + α2v2, w) = α1τ(v1, w) + α2τ(v2, w) for all v1, v2, w ∈ V and α1, α2 ∈ R,
(ii) τ(v, w) = τ(w, v) for all v, w ∈ V,
(iii) τ(v, v) > 0 for all v ∈ V \ {0}.
What is a Euclidean Space?
A euclidean space is a pair (V, τ) where V is a vector space over R and τ is a euclidean form on V.
What is length in a Euclidean Space and what is the inequality?
Let (V, τ) be a euclidean space. For v ∈ V, we define its length by ||v|| = √τ(v, v).
For all v, w ∈ V the Cauchy-Schwarz inequality holds: |τ(v, w)| ≤ ||v|| · ||w||.
What is an angle in a Euclidean Space?
The angle ϕ between any two non-zero vectors v and w is given by
τ(v, w) = ||v|| · ||w|| · cos ϕ, or
ϕ = arccos(τ(v, w)/(||v|| · ||w||)).
The quotient lies in [−1, 1] by the Cauchy-Schwarz inequality, so ϕ is well defined.
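With τ the standard dot product on R^n, length, the Cauchy-Schwarz inequality, and the angle formula look as follows (a numpy sketch):

```python
import numpy as np

v, w = np.array([1.0, 2.0, 2.0]), np.array([2.0, 0.0, 1.0])

tau = np.dot(v, w)                       # euclidean form: the standard dot product
norm_v, norm_w = np.sqrt(np.dot(v, v)), np.sqrt(np.dot(w, w))

print(abs(tau) <= norm_v * norm_w)       # Cauchy-Schwarz inequality: True
phi = np.arccos(tau / (norm_v * norm_w))
print(phi)                               # angle between v and w, in radians
```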
What is an Orthonormal vector sequence?
A vector sequence v1, . . . , vn in a euclidean space (V, τ) is called orthonormal if ||vi|| = 1 for all i and the angle between vi and vj is π/2 whenever i ≠ j (equivalently, τ(vi, vj) = 0 for i ≠ j). An orthonormal basis is a basis which is an orthonormal sequence.
What are the properties of an orthonormal sequence?
Suppose v1, . . . , vn is an orthonormal sequence in a euclidean space (V, τ).
1. If v = α1v1 + . . . + αnvn, then αi = τ(v, vi).
2. The sequence v1, . . . , vn is linearly independent.
Proof.
If v = α1v1 + . . . + αnvn, then
τ(v, vi) = τ(α1v1 + . . . + αnvn, vi) = α1τ(v1, vi) + . . . + αnτ(vn, vi) = αi τ(vi, vi) = αi.
The second statement follows immediately: if α1v1 + . . . + αnvn = 0, then each αi = τ(0, vi) = 0 by the first statement, so no nontrivial linear dependency exists.
What is the Gram-Schmidt Process?
Let V be a euclidean space of dimension n. Suppose that, for some r with 0 ≤ r ≤ n, f1, . . . , fr is an orthonormal sequence. Then f1, . . . , fr can be extended to an orthonormal basis f1, . . . , fn of V.
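The extension is produced by the Gram-Schmidt computation itself: subtract from each new vector its projections onto the vectors already constructed, then normalise; if the first r vectors are already orthonormal, the process leaves them unchanged. A numpy sketch for the standard dot product (gram_schmidt is our own helper):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalise a linearly independent sequence (standard dot product)."""
    ortho = []
    for v in vectors:
        u = v - sum(np.dot(v, f) * f for f in ortho)  # remove projections
        ortho.append(u / np.linalg.norm(u))           # normalise to length 1
    return ortho

fs = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                   np.array([1.0, 0.0, 1.0]),
                   np.array([0.0, 1.0, 1.0])])
print(np.allclose(np.array(fs) @ np.array(fs).T, np.eye(3)))  # True: orthonormal
```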
How does the matrix of a linear map act on vectors?
Let T : U → V be a linear map with matrix A = (αij) with respect to chosen bases of U and V. Then T(u) = v if and only if Au = v, where u and v are identified with their coordinate columns in those bases.