MA106 - Linear Algebra Theorems Flashcards
Every matrix can be brought to row reduced form by elementary row transformations.
We need an algorithm where:
After termination, the resulting matrix is in row reduced form.
The algorithm terminates after finitely many steps.
Call (i, j) the pivot position and aij the pivot entry. Begin with (i, j) = (1, 1).
- Step 1: if akj = 0 for all k ≥ i, move the pivot to (i, j+1) and repeat Step 1; terminate if j = n.
- If aij = 0 but akj ≠ 0 for some k > i, apply (R2) to swap ri and rk.
- Now aij ≠ 0; if aij ≠ 1, apply (R3) to multiply ri by 1/aij.
- Now aij = 1; for each k ≠ i with akj ≠ 0, use (R1) to subtract akj·ri from rk.
- Now akj = 0 for all k ≠ i. Terminate if i = m or j = n; otherwise move the pivot to (i+1, j+1) and go back to Step 1.
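A minimal Python sketch of this algorithm, using 0-indexed pivot positions and plain lists of floats (the function name rref is my own, not from the notes):

```python
def rref(A):
    """Bring A (a list of rows) to row reduced form in place."""
    m, n = len(A), len(A[0])
    i, j = 0, 0                          # pivot position, 0-indexed
    while i < m and j < n:
        # Step 1: if column j is zero from row i downwards, move right.
        k = next((k for k in range(i, m) if A[k][j] != 0), None)
        if k is None:
            j += 1
            continue
        # (R2): swap rows so the pivot entry is non-zero.
        A[i], A[k] = A[k], A[i]
        # (R3): scale row i so the pivot entry equals 1.
        p = A[i][j]
        A[i] = [x / p for x in A[i]]
        # (R1): clear every other entry in column j.
        for k in range(m):
            if k != i and A[k][j] != 0:
                f = A[k][j]
                A[k] = [x - f * y for x, y in zip(A[k], A[i])]
        i, j = i + 1, j + 1
    return A

print(rref([[0.0, 2.0, 4.0], [1.0, 1.0, 1.0]]))
# [[1.0, 0.0, -1.0], [0.0, 1.0, 2.0]]
```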
By applying elementary row and column operations, a matrix can be brought into the block form with an identity matrix in the top left and zeros everywhere else.
Use elementary row operations to row reduce A; then ai,c(i) = 1 for each non-zero row i.
For every aij ≠ 0 with j ≠ c(i), use (C1) to replace cj by cj − aij·cc(i), clearing the rest of row i.
For each i starting from i = 1, exchange ci and cc(i), putting all the zero columns at the right-hand side.
The vectors v1,v2,…,vn ∈ V are linearly dependent iff either v1 = 0 or, for some r, the vector vr is a linear combination of v1, . . . , vr−1.
If v1 = 0, set α1 = 1 and αi = 0 for all i > 1. This gives a non-trivial linear combination equal to 0.
If vr is a linear combination,
vr = α1v1 +···+αr−1vr−1
α1v1 + ··· + αr−1vr−1 − 1vr = 0
Conversely, suppose that v1, v2, …, vn ∈ V are linearly dependent, so that α1v1 + α2v2 + ··· + αnvn = 0 with not all αi zero. Let r be maximal with αr ≠ 0;
then α1v1 + α2v2 + ··· + αrvr = 0.
If r = 1, then α1v1 = 0 with α1 ≠ 0, which is only possible if v1 = 0.
Otherwise
vr = −(α1/αr)v1 − ··· − (αr−1/αr)vr−1.
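A quick NumPy illustration of the lemma (not part of the proof): if one vector is a linear combination of the others, the stacked matrix has rank below the number of vectors.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2                     # v3 is a combination of v1, v2
M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M))      # 2 < 3: the vectors are dependent
```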
The vectors v1,…,vn form a basis of V if and only if every v ∈ V can be written uniquely as v = α1v1 + · · · + αnvn; that is, the coefficients α1, . . . , αn are uniquely determined by the vector v.
Suppose v = α1v1 + ··· + αnvn = β1v1 + ··· + βnvn for scalars αi, βi ∈ K.
Then 0 = v − v = (α1 − β1)v1 + ··· + (αn − βn)vn, and linear independence gives αi = βi for all i.
Conversely, suppose every v ∈ V has a unique such expression; then v1, …, vn certainly span V.
If α1v1 + α2v2 + ··· + αnvn = 0 = 0v1 + 0v2 + ··· + 0vn,
uniqueness implies α1 = ··· = αn = 0. Hence the vi are linearly independent, and so form a basis.
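The unique coefficients are exactly the solution of a linear system: with the basis vectors as the columns of a matrix B, the coordinates of v solve Bα = v. A small NumPy check (example basis of my own choosing):

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # columns: basis vectors b1, b2 of R^2
v = np.array([3.0, 2.0])
alpha = np.linalg.solve(B, v)        # unique since B is invertible
print(alpha)                         # [1. 2.]: v = 1*b1 + 2*b2
```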
Suppose that the vectors v1,v2,…,vn,w span V and that w is a linear combination of v1,…,vn. Then v1,…,vn span V.
Any vector of V is a linear combination of v1, …, vn, w. Since w is itself a linear combination of the vi,
substitute this expression for w into the first linear combination; the result is a combination of v1, …, vn alone.
Suppose that the vectors v1, . . . , vr span the vector space V . Then there is a subsequence of v1,…,vr which forms a basis of V.
Sift v1, …, vr; the remaining vectors still span V.
No remaining vector is 0 or a linear combination of its predecessors,
hence the remaining vectors are linearly independent.
So they form a basis of V.
Let V be a vector space over K which has a finite spanning set, and suppose that the vectors v1, . . . , vr are linearly independent in V . Then we can extend the sequence to a basis v1,…,vn of V, where n≥r.
Suppose w1,…,wq is a spanning set.
Sift v1, …, vr, w1, …, wq. This results in a basis of V.
Since v1, …, vr are linearly independent, no vi is removed in the sifting. Hence the resulting basis contains them.
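Sifting can be mimicked numerically: keep a vector only if it raises the rank of the stack kept so far. A sketch (the function name sift is mine; NumPy's rank test stands in for "is zero or a linear combination of the earlier vectors"):

```python
import numpy as np

def sift(vectors):
    """Keep each vector that is neither 0 nor a linear combination
    of the vectors already kept (rank test on the stacked rows)."""
    kept = []
    for v in vectors:
        trial = kept + [v]
        if np.linalg.matrix_rank(np.array(trial)) == len(trial):
            kept.append(v)
    return kept

# Extend the independent vector (1, 1, 0) to a basis of R^3 by
# sifting it together with a standard spanning set:
print(sift([[1, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))
# [[1, 1, 0], [1, 0, 0], [0, 0, 1]]
```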
What is the Exchange Lemma?
Suppose that vectors v1,…,vn span V and that vectors w1, . . . , wm ∈ V are linearly independent. Then m ≤ n.
Insert the wi one at a time at the front of the list v1, …, vn, sifting after each insertion.
Since the vj span V, each enlarged list is linearly dependent; and since the wi are linearly independent, no wi is ever removed, so at least one vj is deleted at each stage.
In total m vectors wi are added, with at least one vj removed each time. Hence m ≤ n.
Let V be a vector space of dimension n over K. Then any n vectors which span V form a basis of V, and no n−1 vectors can span V.
Sift the n spanning vectors: after sifting, the remaining vectors form a basis.
A basis must have n = dim(V) vectors, so nothing is removed and the original n vectors form a basis. Similarly, if n − 1 vectors spanned V, sifting would produce a basis with fewer than n vectors, a contradiction.
Let V be a vector space of dimension n over K. Then any n linearly independent vectors form a basis of V and no n+1 vectors can be linearly independent.
Any linearly independent set is contained in a basis, and the extended set must have exactly n = dim(V) vectors; so the n given vectors already form a basis, and n + 1 linearly independent vectors would extend to a basis with more than n vectors, a contradiction.
If W1 and W2 are subspaces of V then so is W1 ∩ W2.
Show that W1 ∩ W2 contains 0 and is closed under addition and scalar multiplication. (Hint: start with two elements of W1 ∩ W2.)
If W1, W2 are subspaces of V then so is W1 + W2. In fact, it is the smallest subspace that contains both W1 and W2.
Show that W1 + W2 contains 0 and is closed under addition and scalar multiplication. (Hint: start with two elements of W1 + W2.)
Any subspace of V containing W1 and W2 must contain W1 + W2, hence W1 + W2 is the smallest such subspace.
Let V be a finite-dimensional vector space, and let W1, W2 be subspaces of V. Then
dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2).
Note that W1 ∩ W2 is a subspace of each of W1, W2, which are in turn subspaces of W1 + W2. Choose a basis of W1 ∩ W2 and extend it separately to a basis of W1 and a basis of W2. Show that the union of the two extended bases spans W1 + W2 and is linearly independent.
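A numerical check of the formula with NumPy (two planes in R^3, my own example; dimensions are computed as ranks of stacked spanning vectors):

```python
import numpy as np

W1 = np.array([[1.0, 0.0, 0.0],      # rows span W1 = <e1, e2>
               [0.0, 1.0, 0.0]])
W2 = np.array([[0.0, 1.0, 0.0],      # rows span W2 = <e2, e3>
               [0.0, 0.0, 1.0]])
d1 = np.linalg.matrix_rank(W1)                      # dim(W1) = 2
d2 = np.linalg.matrix_rank(W2)                      # dim(W2) = 2
d_sum = np.linalg.matrix_rank(np.vstack([W1, W2]))  # dim(W1 + W2) = 3
print(d1 + d2 - d_sum)   # 1 = dim(W1 ∩ W2), the line spanned by e2
```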
Let T:U→V be a linear map. Then
(i) T(0U)=0V;
(ii) T(−u) = −T(u) for all u ∈ U.
Use the definition of linearity.
(i) T(0U) = T(0U + 0U) = T(0U) + T(0U); subtracting T(0U) from both sides gives T(0U) = 0V.
(ii) Take α = −1 in T(αu) = αT(u).
Let U,V be vector spaces over K, let S be a basis of U and let f:S→V be any function assigning to each vector in S an arbitrary element of V . Then there is a unique linear map T:U→V such that for every s∈S we have T(s)=f(s).
Let u ∈ U and write u = α1s1 + ··· + αnsn with si ∈ S.
If T exists, then
T(u) = T(α1s1 + ··· + αnsn) = α1f(s1) + ··· + αnf(sn), so T would be unique. Conversely, since the αi are uniquely determined by u, this formula is well defined and gives a linear map, so T exists.
Let U,V be vector spaces over K of dimensions n,m, respectively. Then, for a given choice of bases of U and V , there is a one-one correspondence between the set HomK(U,V) of linear maps U → V and the set Km,n of m×n matrices over K.
Any linear map T : U → V determines an m × n matrix A over K, whose jth column records the coordinates of T(ej).
Conversely, let A = (aij) be an m × n matrix over K. Then there is exactly one linear map T : U → V with T(ej) = a1jf1 + a2jf2 + ··· + amjfm. Hence the correspondence is one-to-one.
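In NumPy terms: column j of the matrix is the image of the jth standard basis vector. A sketch with a hypothetical map of my own (rotation by 90° in R^2):

```python
import numpy as np

def matrix_of(T, n):
    """Matrix of a linear map T : R^n -> R^m, built column by column
    from the images of the standard basis vectors."""
    return np.column_stack([T(np.eye(n)[:, j]) for j in range(n)])

rotate90 = lambda u: np.array([-u[1], u[0]])   # a linear map R^2 -> R^2
print(matrix_of(rotate90, 2))
# [[ 0. -1.]
#  [ 1.  0.]]
```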
Let T : U → V be a linear map. Let the matrix A = (aij ) represent T with respect to chosen bases of U and V , and let u and v be the column vectors of coordinates of two vectors u ∈ U and v ∈ V , again with respect to the same bases. Then T(u)=v if and only if Au=v.
Summation proof: expand T(u) with u = u1e1 + ··· + unen by linearity and collect the coefficient of each fi; the coefficient of fi is ai1u1 + ··· + ainun, the ith entry of Au.
- Let T1, T2 : U → V be linear maps with m × n matrices A, B respectively. Then the matrix of T1 + T2 is A + B.
- Let T : U → V be a linear map with m × n matrix A and let λ ∈ K be a scalar. Then the matrix of λT is λA.
Use the definitions of matrix addition and scalar multiplication, comparing entries.
Let T1: V → W be a linear map with l×m matrix A = (aij) and let T2: U → V be a linear map with m×n matrix B = (bij). Then the matrix of the composite map T1T2 : U → W is AB.
Summation proof: expand T1(T2(ej)) by linearity and compare the resulting coefficients with the entries of AB.
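A quick NumPy check that composition of maps matches the matrix product (random matrices of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))      # matrix of T1 : V -> W
B = rng.standard_normal((4, 2))      # matrix of T2 : U -> V
u = rng.standard_normal(2)
# Applying T2 then T1 agrees with the single matrix AB:
print(np.allclose(A @ (B @ u), (A @ B) @ u))   # True
```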
Let T : U → V be a linear map. Then
(i) im(T) is a subspace of V ;
(ii) ker(T) is a subspace of U.
Show they are closed under addition and scalar multiplication.
What is the Rank-Nullity Theorem?
Let U, V be vector spaces over K with U finite-dimensional, and let T : U → V be a linear map. Then:
rank(T) + nullity(T) = dim(U).
ker(T) is a subspace of the finite-dimensional space U, hence also finite-dimensional.
Choose a basis e1, …, es of ker(T) and extend it to a basis e1, …, es, f1, …, ft of U; show that dim(im(T)) = t.
T(e1), …, T(es), T(f1), …, T(ft) span im(T), but T(ei) = 0,
so we are left with the T(fi) spanning im(T).
Suppose for contradiction that the T(fi) are linearly dependent; then a non-trivial combination of the fi lies in ker(T), contradicting the choice of basis. Hence the T(fi) form a basis of im(T), and rank(T) + nullity(T) = t + s = dim(U).
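A numerical illustration with NumPy and SciPy (scipy.linalg.null_space computes an orthonormal basis of the kernel, independently of the rank computation):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])            # rank 1, acting on R^3
rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]
print(rank, nullity, rank + nullity == A.shape[1])   # 1 2 True
```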
Let T : U → V be a linear map, and suppose that dim(U) = dim(V) = n. Then the following properties of T are equivalent:
(i) T is surjective;
(ii) rank(T ) = n;
(iii) nullity(T ) = 0;
(iv) T is injective;
(v) T is bijective;
(i) implies im(T) = V, so rank(T) = dim(V) = n, which gives (ii).
Conversely, if rank(T) = n then im(T) is an n-dimensional subspace of the n-dimensional space V, so im(T) = V.
(ii) ⇔ (iii) by the rank-nullity theorem.
If T is injective then ker(T) = {0}, so (iv) gives (iii).
Conversely, suppose nullity(T) = 0 and T(u) = T(u′). Then T(u − u′) = 0, so u − u′ ∈ ker(T) = {0} and u = u′; hence T is injective.
(v) holds if and only if (i) and (iv) both hold.
Let T : U → V be a linear transformation, and let e1,…,en be a basis of U. Then the rank of T is equal to the size of the largest linearly independent subset of T(e1),…,T(en).
im(T) is spanned by the T(ei), hence some subsequence of T(e1), …, T(en) is a basis of im(T).
This basis has size dim(im(T)) = rank(T), and no larger subset can be linearly independent.
Suppose that the linear map T has matrix A. Then rank(T ) is equal to the column rank of A.
The columns c1, …, cn of A are precisely the column vectors of coordinates of T(e1), …, T(en) with respect to the chosen basis of V.
So rank(T) is the size of the largest linearly independent subset of the columns,
which is the column rank of A by definition.
Applying elementary row operations (R1), (R2) or (R3) to a matrix does not change the row or column rank. The same is true for elementary column operations (C1), (C2) and (C3).
The row rank of A is dim(rowspace(A)), where rowspace(A) is the space of linear combinations λ1r1 + ··· + λmrm of the rows of A. (R1), (R2) and (R3) do not change this space, so they do not change the row rank.
Columns ci1, …, cis are linearly independent if and only if
x1ci1 + x2ci2 + ··· + xscis = 0
forces x1 = ··· = xs = 0. Expanding these column vectors gives a system of linear equations; check that each (Ri) leaves the solution set unchanged, and hence leaves the linear (in)dependence of any set of columns unchanged.
The proof for column operations is the same with rows and columns swapped.
Let s be the number of non-zero rows in the row and column reduced form of a matrix A. Then both row rank of A and column rank of A are equal to s.
Elementary operations preserve both ranks, so both can be computed from the reduced form.
Its row space is precisely the space spanned by the first s standard basis vectors, which has dimension s.
Similarly, its column space has dimension s.
The rank of a matrix A is equal to the number of non-zero rows after reducing A to upper echelon form.
Show by contradiction that the non-zero rows are linearly independent; they clearly span the row space.
Let A be a matrix of a linear map T. A linear map T is invertible if and only if its matrix A is invertible. The inverses T−1 and A−1 are unique.
Since the inverse map of a bijection is unique, T−1 is unique. Under the bijection between matrices and linear maps, A−1 must be the matrix of T−1. Thus, A−1 is unique as well.
If A and B are invertible n × n matrices, then AB is invertible, and (AB)−1 = B−1A−1.
(AB)(B^−1A^−1) = A(BB^−1)A^−1 = In, and similarly (B^−1A^−1)(AB) = In.
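A one-line NumPy sanity check of this identity (random matrices of my own choosing, invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))   # True
```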
The row reduced form of an invertible n × n matrix A is In.
If A is invertible then rank(A) = n. Let B = (bij) be its row reduced form; B has n non-zero rows, with bi,c(i) = 1 for 1 ≤ i ≤ n, where c(1) < c(2) < ··· < c(n). With n pivot columns among only n columns, this forces c(i) = i, so B = In.
An invertible matrix is a product of elementary matrices.
The inverse of an elementary matrix is elementary: E(n)1λ,i,j has inverse E(n)1−λ,i,j; E(n)2i,j is its own inverse; and E(n)3λ,i has inverse E(n)3λ−1,i.
So if ErEr−1 ··· E1A = In, then
A = (ErEr−1 ··· E1)^−1 = E1^−1E2^−1 ··· Er^−1,
a product of elementary matrices.
Let A be an n × n matrix. Then
(i) the homogeneous system of equations Ax = 0 has a non-zero solution if and only if A is singular;
(ii) the equation system Ax = b has a unique solution if and only if A is non-singular.
The solution set of the equations is exactly nullspace(A), and
nullspace(A) = ker(T) = {0} ⇐⇒ nullity(T) = 0 ⇐⇒ T is non-singular,
which gives (i).
If A is singular then its nullity is greater than 0, so its nullspace is not {0} and contains more than one vector. Either there are no solutions, or the solution set is x + nullspace(A) for some specific solution x, in which case there is more than one solution. Hence there cannot be a unique solution when A is singular.
Conversely, if A is non-singular then it is invertible by Theorem 10.2, and one solution is x = A^−1b. Since the complete solution set is then x + nullspace(A), and nullspace(A) = {0} in this case, the solution is unique.
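NumPy's solver illustrates part (ii) (example system of my own; np.linalg.solve raises LinAlgError on a singular matrix):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0]])          # det(A) = 1, so A is non-singular
b = np.array([3.0, 5.0])
print(np.linalg.solve(A, b))        # [1. 2.], the unique solution
```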
Elementary row operations affect the determinant of a matrix as follows.
(i) det(In) = 1.
(ii) Let B result from A by applying (R2). Then det(B) = −det(A).
(iii) If A has two equal rows, then det(A) = 0.
(iv) Let B result from A by applying (R1). Then det(B) = det(A).
(v) Let B result from A by applying (R3), multiplying a row by λ. Then det(B) = λ det(A).
Use the definition of the determinant.
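A NumPy spot check of (ii) and (v) on a small example of my own:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])                    # det(A) = -1
swapped = A[[1, 0]]                           # (R2): swap the two rows
print(np.isclose(np.linalg.det(swapped), -np.linalg.det(A)))    # True
scaled = A.copy()
scaled[0] *= 4.0                              # (R3): multiply row 1 by 4
print(np.isclose(np.linalg.det(scaled), 4 * np.linalg.det(A)))  # True
```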
If A = (aij) is upper triangular, then det(A) = a11a22 ··· ann is the product of the entries on the main diagonal of A.
Apply row operations (R1) to reduce the matrix to the diagonal matrix with the same aii on the main diagonal.
Then use:
(i) det(In) = 1;
(v) applying (R3) to factor out each aii multiplies the determinant by aii, giving det(A) = a11a22 ··· ann.
Let A = (aij) be an n × n matrix. Then det(AT) = det(A).
Use the definition of the determinant, pairing each permutation with its inverse.
For an n × n matrix A, det(A) = 0 if and only if A is singular.
A can be brought to row reduced echelon form B. rank(A) is not affected, hence neither is whether or not A is singular; and each row operation multiplies the determinant by a non-zero scalar, so det(A) = 0 ⇐⇒ det(B) = 0. When rank(A) < n, B has a zero row and det(B) = 0; when rank(A) = n, B = In and det(B) = 1 ≠ 0.
If E is an n×n elementary matrix, and B is any n×n matrix, then det(EB) = det(E) det(B).
Compute det(EB) for each type of elementary matrix E using the row-operation rules; then set B = In to find each det(E) and compare, giving det(EB) = det(E) det(B).
For any two n × n matrices A and B, we have det(AB) = det(A) det(B).
If det(A) = 0 then A is singular, so rank(A) < n; then rank(AB) < n as well, so det(AB) = 0 = det(A) det(B). Otherwise A is invertible, hence a product of elementary matrices, and applying the previous result repeatedly gives det(AB) = det(A) det(B).
Let A be an n × n matrix.
det(A) = ai1ci1 + ai2ci2 + ··· + aincin (expansion along row i);
det(A) = a1jc1j + a2jc2j + ··· + anjcnj (expansion along column j), where cij denotes the (i, j) cofactor.
Use the definition of the determinant.
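Expansion along the first row translates directly into a recursive (and deliberately naive, exponential-time) Python function:

```python
def det(A):
    """Determinant by cofactor expansion along the first row.
    A is a square matrix given as a list of lists."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]   # delete row 1, column j
        total += (-1) ** j * A[0][j] * det(minor)        # sign (-1)^(1+j), 0-indexed
    return total

print(det([[1, 2], [3, 4]]))   # -2
```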
A adj(A) = det(A)In = adj(A) A
….
If det(A) ≠ 0, then A^−1 = (1/det(A)) adj(A).
…
Let A be an n × n matrix. Then λ is an eigenvalue of A if and only if det(A − λIn) = 0.
…
Similar matrices have the same characteristic equation and hence the same eigenvalues.
…
Let T : V → V be a linear map. Then the matrix of T is diagonal with respect to some basis of V if and only if V has a basis consisting of eigenvectors of T.
Equivalently, let A be an n × n matrix over K. Then A is similar to a diagonal matrix if and only if the space Kn,1 has a basis of eigenvectors of A.
…
Let λ1, …, λr be distinct eigenvalues of T : V → V, and let v1, …, vr be corresponding eigenvectors. (So T(vi) = λivi for 1 ≤ i ≤ r.) Then v1, …, vr are linearly independent.
…
If the linear map T : V → V (or equivalently the n × n matrix A) has n distinct eigenvalues, where n = dim(V ), then T (or A) is diagonalisable.
…
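A NumPy illustration of the diagonalisation criterion (matrix of my own with distinct eigenvalues 2 and 3):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])              # distinct eigenvalues 2, 3
w, P = np.linalg.eig(A)                 # columns of P are eigenvectors
D = np.diag(w)
print(np.allclose(np.linalg.inv(P) @ A @ P, D))   # True: P^-1 A P is diagonal
```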