VGLA Theorems term 2 Flashcards
multiplication in mod-arg
given any two non-zero complex numbers z₁, z₂
(i) |z₁z₂| = |z₁||z₂|
(ii) arg(z₁z₂) = arg(z₁) + arg(z₂)
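These two identities can be checked numerically — a quick sketch using Python's cmath with arbitrary sample values (note arg(z₁) + arg(z₂) may differ from arg(z₁z₂) by a multiple of 2π):

```python
import cmath
import math

z1 = 3 + 4j
z2 = 1 - 2j

# (i) modulus of a product is the product of the moduli
assert math.isclose(abs(z1 * z2), abs(z1) * abs(z2))

# (ii) argument of a product is the sum of the arguments, up to 2*pi
diff = cmath.phase(z1 * z2) - (cmath.phase(z1) + cmath.phase(z2))
assert math.isclose(math.cos(diff), 1.0)
```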
division in mod-arg
given any two non-zero complex numbers z₁ z₂
(i) |z₁/z₂| = |z₁|/|z₂|
(ii) arg(z₁/z₂) = arg(z₁) - arg(z₂)
De Moivre's theorem
If n∈ℕ and θ is real, then:
(cos(θ) + i sin(θ))ⁿ = cos(nθ) + i sin(nθ)
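A numerical sketch of the theorem for one arbitrary sample angle and exponent:

```python
import cmath
import math

theta = 0.7  # arbitrary sample angle
n = 5

lhs = (math.cos(theta) + 1j * math.sin(theta)) ** n
rhs = math.cos(n * theta) + 1j * math.sin(n * theta)
assert cmath.isclose(lhs, rhs)
```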
Euler's formula
z = re^(iθ), where r = |z| and θ = arg(z)
factors of polynomials
The complex number α is a root of the polynomial equation p(z)=0 if and only if (z-α) is a factor of the polynomial p(z)
pair of complex roots
Any non-real roots of a polynomial equation p(z)=0 of degree n with real coefficients occur in complex conjugate pairs
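A worked sketch with a hypothetical example quadratic: p(z) = z² - 2z + 5 has real coefficients, and its two non-real roots 1 ± 2i form a conjugate pair.

```python
import cmath

# p(z) = z^2 - 2z + 5 has real coefficients
a, b, c = 1, -2, 5
d = cmath.sqrt(b * b - 4 * a * c)      # sqrt(-16) = 4i
r1 = (-b + d) / (2 * a)                # 1 + 2i
r2 = (-b - d) / (2 * a)                # 1 - 2i

# non-real roots come in a complex conjugate pair
assert cmath.isclose(r1, r2.conjugate())

# both really are roots of p(z) = 0
assert cmath.isclose(a * r1 * r1 + b * r1 + c, 0, abs_tol=1e-12)
assert cmath.isclose(a * r2 * r2 + b * r2 + c, 0, abs_tol=1e-12)
```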
how factors are expressed
A polynomial p(z) can be expressed as a product of real factors in one of the following ways
- product of linear factors only
- product of irreducible quadratic factors only
- product of at least one linear factor and at least one irreducible quadratic factor
number of factors
A polynomial p(z) of degree n with real coefficients has exactly n zeros, some of which may occur in complex conjugate pairs
fundamental theorem of algebra
Any polynomial equation of degree n ≥ 1 with complex coefficients has at least one complex root
|z-z₀| = α
The set of solutions to the equation is represented on the Argand diagram by the points on a circle with centre z₀ and radius α
Re(z) = α / Im(z) = α
The set of solutions is represented on the Argand diagram by the points on a vertical line where all real parts equal α, or on a horizontal line where all imaginary parts equal α
Arg(z-z₀) = θ
The set of solutions is represented on the Argand diagram by a half-line from z₀ at angle θ
|z-(a+bi)|=|z-(c+di)|
The set of solutions is represented on the Argand diagram by the perpendicular bisector of the line segment between the points a+bi and c+di
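A quick numerical sketch of this last locus, using two hypothetical sample points: the midpoint of the segment satisfies the equation, while a point off the bisector does not.

```python
w1, w2 = 1 + 1j, 5 + 3j   # the fixed points a+bi and c+di
mid = (w1 + w2) / 2       # midpoint lies on the perpendicular bisector
assert abs(mid - w1) == abs(mid - w2)

p = 0 + 0j                # a point off the bisector is not equidistant
assert abs(p - w1) != abs(p - w2)
```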
Expansion of any row or column
For any matrix A=[aᵢⱼ]∈Mₙₙ:
det(A) = aᵢ₁Cᵢ₁(A) + aᵢ₂Cᵢ₂(A) + … + aᵢₙCᵢₙ(A) for any 1 ≤ i ≤ n (expansion along row i), and similarly det(A) = a₁ⱼC₁ⱼ(A) + a₂ⱼC₂ⱼ(A) + … + aₙⱼCₙⱼ(A) for any 1 ≤ j ≤ n (expansion along column j)
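The row expansion translates directly into a recursive determinant — a sketch expanding along the first row (`det` is a hypothetical helper name, matrices as lists of lists):

```python
def det(a):
    """Determinant of a square matrix by cofactor expansion along
    the first row: sum over j of (-1)^j * a[0][j] * det(minor)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 0], [5, 3, 0], [1, 7, 4]]) == 24  # lower triangular: 2*3*4
```

The second assertion previews the triangular-matrix cards below: the expansion collapses to the product of the diagonal entries.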
determinant of an upper triangular matrix
product of the diagonal
determinant of a lower triangular matrix
product of the diagonal
determinant of a diagonal matrix
product of the diagonal
determinant of the unit matrix
1
determinant of a matrix where there is a row or column of 0
0
determinant where there is a repeated row or column
0
interchanging two rows/columns when finding the determinant
det(P) = - det(A)
multiplying a row/column by a constant λ when finding the determinant
det(P) = λdet(A)
adding a multiple of another row/column when finding the determinant
det(P) = det(A)
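The three row-operation rules above can be checked on a 2×2 example (a sketch; `det2` is a hypothetical helper computing ad - bc):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

A = [[2, 5], [1, 3]]  # det(A) = 1

# interchanging two rows flips the sign
assert det2([[1, 3], [2, 5]]) == -det2(A)

# multiplying a row by a constant lambda scales the determinant by lambda
lam = 4
assert det2([[2 * lam, 5 * lam], [1, 3]]) == lam * det2(A)

# adding a multiple of another row leaves the determinant unchanged
k = 7
assert det2([[2 + k * 1, 5 + k * 3], [1, 3]]) == det2(A)
```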
determinants of elementary matrix products
det(E·A) = det(E)·det(A)
non-invertible matrices
If the matrix A is non-invertible then det(A)=0
Determinant product theorem
Given the matrices A and B then det(AB) = det(A)det(B)
invertible matrices
A matrix A is invertible if and only if det(A)!=0. When det(A)!=0 det(A⁻¹)= (det(A))⁻¹ = 1/det(A)
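A sketch of the det(A⁻¹) = 1/det(A) identity for a hypothetical 2×2 matrix, using exact Fraction arithmetic and the standard 2×2 adjugate formula:

```python
from fractions import Fraction as F

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

A = [[F(4), F(2)], [F(1), F(1)]]
d = det2(A)                      # det(A) = 2 != 0, so A is invertible

# 2x2 inverse: (1/det A) * [[d, -b], [-c, a]]
inv = [[A[1][1] / d, -A[0][1] / d],
       [-A[1][0] / d, A[0][0] / d]]

assert det2(inv) == 1 / d        # det(A^-1) = 1/det(A)
```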
non-trivial solution corollary
A homogeneous system A.x=0 of n equations in n unknowns has a non-trivial solution if and only if det(A) = 0
Transpose of a product
Given matrices A, B, then (A·B)ᵀ = Bᵀ·Aᵀ
Transpose of an elementary matrix
Given an elementary matrix E then |Eᵀ|=|E|
determinant of transpose matrix
For a matrix A det(Aᵀ) = det(A)
replacement of summed rows
For a matrix A=[aᵢⱼ]∈Mₙₙ where, for some fixed row p, aₚⱼ=bₚⱼ+cₚⱼ for 1 ≤ j ≤ n, then:
det(A) = det(B) + det(C)
where the matrix B is constructed by replacing row p in A by the values bₚⱼ and the matrix C is constructed by replacing row p in A with the values cₚⱼ
Inverse of a matrix
For a matrix A∈Mₙₙ, A·adj(A) = adj(A)·A = det(A)·Iₙ. In particular, when det(A)≠0, A⁻¹ = (1/det(A))·adj(A)
Cramer's rule
If A∈Mₙₙ is invertible, then the unique solution of the system A.x=b of n linear equations in n unknowns is given by x₁=det(A₁)/det(A), x₂=det(A₂)/det(A), …, xₙ=det(Aₙ)/det(A).
For each k=1,2,…n the matrix Aₖ is obtained by replacing the entries in column k of A by the entries in the column vector b
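A worked sketch of the rule on a hypothetical 2×2 system (2x + y = 5, x + 3y = 10), with `det2` a hypothetical helper and Fractions for exact arithmetic:

```python
from fractions import Fraction as F

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

A = [[F(2), F(1)], [F(1), F(3)]]
b = [F(5), F(10)]
d = det2(A)                              # det(A) = 5, so A is invertible

A1 = [[b[0], A[0][1]], [b[1], A[1][1]]]  # replace column 1 of A with b
A2 = [[A[0][0], b[0]], [A[1][0], b[1]]]  # replace column 2 of A with b
x = det2(A1) / d                         # x = 1
y = det2(A2) / d                         # y = 3

# the pair (x, y) really solves the system
assert [2 * x + y, x + 3 * y] == b
```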
unique identities
if a binary operation on a set V has an identity it is unique
zero vector uniqueness
The zero vector (identity) in a real vector space (V, +, ·) is unique
inverse of addition uniqueness
In a real vector space, V, the inverse with respect to the addition on V of a vector v is unique
scalar multiplication distributivity
Consider m vectors v₁, v₂, … vₘ∈V in a real vector space V, with m≥2, and λ∈ℝ. Then
λ(v₁ + v₂ + … + vₘ) = λv₁ + λv₂ + … + λvₘ
scalar multiplication distributivity (inside out)
Consider a vector v∈V, with V a real vector space, and m real numbers λ₁, λ₂, …, λₘ∈ℝ, with m≥2. Then
(λ₁ + λ₂ + … + λₘ)v = λ₁v + λ₂v + … + λₘv
inverses and identities of vector spaces theorem
(0 denotes the zero vector)
If V is a real vector space, then
- ∀v∈V: 0v = 0
- ∀λ∈ℝ: λ0 = 0
- ∀λ∈ℝ, ∀v∈V: if λv = 0 then either λ=0 (scalar) or v=0
- ∀λ∈ℝ, ∀v∈V: (-λ)v = λ(-v) = -λv
inverse of a vector (for addition)
∀v∈V: (-1)v = -v
subspace of a vector space theorem
A non-empty subset U of a real vector space V is a subspace of V if and only if
1. ∀u,v∈U: u+v∈U
2. ∀u∈U, ∀λ∈ℝ: λu∈U
in other words U is a subspace of V iff U is closed under addition and scalar multiplication
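A concrete sketch of the closure test for a hypothetical subset of ℝ² (vectors as tuples): the line U = {(t, 2t)} through the origin passes both conditions.

```python
# U = {(t, 2t)} in R^2, i.e. the line y = 2x through the origin
def in_U(v):
    return v[1] == 2 * v[0]

u, w = (1, 2), (3, 6)   # two sample vectors in U
lam = -5                # a sample scalar

# 1. closed under addition: u + w is in U
assert in_U((u[0] + w[0], u[1] + w[1]))

# 2. closed under scalar multiplication: lam * u is in U
assert in_U((lam * u[0], lam * u[1]))
```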
subspace of spanning sets theorem
Suppose that U = span{u₁, u₂, … uₖ} where u₁, u₂, … uₖ∈V with V a real vector space. Then
- u₁, u₂, … uₖ∈U
- U is a subspace of V
- U is the smallest subspace of V containing each of the vectors u₁, u₂, … uₖ, in the sense that if Ú is another subspace of V which contains all k of those vectors, then U⊆Ú
row and column space of a transpose matrix
Consider a matrix A∈Mₘₙ. Then,
(i) row(A) = col(Aᵀ)
(ii) col(A) = row(Aᵀ)
linear dependency due to scalar multiples theorem
A set {u₁, u₂} of two non-zero vectors in a real vector space V is linearly dependent if and only if u₂ is a scalar multiple of u₁
linear independency due to scalar multiple
A set {u₁, u₂} of two non-zero vectors in a real vector space V is linearly independent if and only if u₂ is not a scalar multiple of u₁
linear combinations and linear dependency
Consider an ordered set of k non-zero vectors in a real vector space V, S = {u₁, u₂, … uₖ}. The set S is linearly dependent if and only if there is a vector of the set that can be expressed as a linear combination of preceding vectors
number of vectors in linearly independent sets vs spanning sets theorem
Let S be a set of k vectors in a real vector space V which spans V. Let T be a linearly independent set of m vectors in V. Then
m≤k
(the number of vectors in a linearly independent set in V cannot exceed the number of vectors in a spanning set of V)