Lecture 2 - System of linear equations Flashcards
Introduction, basic assumptions. Direct algorithms: Gauss, LU, Cholesky
Linear equation
Involves the unknown x only in the first power (exponent n = 1)
Algebraic equation
W(x) = 0, where W(x) is a polynomial
Homogeneous equation
Polynomial equation with no free (constant) coefficient, a0 = 0
Possible solutions
ONE SOLUTION
System is consistent and independent
INFINITE SOLUTIONS
System is consistent and dependent
NO SOLUTION
System is inconsistent
Matrix equation
We can convert a system of linear equations into the matrix equation Ax = b, where
-> A - system matrix, left hand side of equations, coefficients
-> x - vector of unknowns
-> b - vector of right hand side, free coefficients
b = 0: the system is homogeneous
otherwise the system is inhomogeneous
Kronecker-Capelli theorem
A is a matrix, n - number of variables, [A b] - augmented matrix, r() - rank
- > one solution <=> r(A) = r([A b]) = n
- > infinite solutions <=> r(A) = r([A b]) < n
- > no solution <=> r(A) != r([A b])
Elementary row operations do not change the rank (the number of linearly independent columns of a matrix)
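The Kronecker-Capelli classification above can be checked numerically. A minimal NumPy sketch (illustrative, not from the lecture; the function name `classify_system` is my own):

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b via the Kronecker-Capelli theorem."""
    n = A.shape[1]                                        # number of unknowns
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))  # augmented matrix [A b]
    if rA != rAb:
        return "no solution"
    return "one solution" if rA == n else "infinite solutions"

# Consistent, independent system: x + y = 3, x - y = 1
print(classify_system(np.array([[1., 1.], [1., -1.]]), np.array([3., 1.])))
```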
Solution methods
- > DIRECT - obtain the exact solution after a fixed, finite number of operations, e.g. Cramer, Gauss
- > ITERATIVE - generate a sequence of solution vectors that converges to the true solution of the system, e.g. Jacobi, Gauss-Seidel
Computational complexity
- > basic arithmetic operations are called floating-point operations (flops)
- > the complexity of an algorithm is the total number of floating-point operations needed, as a function of the input dimension
- > in practice this value is often approximated with big-O notation, O()
- > e.g. vector addition x + y needs n operations - linear complexity, O(n)
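To make the flop count concrete, here is a small sketch of vector addition written element by element (my own illustration; NumPy of course does this in one vectorized call):

```python
import numpy as np

def vector_add(x, y):
    """Adds two length-n vectors: exactly n floating-point additions -> O(n)."""
    n = len(x)
    z = np.empty(n)
    for i in range(n):      # one flop per component
        z[i] = x[i] + y[i]
    return z
```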
Gauss Elimination (Direct method)
Two steps:
- Transformation of the system matrix to triangular form using elementary row operations (forward elimination)
- Backward (or forward) substitution on the resulting triangular system
- > computational complexity O(n^3)
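The two steps can be sketched as follows (a minimal illustration, assuming A is square and nonsingular; the pivoting strategy matches the remark on choosing a pivot with large absolute value):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting,
    then backward substitution. O(n^3) overall."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Step 1: reduce A to upper-triangular form with elementary row operations
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))       # pivot row: largest |value|
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Step 2: backward substitution on the triangular system
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```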
LU Factorization
L - lower triangular matrix
U - upper triangular matrix
A variant of Gauss elimination
Digression - symbolic factorization, numerical factorization, solution
A = L*U
Process:
1. Generate L and U, so we have LUx = b
2. Define the auxiliary vector y = Ux
3. Ly = b solve using the forward substitution
4. Solve Ux = y using backward substitution
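The four steps above can be sketched in NumPy (an illustrative Doolittle-style factorization without pivoting, so it assumes every pivot is nonzero; step 2, defining y = Ux, is implicit):

```python
import numpy as np

def lu_solve(A, b):
    """Solve Ax = b via LU factorization, then forward and backward substitution."""
    n = len(b)
    L = np.eye(n)
    U = A.astype(float).copy()
    # Step 1: generate L and U (Gauss elimination, multipliers stored in L)
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    # Step 3: Ly = b by forward substitution
    y = np.empty(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    # Step 4: Ux = y by backward substitution
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

Once L and U are computed, the same factorization can be reused to solve for many different right-hand sides b, which is the main practical advantage over repeating full Gauss elimination.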
Cholesky (Banachiewicz) Factorization
We can use this only if the matrix is symmetric and positive-definite.
A = L * L^T
About half the number of operations compared to LU
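NumPy provides this factorization directly; a quick check that A = L * L^T holds for a small symmetric positive-definite example:

```python
import numpy as np

# Symmetric positive-definite matrix
A = np.array([[4., 2.],
              [2., 3.]])
L = np.linalg.cholesky(A)          # lower-triangular factor
assert np.allclose(L @ L.T, A)     # verifies A = L * L^T
```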
Elementary row operations
- > Interchanging the position of two rows
- > Multiplying a particular row by a nonzero constant
- > Replacing a particular row by that row plus a nonzero multiple of another row
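All three operations are easy to demonstrate on a NumPy array, together with the earlier remark that they preserve the rank (my own illustration):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
r = np.linalg.matrix_rank(A)

A[[0, 1]] = A[[1, 0]]                   # interchange two rows
A[0] *= 5.0                             # multiply a row by a nonzero constant
A[1] += -3.0 * A[0]                     # add a nonzero multiple of another row
assert np.linalg.matrix_rank(A) == r    # the rank is unchanged
```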
Remarks
- > approximately 75% of supercomputers' computational time is used to solve systems of linear equations
- > Cramer's method and explicit matrix inversion are NOT used in practice
- > The algorithms require that the pivot element is NOT zero. In such a situation an interchange of rows is necessary
- > It is generally desirable to choose a pivot element with a LARGE absolute value, as this improves numerical stability