Lecture 3 - Systems of linear equations (continued) Flashcards
Iterative algorithms: Jacobi, Gauss-Seidel; matrix norms
Differences and basics
Direct algorithms -> O(n^3)
Iterative methods -> O(n^2) per iteration, assuming they CONVERGE
The solution by means of iterative algorithms starts with an initial guess vector x1, which approximates the exact solution x
Errors
x - solution
x’ - approximate solution
e - solution error (e = x - x’)
In practical cases it’s NOT possible to compute the real error e, since we DON’T know the true solution vector x.
In order to assess the error introduced by x’, we can use the residual vector r = A*x’ - b
if x’ = x, then r = 0
BUT small value of r DOESN’T ALWAYS guarantee a small real error
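A quick NumPy sketch of this pitfall, using a made-up nearly singular matrix: the residual is tiny, yet the approximate solution is far from the true one.

```python
import numpy as np

# Nearly singular (ill-conditioned) system: the two rows are almost identical.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
x_true = np.array([1.0, 1.0])
b = A @ x_true

# A poor approximate solution...
x_approx = np.array([2.0, 0.0])

r = A @ x_approx - b   # residual
e = x_true - x_approx  # real error

print(np.linalg.norm(r))  # tiny residual (~1e-4)
print(np.linalg.norm(e))  # large real error (~1.41)
```

So for ill-conditioned matrices, a small residual can hide a large real error.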
Sometimes it’s more convenient to use a scalar error measure.
Vector norms
Norm - function that assigns a real-valued length to each vector
CONDITIONS:
- Norm of a nonzero vector is positive
- Scaling a vector scales its norm by the same amount
- The norm of a vector sum doesn’t exceed the sum of norms of its parts (triangle inequality)
Unit balls
Equation for the p-norm (for a 2-element vector with components x and y):
(abs(x)^p + abs(y)^p)^(1/p)
FIRST NORM/ norm one
Sum of absolute values of all elements of the vector.
The unit ball looks like a diamond
SECOND NORM/ norm two
We square all elements of the vector and take the square root of their sum.
The unit ball looks like a circle
INFINITY norm
Maximum of the absolute values of the elements of the vector
The unit ball looks like a square
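The three norms above can be checked with NumPy (the vector here is just an example):

```python
import numpy as np

v = np.array([3.0, -4.0])

print(np.linalg.norm(v, 1))       # 1-norm:   |3| + |-4| = 7
print(np.linalg.norm(v, 2))       # 2-norm:   sqrt(9 + 16) = 5
print(np.linalg.norm(v, np.inf))  # inf-norm: max(|3|, |-4|) = 4
```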
Jacobi
The vector from the previous iteration is used to compute the elements of the vector in the subsequent iteration: Xk is used to compute Xk+1
So, in the first iteration we are using only elements from the initial (guessed) vector
Converges if:
- > spectral radius of the iteration matrix is smaller than 1 (necessary and sufficient)
- > A is diagonally dominant (sufficient)
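A minimal element-wise Jacobi sketch (function name and the example system are mine; a fixed iteration count stands in for a real stopping criterion):

```python
import numpy as np

def jacobi(A, b, x0, iterations=50):
    """Element-wise Jacobi: every x_new[i] uses only the PREVIOUS
    iterate x, never the freshly computed values."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(iterations):
        x_new = np.empty(n)
        for i in range(n):
            # Sum over off-diagonal terms with the old iterate.
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new
    return x

# Diagonally dominant example, so convergence is guaranteed.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b, np.zeros(2))
print(x)  # close to np.linalg.solve(A, b)
```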
Gauss-Seidel
Works the same as Jacobi, but the most recent information is always used. So, when we are computing element 6 in iteration k+1, we are using elements 0-5 from the (k+1)-th vector, and elements 7 to n from the k-th vector
Converges if either:
- > A is symmetric positive-definite
- > A is diagonally dominant
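A Gauss-Seidel sketch under the same assumptions as the Jacobi one (my function name, my example system): the only change is that x is updated in place, so element i immediately sees the already-updated elements 0..i-1.

```python
import numpy as np

def gauss_seidel(A, b, x0, iterations=50):
    """Gauss-Seidel: update x IN PLACE, so element i uses the
    already-updated elements 0..i-1 of the current iteration."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(A, b, np.zeros(2))
print(x)  # close to np.linalg.solve(A, b)
```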
Division of the matrix A
A = -(L+U) + D
U - strictly upper triangular part
L - strictly lower triangular part
D - diagonal
L and U are NOT the same as in the LU factorization
Jacobi in matrix form
x_{k+1} = D^(-1) (L+U) x_k + D^(-1) b
Gauss-Seidel in matrix form
x_{k+1} = (D-L)^(-1) U x_k + (D-L)^(-1) b
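Both matrix forms can be sketched with the A = -(L+U) + D splitting above (the example matrix is mine; `np.linalg.solve` stands in for the explicit inverses):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Splitting A = -(L+U) + D, so L and U carry a minus sign.
D = np.diag(np.diag(A))
L = -np.tril(A, -1)  # strictly lower part, negated
U = -np.triu(A, 1)   # strictly upper part, negated
assert np.allclose(A, D - L - U)

x_j = np.zeros(2)
x_gs = np.zeros(2)
for _ in range(50):
    # Jacobi:       x_{k+1} = D^(-1)((L+U) x_k + b)
    x_j = np.linalg.solve(D, (L + U) @ x_j + b)
    # Gauss-Seidel: x_{k+1} = (D-L)^(-1)(U x_k + b)
    x_gs = np.linalg.solve(D - L, U @ x_gs + b)

print(x_j, x_gs)  # both close to np.linalg.solve(A, b)
```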
Remarks
- > The convergence properties of these methods depend on the matrix A
- > Usually the Gauss-Seidel method needs FEWER iterations to converge
Matrix norm
Must satisfy the same 3 conditions as a vector norm
e.g. the Frobenius norm (it’s the same as the vector 2-norm of A viewed as one long vector formed by stacking its columns)
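This equivalence is easy to verify in NumPy (the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

fro = np.linalg.norm(A, 'fro')            # sqrt(1 + 4 + 9 + 16) = sqrt(30)
as_vector = np.linalg.norm(A.ravel(), 2)  # 2-norm of A flattened into a vector

print(fro, as_vector)  # both sqrt(30) ≈ 5.477
```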