Lecture 3 - Systems of linear equations (continued) Flashcards

Iterative algorithms: Jacobi, Gauss-Seidel; matrix norms

1
Q

Differences and basics

A

Direct algorithms -> O(n^3)
Iterative methods -> O(n^2) per iteration, assuming they CONVERGE

The solution by iterative algorithms starts with an initial guess vector x1, which approximates the exact solution x

2
Q

Errors

A

x - exact solution
x’ - approximate solution
e - solution error (the difference between x and x’)

In practical cases it’s NOT possible to compute the real error e, since we DON’T know the true solution vector x.

To assess the error introduced by x’, we can use the residual vector r = A*x’ - b

if x’ = x, then r = 0
BUT a small residual r DOESN’T ALWAYS guarantee a small real error
Sometimes it’s more convenient to use a scalar error measure (a norm).
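A minimal sketch of the residual check, using a made-up 2x2 system (NumPy assumed):

```python
import numpy as np

# Hypothetical 2x2 system, chosen only to illustrate the residual.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x_exact = np.linalg.solve(A, b)   # stands in for the unknown true solution x
x_approx = x_exact + 1e-6         # a slightly perturbed approximation x'

# Residual vector r = A*x' - b
r = A @ x_approx - b
```

If x’ were exact, r would be the zero vector; here it is small but nonzero.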

3
Q

Vector norms

A

Norm - function that assigns a real-valued length to each vector

CONDITIONS:

  1. Norm of a nonzero vector is positive
  2. Scaling a vector scales its norm by the same amount
  3. The norm of a vector sum doesn’t exceed the sum of norms of its parts (triangle inequality)

Unit balls

Equation for the p-norm (x and y are the components of an arbitrary 2D vector):
(abs(x)^p + abs(y)^p)^(1/p)
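A minimal sketch of this formula (the exponent is written as p here):

```python
def p_norm_2d(x, y, p):
    """(|x|^p + |y|^p)^(1/p) for a 2D vector (x, y)."""
    return (abs(x) ** p + abs(y) ** p) ** (1.0 / p)

# p = 1: sum of absolute values; p = 2: Euclidean length.
print(p_norm_2d(3, 4, 2))  # -> 5.0
```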

4
Q

FIRST NORM/ norm one

A

Sum of absolute values of all elements of the vector.

The unit ball looks like a diamond

5
Q

SECOND NORM/ norm two

A

We compute the squares of all elements of the vector and take the square root of their sum.
The unit ball looks like a circle

6
Q

INFINITY norm

A

Maximum of the absolute values of the elements of the vector

The unit ball looks like a square
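The infinity norm, together with the 1- and 2-norms from the previous cards, sketched with NumPy (assumed available):

```python
import numpy as np

v = np.array([3.0, -4.0])

one_norm = np.linalg.norm(v, 1)       # |3| + |-4| = 7
two_norm = np.linalg.norm(v, 2)       # sqrt(3^2 + 4^2) = 5
inf_norm = np.linalg.norm(v, np.inf)  # max(|3|, |-4|) = 4
```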

7
Q

Jacobi

A

The vector from the previous iteration is used to compute the elements of the vector in the next iteration: xk is used to compute xk+1
So, in the first iteration we use only the elements of the initial (guessed) vector

Converges if:

  • > the spectral radius of the iteration matrix is smaller than 1 (necessary and sufficient)
  • > A is diagonally dominant (a sufficient condition)
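A minimal component-wise sketch, using a made-up diagonally dominant system (NumPy assumed):

```python
import numpy as np

def jacobi(A, b, x0, iterations=50):
    """Jacobi iteration: each component of x_{k+1} is computed
    from the PREVIOUS iterate x_k only."""
    n = len(b)
    x = x0.astype(float)
    for _ in range(iterations):
        x_new = np.empty(n)
        for i in range(n):
            # the off-diagonal sum uses the old vector x (= x_k)
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new
    return x

# Diagonally dominant, so the iteration converges.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b, np.zeros(2))
```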
8
Q

Gauss-Seidel

A

Works the same as Jacobi, but always the most recent information is used. So, when we are computing element 6 in iteration k+1, we use elements 0-5 from the (k+1)-th vector and elements 7 to n from the k-th vector

Converges if either:

  • > A is symmetric positive-definite
  • > A is diagonally dominant
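A minimal sketch with in-place updates, using a made-up diagonally dominant system (NumPy assumed):

```python
import numpy as np

def gauss_seidel(A, b, x0, iterations=50):
    """Gauss-Seidel: x is updated in place, so component i of x_{k+1}
    already sees components 0..i-1 of x_{k+1}."""
    n = len(b)
    x = x0.astype(float)
    for _ in range(iterations):
        for i in range(n):
            # x[:i] holds iteration k+1 values, x[i+1:] still iteration k
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(A, b, np.zeros(2))
```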
9
Q

Division of the matrix A

A
A = -(U+L)+D
U - strictly upper triangular part of A (with sign flipped)
L - strictly lower triangular part of A (with sign flipped)
D - diagonal of A
L and U are NOT the same as in the LU factorization
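The splitting can be checked numerically (a minimal sketch with a made-up matrix; note the sign flip on L and U):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])

D = np.diag(np.diag(A))  # diagonal part
L = -np.tril(A, k=-1)    # strictly lower part of A, sign flipped
U = -np.triu(A, k=1)     # strictly upper part of A, sign flipped

# A = -(U + L) + D
assert np.allclose(A, -(U + L) + D)
```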
10
Q

Jacobi in matrix form

A

xk+1 = D^(-1)(L+U)xk + D^(-1)*b
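A minimal sketch of this formula, with D, L, U as on the previous card and a made-up matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])

D = np.diag(np.diag(A))
L = -np.tril(A, k=-1)
U = -np.triu(A, k=1)
D_inv = np.linalg.inv(D)

x = np.zeros(2)
for _ in range(50):
    # xk+1 = D^(-1)(L+U)xk + D^(-1)*b
    x = D_inv @ (L + U) @ x + D_inv @ b
```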

11
Q

Gauss-Seidel in matrix form

A

xk+1 = (D-L)^(-1)Uxk + (D-L)^(-1)*b
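A minimal sketch of this formula, with the same splitting and a made-up matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])

D = np.diag(np.diag(A))
L = -np.tril(A, k=-1)
U = -np.triu(A, k=1)
M_inv = np.linalg.inv(D - L)

x = np.zeros(2)
for _ in range(50):
    # xk+1 = (D-L)^(-1)Uxk + (D-L)^(-1)*b
    x = M_inv @ U @ x + M_inv @ b
```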

12
Q

Remarks

A
  • > The convergence properties of these methods depend on the matrix A
  • > Usually the Gauss-Seidel method needs a SMALLER number of iterations to converge
13
Q

Matrix norm

A

Must satisfy the same 3 conditions as vector norm

e.g.: the Frobenius norm (it’s the same as the vector 2-norm of A viewed as one long vector composed of its columns)
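A quick numerical check of that remark (made-up matrix, NumPy assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

fro = np.linalg.norm(A, 'fro')
# Same value as the vector 2-norm of A's entries stacked into one vector:
as_vector = np.linalg.norm(A.ravel(), 2)
```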
