Lecture 2 - System of linear equations Flashcards

Introduction, basic assumptions. Direct algorithms: Gauss, LU, Cholesky

1
Q

Linear equation

A

Involves the unknown x only to the first power (exponent n = 1)

2
Q

Algebraic equation

A

W(x)=0

W(x) is a polynomial

3
Q

Homogeneous equation

A

Polynomial equation without the free (constant) coefficient, i.e. a0 = 0

4
Q

Possible solutions

A

ONE SOLUTION
System is consistent and independent

INFINITE SOLUTIONS
System is consistent and dependent

NO SOLUTION
System is inconsistent

5
Q

Matrix equation

A

We can convert a system of linear equations into the matrix equation Ax = b, where
-> A - system matrix, the coefficients of the left-hand sides of the equations
-> x - vector of unknowns
-> b - vector of right-hand sides, the free coefficients
If b = 0 the system is homogeneous,
otherwise the system is inhomogeneous
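As a minimal sketch of this card (the 2-by-2 system and the helper `is_homogeneous` are my own made-up example, not from the lecture):

```python
# Hypothetical example: write the system
#   2x + 1y = 5
#   1x + 3y = 10
# in the matrix form A x = b.
A = [[2.0, 1.0],
     [1.0, 3.0]]   # system matrix: left-hand-side coefficients
b = [5.0, 10.0]    # right-hand-side vector: free coefficients

def is_homogeneous(b):
    """The system is homogeneous exactly when every free coefficient is zero."""
    return all(bi == 0 for bi in b)
```

Here `is_homogeneous(b)` returns False for the system above, while `is_homogeneous([0, 0])` returns True.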

6
Q

Kronecker-Capelli theorem

A

A is a matrix, n - number of variables, [A b] - augmented matrix, r() - rank

  • > one solution <=> r(A) = r([A b]) = n
  • > infinite solutions <=> r(A) = r([A b]) < n
  • > no solution <=> r(A) != r([A b])

Elementary row operations do not change the rank (the number of linearly independent columns of a matrix)
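The theorem can be sketched as a rank test in pure Python with exact fractions; the helper names `rank` and `classify` are my own, not from the lecture:

```python
from fractions import Fraction

def rank(M):
    """Rank = number of nonzero rows after reduction to row echelon form;
    elementary row operations do not change it."""
    M = [[Fraction(x) for x in row] for row in M]
    rows = len(M)
    cols = len(M[0]) if rows else 0
    r = 0
    for c in range(cols):
        # find a row with a nonzero entry in column c to use as pivot
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]      # interchange rows
        for i in range(r + 1, rows):         # eliminate below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * p for a, p in zip(M[i], M[r])]
        r += 1
    return r

def classify(A, b):
    """Kronecker-Capelli: compare r(A), r([A b]) and n (number of unknowns)."""
    n = len(A[0])
    rA = rank(A)
    rAb = rank([row + [bi] for row, bi in zip(A, b)])
    if rA != rAb:
        return "no solution"
    return "one solution" if rA == n else "infinite solutions"
```

For example, `classify([[1, 1], [1, 1]], [2, 3])` reports "no solution", since r(A) = 1 but r([A b]) = 2.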

7
Q

Solution methods

A
  • > DIRECT - yield the solution after a certain, known number of operations, e.g. Cramer, Gauss
  • > ITERATIVE - generate a sequence of solution vectors which converges to the true solution of the system, e.g. Jacobi, Gauss-Seidel
8
Q

Computational complexity

A
  • > basic arithmetic operations are called floating-point operations (flops)
  • > the complexity of an algorithm is the total number of floating-point operations needed, as a function of the input dimension
  • > in practice this value is often approximated with big-O notation, O(...)
  • > e.g. vector addition (x + y) needs n operations - linear complexity, O(n)
9
Q

Gauss Elimination (Direct method)

A

Two steps:

  1. Transformation of the system matrix to triangular form using elementary row operations
  2. Backward (or forward) substitution
    - > computational complexity O(n^3)
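A minimal sketch of the two steps for a square nonsingular system, written in pure Python; partial pivoting (choosing the pivot with the largest absolute value, as the Remarks card recommends) is included for stability:

```python
def gauss_solve(A, b):
    """Gauss elimination: (1) reduce the augmented matrix [A | b] to
    upper-triangular form with elementary row operations, then
    (2) backward substitution.  Roughly n^3 flops overall."""
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        # partial pivoting: bring the largest |entry| in column k to the diagonal
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):                  # eliminate below the pivot
            f = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= f * A[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # backward substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (A[i][n] - s) / A[i][i]
    return x
```

On the made-up system 2x + y = 5, x + 3y = 10, `gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])` yields x = 1, y = 3.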
10
Q

LU Factorization

A

L - lower triangular matrix
U - upper triangular matrix
A variant of Gauss elimination.
Digression - symbolic factorization, numerical factorization, solution
A = L*U
Process:
1. Generate L and U, so the system becomes LUx = b
2. Define the auxiliary vector y = Ux
3. Solve Ly = b using forward substitution
4. Solve Ux = y using backward substitution
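The four steps can be sketched as a Doolittle-style factorization (unit diagonal on L, no pivoting, so every pivot is assumed nonzero; a simplification of what a production solver does):

```python
def lu_factor(A):
    """Doolittle variant: A = L*U with ones on the diagonal of L.
    No pivoting -- assumes all pivots are nonzero."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]      # multiplier stored in L
            for j in range(k, n):            # eliminate below the pivot in U
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve Ly = b by forward substitution, then Ux = y by backward."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                       # forward substitution
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # backward substitution
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

The advantage over plain Gauss elimination: once L and U are generated, the same factorization can be reused to solve the system for many different right-hand sides b.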

11
Q

Cholesky (Banachiewicz) Factorization

A

We can use this if the matrix is symmetric and positive-definite.
A = L * L^T
Half the operations compared to LU
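A sketch of the Cholesky recurrence, assuming the input really is symmetric and positive-definite (no checks are performed; the 2-by-2 matrix in the usage note is a made-up example):

```python
from math import sqrt

def cholesky(A):
    """For symmetric positive-definite A, build the lower-triangular L
    with A = L * L^T.  About half the work of an LU factorization,
    since only the lower triangle is computed."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = sqrt(A[i][i] - s)       # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L
```

For example, `cholesky([[4.0, 2.0], [2.0, 3.0]])` gives L with rows `[2, 0]` and `[1, sqrt(2)]`, and L * L^T reproduces the original matrix.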

12
Q

Elementary row operations

A
  • > Interchanging the position of two rows
  • > Multiplying a particular row by a nonzero constant
  • > Replacing a particular row by itself plus a nonzero multiple of another row
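The three operations can be demonstrated on a toy 2x3 augmented matrix (the numbers are made up); none of them changes the rank or the solution set:

```python
M = [[0.0, 2.0, 4.0],
     [1.0, 3.0, 5.0]]

# 1. Interchange the position of two rows
#    (needed here, because the pivot M[0][0] is zero).
M[0], M[1] = M[1], M[0]

# 2. Multiply a particular row by a nonzero constant
#    (normalize the second pivot to 1).
M[1] = [0.5 * x for x in M[1]]

# 3. Replace a particular row by itself plus a nonzero multiple of another row
#    (row0 <- row0 - 3 * row1 clears the entry above the second pivot).
M[0] = [a - 3.0 * b for a, b in zip(M[0], M[1])]
```

After the three steps M equals `[[1, 0, -1], [0, 1, 2]]`, i.e. the reduced row echelon form.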
13
Q

Remarks

A
  • > approximately 75% of supercomputer computational time is used to solve systems of linear equations
  • > Cramer's method and explicit matrix inversion are NOT used in practice
  • > the algorithms require that the pivot element is NOT zero; in such a situation an interchange of rows is necessary
  • > it is generally desirable to choose a pivot element with a LARGE absolute value, as this improves numerical stability