test 1 vocab Flashcards
equivalent
when two systems possess the same solution set
unique solution
there is one and only one set of values for the xi’s that satisfies all equations simultaneously
no solution
there is no set of values for the xi’s that satisfies all equations simultaneously; the solution set is empty
infinitely many solutions
there are infinitely many different sets of values for the xi’s that satisfy all equations simultaneously. It is not difficult to prove that if a system has more than one solution, then it has infinitely many solutions.
elementary operations
- ) interchange the ith and jth equations
- ) replace the ith equation by a nonzero multiple of itself
- ) replace the jth equation by a combination of itself plus a multiple of the ith equation
square system
n equations and n unknowns
Gaussian elimination
1.) eliminate all terms below the first pivot
2.) select a new pivot
3.) eliminate all terms below second pivot
and continue
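The elimination sweep described above can be sketched in plain Python (no pivoting); the 3x3 system is made up for illustration:

```python
# A minimal sketch of the forward-elimination sweep of Gaussian elimination,
# using plain Python lists; the example system is made up for illustration.

def forward_eliminate(A, b):
    """Reduce [A|b] to upper-triangular form by eliminating below each pivot."""
    n = len(A)
    for k in range(n):             # select the pivot in column k
        for i in range(k + 1, n):  # eliminate all terms below the pivot
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    return A, b

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
b = [1.0, 2.0, 5.0]
U, c = forward_eliminate(A, b)
# Every entry below the main diagonal of U is now zero.
```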
triangularized
all pivots are 1
back substitution
the last equation is solved for the value of the last unknown, which is then substituted back into the penultimate equation, and so on
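Back substitution on an upper-triangular system Ux = c can be sketched as follows; the numbers are chosen only for illustration:

```python
# A small sketch of back substitution on an upper-triangular system Ux = c.

def back_substitute(U, c):
    """Solve Ux = c starting from the last equation and working upward."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):            # last unknown first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]           # substitute known values back
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 1.0, 1.0],
     [0.0, 0.0, 2.0]]
c = [1.0, 0.0, 1.0]
x = back_substitute(U, c)   # x satisfies Ux = c
```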
scalar
a real or complex number
row
horizontal line
column
vertical line
submatrix
of A is an array obtained by deleting any combination of rows and columns from A
shape or size
m(rows)xn(columns)
Gauss-Jordan Method
- ) at each step, the pivot element is forced to be 1
- ) all terms above and below the pivot are eliminated
tridiagonal
the nonzero elements occur only on the subdiagonal, main diagonal, and superdiagonal
partial pivoting
at each step, search the positions on and below the pivot position for the coefficient of maximum magnitude. If necessary, interchange rows to bring that coefficient into the pivot position
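The column search and row interchange can be sketched minimally; the 2x2 matrix is made up for illustration:

```python
# A sketch of the partial-pivoting search: before eliminating in column k,
# find the entry of maximum magnitude on or below the pivot position and
# swap that row up. The matrix here is illustrative.

def partial_pivot(A, k):
    """Swap rows so |A[k][k]| is the largest magnitude in column k, rows k..n-1."""
    n = len(A)
    p = max(range(k, n), key=lambda i: abs(A[i][k]))
    if p != k:
        A[k], A[p] = A[p], A[k]   # row interchange brings the large entry up
    return A

A = [[1.0, 2.0],
     [-10.0, 3.0]]
partial_pivot(A, 0)
# The row starting with -10 is now the pivot row, since |-10| > |1|.
```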
row scaling
multiplying selected rows by non zero multipliers
column scaling
multiplying selected columns by nonzero multipliers
-alters exact solution
complete pivoting
search the pivot position and every position below or to the right of it for the entry of maximum magnitude; if necessary, perform row and column interchanges to bring the largest entry into the pivot position
ill conditioned
a small perturbation in the system can produce relatively large changes in the exact solution
well conditioned
if not ill conditioned
rectangular
if m and n are not the same
main diagonal
where the pivot positions are located, the diagonal line from the upper left hand to the lower right hand corner
row echelon form
- ) if Ei* consists entirely of zeros, then all rows below Ei* are also entirely zero, i.e. all zero rows are at the bottom
- ) if the first nonzero entry in Ei* lies in the jth position, then all entries below the ith position in columns E*1, E*2…E*j are zero
rank
in echelon form, the number of pivots, number of nonzero rows in E, number of basic columns in A
basic columns
those columns in A that contain the pivotal positions
reduced row echelon form(EA)
- ) E is in row echelon form
- ) the first nonzero entry in each row(each pivot) is 1
- )all entries above each pivot are 0
consistent
a system of m linear equations in n unknowns that possesses at least one solution
- rank[A|b]=rank(A)
- b is a nonbasic column in [A|b] or is a combination of the basic columns in A
inconsistent
a system of m linear equations in n unknowns that has no solution; signaled when elimination produces a row whose coefficients are all zero but whose right-hand side is nonzero
homogeneous system
the right hand side consists entirely of 0’s
-consistency is never an issue because the zero solution is always a solution
nonhomogeneous system
there is at least one nonzero number on the right hand side
trivial solution
the solution consisting of all zeros
basic variables
when there are more unknowns than equations, we have to pick “basic” unknowns and solve for these in terms of the other unknowns
- there are r basic variables
free variables
whose values must remain arbitrary or free
- there are n-r free variables
general solution
use Gaussian elimination to reduce to row echelon form. identify basic and free variables. apply back substitution and solve for the basic variables in terms of the free variables
x=P(particular solution)+xf1h1 +xf2h2+…
equal matrices
when A and B are the same size and corresponding entries are equal
column vector
an array consisting of a single column
row vector
an array consisting of a single row
addition of matrices
if A and B are mxn the sum is the mxn matrix A+B, by adding corresponding entries
additive inverse (-A)
the matrix obtained by negating each of the entries
transpose (A^T)
of an mxn matrix, is the nxm matrix A^T obtained by interchanging rows and columns: [A^T]ij = aji. properties: (A+B)^T = A^T+B^T, (sA)^T = sA^T
conjugate matrix (A^-)
[A^-]ij = a^-ij, obtained by conjugating each entry of A
conjugate transpose ((A^-)^T)(A*)
[A*]ij = a^-ji
properties:
(A+B)* = A* + B*
(sA)* = s^-A*
diagonal matrix
a square matrix whose only nonzero entries lie on the main diagonal; every off-diagonal entry is zero
symmetric matrix
A =A^T, when aij=aji
skew-symmetric matrix
A=-A^T, when aij= -aji
hermitian matrix
A=A*, when aij=a^-ji. the complex analog of symmetry
skew-hermitian matrix
A=-A*, when aij= -a^-ji. the complex analog of skew symmetry
linear function
- ) f(x+y)= f(x)+f(y)
2. ) f(sx)=sf(x)
conformable
in AB when A has exactly as many columns as B has rows
matrix product
for conformable matrices Amxp = [aij] and Bpxn = [bij], AB is the mxn matrix whose (i,j) entry is the inner product of the ith row of A with the jth column of B
-matrix multiplication is NOT commutative
cancellation law
sB = sY with scalar s ≠ 0 implies B = Y
linear system
Ax=b
distributive and associative laws
for conformable matrices:
A(B+C) = AB+AC
(D+E)F = DF+EF
A(BC) = (AB)C
identity matrix (I)
nxn matrix with 1’s on the main diagonal and 0’s everywhere else
AI*j = A*j, so AI = A and IA = A
reverse order law for transposition
for conformable A and B
(AB)^T=B^TA^T
(AB)* = B*A*
Trace
for a square matrix, is the sum of its main diagonal entries
trace(AB)=trace(BA)
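The identity trace(AB) = trace(BA) can be checked numerically on small made-up matrices, even though AB ≠ BA in general:

```python
# A quick numeric check of trace(AB) = trace(BA) on small illustrative matrices.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# trace(AB) and trace(BA) agree even though AB != BA.
```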
block matrix multiplication
A and B are partitioned into submatrices, referred to as blocks. if the pairs (Aik, Bkj) are conformable, then A and B are conformably partitioned
reducible systems
block-triangular systems
inverse of A
given, A and B are square, AB=I and BA=I
the inverse of A is, B=A^-1
nonsingular matrix
an invertible square matrix
singular matrix
a square matrix with no inverse
matrix equations
if A is nonsingular then there is a unique solution for X, (Anxn)(Xnxp)=Bnxp and the solution is:
X=A^-1(B)
for system of n linear equations and n unknowns:
(Anxn)(Xnx1)=bnx1
X=A^-1(b)
existence of an inverse
for A nxn the following are equivalent:
- A^-1 exists (nonsingular)
- rank(A)=n
- A —(Gauss-Jordan)→ I
- Ax=0 (implies x=0)
computing the inverse
[A|I] —(Gauss-Jordan)→ [I|A^-1]
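The [A|I] → Gauss-Jordan → [I|A^-1] procedure can be sketched as follows (no pivoting; the 2x2 matrix is made up for illustration):

```python
# A sketch of computing A^-1 by reducing the augmented matrix [A|I] to
# [I|A^-1] with the Gauss-Jordan method; no pivoting, illustrative matrix.

def invert(A):
    n = len(A)
    # Build the augmented matrix [A|I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]      # force the pivot to be 1
        for i in range(n):
            if i != k and M[i][k] != 0.0:   # eliminate above and below the pivot
                m = M[i][k]
                M[i] = [vi - m * vk for vi, vk in zip(M[i], M[k])]
    return [row[n:] for row in M]           # right half is A^-1

A = [[4.0, 7.0],
     [2.0, 6.0]]
Ainv = invert(A)   # exact inverse is [[0.6, -0.7], [-0.2, 0.4]]
```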
properties of matrix inversion
For nonsingular A and B:
(A^-1)^-1 = A
AB is nonsingular
(AB)^-1 = B^-1A^-1
(A^-1)^T = (A^T)^-1
(A^-1)* = (A*)^-1
Sherman-Morrison Formula
if Anxn is nonsingular and c and d are nx1 columns such that 1+d^TA^-1c ≠ 0, then A+cd^T is nonsingular and
(A+cd^T)^-1 = A^-1 - (A^-1cd^TA^-1)/(1+d^TA^-1c)
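The formula can be spot-checked numerically on a made-up 2x2 example, writing the 2x2 inverse out directly:

```python
# A numeric spot-check of the Sherman-Morrison formula on a 2x2 example;
# A, c, and d are made up, and the 2x2 inverse is written out in closed form.

def inv2(M):
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[3.0, 1.0], [1.0, 2.0]]
c = [1.0, 0.0]   # column c
d = [0.0, 1.0]   # column d

Ainv = inv2(A)
# Left side: invert A + c d^T directly.
B = [[A[i][j] + c[i] * d[j] for j in range(2)] for i in range(2)]
left = inv2(B)

# Right side: A^-1 - (A^-1 c d^T A^-1)/(1 + d^T A^-1 c)
Ac = [Ainv[0][0] * c[0] + Ainv[0][1] * c[1],
      Ainv[1][0] * c[0] + Ainv[1][1] * c[1]]      # the column A^-1 c
dA = [d[0] * Ainv[0][0] + d[1] * Ainv[1][0],
      d[0] * Ainv[0][1] + d[1] * Ainv[1][1]]      # the row d^T A^-1
denom = 1.0 + dA[0] * c[0] + dA[1] * c[1]         # 1 + d^T A^-1 c
right = [[Ainv[i][j] - Ac[i] * dA[j] / denom for j in range(2)]
         for i in range(2)]
# left and right agree entrywise.
```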
sherman morrison woodbury formula
if Anxn is nonsingular and C and D are nxk such that (I+D^TA^-1C)^-1 exists, then
(A+CD^T)^-1 = A^-1 - A^-1C(I+D^TA^-1C)^-1D^TA^-1
Neumann Series
if lim n→∞ A^n = 0, then I-A is nonsingular and
(I-A)^-1 = I + A + A^2 + … = sum of A^k.
provides an approximation of (I-A)^-1 when A has entries of small magnitude.
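The partial sums of the series can be seen to approximate (I-A)^-1 on a small-entry example; the matrix and the number of terms are chosen for illustration:

```python
# A sketch of the Neumann-series approximation of (I-A)^-1 for an A with
# entries of small magnitude; matrix and term count are illustrative.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 2
A = [[0.1, 0.2],
     [0.0, 0.1]]
S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # running sum, starts at I
P = [row[:] for row in S]                                           # current power A^0
for _ in range(20):            # accumulate I + A + A^2 + ... + A^20
    P = matmul(P, A)
    S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]

# S approximates (I-A)^-1, so (I-A) S should be close to the identity.
IA = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)] for i in range(n)]
check = matmul(IA, S)
```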
ill conditioned
if a small relative change in A can cause a large relative change in A^-1.
condition number
how the degree of ill conditioning is gauged.
k=||A|| ||A^-1|| where ||*|| is a matrix norm.
sensitivity
of the solution of Ax=b to perturbations(or errors) in A is measured by the extent to which A is an ill conditioned matrix.
elementary matrices
matrices of the form I-uv^T, where u and v are nx1 columns such that v^Tu ≠ 1
they are nonsingular and
(I-uv^T)^-1 = I- (uv^T)/(v^Tu-1)
type 1
interchanging rows
type 2
multiplying rows(columns) by a scalar
type 3
adding a multiple of a row(column) i to a row(column)j
products of elementary matrices
A is a nonsingular matrix if and only if A is the product of elementary matrices of type 1,2, and 3
equivalent matrices(~)
when B can be derived from A by a combination of elementary row and column operations A~B
row equivalent (~row)
when B can be obtained from A by performing a sequence of elementary row operations only, A~rowB
column equivalent (~col)
when B can be obtained from A by performing a sequence of elementary column operations only, A~colB
transitive
A~B and B~C——> A~C
Rank normal form (Nr)
A ~ Nr = (Ir 0)
         (0  0)
LU factorization of A
A=LU, the product of a lower triangular matrix L and an upper triangular matrix U.
the decomposition of A into A=LU
- matrices L and U are called the LU factors of A
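A Doolittle-style sketch (unit lower-triangular L, no pivoting) on an illustrative 3x3 matrix:

```python
# A minimal LU-factorization sketch without pivoting: L gets the elimination
# multipliers (unit diagonal), U is the triangularized matrix, and A = LU.

def lu(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]       # multiplier is stored in L
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]  # elimination step builds U
    return L, U

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
L, U = lu(A)
# Multiplying L and U reproduces A.
```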
elementary lower triangular matrix
Tk = I - ck*ek^T, where ck is a column with zeros in the first k positions
leading principal submatrices
the submatrices taken from the upper left-hand corner
positive definite
A symmetric matrix A possessing an LU factorization in which each pivot is positive
band matrix
aij = 0 whenever |i-j| > w for some positive integer w
bandwidth
the integer w; e.g. a tridiagonal matrix has bandwidth 1