perron-frobenius theory Flashcards
if A and B are mxn real matrices:
A>B if all aij>bij and A>=B if all aij>=bij
nonnegative:
A>=0 (so all elements greater than or equal to 0)
positive:
A>0 (all elements more than 0)
vector of absolute values of another:
|v|=(|vi|)
matrix of absolute values of another:
|A|=(|aij|)
fun facts:
|Ax|<=|A||x|
A>0 and x>=0, x!=0 => Ax>0
A>=0 => A^k>=0 for all k>=1
A>0 => A^k>0 for all k>=1
A>0 and x>=y with x!=y => Ax>Ay
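quick numpy sanity check of a couple of these facts (the matrix and vectors here are just made-up examples, not from the cards):

```python
import numpy as np

# arbitrary positive matrix and a sign-mixed vector for the spot check
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([-1.0, 2.0])

# |Ax| <= |A||x| holds entrywise
print(np.all(np.abs(A @ x) <= np.abs(A) @ np.abs(x)))  # True

# A > 0 and y >= 0, y != 0  =>  Ay > 0
y = np.array([0.0, 1.0])
print(np.all(A @ y > 0))  # True
```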
perron’s theorem:
if A is square and A>0, then p(A)>0 (the spectral radius), p(A) is an eigenvalue of A, there exists an eigenvector x>0 with Ax=p(A)x, p(A) has algebraic multiplicity 1, and p(A) is the only eigenvalue of maximum modulus
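a numpy spot check of the theorem on an arbitrary positive example matrix (not a proof, just verifying the claims numerically):

```python
import numpy as np

# arbitrary positive matrix
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
vals, vecs = np.linalg.eig(A)

# spectral radius p(A) = max |eigenvalue|
i = np.argmax(np.abs(vals))
rho = np.abs(vals[i])

# p(A) is itself a real positive eigenvalue...
print(np.isclose(vals[i].real, rho))  # True

# ...with an eigenvector that can be chosen entrywise positive
v = vecs[:, i].real
if v[0] < 0:
    v = -v  # fix the arbitrary sign numpy picks
print(np.all(v > 0) and np.allclose(A @ v, rho * v))  # True
```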
powers of positive matrices:
if A>0, x is any positive eigenvector corresponding to p(A), and y is any positive eigenvector of A^T corresponding to p(A)=p(A^T) then lim(k->∞)(A/p(A))^k = (xy^T)/(y^(T)x) > 0
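the limit can be checked numerically too; here's a sketch with an arbitrary positive example matrix, taking the positive eigenvectors via abs (safe since the Perron eigenvector has no zero entries):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# positive right eigenvector x of A for p(A)
vals, vecs = np.linalg.eig(A)
i = np.argmax(np.abs(vals))
rho = np.abs(vals[i])
x = np.abs(vecs[:, i].real)

# positive eigenvector y of A^T for p(A^T) = p(A)
valsT, vecsT = np.linalg.eig(A.T)
j = np.argmax(np.abs(valsT))
y = np.abs(vecsT[:, j].real)

# lim (A/p(A))^k = x y^T / (y^T x), approximated with k = 50
limit = np.outer(x, y) / (y @ x)
powered = np.linalg.matrix_power(A / rho, 50)
print(np.allclose(powered, limit))  # True
```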
reducible matrix:
a square matrix A for which there exists a permutation matrix P such that P^(T)AP = [X Y; 0 Z] (block upper triangular), where X and Z are both square; a matrix is irreducible otherwise
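a small made-up example of a reducible matrix and a permutation exposing the [X Y; 0 Z] block form:

```python
import numpy as np

# reducible: no edges lead into node 1 from nodes 0 or 2
A = np.array([[1.0, 0.0, 2.0],
              [3.0, 4.0, 5.0],
              [6.0, 0.0, 7.0]])

# reorder the indices as (1, 0, 2): column k of P is e_{perm[k]}
perm = np.array([1, 0, 2])
P = np.eye(3)[:, perm]

# (P^T A P)[k, l] = A[perm[k], perm[l]]
B = P.T @ A @ P
print(B)
print(np.all(B[1:, :1] == 0))  # True: bottom-left block is zero
```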
graphs and matrices!!:
a digraph of an nxn matrix is found by making n points P1,…,Pn, then drawing a directed link from Pi to Pj whenever aij!=0
strongly connected:
a graph is strongly connected if for any ordered pair of points Pi, Pj there is a finite sequence of directed links leading from Pi to Pj
graph connections and reducibility:
A>=0 is irreducible iff the digraph is strongly connected
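a practical way to check this in code uses a standard equivalent test (not stated on the cards): a nonnegative nxn matrix A is irreducible iff (I + A)^(n-1) > 0 entrywise, since that power counts walks of length up to n-1 in the digraph:

```python
import numpy as np

def is_irreducible(A):
    # (I + indicator(A))^(n-1) > 0 entrywise  <=>  digraph strongly connected
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + (A != 0), n - 1)
    return bool(np.all(M > 0))

# cycle 0 -> 1 -> 2 -> 0: strongly connected, hence irreducible
C = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
print(is_irreducible(C))  # True

# reducible: node 1 cannot be reached from nodes 0 or 2
R = np.array([[1, 0, 2],
              [3, 4, 5],
              [6, 0, 7]])
print(is_irreducible(R))  # False
```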
perron-frobenius theorem:
if A>=0 is irreducible then p(A)>0, p(A) is an eigenvalue of A, there’s an eigenvector x>0 such that Ax=p(A)x, p(A) has algebraic multiplicity 1
perron root:
λmax(A)=p(A)
perron vector:
the unique vector p such that Ap=p(A)p, p>0, ||p||1=1
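a sketch of computing the perron vector with numpy (`perron_vector` is just a made-up helper name; taking abs of the eigenvector is safe because the perron eigenvector has no zero entries):

```python
import numpy as np

def perron_vector(A):
    # positive eigenvector for p(A), scaled so ||p||_1 = 1
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(np.abs(vals))
    v = np.abs(vecs[:, i].real)
    return v / v.sum()

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
p = perron_vector(A)
rho = np.max(np.abs(np.linalg.eigvals(A)))
print(np.allclose(A @ p, rho * p))  # True: A p = p(A) p
print(np.isclose(p.sum(), 1.0))     # True: ||p||_1 = 1
```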
stochastic matrix:
a nonnegative square matrix P in which each row sum equals 1, also written Σ_{j=1}^{n} p_ij = 1 for i=1,2,…,n
stochastic matrices and eigenpairs:
P>=0 is stochastic iff P has eigenvalue 1 with associated eigenvector e=[1 1 … 1]^T, and p(P)=1 for a stochastic matrix P
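checking this with a made-up stochastic matrix: row sums of 1 are exactly the statement Pe = e, and the spectral radius comes out to 1:

```python
import numpy as np

# arbitrary stochastic example: each row sums to 1
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
e = np.ones(3)

# P e = e: e is an eigenvector for eigenvalue 1
print(np.allclose(P @ e, e))  # True

# and p(P) = 1
print(np.isclose(np.max(np.abs(np.linalg.eigvals(P))), 1.0))  # True
```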
markov chain:
a probabilistic process in which the future development of the process is completely determined by the present state and not by how it arose; every markov chain defines a stochastic matrix and vice versa
probability distribution vector:
a vector p>=0 such that e^(T)p=1
stationary probability distribution vector:
a probability distribution vector fixed under the chain: P^(T)p=p, e^(T)p=1
kth step probability distribution vector:
p^((k))=P^(T)p^((k-1))=(P^T)^(k)p^((0))
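a quick numpy sketch of the recursion, with a made-up chain (the loop and the matrix-power formula give the same result):

```python
import numpy as np

# arbitrary stochastic matrix and a start concentrated in state 1
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
p0 = np.array([1.0, 0.0, 0.0])

# step by step: p^(k) = P^T p^(k-1)
pk = p0
for _ in range(3):
    pk = P.T @ pk

# same thing in one shot: p^(3) = (P^T)^3 p^(0)
print(np.allclose(pk, np.linalg.matrix_power(P.T, 3) @ p0))  # True
print(np.isclose(pk.sum(), 1.0))  # True: still a probability vector
```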
p^((n)):
actually written p^(n); the extra brackets here are just to distinguish the step index from a power. for example p^((0))=[p1^((0)) … pn^((0))], where pi^((0)) is the probability that the chain starts in state i. the superscript counts steps: 0 is the start, pj^((1)) is the probability of being in state j after one step, and so on
stochastic matrices and limits:
assume a markov chain has a positive stochastic matrix P; then, independent of the initial probability distribution vector p^((0)), lim(k->∞)p^((k))=p, where p^((k))=(P^T)^(k)p^((0)) and p is the stationary probability distribution vector
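a numerical illustration with a made-up positive stochastic matrix: two very different starting distributions end up at the same stationary p (found here as the perron vector of P^T):

```python
import numpy as np

# arbitrary positive stochastic matrix (rows sum to 1)
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.3, 0.5],
              [0.1, 0.4, 0.5]])

# stationary distribution: P^T p = p with e^T p = 1
vals, vecs = np.linalg.eig(P.T)
i = np.argmax(np.abs(vals))
p = np.abs(vecs[:, i].real)
p = p / p.sum()

# both starts converge to the same p after many steps
for p0 in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])):
    pk = np.linalg.matrix_power(P.T, 100) @ p0
    print(np.allclose(pk, p))  # True
```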