Prediction of ARMA processes Flashcards
Steps to find the optimal prediction
1) find the optimal model, i.e. the one that best describes the measured data set
2) using the optimal model, find the optimal prediction
steps of the prediction error method
1) Experiment design, data collection, preprocessing, to obtain
{y(1), y(2), y(3), … , y(N)}
2) select a class of parametric models m(θ); in the case of ARMA processes: y(t) = W(z; θ) * e(t), e ~ WN
3) select a performance index J(θ) >= 0, such that if J(θ1) < J(θ2), then m(θ1) is better than m(θ2)
for example,
JN(θ) = 1/N sum(t=1,N) ( y(t) - y^(t|t-1; θ) )^2
where y^(t|t-1; θ) is the 1-step predictor computed using the model, as a function of the parameters
4) find the best parameter vector θ^N as:
θ^N = argminθ {JN(θ)}
using the optimal estimated model m(θ^N), it is possible to compute the optimal predictor:
y^(t+k|t; θ^N)
5) validation: possible final step to check the validity of the estimated model.
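A minimal numerical sketch of these steps, assuming a simple AR(1) model class y(t) = a*y(t-1) + e(t) (an illustrative choice, not part of the card): JN is built from the 1-step prediction errors and minimized over the single parameter a.

```python
# Minimal PEM sketch for an AR(1) model class (illustrative, not a fixed API):
# y(t) = a*y(t-1) + e(t), one-step predictor y^(t|t-1; a) = a*y(t-1).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
N, a_true = 500, 0.8
e = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t - 1] + e[t]           # step 1: the data set {y(1), ..., y(N)}

def J_N(a):
    eps = y[1:] - a * y[:-1]                  # prediction errors y(t) - y^(t|t-1; a)
    return np.mean(eps ** 2)                  # step 3: performance index JN(theta)

# step 4: theta^N = argmin JN(theta)
a_hat = minimize_scalar(J_N, bounds=(-0.99, 0.99), method="bounded").x
print(f"estimated a = {a_hat:.3f}  (true a = {a_true})")
```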
def of all pass filter
An all-pass filter T(z) of 1st order is a digital filter of the form:
T(z) = (1/a) * (z + a) / (z + 1/a), a ∈ ℝ, a ≠ 0
its magnitude frequency response is constant: |T(e^jω)| = 1 for all ω
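A quick numerical check of the constant-magnitude property for the form above (the value of a is illustrative):

```python
# Check that |T(e^jw)| = 1 for the 1st-order all-pass filter (illustrative value of a).
import numpy as np

a = 0.5
w = np.linspace(0, np.pi, 7)
z = np.exp(1j * w)
T = (1.0 / a) * (z + a) / (z + 1.0 / a)
print(np.abs(T))                # all ones (up to rounding): constant magnitude
```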
def canonical representation
Given a s.s.p. y(t), modeled as an ARMA process
y(t) = C(z)/A(z) * e(t) , e ~ WN
C(z)/A(z) * e(t) is the canonical representation of y(t) if
1) C(z) and A(z) have the same degree
2) C(z) and A(z) are monic
3) C(z) and A(z) are coprime
4) all the roots of C(z) and A(z) are strictly inside the unit circle
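A small sketch checking the four conditions on a hypothetical pair C(z) = z + 0.5, A(z) = z - 0.8 (coefficients in descending powers of z):

```python
# Check the canonical-form conditions for C(z) = z + 0.5, A(z) = z - 0.8 (illustrative).
import numpy as np

C = np.array([1.0, 0.5])        # monic, degree 1
A = np.array([1.0, -0.8])       # monic, degree 1 -> same degree as C
roots_C, roots_A = np.roots(C), np.roots(A)
print(roots_C, roots_A)                              # -0.5 and 0.8: no common roots (coprime)
print(np.all(np.abs(np.r_[roots_C, roots_A]) < 1))   # True: all strictly inside the unit circle
```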
definition of k steps ahead predictor
y^(t+k|t)
prediction of y(t+k) computed at time t, given the data up to time t
definition of prediction error
ε(t+k) = y(t+k) - y^(t+k|t) where y(t+k) is the true value at time t+k
optimality condition for the predictor
the predictor y^(t+k|t) is optimal if the predictor and its prediction error are uncorrelated:
E[y^(t+k|t)*ε(t+k)] = 0
k-step-ahead predictor from data for ARMA(m,n) processes
y(t) = C(z)/A(z) * e(t), e ~ WN
k-step long division of C(z) by A(z): C(z)/A(z) = E(z) + z^-k * R~(z)/A(z)
y^(t|t-k) = R~(z)/C(z) * y(t-k)
prediction error for a k-steps-ahead predictor of an ARMA process
ε(t) = y(t) - y^(t|t-k) = E(z)*e(t)
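A sketch of the k-step long division for a hypothetical ARMA(1,1) with A(z) = z - 0.8 and C(z) = z + 0.3, working with the coefficients of a(q) = A(z)/z and c(q) = C(z)/z in powers of q = z^-1:

```python
# k-step long division c(q)/a(q) = E(q) + q^k * r(q)/a(q), q = z^-1 (illustrative ARMA(1,1)).
import numpy as np

a_coeff = np.array([1.0, -0.8])          # a(q) = 1 - 0.8 q   (A(z) = z - 0.8)
c_coeff = np.array([1.0,  0.3])          # c(q) = 1 + 0.3 q   (C(z) = z + 0.3)
k = 2

E = np.zeros(k)
rem = np.concatenate([c_coeff, np.zeros(k)])     # running remainder of the division
for i in range(k):
    E[i] = rem[i]                                # next coefficient of E(q)
    rem[i:i + len(a_coeff)] -= E[i] * a_coeff    # subtract E_i * q^i * a(q)

print("E(q) coefficients:", E)     # [1.0, 1.1]  -> eps(t) = E(z) e(t) = e(t) + 1.1 e(t-1)
print("remainder:", rem[k:])       # 0.88 q^2    -> z^-k * R~(z)/A(z) = 0.88 z^-1 / (z - 0.8)
```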
1-step-ahead predictor from data for an ARMA process
y^(t|t-1) = ( C(z) - A(z) ) / C(z) * y(t), with prediction error ε(t) = e(t)
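A simulation sketch for the same hypothetical ARMA(1,1) (A(z) = z - 0.8, C(z) = z + 0.3): the 1-step predictor is implemented as the recursion obtained from (C(z) - A(z))/C(z), and the sample variance of the prediction error matches that of the white noise.

```python
# 1-step predictor from data for an illustrative ARMA(1,1): y(t) = 0.8 y(t-1) + e(t) + 0.3 e(t-1).
import numpy as np

rng = np.random.default_rng(1)
N = 5000
e = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * y[t - 1] + e[t] + 0.3 * e[t - 1]

# (C(z) - A(z))/C(z) = 1.1/(z + 0.3)  ->  y^(t|t-1) = 1.1 y(t-1) - 0.3 y^(t-1|t-2)
yhat = np.zeros(N)
for t in range(1, N):
    yhat[t] = 1.1 * y[t - 1] - 0.3 * yhat[t - 1]

eps = y[100:] - yhat[100:]                 # discard the initial transient
print(np.var(eps), np.var(e[100:]))        # both ~ 1: eps(t) ~= e(t)
```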
variance of the prediction error as a function of the prediction horizon
var[y(t) - y^(t|t-k)]
0 if k = 0
equal to the variance of the input noise e(t) if k = 1
monotonically increasing for increasing k
for k -> inf it tends to the variance of the process y(t)
definition of error to signal ratio
ESR(k) = var[ε(t)] / var[y(t)]
where ε(t) is the k-step prediction error
0 <= ESR(k) <= 1
ESR = 0 -> perfect prediction
ESR = 1 -> no prediction
if k = 0 -> ESR(0) = 0
if k -> inf -> ESR(k) -> 1
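A numerical sketch for the same illustrative ARMA(1,1): the variance of the k-step prediction error is the sum of the first k squared impulse-response coefficients of W(z) = C(z)/A(z) (times var[e]), and dividing by var[y(t)] gives ESR(k), which grows monotonically toward 1.

```python
# ESR(k) for the illustrative ARMA(1,1) W(z) = (z + 0.3)/(z - 0.8), unit-variance WN.
import numpy as np

T = 300                                   # truncation of the impulse response of W(z)
h = np.zeros(T)
for t in range(T):
    d0 = 1.0 if t == 0 else 0.0           # impulse d(t)
    d1 = 1.0 if t == 1 else 0.0           # impulse d(t-1)
    h[t] = 0.8 * (h[t - 1] if t > 0 else 0.0) + d0 + 0.3 * d1

var_eps = np.cumsum(h ** 2)               # var[eps] for k = 1, 2, 3, ...
var_y = var_eps[-1]                       # ~ var[y(t)]
ESR = var_eps / var_y
print(ESR[:4])                            # increasing with k
print(ESR[-1])                            # -> 1 for large k
```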
ARIMA models
special class of ARMA models, whose A(z) polynomial has one or more roots in z = +1
y(t) = C(z) / [ (z-1)^d * A~(z) ] * e(t)
y ~ ARIMA(m,d,n)
Special case of ARIMA process
ARIMA(0,1,0)
integrator of a WN
y(t) = 1/(z-1) * e(t), i.e. y(t) = y(t-1) + e(t)
it’s also called random walk
its predictor exists and it is asymptotically stable
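A minimal check for the random walk (illustrative): with A(z) = z - 1 and C(z) = z, the 1-step predictor (C(z) - A(z))/C(z) * y(t) reduces to y^(t|t-1) = y(t-1), and the prediction error is exactly the white noise.

```python
# Random walk y(t) = y(t-1) + e(t): the optimal 1-step predictor is simply y(t-1).
import numpy as np

rng = np.random.default_rng(3)
e = rng.standard_normal(1000)
y = np.cumsum(e)                      # integrator of a WN

yhat = y[:-1]                         # y^(t|t-1) = y(t-1)
eps = y[1:] - yhat
print(np.allclose(eps, e[1:]))        # True: eps(t) = e(t)
```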
types of optimization problems
There are three possible situations when minimizing JN(θ) wrt θ:
1) JN(θ) is a quadratic function of θ
- the solution exists and it is unique
- the solution can be found explicitly in one shot
2) JN(θ) is not a quadratic function of θ, but it is a convex function
- the function has one minimum
- the solution can be found by an iterative minimization method
3) JN(θ) is not quadratic and not convex
- the function has a global minimum and also one or more local minima
- the solution obtained by an iterative method depends on the initial condition
- the attainment of global minimum is not guaranteed
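A short sketch contrasting case 1 and case 3 (the cost functions below are illustrative): a quadratic JN is minimized in one shot by least squares, while a non-convex JN minimized iteratively lands in different minima depending on the initial condition.

```python
# Case 1: quadratic J(theta) -> one-shot least-squares solution.
# Case 3: non-convex J(theta) -> the iterative solution depends on the initial guess.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 2))
y = X @ np.array([1.0, -2.0]) + 0.1 * rng.standard_normal(100)
theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
print("quadratic case, one-shot solution:", theta_ls)

J = lambda th: np.sin(3.0 * th[0]) + 0.1 * th[0] ** 2      # non-convex in theta
for th0 in (-2.0, 0.0, 2.0):
    res = minimize(J, x0=[th0])
    print(f"start {th0:+.1f} -> local minimum at theta = {res.x[0]:+.3f}, J = {res.fun:.3f}")
```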