Time Series Flashcards
What is the definition of a time series? What are the realization (sample path) and the underlying random variable sequence? Why use time series? What are the different types of time series (univariate and multivariate)?
- time series definition: a sequence of observations of a stochastic process over time, usually indexed by t;
stochastic process definition: a sequence of random variables; - realization/sample path: {y_t}^T_t=1 = {y_1, y_2, …, y_T};
the random variables themselves: {Y_t}^T_t=1 = {Y_1, Y_2, …, Y_T}; - why use time series: when the observations are not iid (they are dependent over time), iid-based methods fail and time series methods are needed
- univariate time series: y_t belongs to R;
multivariate time series: y_t belongs to R^k;
Definition of Strict Stationarity?
Definition of Weak/Covariance Stationarity?
Definition of the jth order autocovariance (gamma_j)?
- Definition of Strict Stationarity: for any values j1, j2, …, jn, the joint distribution of (Y_t, Y_t+j1, Y_t+j2, …, Y_t+jn) depends only on the intervals j1, j2, …, jn separating the observations, but NOT on t.
- Definition of Weak/Covariance Stationarity: (1) E(Y_t) = u, independent of t, and (2) the jth order autocovariance (gamma_j) exists, is finite, and depends only on j, not on t.
- Definition of the jth order autocovariance (gamma_j): gamma_j = Cov(Y_t, Y_t-j) = E[(Y_t - E(Y_t))(Y_t-j - E(Y_t-j))]
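A minimal numpy sketch (illustrative, not part of the course material) of estimating gamma_j from one realization; the function name sample_autocovariance is my own:

```python
import numpy as np

def sample_autocovariance(y, j):
    """Estimate gamma_j = Cov(Y_t, Y_{t-j}) from one realization y_1..y_T."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    mu = y.mean()
    # average the products of demeaned observations j steps apart
    return np.sum((y[j:] - mu) * (y[:T - j] - mu)) / T

rng = np.random.default_rng(0)
y = rng.normal(size=1000)            # white noise: gamma_0 = 1, gamma_j ≈ 0 for j >= 1
print(sample_autocovariance(y, 0))   # ≈ 1
print(sample_autocovariance(y, 1))   # ≈ 0
```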
Q&A about Stationarity
- For a covariance stationary process, can we say something about gamma_-j if we know gamma_j?
- Does strict stationarity imply covariance stationarity?
- Does covariance stationarity imply strict stationarity?
- gamma_-j = gamma_j
- Not in general; but if the first two moments exist and are finite, then yes.
- No. For example, a process can have time-invariant mean and autocovariances but a time-varying third moment. There is a special case where it does hold: when the process is a Gaussian process.
- definition of Gaussian process: {Y_t} such that (Y_t, Y_t+j1, Y_t+j2, …, Y_t+jn) is jointly Gaussian (multivariate normal) for any j1, j2, …, jn.
property of Gaussian process: a multivariate normal distribution is completely characterized by its mean and covariance matrix, so time-invariant first two moments pin down the whole joint distribution.
4 types of asymptotics? Do they extend to matrix form? What is the relationship among the four types?
- 4 types of asymptotics: (1) convergence in probability, (2) almost sure convergence, (3) convergence in mean square, (4) convergence in distribution
- (1) convergence in probability, (2) almost sure convergence, and (3) convergence in mean square all extend to matrices (vectors) by requiring element-by-element convergence;
but element-by-element convergence does NOT suffice for (4) convergence in distribution: joint convergence of the whole vector is required. - (2) almost sure convergence and (3) convergence in mean square each imply (1) convergence in probability; (1) convergence in probability implies (4) convergence in distribution
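A small hypothetical simulation of (1), convergence in probability: by the weak law of large numbers, the sample mean of iid normal draws concentrates around the true mean as T grows. Parameter values are my own choice:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, eps = 2.0, 0.1
for T in [10, 100, 1000, 10000]:
    # 500 independent sample means, each computed from T iid N(mu, 1) draws
    means = rng.normal(loc=mu, size=(500, T)).mean(axis=1)
    # fraction of sample means within eps of mu -> 1 as T grows
    print(T, np.mean(np.abs(means - mu) < eps))
```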
Definition of Ergodicity
Ergodicity: the time average along one realization equals the ensemble average across realizations (vertical average = horizontal average)
Definition of Ergodicity for the mean?
Definition of Ergodicity for the second moments?
other properties:
Definition of Ergodicity for the mean? if (1/T) sum over t of y_t converges to E(Y_t) = u as T goes to infinity
Definition of Ergodicity for the second moments? if (1/(T-j)) sum over t of (y_t - u)(y_t-j - u) converges to gamma_j for all j
other properties:
- a stationary process is ergodic for the mean if sum over j |gamma_j| < infinity
- if {Y_t} is a Gaussian process, then the absolute summability of autocovariances implies that it is ergodic for all moments.
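An illustrative sketch (my example, not from the cards): white noise is ergodic for the mean, while Y_t = Z + epsilon_t with a level Z drawn once per realization is stationary but NOT ergodic for the mean, since its time average converges to Z rather than to E(Y_t):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100_000

# Ergodic case: white noise. Time average -> E(Y_t) = 0.
y_ergodic = rng.normal(size=T)
print(y_ergodic.mean())           # ≈ 0

# Non-ergodic case: Y_t = Z + eps_t, with Z drawn once for the whole path.
Z = rng.normal()                  # fixed for the entire realization
y_nonergodic = Z + rng.normal(size=T)
print(y_nonergodic.mean(), Z)     # time average ≈ Z, not the ensemble mean 0
```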
Definition of White Noise Process?
Definition of Independent White Noise Process?
Definition of Independent and Identically Distributed White Noise Process?
Definition of Gaussian White Noise Process?
Definition of White Noise Process?
- E(epsilon_t) = 0, E(epsilon_t^2) = sigma^2, and E(epsilon_t*epsilon_tau) = 0 for t != tau is called white noise
- Notation: epsilon_t ~ WN(0, sigma^2)
Definition of Independent White Noise Process?
- if additionally epsilon_t and epsilon_tau are independent for t != tau, then it is an independent white noise process
- note: Cov(epsilon_t, epsilon_tau) = 0 doesn't mean independence; the shocks could still be dependent, e.g., correlated in the second moments (see the sketch below)
Definition of Independent and Identically Distributed White Noise Process?
- if additionally epsilon_t is identically distributed, then it is an iid white noise process
- Notation: epsilon_t ~ iid WN(0, sigma^2)
Definition of Gaussian White Noise Process?
- if additionally, epsilon_t ~ N(0, sigma^2), then it is a Gaussian white noise process
- Notation: epsilon_t ~ iid N(0, sigma^2)
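A hedged sketch of the note above: an ARCH(1)-type process (my choice of example) is white noise in the sense that its autocorrelations are zero, yet its squares are autocorrelated, so it is not an independent white noise:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 50_000
z = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    # variance depends on the previous shock -> dependence in second moments
    eps[t] = z[t] * np.sqrt(1.0 + 0.5 * eps[t - 1] ** 2)

def acf1(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x * x)

print(acf1(eps))        # ≈ 0: uncorrelated, so it is white noise
print(acf1(eps ** 2))   # clearly > 0: squares are correlated, so not independent WN
```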
Definition of MA(1)?
Definition of MA(1)?
Y_t = u + theta_1*epsilon_{t-1} + epsilon_t
where u and theta_1 are constant scalars and epsilon_t is white noise, epsilon_t ~ WN(0, sigma^2)
Y_t is a first-order moving average process, or MA(1)
Mean: E(Y_t) = E(u + theta_1*epsilon_{t-1} + epsilon_t) = u [doesn't depend on time]
Variance: gamma_0 = E[(Y_t - u)^2] = (1 + theta_1^2) * sigma^2 [doesn't depend on time]
First-order autocovariance: gamma_1 = theta_1 * sigma^2 [doesn't depend on time]
Higher-order autocovariances: gamma_j = 0 for j >= 2 [don't depend on time]
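A minimal simulation (the parameter values u = 1, theta_1 = 0.6, sigma = 2 are assumptions for illustration) checking the MA(1) moments above:

```python
import numpy as np

rng = np.random.default_rng(4)
u, theta1, sigma = 1.0, 0.6, 2.0
T = 200_000

eps = rng.normal(scale=sigma, size=T + 1)
y = u + theta1 * eps[:-1] + eps[1:]    # Y_t = u + theta_1*eps_{t-1} + eps_t

print(y.mean())                        # ≈ u = 1.0
print(y.var())                         # ≈ (1 + theta1^2) * sigma^2 = 5.44
d = y - y.mean()
print(np.mean(d[1:] * d[:-1]))         # gamma_1 ≈ theta1 * sigma^2 = 2.4
print(np.mean(d[2:] * d[:-2]))         # gamma_2 ≈ 0
```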
Q&A about MA(1)?
- Is an MA(1) process stationary?
- Is it ergodic for the mean?
- Can you think of a condition that implies ergodicity for the second moments?
Property of MA(1)?
Y_t = u + theta_1*epsilon_{t-1} + epsilon_t
1. Is an MA(1) process stationary?
Yes, weakly (covariance) stationary: the mean and autocovariances are finite and time-invariant.
2. Is it ergodic for the mean?
Yes: the sum over j of |gamma_j| is finite, since only gamma_0 and gamma_1 are nonzero.
3. Can you think of a condition that implies ergodicity for the second moments?
Property of MA(1):
MA(1) is weakly stationary and ergodic for the mean;
if {epsilon_t} is Gaussian white noise, then MA(1) is also ergodic for the second moments (indeed for all moments).
Definition of Autocorrelation?
Autocorrelation vs Autocovariance?
- jth order autocorrelation: rho_j = gamma_j / gamma_0
- autocorrelation is the autocovariance normalized by the variance gamma_0, so it is unit-free and lies between -1 and 1 (rho_0 = 1)
Definition of MA(q)?
Mean:
Variance:
jth order autocovariance:
jth order autocovariance for j > q: gamma_j = 0
Property of MA(q)?
Why is MA(q) always weakly stationary?
Why is MA(q) always ergodic for the mean?
If epsilon_t is Gaussian white noise, it is ergodic for all moments.
Definition of MA(q)?
Y_t = u + theta_1*epsilon_{t-1} + theta_2*epsilon_{t-2} + … + theta_q*epsilon_{t-q} + epsilon_t
where u, theta_1, theta_2, …, theta_q are constant scalars, and epsilon_t ~ WN(0, sigma^2)
Mean: E(Y_t) = u
Variance: gamma_0 = (1 + theta_1^2 + theta_2^2 + … + theta_q^2) * sigma^2
jth order autocovariance (for j <= q): gamma_j = (theta_j + theta_{j+1}*theta_1 + theta_{j+2}*theta_2 + … + theta_q*theta_{q-j}) * sigma^2
jth order autocovariance (for j > q): gamma_j = 0
Property of MA(q)?
Why is MA(q) always weakly stationary?
- the mean and autocovariances are all finite and don't depend on time
Why is MA(q) always ergodic for the mean?
- the sum over j of |gamma_j| is finite, since only finitely many autocovariances are nonzero
If epsilon_t is Gaussian white noise, it is ergodic for all moments.
property used: if {Y_t} is a Gaussian process, then absolute summability of the autocovariances implies that it is ergodic for all moments.
MA(q) is always stationary and ergodic for the mean; if epsilon_t is Gaussian WN, then it is ergodic for all moments.
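A sketch with hypothetical coefficients showing the MA(q) cutoff: autocovariances are nonzero up to lag q and (approximately, in a finite sample) zero beyond it:

```python
import numpy as np

rng = np.random.default_rng(5)
theta = np.array([1.0, 0.5, -0.4, 0.3])   # (1, theta_1, theta_2, theta_3): an MA(3)
q = len(theta) - 1
T = 200_000

eps = rng.normal(size=T + q)
# Y_t = eps_t + theta_1*eps_{t-1} + ... + theta_q*eps_{t-q} (with u = 0 here)
y = np.convolve(eps, theta, mode="valid")

d = y - y.mean()
for j in range(q + 3):
    gamma_j = np.mean(d[j:] * d[: len(d) - j])
    print(j, round(gamma_j, 3))           # nonzero up to j = q, ≈ 0 for j > q
```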
Definition of MA(infinity)? NOTE: MA(infinity) shows up very often and has some surprising properties
Mean:
Variance:
jth order autocovariance:
Property of MA(infinity)?
Definition of MA(infinity)?
Y_t = u + sum_{j=0}^infinity theta_j*epsilon_{t-j}, with theta_0 = 1 and epsilon_t ~ WN(0, sigma^2)
Mean: E(Y_t) = u
Variance: gamma_0 = sigma^2 * sum_{j=0}^infinity theta_j^2
jth order autocovariance: gamma_j = sigma^2 * sum_{k=0}^infinity theta_{k+j}*theta_k
Property of MA(infinity)?
if the MA coefficients are absolutely summable (sum over j of |theta_j| < infinity), then all previous results for MA(q) hold: weakly stationary, ergodic for the mean, and ergodic for all moments under Gaussian white noise.
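An illustrative sketch (my choice of coefficients, theta_j = 0.8^j, sigma^2 = 1): the coefficients are absolutely summable, and the variance of the truncated process converges to sigma^2 * sum_j theta_j^2 = 1/(1 - 0.64):

```python
import numpy as np

phi = 0.8                               # theta_j = phi**j, absolutely summable
theory = 1.0 / (1.0 - phi ** 2)         # limiting variance with sigma^2 = 1

for J in [5, 20, 100]:
    theta = phi ** np.arange(J + 1)
    # variance of the MA(J) truncation: sum of squared coefficients
    print(J, theta @ theta, "->", theory)
```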
Definition of AR(1)?
Y_t = c + phi_1*Y_{t-1} + epsilon_t
where epsilon_t ~ WN(0, sigma^2)
Explain why AR(1) can be rewritten as an MA(infinity)?
The property of AR(1)? [we want to show AR(1) is MA(infinity), so it inherits the MA(infinity) properties]
Why does |phi_1| need to be < 1?
Explain why AR(1) can be rewritten as an MA(infinity)?
- first approach: recursive backward substitution; substituting Y_{t-1} = c + phi_1*Y_{t-2} + epsilon_{t-1} repeatedly gives Y_t = c/(1 - phi_1) + sum_{j=0}^infinity phi_1^j * epsilon_{t-j} when |phi_1| < 1, an MA(infinity) with absolutely summable coefficients (as shown in the sketch below)
The property of AR(1)?
if |phi_1| < 1, the AR(1) process is stationary and ergodic for the mean
if additionally epsilon_t is Gaussian white noise, AR(1) is ergodic for all moments
Note: |phi_1| < 1 is crucial for these properties
Note: if |phi_1| > 1, we need to use forward substitution instead
Why does |phi_1| need to be < 1?
- otherwise the stationary variance formula sigma^2 / (1 - phi_1^2) would give a negative value, which is impossible
- the mean c / (1 - phi_1) is negative for positive c when phi_1 > 1, and the backward MA(infinity) representation diverges
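An illustrative check (the values of c and phi_1 are assumed) that backward substitution works: simulating the AR(1) recursion and the truncated MA(infinity) representation gives essentially the same path when |phi_1| < 1:

```python
import numpy as np

rng = np.random.default_rng(6)
c, phi1, T, J = 0.5, 0.7, 200, 60

eps = rng.normal(size=T + J)

# AR(1) recursion: Y_t = c + phi1*Y_{t-1} + eps_t, started at its mean
y_ar = np.zeros(T + J)
y_ar[0] = c / (1 - phi1)
for t in range(1, T + J):
    y_ar[t] = c + phi1 * y_ar[t - 1] + eps[t]

# Truncated MA(infinity): Y_t ≈ c/(1 - phi1) + sum_{j=0}^{J} phi1^j * eps_{t-j}
w = phi1 ** np.arange(J + 1)
y_ma = c / (1 - phi1) + np.array(
    [w @ eps[t - J : t + 1][::-1] for t in range(J, T + J)]
)

print(np.max(np.abs(y_ar[J:] - y_ma)))   # tiny: the two representations agree
```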
Autocorrelation of White Noise, MA(1), and AR(1)?
Autocorrelation of White Noise: all autocorrelations are zero (rho_j = 0 for j >= 1)
Autocorrelation of MA(1): the first autocorrelation rho_1 = theta_1 / (1 + theta_1^2) is nonzero, all higher autocorrelations are zero
Autocorrelation of AR(1): rho_j = phi_1^j, nonzero at every lag and decaying geometrically
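A compact sketch (illustrative parameters theta_1 = 0.6, phi_1 = 0.7) of the three theoretical autocorrelation patterns:

```python
import numpy as np

theta1, phi1 = 0.6, 0.7
lags = np.arange(6)

rho_wn = (lags == 0).astype(float)       # white noise: rho_j = 0 for all j >= 1
rho_ma1 = np.where(lags == 0, 1.0,       # MA(1): rho_1 = theta_1/(1 + theta_1^2),
                   np.where(lags == 1, theta1 / (1 + theta1 ** 2), 0.0))  # rho_j = 0, j >= 2
rho_ar1 = phi1 ** lags                   # AR(1): rho_j = phi_1^j, geometric decay

for j in lags:
    print(j, rho_wn[j], round(rho_ma1[j], 3), round(rho_ar1[j], 3))
```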
Definition of AR(p)?
Deriving the properties of AR(p) is cumbersome; what tools could we use?
Definition of AR(p)?
Y_t = c + phi_1*Y_{t-1} + phi_2*Y_{t-2} + phi_3*Y_{t-3} + … + phi_p*Y_{t-p} + epsilon_t
where epsilon_t ~ WN(0, sigma^2)
Deriving the properties of AR(p) is cumbersome; what tools could we use?
- difference equations
- the lag operator
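A sketch using the lag-operator idea (the coefficients are hypothetical): an AR(p) is stationary when all roots of the lag polynomial 1 - phi_1*z - … - phi_p*z^p lie outside the unit circle:

```python
import numpy as np

def ar_is_stationary(phi):
    """Stationarity check for Y_t = c + phi_1*Y_{t-1} + ... + phi_p*Y_{t-p} + eps_t:
    all roots of 1 - phi_1*z - ... - phi_p*z^p must lie outside the unit circle."""
    # np.roots expects coefficients from the highest power of z down to the constant
    coeffs = np.r_[-np.asarray(phi, dtype=float)[::-1], 1.0]
    roots = np.roots(coeffs)
    return roots, bool(np.all(np.abs(roots) > 1))

print(ar_is_stationary([0.5, 0.3]))  # AR(2): roots outside the unit circle -> stationary
print(ar_is_stationary([1.1]))       # explosive AR(1): root inside -> not stationary
```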