3. Time-Series Analysis Flashcards
What is an autoregressive model (AR model)? Give an example.
When will you want to use an AR model?
When the dependent variable is regressed against one or more lagged values of itself, the resultant model is called an autoregressive (AR) model. For example, this month's sales for a firm could be regressed against the firm's sales in the previous month.
In the presence of serial correlation in a time-series model.
Recall that statistical inferences based on ordinary least squares estimates for an autoregressive time series model may be invalid unless the time series being modeled is covariance stationary.
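As an illustrative sketch (not from the curriculum), fitting an AR(1) model in Python with statsmodels might look like this; the series is simulated and the coefficients 2.0 and 0.6 are arbitrary:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Simulate a covariance stationary AR(1) series: x(t) = 2.0 + 0.6*x(t-1) + E
rng = np.random.default_rng(42)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 2.0 + 0.6 * x[t - 1] + rng.normal()

# Regress the series on one lagged value of itself
fit = AutoReg(x, lags=1).fit()
print(fit.params)  # estimates of b0 and b1
```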
What are the three conditions for a time series to be covariance stationary?
- Constant and finite expected value. The expected value of the time series is constant over time.
- Constant and finite variance. The time series’ volatility around its mean (i.e., the distribution of the individual observations around the mean) does not change over time.
- Constant and finite covariance between values at any given lag. The covariance of the time series with leading or lagged values of itself is constant.
ALL COVARIANCE STATIONARY TIME SERIES HAVE A FINITE MEAN-REVERTING LEVEL.
Explain how autocorrelations of the residuals can be used to test whether the autoregressive model fits the time series.
Test each residual autocorrelation for significance with a t-test, using standard error = 1/(number of observations)^(1/2) and (T - 2) degrees of freedom. If none of the residual autocorrelations is significantly different from zero, the model fits the time series.
See p. 275.
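A minimal sketch of this test, assuming we already have the residuals from a fitted model (the helper name and the number of lags are mine):

```python
import numpy as np
from statsmodels.tsa.stattools import acf

def residual_autocorr_t_stats(residuals, n_lags=4):
    # t-stat for each residual autocorrelation; standard error = 1/sqrt(T)
    T = len(residuals)
    rho = acf(residuals, nlags=n_lags)   # rho[0] is lag 0, always 1
    return rho[1:] / (1.0 / np.sqrt(T))  # compare to t critical value, T-2 df
```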
What is the formula for calculating the mean-reverting level of an AR(1) model?
x(t) = b0/(1-b1)
All covariance stationary time series have a finite mean-reverting level. An AR(1) time series will have a finite mean-reverting level when the absolute value of the lag coefficient is less than 1, ABS(b1)<1.
True or false?
True.
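A quick numerical check (a toy example, not from the text): with b0 = 2 and b1 = 0.6, the mean-reverting level is 2/(1 - 0.6) = 5, and a long simulated series should average out near that level:

```python
import numpy as np

b0, b1 = 2.0, 0.6      # ABS(b1) < 1, so a finite mean-reverting level exists
level = b0 / (1 - b1)  # = 5.0

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(10_000):
    x = b0 + b1 * x + rng.normal()
    samples.append(x)

print(level, np.mean(samples))  # the sample mean is close to 5.0
```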
Describe the characteristics of random walk processes and give the equation. Is this series covariance stationary?
If a time series follows a random walk process, the predicted value of the series in one period is equal to the value of the series in the previous period plus a random error term.
x(t) = x(t-1) + E
The best forecast of x(t) is x(t-1).
Notice! b1=1
So, this time series does not exhibit covariance stationarity. This series does not have a FINITE MEAN-REVERTING LEVEL:
x(t) = b0/(1-b1) = b0/0, which is undefined.
BUT,
For statistical reasons, you cannot directly test whether the coefficient on the independent variable in an AR time series is equal to 1.
You need to perform the Dickey-Fuller test.
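An illustrative sketch of the library route, using statsmodels' adfuller on a simulated random walk:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Simulate a random walk: x(t) = x(t-1) + E
rng = np.random.default_rng(7)
x = np.cumsum(rng.normal(size=500))

stat, pvalue, *_ = adfuller(x)
print(pvalue)  # large p-value: cannot reject the unit-root null
```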
Describe a time series defined as a ‘random walk with a drift’. Is this series covariance stationary?
If a time series follows a random walk with a drift, the intercept term is not equal to zero. That is, in addition to a random error term, the time series is expected to increase or decrease by a constant amount each period. A random walk with a drift can be described as:
x(t) = b0 + x(t-1) + E
Notice! b1=1
So, this time series does not exhibit covariance stationarity. This series does not have a FINITE MEAN-REVERTING LEVEL:
x(t) = b0/(1-b1) = b0/0, which is undefined.
BUT,
For statistical reasons, you cannot directly test whether the coefficient on the independent variable in an AR time series is equal to 1.
You need to perform the Dickey-Fuller test.
How to determine whether a time series is covariance stationary?
Perform the Dickey-Fuller test.
For statistical reasons, you cannot directly test whether the coefficient on the independent variable in an AR time series is equal to 1.
Dickey and Fuller transform the AR(1) model to run a simple regression. To transform the model, they start with the basic AR(1) model and subtract x(t-1) from both sides:
x(t) - x(t-1) = b0 + (b1 - 1) * x(t-1) + E
Then, rather than directly testing whether the original coefficient is different from 1, they test whether (b1 - 1) is different from zero using a modified t-test.
If (b1 - 1) is not significantly different from zero, then b1 must be equal to 1 and, therefore, the series must have a unit root.
IN THE EXAM, THEY USUALLY USE
(b1 - 1) = g
So, if the null (g=0) cannot be rejected, your answer is that the time series has a unit root.
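A sketch of the transformed regression itself, assuming plain OLS; note that in practice the t-statistic on g must be judged against Dickey-Fuller critical values (which a routine such as statsmodels' adfuller handles for you), not ordinary t-table values:

```python
import numpy as np
import statsmodels.api as sm

def dickey_fuller_g(x):
    # Regress x(t) - x(t-1) on x(t-1): dx = b0 + g * x(t-1) + E
    dx = np.diff(x)
    X = sm.add_constant(x[:-1])
    res = sm.OLS(dx, X).fit()
    # res.tvalues[1] must be compared to Dickey-Fuller critical values
    return res.params[1], res.tvalues[1]  # g and its (modified) t-stat
```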
Explain the ‘first differencing’ process.
When do you need to perform it? How does it work?
If we believe a time series is a random walk (i.e., has a unit root), we can transform the data into a covariance stationary time series using a procedure called ‘first differencing’.
The method involves subtracting the value of the time series (the dependent variable) in the immediately preceding period from the current value of the time series to define a new dependent variable, y.
If the original time series of x has a unit root, the change in x, x(t) - x(t-1) = E, is just the error term:
y(t) = E
In the AR(1) model of the new series, y(t) = b0 + b1 * y(t-1) + E, both coefficients are zero: b1 = b0 = 0.
The mean-reverting level is b0/(1-b1) = 0/1 = 0, so the differenced series is covariance stationary.
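A sketch, assuming a simulated random walk with drift; np.diff performs the first differencing:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
x = np.cumsum(0.5 + rng.normal(size=500))  # random walk with drift (unit root)

y = np.diff(x)  # first difference: y(t) = x(t) - x(t-1)

print(adfuller(x)[1])  # high p-value: x has a unit root
print(adfuller(y)[1])  # near-zero p-value: y is covariance stationary
```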
How to detect seasonality?
Run t-tests on the lagged autocorrelations of the residuals. If we can reject the hypothesis that the autocorrelation equals zero, seasonality is present.
Example: quarterly hotel occupancy.
There is significant autocorrelation in the fourth lag.
It means that occupancy in any quarter is related to occupancy in the previous quarter and to occupancy in the same quarter of the previous year.
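A sketch with a hypothetical quarterly series built to have a lag-4 component; we fit a plain AR(1) and test the residual autocorrelation at lag 4:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.stattools import acf

# Hypothetical quarterly occupancy with a seasonal (lag-4) component
rng = np.random.default_rng(3)
occupancy = np.zeros(200)
for t in range(4, 200):
    occupancy[t] = 1.0 + 0.3 * occupancy[t - 1] + 0.5 * occupancy[t - 4] + rng.normal()

# Fit a plain AR(1) and inspect the residual autocorrelation at lag 4
residuals = AutoReg(occupancy, lags=1).fit().resid
T = len(residuals)
t_stat = acf(residuals, nlags=4)[4] / (1.0 / np.sqrt(T))
print(t_stat)  # significant -> seasonality at lag 4
```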
How to correct for seasonality?
Give an example.
To adjust for seasonality in an AR model, an additional lag of the dependent variable (corresponding to the same period in the previous year) is added to the original model as another independent variable.
Example: to model hotel occupancy where the significant residual correlation at lag 4 indicates seasonality in the quarterly time series, we add a lagged value of the dependent variable to the original model that corresponds to the seasonal pattern.
To model the autocorrelation of the same quarters from year to year, we use an AR(1) model with a seasonal lag:
ln x(t) = b0 + b1 * ln x(t-1) + b2 * ln x(t-4) + E
NOTE THAT THIS SPECIFICATION, THE INCLUSION OF A SEASONAL LAG, DOES NOT RESULT IN AN AR(2) MODEL. It results in an AR(1) model incorporating a seasonal lag term.
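Continuing the occupancy sketch above, statsmodels lets you pass an explicit list of lags, so the seasonal lag is added without turning the model into an AR(2):

```python
from statsmodels.tsa.ar_model import AutoReg

# AR(1) with a seasonal lag: regress x(t) on x(t-1) and x(t-4) only
seasonal_fit = AutoReg(occupancy, lags=[1, 4]).fit()
print(seasonal_fit.params)  # b0, b1 (lag 1), b2 (seasonal lag 4)
```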
Explain Autoregressive Conditional Heteroskedasticity.
When examining a single time series, such as one fitted with an AR model, autoregressive conditional heteroskedasticity exists if the variance of the residuals in one period is dependent on the variance of the residuals in a previous period. When this condition exists, the standard errors of the regression coefficients in AR models, and the hypothesis tests of these coefficients, are invalid.
How to test whether a time series is ARCH(1)?
To test whether a time-series is ARCH(1), the squared residuals from an estimated time-series model, E(t)^2, are regressed on the first lag of the squared residuals E(t-1)^2.
The ARCH(1) regression model is expressed as:
E(t)^2 = a0 + a1 * E(t-1)^2 + m
where m is the error term.
If the coefficient a1 is statistically different from zero, the time series is ARCH(1).
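A sketch of this regression, assuming we already have the residuals from an estimated time-series model (the helper name is mine):

```python
import numpy as np
import statsmodels.api as sm

def arch1_test(residuals):
    # Regress E(t)^2 on E(t-1)^2: e2 = a0 + a1 * lagged e2 + m
    e2 = residuals ** 2
    X = sm.add_constant(e2[:-1])
    res = sm.OLS(e2[1:], X).fit()
    return res.params[1], res.pvalues[1]  # a1 and its p-value
```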
Occasionally an analyst will run a regression using two time series (i.e., time series utilizing two different variables). For example, using the market model to estimate the equity beta for a stock, an analyst regresses a time series of the stock's returns, y(t), on a time series of returns for the market, x(t):
y(t) = b0 + b1 * x(t) + E(t)
To test whether the two time series have unit roots, the analyst runs separate Dickey-Fuller tests. What are the five possible results? How should the analyst proceed in each scenario?
- Both time series are covariance stationary
- Only the dependent variable time series is covariance stationary
- Only the independent variable time series is covariance stationary
- Neither time series is covariance stationary and the two series are not cointegrated
- Neither time series is covariance stationary and the two series are cointegrated (i.e., economically linked, so that they follow the same long-term trend)
In scenario 1 the analyst can use linear regression, and the coefficients should be statistically reliable, but regressions in scenarios 2 and 3 will not be reliable. Whether linear regression can be used in scenarios 4 and 5 depends on whether the two time series are cointegrated: if they are (scenario 5), linear regression can be used; if not (scenario 4), the results are spurious.
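To check for cointegration in practice (scenario 4 vs. 5), one option is statsmodels' Engle-Granger test; a sketch with simulated series:

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(11)
x = np.cumsum(rng.normal(size=500))       # unit-root (random walk) series
y = 2.0 + 0.8 * x + rng.normal(size=500)  # shares x's trend -> cointegrated

stat, pvalue, _ = coint(y, x)
print(pvalue)  # small p-value: reject "no cointegration"
```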
What does it mean when ‘the null is rejected’ in a Dickey-Fuller test?
If the null (g = 0) is rejected in a Dickey-Fuller test, the time series (in this case, the series of error terms) is covariance stationary.