Wronged Questions: Time Series Flashcards
The standard deviation of a random walk is ______ than that of the differenced series.
Larger
Differencing a logarithmically transformed time series will likely result in a series that is stationary in the _____________ and ___________.
Mean, variance
A logarithmic transformation will likely result in a time series that is stationary in the ______.
Variance
Differencing a time series will likely result in a time series that is stationary in the ______.
Mean
Computing the differences between consecutive observations to make a non-stationary series stationary
Differencing
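As a quick illustration, first-order differencing removes a linear trend (numpy sketch; the series values are made up):

```python
import numpy as np

# Hypothetical series with a deterministic linear trend: y_t = 2t
y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])

# First-order differences: w_t = y_t - y_(t-1)
w = np.diff(y)

print(w)  # constant differences: the trend is removed
```

Note the differenced series is one observation shorter than the original.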
Properties of stationary time series
- properties do not depend on the time of observation
- includes cyclic behaviour
- constant variance, no predictable patterns
Properties of non-stationary time series
- includes time trends
- random walks
- seasonality
Model with no trend
Y_t = B0 + e_t
Model with linear trend
Y_t = B0 + B1*t + e_t
Model with quadratic trend
Y_t = B0 + B1*t + B2*t^2 + e_t
Two criteria for a weakly stationary model
1) E[Yt] does not depend on t
2) Cov[Ys, Yt] depends only on |t-s|
A series has strong stationarity if the entire distribution of Yt is ____ over time.
Constant
______ _______ can be used to identify stationarity
Control charts
Function of a filtering procedure
Reduces observations to a stationary series
Name two filtering techniques
1) Differencing
2) Logarithmic transformation
White noise process
Stationary process that displays no apparent patterns through time; the observations are i.i.d.
T/F: For a white noise process, forecasts do not depend on how far into the future we want to forecast
True
Random walk is the partial sum of a _____ ______ process.
White noise
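This is easy to verify numerically; a sketch with simulated Gaussian white noise (seed and length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.standard_normal(500)   # white noise process c_1, ..., c_n
y = np.cumsum(c)               # random walk: y_t = c_1 + c_2 + ... + c_t

# Differencing the random walk recovers the original white noise terms
assert np.allclose(np.diff(y), c[1:])
```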
T/F: ME and MPE detect trend patterns that are not captured by the model
True
T/F: MSE, MAE, and MAPE can detect fewer trend patterns than ME
False
T/F: MPE and MAPE examine error relative to the actual value
True
T/F: For AR(1), the range of possible values for p is 0 <= p <= 1.
False. For AR(1), the range of possible values for p is -1 <= p <= 1.
T/F: For AR(1), the range of possible values for B0 is 0 < B0 < inf.
False. The range of possible values for B0 is -inf < B0 < inf.
T/F: For AR(1), if B1 = 1, then Y_t is a non-stationary time series.
True. If B1 = 0, then Y_t is a stationary (white noise) process; B1 = 1 means Y_t is a random walk, which is non-stationary.
T/F: For AR(1), pk decreases linearly as k increases.
False, it decreases geometrically.
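For instance, with an assumed B1 = 0.6, the lag-k autocorrelations pk = B1^k shrink by a constant factor each lag (geometric), not by a constant amount (linear):

```python
beta1 = 0.6                                   # assumed AR(1) coefficient
rho = [beta1 ** k for k in range(1, 5)]       # pk = B1^k for k = 1..4

# Successive ratios are constant (geometric decay);
# successive differences are not (so the decay is not linear)
ratios = [rho[k + 1] / rho[k] for k in range(len(rho) - 1)]
print(rho, ratios)
```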
T/F: An AR(1) process is a meandering process.
False. There are cases where AR(1) is not a meandering process. Parameter B1 must be positive and significantly different from 0 (but still less than 1) for it to be a meandering process.
Meandering process characteristics
- positive, significant autocorrelation at lag 1
- B1 needs to be significantly different from 0
- B1 needs to be less than 1
T/F: w=1 results in no smoothing
False. w=0 results in no smoothing
SS(w)
Sum of squared one-step prediction errors
T/F: Comparing the SS(w) for different values of w can help in choosing the optimal value of w.
True.
T/F: When exponential smoothing is used for forecasting, the smoothed estimates are also called discounted least squares estimates.
True. Exponential smoothing can be expressed as weighted least squares. The weight used is w_t = w^(n-t), which is why the estimates are also called discounted least squares estimates.
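A sketch of the underlying recursion hat(S)_t = (1 - w)*y_t + w*hat(S)_(t-1); unrolling it gives each observation a weight proportional to w^(n-t), which is why the estimates are called "discounted". The function name and sample values here are my own:

```python
def exp_smooth(y, w, s0=0.0):
    """Single exponential smoothing: s_t = (1 - w) * y_t + w * s_(t-1)."""
    s, out = s0, []
    for yt in y:
        s = (1 - w) * yt + w * s
        out.append(s)
    return out

# w = 0 reproduces the series exactly (no smoothing)
print(exp_smooth([3.0, 1.0, 4.0], w=0.0))  # [3.0, 1.0, 4.0]
```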
T/F: All white noise processes are non-stationary.
False
T/F: As time, t, increases, the variance of a random walk increases.
True
T/F: First-order differencing a random walk series results in a white noise series.
True
T/F: A white noise process is weakly stationary.
True, anything that is strongly stationary is also weakly stationary
T/F: Adding a constant alpha to a white noise process {c1, c2, …, ct}, yt = ct + α, results in a random walk.
False. Adding a constant to white noise yields a white noise process with mean α, which is still stationary, not a random walk.
T/F: pk cannot be negative.
False. pk can be negative, indicating an inverse relationship between observations k time units apart
T/F: pk is only defined for stationary processes
False. Autocorrelation can be calculated for both stationary and non-stationary processes.
T/F: pk always increases as k increases.
False, pk typically decreases as k increases
T/F: pk measures the correlation of the series with itself at different times.
True
T/F: A process with pk > 0 for all k is non-stationary.
False, a process can have positive pk values and be stationary
Unit root test
Tests whether a time series variable is non-stationary and possesses a unit root
Null hypothesis of unit root test
The autoregressive polynomial of the series has a root equal to unity (i.e., Φ = 1); equivalently, a random walk is a good fit for the data.
T/F: Unit root tests are primarily used for assessing seasonality in time series data.
False. Unit root tests are used to assess fit of a random walk model, not seasonality.
T/F: A unit root test evaluates the fit of a white noise process.
False. A unit root test evaluates the fit of a random walk model, not a white noise process.
T/F: The Dickey-Fuller test is used to detect the presence of a unit root in the time series.
True. The Dickey-Fuller test specifically tests for a unit root, helping to determine if a random walk model is a good fit for the observed data.
T/F: Unit root tests confirm the absence of volatility clustering in financial time series.
False. Unit root tests assess the fit of a random walk model, not volatility clustering.
T/F: If Φ = 1, then the model reduces to a stationary process.
False. If Φ = 1, then the model reduces to a random walk, which is nonstationary.
T/F: The moving average technique cannot be used for forecasting.
False. Moving averages can be used for forecasting.
T/F: Moving averages typically assign higher weights to older observations
False. Moving averages typically assign equal weights to all observations
T/F: Moving averages are easy to compute but challenging to interpret.
False. Moving averages are both easy to compute and easy to interpret.
T/F: Moving averages can be expressed as weighted least squares estimates.
True. Moving averages can also be expressed as weighted least squares estimates where recent observations within the window are given higher weights than observations that are not in the window.
T/F: If the moving average estimate at time t is based on the latest k observations up to and including time t, a larger k results in less smoothing of the time series.
False. The choice of k will depend on the amount of smoothing desired; the larger the value of k, the smoother the estimate will be (because more averaging is done).
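A sketch of smoothing with a k-term moving average (function name and data are my own); k = 1 reproduces the series unchanged, and a larger k averages more points, hence more smoothing:

```python
import numpy as np

def moving_average(y, k):
    """Smoothed estimate at time t: mean of the latest k observations up to t."""
    y = np.asarray(y, dtype=float)
    return np.array([y[t - k + 1 : t + 1].mean() for t in range(k - 1, len(y))])

print(moving_average([2.0, 4.0, 3.0, 5.0], k=2))  # averages: 3.0, 3.5, 4.0
```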
T/F: If bar(y) ≠ 0 for a white noise process, then the time series is nonstationary in the mean.
False. A white noise process is stationary; therefore, it is always stationary in the mean.
T/F: If s^2 > 0 for a white noise process, then the time series is nonstationary in the variance
False. A white noise process is stationary; therefore, it is always stationary in the variance.
T/F: Exponential smoothing cannot handle data with a linear trend.
False. Exponential smoothing can be adapted for trends. If there is a linear trend in time, this can be handled using double exponential smoothing.
T/F: The initial value of the series, y_0, must be chosen to be 0.
False. y_0 can be chosen to be 0, y_1, or bar(y); it is not always 0.
T/F: Exponential smoothing gives equal weights to all past observations.
False. Exponential smoothing gives exponentially decreasing weights to older observations.
T/F: The weight in exponential smoothing is traditionally chosen between 0.1 and 0.3.
False. w is traditionally chosen within the interval (0.70, 0.95).
T/F: Goodness of fit for exponential smoothing can be assessed using the sum of squared one-step prediction errors.
True
T/F: For a unit root test, the time series is assumed to follow a stationary process under the null hypothesis.
False. The time series is assumed to follow a random walk (non-stationary) process under the null hypothesis.
T/F: The test statistic does not follow the usual t-distribution.
True. The test statistic does not follow the usual t-distribution. Rather, it follows a special distribution with critical values developed by Dickey and Fuller.
T/F: The Dickey-Fuller test is a two-tailed hypothesis test.
False. The Dickey-Fuller test is a left-tailed (i.e., one-tailed) test that has the following null and alternative hypotheses:
H0: Φ = 1
H1: Φ < 1
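A numpy sketch of the statistic behind the simplest (no-intercept) Dickey-Fuller regression; the function name and simulated data are assumptions, and the result should be compared against Dickey-Fuller critical values rather than the usual t table:

```python
import numpy as np

def dickey_fuller_stat(y):
    """t-ratio for (phi_hat - 1) in the regression y_t = phi * y_(t-1) + e_t."""
    y = np.asarray(y, dtype=float)
    x, z = y[:-1], y[1:]
    phi_hat = (x @ z) / (x @ x)          # least-squares slope
    resid = z - phi_hat * x
    s2 = resid @ resid / (len(z) - 1)    # residual variance
    se = np.sqrt(s2 / (x @ x))           # standard error of phi_hat
    return (phi_hat - 1.0) / se

rng = np.random.default_rng(0)
stationary = rng.standard_normal(500)    # white noise: no unit root
print(dickey_fuller_stat(stationary))    # strongly negative: reject Φ = 1
```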
T/F: The disturbance term in the model is assumed to be serially correlated.
False. The disturbance term in the model is assumed to be serially uncorrelated.
T/F: If the test statistic is statistically significant, we conclude the random walk model is a good fit.
False. If the test statistic is statistically significant, we reject the null hypothesis. Rejecting the null hypothesis means that we conclude the random walk model is not a good fit.
T/F: Control charts are used to detect non-stationarity in a time series.
True. Control charts are useful graphical tools for detecting trends and identifying unusual points. They detect nonstationarity in a time series.
T/F: A control chart has superimposed lines called reference limits.
False. Control charts have superimposed lines called control limits. Two well-known control limits are the upper control limit and the lower control limit.
T/F: An R chart examines the stability of the mean of a time series.
False. An R chart helps examine the stability of the variability of a time series.
Y_n+l for white noise model
hat(y)_(n+l) = bar(y)
se y_(n+l) for white noise model
se = s * sqrt(1 + 1/n)
Wt for random walk forecast
w_t = y_t - y_(t-1); their average is bar(w) = (y_n - y_0) / n
Y_n+l for random walk model
hat(y)_(n+l) = y_n + l * bar(w)
se y_(n+l) for random walk model
se = s_w * sqrt(l)
Mean Error (ME)
ME = average of the forecast errors e_t = y_t - hat(y)_t
Mean Percent Error (MPE)
MPE = 100 * average of e_t / y_t
Mean Squared Error (MSE)
MSE = average of e_t^2
Mean Absolute Error (MAE)
MAE = average of |e_t|
Mean Absolute Percent Error (MAPE)
MAPE = 100 * average of |e_t / y_t|
E[Yt] for AR(1) model
E[Y_t] = B0 / (1 - B1)
Var[Yt] for AR(1) model
Var[Y_t] = sigma^2 / (1 - B1^2)
Pk for AR(1) model
p_k = B1^k
b1 estimation for AR(1)
b1 = r1 (the lag-1 sample autocorrelation)
r1 (autocorrelation lag 1)
r1 = [sum from t=2 to n of (y_(t-1) - bar(y)) * (y_t - bar(y))] / [sum from t=1 to n of (y_t - bar(y))^2]
b0 estimation for AR(1)
b0 = bar(y) * (1 - b1)
s^2 for AR(1) model
s^2 = [sum of squared residuals hat(e)_t^2] / (n - 3)
hat(Var)[Yt]
hat(Var)[Y_t] = s^2 / (1 - b1^2)
se y_(n+l) for AR(1) model
se = s * sqrt(1 + b1^2 + b1^4 + ... + b1^(2(l-1)))
Single smoothing with moving average
hat(S)_t = (y_t + y_(t-1) + ... + y_(t-k+1)) / k
b0 prediction for single smoothing with moving average
b0 = hat(S)_n, giving the forecast hat(y)_(n+l) = hat(S)_n
Yt for single smoothing with moving average
Y_t = B0 + e_t (locally constant mean model)
Yt for double smoothing with moving average
Y_t = B0 + B1*t + e_t (locally linear trend model)
Double smoothing with moving average
hat(S)_t^(2) = (hat(S)_t + hat(S)_(t-1) + ... + hat(S)_(t-k+1)) / k
b0 prediction for double smoothing with moving average
b0 = 2*hat(S)_n - hat(S)_n^(2)
b1 prediction for double smoothing with moving average
b1 = (2 / (k - 1)) * (hat(S)_n - hat(S)_n^(2))
Y_n+l for double smoothing with moving average
hat(y)_(n+l) = b0 + b1*l
Yt for single exponential smoothing
Y_t = B0 + e_t
b0 prediction for single exponential smoothing
b0 = hat(S)_n, giving the forecast hat(y)_(n+l) = hat(S)_n
hat(S)_t for single exponential smoothing
hat(S)_t = (1 - w)*y_t + w*hat(S)_(t-1)
Yt for double exponential smoothing
Y_t = B0 + B1*t + e_t
hat(S)_t for double exponential smoothing
hat(S)_t^(2) = (1 - w)*hat(S)_t + w*hat(S)_(t-1)^(2)
b0 prediction for double exponential smoothing
b0 = 2*hat(S)_n - hat(S)_n^(2)
b1 prediction for double exponential smoothing
b1 = ((1 - w) / w) * (hat(S)_n - hat(S)_n^(2))
Y_n+l for double exponential smoothing
hat(y)_(n+l) = b0 + b1*l
Seasonal Autoregressive Models, SAR(p)
Y_t = B0 + B1*Y_(t-SB) + B2*Y_(t-2*SB) + ... + Bp*Y_(t-p*SB) + e_t, where SB is the seasonal base (e.g., SB = 12 for monthly data)
ARCH(p) model
ε_t with conditional variance sigma_t^2 = w + a1*ε_(t-1)^2 + ... + ap*ε_(t-p)^2
Var[ε_t] for ARCH(p) model
Var[ε_t] = w / (1 - a1 - ... - ap)
GARCH(p) model
sigma_t^2 = w + a1*ε_(t-1)^2 + ... + ap*ε_(t-p)^2 + d1*sigma_(t-1)^2 + ... + dq*sigma_(t-q)^2
Var[ε_t] for GARCH(p) model
Var[ε_t] = w / (1 - a1 - ... - ap - d1 - ... - dq)
T/F: k = 1 results in no smoothing.
True. The larger the k, the smoother the estimate.
T/F: It is risky to choose a small value for k because we may lose sight of the real trends due to oversmoothing.
False. Oversmoothing results from choosing a large k, not a small one; the risk with a small k is undersmoothing, i.e., chasing noise rather than losing real trends.
T/F: When smoothing with moving averages is used for forecasting, the model is called a globally constant mean model.
False. When smoothing with moving averages is used for forecasting, the model is called a locally constant mean model.
Autocorrelation Standard Error
se(r_k) ≈ 1 / sqrt(n)
Autocorrelation test statistic
t-stat = r_k / se(r_k) = r_k * sqrt(n)
T/F: Seasonal effects in time series can be captured using categorical variables.
True
T/F: Both Mean Error and Mean Absolute Error statistics can detect more trend patterns than Mean Square Error can.
False. MSE, MAPE, and MAE can detect more trend patterns than ME.
T/F: GARCH models assume a constant mean level for the series.
True
T/F: GARCH models are used to model the conditional variance of the series.
True
T/F: Volatility clustering can be modeled with GARCH models.
True
T/F: GARCH models predict volatility based solely on long-term volatility parameters.
False. GARCH models predict volatility based on both long-term volatility parameters (the intercept) and short-term effects from past variances and past squared residuals, not solely on long-term parameters.
T/F: A weakly stationary process is not stationary in the variance.
False. A weakly stationary process is stationary in the variance
T/F: Applying logarithmic transformation to a series can help stabilize its mean.
False. Applying logarithmic transformation to a series can help stabilize its variance.
T/F: The stochastic component of a linear trend in time model is non-stationary.
False. The stochastic component of a linear trend in time model is a stationary white noise process, while the stochastic component of a random walk model is itself a non-stationary random walk process (sum of white noise processes).
T/F: The sample variance of a white noise process is greater than the sample variance of the differenced series of a white noise process.
False. The sample variance of a white noise process is smaller than the sample variance of the differenced series of a white noise process.
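A simulation check (seed and sample size arbitrary): if e_t is white noise with variance sigma^2, then Var[e_t - e_(t-1)] = 2*sigma^2, so the differenced series has the larger variance:

```python
import numpy as np

rng = np.random.default_rng(42)
e = rng.normal(0.0, 1.0, size=200_000)  # white noise, sigma^2 = 1
d = np.diff(e)                          # Var[d] = Var[e_t] + Var[e_(t-1)] = 2

print(np.var(e), np.var(d))  # roughly 1 and 2
```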
T/F: ARCH and GARCH models predict the mean level of the series without considering volatility.
False. ARCH/GARCH models specifically focus on modeling time-varying volatility.
T/F: ARCH models assume constant conditional variance across all time periods.
False. ARCH models allow conditional variance to change over time, contrary to the assumption of constant unconditional variance.
T/F: GARCH models allow the variance of residuals to depend on past squared residuals and past variances.
True.
T/F: Volatility clustering is a phenomenon best modeled by fixed seasonal effects models.
False. Volatility clustering is addressed by ARCH/GARCH models, not by seasonal models.
Phenomenon where periods of high volatility tend to be followed by more periods of high volatility, and periods of low volatility tend to be followed by more periods of low volatility.
Volatility clustering
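A simulation sketch of volatility clustering under an ARCH(1) process (parameters w = 0.2 and a1 = 0.3 are chosen for illustration): a large shock raises the next period's conditional variance, so squared residuals are positively autocorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)
w, a1, n = 0.2, 0.3, 5_000
eps = np.zeros(n)
sigma2 = w / (1 - a1)                   # start at the unconditional variance
for t in range(n):
    eps[t] = rng.normal(0.0, np.sqrt(sigma2))
    sigma2 = w + a1 * eps[t] ** 2       # conditional variance for next period

# Positive lag-1 autocorrelation of squared residuals = volatility clustering
r1 = np.corrcoef(eps[:-1] ** 2, eps[1:] ** 2)[0, 1]
print(r1)
```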
T/F: Seasonal autoregressive models use a fixed seasonal effect that does not change over time.
False. Seasonal autoregressive models account for seasonality by including terms for specific seasonal lags.
T/F: Fixed seasonal effects models cannot represent seasonality as a trigonometric function of time.
False. Fixed seasonal effects models can and often do use trigonometric functions to represent seasonal effects.
T/F: Seasonal smoothed exponential models allow for the seasonal component to adapt over time.
True. Seasonal smoothed exponential models, like the Holt-Winter additive model, adapt the seasonal component based on past data, allowing it to change over time.
T/F: The Holt-Winter additive model ignores seasonality in its forecasts.
False. The Holt-Winter additive model explicitly includes a seasonal component in its forecasts.
T/F: SAR(P) models include observations from non-seasonal periods to model seasonality.
False. SAR(P) models specifically focus on seasonal lags, not non-seasonal periods.
T/F: Conditional least squares can be used to estimate β0 and β1 for AR(1).
True
T/F: The residuals are calculated as y_t + (b0 + b1*y_(t-1)) for AR(1).
False. The residuals are calculated as y_t - (b0 + b1*y_(t-1)).
T/F: Forecast intervals remain constant regardless of the number of steps ahead for AR(1).
False. Forecast intervals widen as the number of steps into the future increases.
T/F: For stationary AR(1) model, the variance of the residual is greater than the variance of the time series model.
False. The variance of the residual is less than the variance of the time series model.
T/F: If β1 = 0, then yt is a random walk model.
False. If β1 = 0, then yt is a white noise model.
T/F: For a stationary AR(1) model, the lag k autocorrelation converges to 0 as k increases.
True
Mean of random walk
E[Y_t] = t * mu_c, where mu_c is the mean of the underlying white noise process
Variance of random walk
Var[Y_t] = t * sigma_c^2, where sigma_c^2 is the variance of the underlying white noise process
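A Monte Carlo check (seed, path count, and unit white-noise variance are assumed) that the variance of a random walk grows linearly in t:

```python
import numpy as np

rng = np.random.default_rng(7)
# 20,000 simulated random-walk paths of length 50, built from unit-variance white noise
paths = np.cumsum(rng.standard_normal((20_000, 50)), axis=1)

var_at_t = paths.var(axis=0)      # sample Var[y_t] across paths, t = 1..50
print(var_at_t[0], var_at_t[-1])  # roughly 1 and 50
```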