Time Series #1 Flashcards

1
Q

What are time series data?

A

Observations of a variable over time, often correlated across time (serial correlation).

2
Q

What is serial correlation (autocorrelation)?

A

The correlation of a time-series variable with its lagged values.

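A minimal simulation sketch of serial correlation (not part of the original cards): estimate the lag-1 autocorrelation of a simulated AR(1) series, where the true coefficient 0.7 is an assumed value for illustration.

```python
import numpy as np

# Simulate an AR(1) series y_t = 0.7 * y_{t-1} + e_t (0.7 is assumed).
rng = np.random.default_rng(0)
T = 5000
e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * y[t - 1] + e[t]

# Sample autocorrelation at lag 1: correlation of y_t with y_{t-1}.
rho1 = np.corrcoef(y[1:], y[:-1])[0, 1]
print(round(rho1, 2))  # close to the true value 0.7
```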
3
Q

What are common frequencies of time-series data?

A

Intra-day, daily, monthly, quarterly, annual, etc.

4
Q

How do time-series differ from cross-sectional data?

A
  1. Indexed by time (t) rather than entities (i).
  2. Observations are inherently sequential.
  3. Bound by start and end dates.
5
Q

What kinds of patterns might time-series exhibit?

A

Trends, seasonality, cycles and random noise.

6
Q

How are time-series used in finance?

A
  1. Forecasting asset prices
  2. Testing financial models like CAPM
  3. Measuring volatility
  4. Analyzing market efficiency
7
Q

What is the linear model assumption for CLRM?

A

Yt = α + βXt + Ut

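A sketch of estimating this linear model by OLS on simulated data; the values α = 1 and β = 2 are assumed purely for illustration.

```python
import numpy as np

# Simulate Y_t = alpha + beta * X_t + u_t (alpha=1, beta=2 assumed).
rng = np.random.default_rng(1)
T = 1000
X = rng.standard_normal(T)
u = rng.standard_normal(T)
Y = 1.0 + 2.0 * X + u

# Design matrix with a constant column for the intercept alpha.
Z = np.column_stack([np.ones(T), X])
coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
alpha_hat, beta_hat = coef
print(round(alpha_hat, 2), round(beta_hat, 2))  # near 1 and 2
```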
8
Q

What does the random sample assumption state?

A

The covariance between residuals at different points in time must be zero, ensuring no serial correlation.

9
Q

What does the sample variation assumption ensure?

A

The variance of X must be greater than zero, i.e. X must vary across observations.

10
Q

What is the no endogeneity assumption?

A

Xt and Ut have to be uncorrelated.

11
Q

What is the homoskedasticity assumption?

A

The variance of the errors must be constant over time.

12
Q

What is the normality assumption?

A

The residuals must be normally distributed to ensure reliable hypothesis testing.

13
Q

What happens to small and large samples if the normality assumption is violated?

A

In small samples, test results become unreliable; in large samples, the normality assumption can be dropped because OLS estimates are asymptotically normal.

14
Q

Why is having no endogeneity in time-series challenging?

A

Endogeneity often arises because Xt and Yt are determined simultaneously.

15
Q

What is asymptotic normality?

A

With large T, the OLS estimates are approximately normally distributed, even without assuming normality.

16
Q

How is normality tested?

A

Using the Jarque-Bera test, which checks skewness and kurtosis.

17
Q

What does a JB test statistic represent?

A

It compares the residual distribution’s skewness and kurtosis to a normal distribution.

18
Q

What does a JB test result indicate?

A

A high JB statistic or a low p-value indicates that the residuals deviate from normality.
If the p-value is above the significance threshold, normality is not rejected.

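A sketch of the Jarque-Bera test in practice, using `scipy.stats.jarque_bera` on simulated residuals; the normal and heavy-tailed (Student-t) samples are assumed for illustration.

```python
import numpy as np
from scipy import stats

# Compare JB results for normal vs. heavy-tailed residuals.
rng = np.random.default_rng(2)
normal_resid = rng.standard_normal(2000)
heavy_resid = rng.standard_t(df=3, size=2000)  # excess kurtosis

jb_n, p_n = stats.jarque_bera(normal_resid)
jb_t, p_t = stats.jarque_bera(heavy_resid)
print(f"normal: JB={jb_n:.1f}, p={p_n:.3f}")
print(f"heavy:  JB={jb_t:.1f}, p={p_t:.3g}")
```

The heavy-tailed sample produces a much larger JB statistic and a tiny p-value, so normality is rejected for it.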
19
Q

What does it mean if the JB test shows borderline rejection of normality?

A

It suggests slight deviations from normality, which may be acceptable in large samples but could affect small-sample reliability.

20
Q

What are alternatives if normality is rejected?

A
  1. Transform variables (e.g. take logs)
  2. Winsorize the data: cap values at lower and upper bounds
  3. Use dummy variables for outliers to exclude their influence
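A sketch of the winsorizing option: cap values at the 1st and 99th percentiles so extreme outliers no longer dominate the distribution (the data and injected outlier values are assumed for illustration).

```python
import numpy as np

# Simulated data with a few injected outliers (assumed values).
rng = np.random.default_rng(3)
x = rng.standard_normal(1000)
x[:5] = [25.0, -30.0, 40.0, -15.0, 60.0]

# Winsorize: clip everything to the 1st/99th percentile bounds.
lo, hi = np.percentile(x, [1, 99])
x_wins = np.clip(x, lo, hi)
print(x.max(), round(x_wins.max(), 2))  # the 60.0 spike is capped
```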
21
Q

Why does normality matter more for small samples?

A

Non-normal errors distort hypothesis tests when sample size is insufficient.

22
Q

How do we detect serial correlation?

A

Using the Breusch-Godfrey test or residual autocorrelation plots.

23
Q

How do you interpret the results of a Breusch-Godfrey test?

A

If the test statistic exceeds the critical value or the p-value is below the significance level, reject the null hypothesis of no serial correlation. This suggests that autocorrelation exists in the residuals.

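A manual sketch of a first-order Breusch-Godfrey test (auxiliary regression of residuals on the regressor and lagged residuals, LM = n·R² against a chi-squared distribution); the AR(1) error coefficient 0.6 and the model parameters are assumed for illustration.

```python
import numpy as np
from scipy import stats

# Simulate y_t = 1 + 2*x_t + u_t with AR(1) errors (assumed values).
rng = np.random.default_rng(4)
T = 500
x = rng.standard_normal(T)
e = rng.standard_normal(T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.6 * u[t - 1] + e[t]       # serially correlated errors
y = 1.0 + 2.0 * x + u

# Step 1: OLS of y on x, keep the residuals.
Z = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ b

# Step 2: auxiliary regression of resid_t on x_t and resid_{t-1}.
Za = np.column_stack([np.ones(T - 1), x[1:], resid[:-1]])
ba, *_ = np.linalg.lstsq(Za, resid[1:], rcond=None)
fit = Za @ ba
r2 = 1 - np.sum((resid[1:] - fit) ** 2) / np.sum((resid[1:] - resid[1:].mean()) ** 2)

# Step 3: LM statistic = n * R^2, chi-squared with 1 df under H0.
lm = (T - 1) * r2
pval = stats.chi2.sf(lm, df=1)
print(f"LM={lm:.1f}, p={pval:.3g}")  # low p: reject no serial correlation
```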
24
Q

What are common patterns in residual plots?

A
  1. Cyclical patterns: positive autocorrelation
  2. Alternating signs: negative autocorrelation
  3. Random scatter: no autocorrelation
25
Q

What are the consequences of serial correlation?

A

The OLS estimators are inefficient: they no longer have the smallest variance among linear unbiased estimators, and the usual standard errors are biased.

26
Q

How is serial correlation corrected?

A
  1. Newey-West standard errors
  2. Adding lagged variables to the model
27
Q

What is the Newey-West adjustment?

A

A correction for standard errors to account for heteroskedasticity and autocorrelation.

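A manual sketch of the Newey-West (HAC) covariance with Bartlett-kernel weights for a simple regression with autocorrelated errors; the lag truncation L = 4 and all simulation parameters are assumed for illustration.

```python
import numpy as np

# Simulate a regression with AR(1) errors (assumed parameters).
rng = np.random.default_rng(5)
T, L = 500, 4                     # L = lag truncation (needs care!)
x = rng.standard_normal(T)
e = rng.standard_normal(T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + e[t]
y = 1.0 + 2.0 * x + u

Z = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ b

# HAC "meat": weighted sum of autocovariances of the scores Z_t * u_t.
g = Z * resid[:, None]            # score contributions, T x 2
S = g.T @ g / T
for lag in range(1, L + 1):
    w = 1 - lag / (L + 1)         # Bartlett kernel weight
    Gamma = g[lag:].T @ g[:-lag] / T
    S += w * (Gamma + Gamma.T)

Ainv = np.linalg.inv(Z.T @ Z / T)
V = Ainv @ S @ Ainv / T           # sandwich covariance matrix
se_nw = np.sqrt(np.diag(V))

# Naive OLS standard errors for comparison.
s2 = resid @ resid / (T - 2)
se_ols = np.sqrt(np.diag(s2 * np.linalg.inv(Z.T @ Z)))
print("OLS SE:", np.round(se_ols, 3), "NW SE:", np.round(se_nw, 3))
```

With positively autocorrelated errors, the Newey-West standard error for the intercept comes out larger than the naive OLS one, illustrating how the naive errors understate variability.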
28
Q

What are the limitations of Newey-West corrections?

A

They perform poorly in small samples and require careful lag selection.

29
Q

What do significant results in a serial correlation test imply for a regression model?

A

The presence of serial correlation indicates that the error terms are not independent. This violates CLRM assumptions and necessitates corrections, such as Newey-West standard errors.

30
Q

How do you interpret corrected standard errors after applying Newey-West adjustments?

A

Larger standard errors suggest that the original estimates underestimated variability due to serial correlation or heteroskedasticity. Interpretation of coefficients should account for this adjustment.

31
Q

What does it mean if the Newey-West standard errors change significance levels of coefficients?

A

This indicates that serial correlation or heteroskedasticity significantly impacted the original standard errors. The corrected significance levels are more reliable.

32
Q

How should deviations from normality in residuals impact model interpretation?

A

In small samples, non-normality can invalidate hypothesis tests. In large samples, reliance on asymptotic normality may still permit valid inference.

33
Q

What is stationarity?

A

A property of a time series whose mean, variance and autocorrelation structure are constant over time.

34
Q

Why is stationarity important?

A

Non-stationary data lead to spurious regressions.

35
Q

What are common causes of non-stationarity?

A

Trends, seasonality and structural breaks.

36
Q

How is stationarity tested?

A

Using tests like the Augmented Dickey-Fuller test.

37
Q

How can non-stationarity be addressed?

A

By differencing the data or detrending.

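A simulation sketch of why differencing helps (not from the original cards): the variance of a random walk grows with t, so it is non-stationary, while its first differences have constant variance.

```python
import numpy as np

# 2000 simulated random walks of 200 steps each (assumed sizes).
rng = np.random.default_rng(6)
paths = np.cumsum(rng.standard_normal((2000, 200)), axis=1)

var_levels = paths.var(axis=0)               # cross-path variance at each t
var_diffs = np.diff(paths, axis=1).var(axis=0)

print("levels, t=10 vs t=200:", round(var_levels[9], 1), round(var_levels[199], 1))
print("diffs,  t=10 vs t=200:", round(var_diffs[9], 2), round(var_diffs[-1], 2))
```

The variance of the levels grows roughly linearly in t, while the variance of the differences stays near 1 throughout.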
38
Q

How are structural breaks identified?

A

By splitting the sample into subperiods or observing residual patterns.

39
Q

What is the Chow test?

A

A method to test if coefficients differ across subperiods.

40
Q

How should you interpret the results of a Chow test?

A

If the null hypothesis is rejected, it indicates that the parameters are not stable across the subperiods, suggesting structural breaks in the relationship.

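A manual sketch of the Chow test for a known break point: compare the pooled residual sum of squares to the two subperiod fits via an F-statistic. The break at the middle of the sample and the slopes 0.5 and 2.0 are assumed for illustration.

```python
import numpy as np
from scipy import stats

# Simulate a slope change halfway through the sample (assumed values).
rng = np.random.default_rng(7)
T = 400
x = rng.standard_normal(T)
y = np.where(np.arange(T) < 200,
             1.0 + 0.5 * x,        # first subperiod: beta = 0.5
             1.0 + 2.0 * x)        # second subperiod: beta = 2.0
y = y + rng.standard_normal(T)

def rss(xs, ys):
    """Residual sum of squares from OLS of ys on a constant and xs."""
    Z = np.column_stack([np.ones(len(xs)), xs])
    b, *_ = np.linalg.lstsq(Z, ys, rcond=None)
    return np.sum((ys - Z @ b) ** 2)

k = 2                              # parameters per regression
rss_pool = rss(x, y)
rss_1, rss_2 = rss(x[:200], y[:200]), rss(x[200:], y[200:])
F = ((rss_pool - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (T - 2 * k))
pval = stats.f.sf(F, k, T - 2 * k)
print(f"F={F:.1f}, p={pval:.3g}")  # low p: coefficients differ across subperiods
```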
41
Q

What are examples of structural breaks in finance?

A

Market crashes or policy changes.

42
Q

Why test for parameter stability?

A

To ensure coefficients represent consistent relationships over time.

43
Q

What does parameter instability indicate?

A

Potential model misspecification or external shocks.

44
Q

How does measurement error in explanatory variables affect OLS?

A

It biases the estimates toward zero (attenuation bias), as in the Fama-MacBeth example.

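A simulation sketch of attenuation bias: measuring X with noise pulls the OLS slope toward zero. The true slope of 1 and the noise variance are assumed for illustration; classical errors-in-variables theory gives an attenuation factor var(X) / (var(X) + var(noise)) = 0.5 here.

```python
import numpy as np

# y depends on x_true with slope 1 (assumed); we only observe x_noisy.
rng = np.random.default_rng(8)
T = 100_000
x_true = rng.standard_normal(T)
y = 1.0 * x_true + rng.standard_normal(T)
x_noisy = x_true + rng.standard_normal(T)   # measurement error, var 1

def slope(xs, ys):
    """OLS slope of ys on a constant and xs."""
    Z = np.column_stack([np.ones(T), xs])
    b, *_ = np.linalg.lstsq(Z, ys, rcond=None)
    return b[1]

# Slope on the true regressor is near 1; on the noisy one, near 0.5.
print(round(slope(x_true, y), 2), round(slope(x_noisy, y), 2))
```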
45
Q

How is measurement error handled?

A

Using proxies or adjusting standard errors.

46
Q

Why is measurement error in dependent variables less critical?

A

It does not bias parameter estimates but may increase error variance.

47
Q

What causes measurement errors in financial variables?

A
  1. Illiquidity in markets
  2. Estimated quantities like GDP or inflation
48
Q

What is the role of portfolio betas in addressing measurement errors?

A

Aggregating individual betas into portfolio betas averages out estimation noise in the explanatory variable.

49
Q

What does a high R2 value in time-series regression suggest?

A

A high R2 indicates that a substantial portion of the variation in the dependent variable is explained by the independent variables. However, it does not imply causation, especially if the data exhibit trends.

50
Q

What do we conclude if the p-values of lagged variables are significant in regression?

A

It implies that past values (lags) of the independent variable have explanatory power for the current dependent variable, suggesting temporal dependencies.

51
Q

How do you interpret p-values from parameter stability tests across subperiods?

A

If the p-value is above the significance level, the null hypothesis of equal coefficients cannot be rejected, indicating stable relationships across subperiods.