Time-Series of Returns Flashcards
What is a time series of returns / time-series econometrics?
With time-series econometrics, we study how returns develop over time. For example, if you see a good return today, does that mean the momentum will continue tomorrow, or will the return mean-revert? It all comes down to one question: can you forecast returns ahead of time?
A time series of returns refers to a sequence of data points representing the returns of a financial asset, recorded at successive points in time, usually at regular intervals such as daily, monthly, or annually.
Advantages of Time Series Models
–> use only information from the variable's own past (in contrast to multivariate structural models)
–> attempt to capture empirically relevant features of the observed data that may stem from a variety of different (but unspecified) structural models.
–> can be useful when structural models are inappropriate (e.g., when structural variables are observable only at a lower time frequency).
Components of Time Series Models:
Trend: The long-term direction of the series. In financial markets, this might reflect sustained movements up or down in asset prices.
Seasonality: Regular and predictable changes that recur every calendar period, such as quarterly or annually.
Cyclic Variations: These are fluctuations occurring at irregular intervals, influenced by economic cycles, differing from seasonality.
Random or “Noise”: These are irregular or stochastic components that are unpredictable and cannot be systematized.
The Rational Expectations (RE) Model
Implication of the EMH! Under the EMH, the stock price P_t already incorporates all relevant information (i.e., markets are informationally efficient and market participants hold rational expectations).
The security price at time t+1 is a rational expectation conditional on all available information at time t plus unanticipated shocks (ε):
P_{t+1} = E_t[P_{t+1}] + ε_{t+1},  where  E_t[P_{t+1} − E_t[P_{t+1}]] = E_t[ε_{t+1}] = 0
An economic theory that assumes that individuals form their expectations for the future in a way that optimally utilizes all available information.
This means that on average, people’s forecasts of future economic variables (like financial returns) are accurate, and any errors are random and not systematically biased. In financial markets, this model implies that the current prices of assets fully reflect all publicly available information, and future price movements are only driven by new, unpredictable information, making the markets efficient.
The model's implication that E_t[ε_{t+1}] = 0 means that the forecast of P_{t+1} is unbiased (i.e., on average the forecast equals the realised price). Further, ε_{t+1} is assumed to be independent of any information known at time t (or earlier), which is known as the orthogonality principle.
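A minimal simulation sketch (not part of the original card; it assumes a random-walk-with-drift form for the expectation, with hypothetical values for μ and σ) illustrating that RE forecast errors are unbiased and orthogonal to past information:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 10_000
mu, sigma = 0.05, 1.0            # hypothetical drift and shock volatility
eps = rng.normal(0.0, sigma, T)

# Realised price = time-t expectation (here: E_t[P_{t+1}] = mu + P_t)
# plus an unanticipated shock eps_{t+1}.
P = np.empty(T)
P[0] = 100.0
for t in range(T - 1):
    P[t + 1] = (mu + P[t]) + eps[t + 1]

errors = P[1:] - (mu + P[:-1])                # realised forecast errors
print("mean error:", errors.mean())           # ~ 0: forecasts are unbiased
print("corr with P_t:", np.corrcoef(errors, P[:-1])[0, 1])  # ~ 0: orthogonality
```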
White Noise Process
A white noise process ε_t has no discernible structure. A (zero-mean) white noise process satisfies:
1. E[ε_t] = 0 (zero mean)
2. Var[ε_t] = σ^2 < ∞ (finite, constant variance)
3. Cov[ε_t, ε_s] = 0 for s ≠ t (uncorrelated increments, zero autocovariances)
It’s a sequence of random variables where each variable has a mean of zero, constant finite variance, and no autocorrelation between any two different times. This means that each value in the sequence is random, does not depend on previous values, and is statistically uncorrelated with other values in the sequence. White noise serves as an important concept in time series analysis, providing a baseline model of purely random variations around a constant mean.
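A minimal sketch simulating Gaussian white noise (one example of a white noise process; the definition only requires the three moment conditions above) and checking the properties empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
eps = rng.normal(0.0, sigma, 100_000)

print("mean:          ", eps.mean())   # ~ 0            (property 1)
print("variance:      ", eps.var())    # ~ sigma^2 = 4  (property 2)
print("lag-1 autocorr:", np.corrcoef(eps[1:], eps[:-1])[0, 1])  # ~ 0 (property 3)
```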
Random Walk
Baseline Model for EMH
A general random walk is defined as y_t = μ + y_{t−1} + ε_t, where the increments ε_t are white noise:
1. E[ε_t] = 0
2. Var(ε_t) = σ^2 < ∞
3. Cov(ε_t, ε_s) = 0 for s ≠ t.
Note that these conditions apply to the increments ε_t, not to y_t itself: Var(y_t) grows with t, so the random walk is not stationary.
A stochastic process where the future path of a variable, such as a financial asset’s price, is independent of its past path and evolves in a series of independent, random steps. This implies that changes in asset prices are unpredictable and devoid of any systematic patterns, making them essentially random. Consequently, under a random walk hypothesis, it is impossible to consistently predict future price movements based on historical data, reflecting the efficient market hypothesis.
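A short simulation sketch of the random walk with drift (μ and σ are arbitrary illustrative values, and y_0 = 0); it also shows that Var(y_t) grows linearly in t, which is why the random walk is not stationary:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, T, mu, sigma = 5_000, 200, 0.05, 1.0

eps = rng.normal(0.0, sigma, (n_paths, T))
y = np.cumsum(mu + eps, axis=1)   # y_t = mu*t + sum of shocks (y_0 = 0)

# Cross-sectional variance at date t grows linearly: Var(y_t) = t * sigma^2
for t in [10, 50, 200]:
    print(f"t={t:3d}  Var(y_t)={y[:, t - 1].var():6.1f}  (theory: {t * sigma**2:.1f})")
```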
Comparing RE and RW
The RW (random walk) model is an example of an RE (rational expectations) model. More specifically, the general RW is an example of an RE model with
E_t[P_{t+1}] = μ + P_t
Even the general RW model is more restrictive than RE, as it assumes a constant drift μ (for time-series tests we do not need to impose any structure on what μ actually is, but you might have a compensation for risk in mind). Because the RW model is more restrictive, it is easier to test.
Stationarity
Stationarity means that a time-series process does not wander off, i.e., its statistical properties remain constant over time, so the sample mean and covariances are roughly the same over different time intervals.
A process is weakly/covariance stationary if
1. E(y_t) = μ (mean is finite and constant across t, i.e., no trend)
2. Var(y_t) = σ^2 (variance is finite and constant across t)
3. Cov(y_t, y_{t−s}) = γ_s for s ≠ 0 (covariance is finite and a function of s but not of t)
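In practice, stationarity is often probed with an augmented Dickey–Fuller test. A sketch (assuming numpy and statsmodels are available), applied to a stationary AR(1) (defined a few cards below) and to a random walk:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
T = 1_000
eps = rng.normal(size=T)

ar1, rw = np.zeros(T), np.zeros(T)
for t in range(1, T):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]   # stationary AR(1)
    rw[t] = rw[t - 1] + eps[t]           # random walk (unit root)

for name, series in [("AR(1), phi=0.5", ar1), ("random walk", rw)]:
    stat, pvalue, *_ = adfuller(series)
    # Small p-value -> reject the unit-root null, i.e., evidence of stationarity
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```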
Comparison: cross-sectional vs. time-series models
A time-series model captures correlation between y_t and past values y_{t−s}. This differs from cross-sectional regression models (estimated, e.g., by OLS), where the dependent variable y is explained by independent variable(s) x. Cross-sectional models capture correlation between y and x; univariate time-series models capture the correlation between y_t and its own lagged value(s).
Autocorrelation of a stationary process
Autocorrelations of a stationary process are defined as
ρ_s = γ_s / γ_0 ∈ [−1, 1]
We use autocorrelations rather than autocovariances γ_s = Cov(y_t, y_{t−s}) because the autocovariance is scale-dependent.
γ_0 = Cov(y_t, y_t) = Var(y_t) is simply the variance. The autocorrelations describe the short-run dynamics within the time series; in contrast, a trend describes the long-run dynamics.
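A sketch computing sample autocorrelations by hand (the autocorr helper is hypothetical, written for this illustration; statsmodels' acf would give the same numbers), using an AR(1) series, defined on the next card, with theoretical ρ_s = ϕ^s:

```python
import numpy as np

def autocorr(y, s):
    """Sample autocorrelation rho_s = gamma_s / gamma_0 (for s >= 1)."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    gamma_0 = np.mean(y * y)               # sample variance
    gamma_s = np.mean(y[s:] * y[:-s])      # sample autocovariance at lag s
    return gamma_s / gamma_0

rng = np.random.default_rng(3)
T, phi = 10_000, 0.8
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]         # AR(1): theoretical rho_s = phi**s

print([round(autocorr(y, s), 2) for s in range(1, 5)])  # ~ [0.80, 0.64, 0.51, 0.41]
```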
AR(p) process - Definition
An AR process is an autoregressive process.
An AR(1) process models the current value as
y_t = μ + ϕ y_{t−1} + ε_t,
where ε_t ~ WhiteNoise(0, σ^2). More generally, an AR(p) process uses p lags: y_t = μ + ϕ_1 y_{t−1} + ⋯ + ϕ_p y_{t−p} + ε_t.
An AR process is thus a linear combination of the variable's own past values plus noise. ϕ (phi) determines the persistence: the closer ϕ is to zero, the more the AR(1) resembles a noise process; the closer ϕ is to one, the more it wanders off (see the sketch after the special cases below).
When |ϕ|<1, the AR(1) process is stationary.
Special cases are:
When ϕ=1, the AR(1) is a random walk
When ϕ=0, the process is called AR(0) (white noise around the mean μ)
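A sketch illustrating how ϕ controls persistence (the simulate_ar1 helper is hypothetical, written for this illustration):

```python
import numpy as np

def simulate_ar1(phi, mu=0.0, sigma=1.0, T=1_000, seed=0):
    """Simulate y_t = mu + phi * y_{t-1} + eps_t with white-noise shocks."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = mu + phi * y[t - 1] + eps[t]
    return y

# phi = 0: pure noise; phi near 1: persistent, wanders; phi = 1: random walk
for phi in [0.0, 0.5, 0.95, 1.0]:
    y = simulate_ar1(phi)
    print(f"phi={phi:4.2f}  lag-1 autocorr = {np.corrcoef(y[1:], y[:-1])[0, 1]:.2f}")
```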
MA(q) process - Definition
An MA(1) process is a moving average process which models the current value as a linear combination of the current and one lagged white noise error term:
y_t = μ + θ ε_{t−1} + ε_t, where ε_t ~ WhiteNoise(0, σ^2)
More generally, an MA(q) process is a linear combination of the current and q past white noise error terms: y_t = μ + ε_t + θ_1 ε_{t−1} + ⋯ + θ_q ε_{t−q}.
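A sketch simulating an MA(1) (θ = 0.6 is an arbitrary illustrative value) and confirming that its autocorrelations cut off after lag 1, a defining feature of MA(q) processes:

```python
import numpy as np

rng = np.random.default_rng(4)
T, mu, theta = 100_000, 0.0, 0.6
eps = rng.normal(size=T)

y = mu + eps[1:] + theta * eps[:-1]      # y_t = mu + eps_t + theta * eps_{t-1}

def rho(y, s):
    """Sample autocorrelation at lag s (for s >= 1)."""
    y = y - y.mean()
    return np.mean(y[s:] * y[:-s]) / np.mean(y * y)

# Theory for MA(1): rho_1 = theta / (1 + theta^2) ~ 0.44, rho_s = 0 for s > 1
print("rho_1:", round(rho(y, 1), 2), " rho_2:", round(rho(y, 2), 2))
```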
Stationarity for AR(p)
Start from AR(1) and substitute in recursively for the lagged values. After T periods:
y_{t+T} = ϕ^T y_t + μ (1 − ϕ^T)/(1 − ϕ) + Σ_{i=0}^{T−1} ϕ^i ε_{t+T−i}
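The first substitution steps make the pattern explicit; the drift terms sum to a geometric series, μ(1 + ϕ + ⋯ + ϕ^{T−1}) = μ(1 − ϕ^T)/(1 − ϕ):

```latex
\begin{aligned}
y_{t+1} &= \mu + \phi y_t + \varepsilon_{t+1} \\
y_{t+2} &= \mu + \phi y_{t+1} + \varepsilon_{t+2}
         = \mu(1+\phi) + \phi^2 y_t + \phi\,\varepsilon_{t+1} + \varepsilon_{t+2} \\
&\ \ \vdots \\
y_{t+T} &= \mu\,\frac{1-\phi^T}{1-\phi} + \phi^T y_t
         + \sum_{i=0}^{T-1}\phi^i\,\varepsilon_{t+T-i}
\end{aligned}
```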
From this equation, we can determine the impact of ϕ on y_{t+T}:
If ϕ > 1, the impact increases exponentially in T: the explosive case. No constant mean, not stationary.
If ϕ < −1, the absolute impact grows exponentially in T, but the sign oscillates between − and + (not stationary).
If ϕ = 1, the impact is permanent and independent of T, i.e., a random walk (not stationary).
If |ϕ| < 1, the impact decreases geometrically in T, i.e., the weakly stationary case.
Stationarity for MA(q)
When ε_t is a white noise process with variance σ^2, we can show the following. The MA(q) process is stationary for all parameter values θ_1, …, θ_q:
(1) E(y_t) = μ
(2) Var(y_t) = γ_0 = (1 + θ_1^2 + θ_2^2 + ⋯ + θ_q^2) σ^2
(3) γ_s = Cov(y_t, y_{t−s}) depends on s but not on t, and γ_s = 0 for s > q
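A quick numerical check of the variance formula for an MA(2) (θ_1 = 0.5 and θ_2 = 0.3 are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(5)
T, sigma = 1_000_000, 1.0
theta1, theta2 = 0.5, 0.3               # arbitrary illustrative MA(2) coefficients

eps = rng.normal(0.0, sigma, T)
y = eps[2:] + theta1 * eps[1:-1] + theta2 * eps[:-2]   # y_t (with mu = 0)

print("sample Var(y_t):", round(y.var(), 3))
print("theory:         ", (1 + theta1**2 + theta2**2) * sigma**2)  # = 1.34
```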
The Wold Decomposition Theorem
Some AR(p) processes can be expressed as an MA(∞) process. As an example, we can restate a stationary AR(1) process as an MA(∞) process. For this we apply a Taylor series expansion (expanding around zero) of 1/(1 − ϕL), which gives
y_t = μ/(1 − ϕ) + Σ_{i=0}^{∞} ϕ^i ε_{t−i}
This only works for stationary AR(1) processes, i.e., |ϕ| < 1 (otherwise the series does not converge). The same applies to AR(p) processes.
Representing an MA(q) as an AR(∞) only works if the MA process is invertible; for an MA(1), this requires |θ| < 1 (note: this is an invertibility condition, not a stationarity condition). In general, the condition under which an MA(q) is invertible is that the roots of θ(z) = 0 lie outside the unit circle.
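A sketch verifying the MA(∞) representation numerically: a truncated sum Σ ϕ^i ε_{t−i} reproduces the AR(1) path almost exactly once enough lags are included (K = 50 is an arbitrary truncation point; ϕ^K is negligible for |ϕ| < 1):

```python
import numpy as np

rng = np.random.default_rng(6)
T, phi = 5_000, 0.7
eps = rng.normal(size=T)

# AR(1) recursion (mu = 0 for simplicity): y_t = phi * y_{t-1} + eps_t
y_ar = np.zeros(T)
for t in range(1, T):
    y_ar[t] = phi * y_ar[t - 1] + eps[t]

# Truncated MA(inf) representation: y_t ~ sum_{i=0}^{K} phi^i * eps_{t-i}
K = 50
y_ma = np.array([sum(phi**i * eps[t - i] for i in range(min(K, t) + 1))
                 for t in range(T)])

print("max |AR - MA| after burn-in:", np.max(np.abs(y_ar - y_ma)[K:]))  # ~ 0
```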