Lecture 10 Flashcards
What is the distinction between the conditional and the unconditional distribution ?
- Time-varying volatility → fat tails in the unconditional distribution, but not enough to fit actual data
- Introduce non-normal distributions to fit the data
How can the normal distribution be extended in these two directions (time-varying volatility and non-normal distributions) ?
- Natural extensions of the normal distribution allow for fat tails (Student's t)
- But such distributions are not designed to capture asymmetry
If r(t) is a time series of realized log-returns, how can its dynamics be broken down into three components ?
- Conditional mean = location parameters
- Conditional variance = scale parameters
- Conditional distribution = shape parameters
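The three components can be collected in one equation (a standard sketch; the symbol η for the shape parameters is my notation, not from the lecture):

```latex
r_t = \mu_t(\theta) + \underbrace{\sigma_t(\theta)\, z_t}_{\epsilon_t},
\qquad z_t \sim g(0, 1; \eta)
```

where μ_t is the conditional mean (location), σ_t the conditional volatility (scale), and g the conditional distribution of the standardized innovation z_t with shape parameters η.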
In r(t) = μ(θ) + ϵ(t), what does θ include ?
All parameters associated with the conditional mean and conditional variance equations
What are the main issues related to the modeling of non-normal returns ?
• Unconditional distribution non-normal → is the conditional distribution g(·) also non-normal ?
No: time-varying volatility alone already generates non-normal unconditional returns, so the conditional distribution need not be non-normal, even after filtering for volatility.
• Conditional distribution non-normal → should it be modeled explicitly ?
Yes, it can be, but it is not required: the parameters can still be estimated without specifying the conditional distribution.
• If the conditional distribution is modeled explicitly, can it capture asymmetry and fat-tailedness ?
What is one attractive feature of the GARCH model ?
Even if the conditional distribution of the innovations z(t) is normal, the unconditional distribution of ϵ(t) has fatter tails than the normal distribution.
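A minimal simulation of this feature (illustrative GARCH(1,1) parameter values, not from the lecture): with normal innovations z(t), the simulated ϵ(t) still displays positive excess kurtosis unconditionally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative GARCH(1,1) parameters (assumed, not from the lecture)
omega, alpha, beta = 0.05, 0.10, 0.85

T = 100_000
eps = np.empty(T)
sigma2 = omega / (1 - alpha - beta)  # start at the unconditional variance
for t in range(T):
    z = rng.standard_normal()        # conditional distribution of z(t): normal
    eps[t] = np.sqrt(sigma2) * z
    sigma2 = omega + alpha * eps[t] ** 2 + beta * sigma2

# Excess kurtosis of the unconditional distribution of eps(t)
m2 = np.mean(eps ** 2)
m4 = np.mean(eps ** 4)
excess_kurt = m4 / m2 ** 2 - 3
print(excess_kurt)  # positive: fatter tails than the normal
```

With constant volatility (alpha = beta = 0) the excess kurtosis would be zero; it is the time variation in sigma2 that fattens the unconditional tails.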
What does a symmetric conditional distribution imply ?
GARCH effects alone do not imply an asymmetric unconditional distribution
What does an asymmetric conditional distribution imply ?
|Su| > |Sc| (the unconditional skewness Su exceeds the conditional skewness Sc in absolute value)
What happens if the conditional distribution g is not normal ?
The ML approach cannot be used directly, since it is based on the normality assumption
What if the first and second conditional moments are correctly specified ?
θ can be estimated consistently by maximizing the normal likelihood (i.e., under the conditional normality assumption), even if the true distribution is not normal
What is the difference between MLE and QMLE ?
- MLE : maximize the likelihood assuming the true conditional distribution of the errors is normal
- QMLE : maximize the normal likelihood even when the true distribution is not normal
→ same estimates of θ, since it is the same maximization problem
→ the covariance matrices of the estimators differ: the QMLE covariance matrix is computed without assuming conditional normality
What is the asymptotic distribution of θ(QML) ?
√T (θ(QML) − θ₀) ~ N(0, Ω)
What does QMLE provide ?
Robust standard errors → asymptotically valid confidence intervals for the estimators
How are the robust (sandwich) standard errors obtained ?
As the square roots of the diagonal elements of the matrix
- Ω = A⁻¹ B A⁻¹
- A = Hessian of the log-likelihood
- B = outer product of the gradients (scores)
How is the sandwich estimator computed in finite samples ?
It is evaluated at θ(QML)
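A small numerical sketch of the sandwich computation. This is an illustrative i.i.d. location-scale example under a misspecified normal likelihood, not the lecture's GARCH setting; the scores and Hessian of the normal log-likelihood are written out analytically and evaluated at θ(QML).

```python
import numpy as np

rng = np.random.default_rng(1)

# Fat-tailed data (Student-t), but we fit a normal likelihood -> QMLE setting
x = rng.standard_t(df=10, size=50_000)
T = x.size

# QML estimates under the normal likelihood: sample mean and variance
mu, s2 = x.mean(), x.var()
u = x - mu

# Per-observation scores of the normal log-likelihood, at theta(QML)
scores = np.column_stack([u / s2, -0.5 / s2 + u**2 / (2 * s2**2)])

# A: minus the average Hessian (sign convention so A is positive definite)
A = np.array([
    [1.0 / s2,           np.mean(u) / s2**2],
    [np.mean(u) / s2**2, np.mean(u**2) / s2**3 - 0.5 / s2**2],
])

# B: average outer product of the gradients (scores)
B = scores.T @ scores / T

# Sandwich covariance and robust (QML) standard errors
A_inv = np.linalg.inv(A)
Omega = A_inv @ B @ A_inv
robust_se = np.sqrt(np.diag(Omega) / T)

# Non-robust ML standard errors assume B = A, i.e. covariance A^-1
ml_se = np.sqrt(np.diag(A_inv) / T)
print(robust_se, ml_se)
```

Because the true distribution has excess kurtosis, the robust standard error of the variance parameter is larger than the one computed under normality; for the mean the two are close, since B and A coincide in that block.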
What is the asymptotic covariance matrix of the ML estimator ?
The inverse of the Hessian, since B = A under normality :
√T (θ(ML) − θ₀) ~ N(0, A⁻¹)
What happens to the normal-likelihood estimator when using QMLE ?
The QMLE remains consistent, since it is robust with respect to the true distribution of the model
What happens when the true distribution is not normal while using the QMLE ?
It is inefficient:
• The degree of inefficiency increases with the degree of departure from normality
o Simulation evidence: 84% efficiency loss from using the normal likelihood instead of the correct MLE
o Efficiency loss of 59% if the true distribution is a Student-t with 5 degrees of freedom
→ Using the correct distribution of z(t) improves the efficiency of the estimator
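A quick Monte Carlo illustration of this efficiency loss, on a deliberately simplified i.i.d. location problem (scale and degrees of freedom treated as known); this is not the lecture's simulation, so it does not reproduce the 84%/59% figures, only the qualitative ranking.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)

# True distribution: Student-t with 5 degrees of freedom.
# QMLE of the location under a normal likelihood = the sample mean;
# the efficient MLE maximizes the true t likelihood.
df, T, n_rep = 5, 1000, 500
qml_est, ml_est = [], []
for _ in range(n_rep):
    x = rng.standard_t(df, size=T)
    qml_est.append(x.mean())  # QML (normal-likelihood) estimate
    res = optimize.minimize_scalar(
        lambda m: -stats.t.logpdf(x - m, df).sum(),
        bounds=(-2.0, 2.0), method="bounded",
    )
    ml_est.append(res.x)      # MLE under the correct t likelihood

# Using the correct distribution of z(t) reduces the sampling variance
print(np.var(qml_est), np.var(ml_est))
```

The sampling variance of the t-MLE is smaller than that of the QMLE, and the gap widens as df decreases (i.e., as the departure from normality grows).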