Lecture 10 Flashcards

1
Q

What is the distinction between the conditional and the unconditional distribution?

A
  • Time-varying volatility → fat tails in the unconditional distribution, but not enough to fit the actual data
  • Introduce a non-normal distribution to fit the data
2
Q

How can the normal distribution be extended in these two directions, time-varying volatility and non-normal distribution?

A
  • Natural extensions of the normal distribution allow for fat tails (student t)
  • But the distribution is not designed to capture asymmetry
3
Q

If r(t) is a time series of realizations of log-returns, how can its dynamics be broken down into 3 components?

A
  • Conditional mean = location parameter
  • Conditional variance = scale parameter
  • Conditional distribution = shape parameters
4
Q

In r(t) = μ(θ) + ϵ(t), what does θ include?

A

All parameters associated with the conditional mean and variance equations

5
Q

What are the several issues related to the modeling of non-normal returns?

A

• Unconditional distribution non-normal → is the conditional distribution g(.) also non-normal?
No: whatever you do, you generate non-normal returns, even if you filter for the volatility.

• Conditional distribution non-normal → model it explicitly?
Yes: we can have a model where we do not specify the conditional distribution but can still estimate the parameters.

• Model the conditional distribution explicitly → how to capture asymmetry and fat-tailedness?

6
Q

What is one of the attractive features of the GARCH model?

A

Even if the conditional distribution of the innovations z(t) is normal, the unconditional distribution of ϵ(t) has fatter tails than the normal.
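This property can be checked in closed form. A minimal sketch (my own illustration; the GARCH(1,1) parameters below are typical values, not estimates from the lecture): with normal innovations z(t), ϵ(t) = σ(t)·z(t) still has unconditional kurtosis above 3.

```python
# Closed-form unconditional kurtosis of GARCH(1,1) with normal z(t)
# (valid when 1 - (alpha + beta)^2 - 2*alpha^2 > 0):
#   Ku = 3 * (1 - (alpha + beta)^2) / (1 - (alpha + beta)^2 - 2*alpha^2)

def garch11_kurtosis(alpha: float, beta: float) -> float:
    """Unconditional kurtosis of eps(t) in GARCH(1,1) with normal innovations."""
    persistence = (alpha + beta) ** 2
    denom = 1.0 - persistence - 2.0 * alpha**2
    if denom <= 0:
        raise ValueError("fourth moment does not exist for these parameters")
    return 3.0 * (1.0 - persistence) / denom

print(garch11_kurtosis(alpha=0.08, beta=0.90))  # > 3: fatter tails than normal
print(garch11_kurtosis(alpha=0.0, beta=0.0))    # no ARCH effects: exactly 3
```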

7
Q

What does a symmetric conditional distribution imply?

A

GARCH effects do not imply an asymmetric distribution

8
Q

What does an asymmetric conditional distribution imply ?

A

|Su| > |Sc|

9
Q

What happens if the conditional distribution g is not normal?

A

The ML approach cannot be used directly, since it is based on the normality assumption

10
Q

What if the first and second moments are correctly specified ?

A

Consistent estimation of θ by maximizing the normal likelihood under the conditional normality assumption, even if the true distribution is not normal

11
Q

What is the difference between MLE and QMLE ?

A
  • MLE: maximize assuming the true conditional distribution of the errors is normal
  • QMLE: maximize the normal likelihood even when the true distribution is not normal

→ same estimates of θ, since same maximization problem

→ the covariance matrices of the estimator differ → the QMLE one is computed without assuming conditional normality

12
Q

What is the asymptotic distribution of θ(QMLE)?

A

√T (θ(QML) − θo) ~ N(0, Ω)

13
Q

What does QMLE provide ?

A

robust standard errors → asymptotically valid confidence intervals for estimators

14
Q

How is the sandwich estimator obtained ?

A

As the square roots of the diagonal elements of the matrix

  • Ω = A^-1 B A^-1
  • A = Hessian
  • B = outer product of gradients
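A minimal numerical sketch of these robust standard errors (my own illustration; the toy Hessian and scores below are hypothetical, not from an estimated model):

```python
import numpy as np

# Robust standard errors from the sandwich matrix Omega = A^-1 B A^-1,
# with A the average Hessian of the log-likelihood and B the outer
# product of the per-observation score contributions.

def sandwich_se(A: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """A: k x k average Hessian; scores: T x k per-observation gradients."""
    T = scores.shape[0]
    A_inv = np.linalg.inv(A)
    B = scores.T @ scores / T            # outer product of gradients
    omega = A_inv @ B @ A_inv            # the "sandwich"
    return np.sqrt(np.diag(omega) / T)   # standard errors for theta_hat

rng = np.random.default_rng(0)
se = sandwich_se(np.eye(2), rng.normal(size=(500, 2)))
print(se)   # roughly sqrt(1/500) = 0.045 in this toy setup
```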
15
Q

What is different about the sandwich estimator in finite samples?

A

It is evaluated at θ(QML)

16
Q

What gives the asymptotic covariance matrix of the ML estimator?

A

The inverse of the Hessian, since B = A under normality:

√T (θ(ML) − θo) ~ N(0, A^-1)

17
Q

What happens with the normal likelihood when using QMLE?

A

QMLE is consistent, since it is robust with respect to the true distribution of the model

18
Q

What happens when the true distribution is not normal while using the QMLE ?

A

It is inefficient:
  • The degree of inefficiency increases with the degree of departure from normality
    o Simulation evidence: 84% efficiency loss when the normal likelihood is used instead of the true MLE
    o Efficiency loss of 59% for a t with 5 degrees of freedom

→ using the correct distribution of z(t) improves the efficiency of the estimator

19
Q

What is the common practice for dealing with this inefficiency?

A

Using a non-normal distribution + a robust covariance matrix

20
Q

When is consistency of the non-normal QMLE achieved?

A
  • Assumed and true error pdfs are unimodal and symmetric around 0
  • The conditional mean of the return process is identically 0

→ if these 2 conditions are not satisfied, the QMLE is inconsistent, since it fails to capture the effect of the asymmetry of the distribution on the conditional mean

21
Q

How can we handle this inconsistency problem?

A

By introducing an additional location parameter, making the QMLE robust to asymmetry

22
Q

What is a crucial issue when a non-normal likelihood is used for QMLE?

A

Adequacy tests must confirm that the assumed distribution fits the data

23
Q

What is the moment problem ?

A

Given a sequence of moments {μ(j)}: find necessary and sufficient conditions for the existence of, and an expression for, a positive pdf with these moments.

24
Q

What is the Hamburger moment problem ?

A

Necessary and sufficient condition: the moment matrices are positive definite for all n

→ det ||μ|| ≥ 0, where ||μ|| is the Hankel matrix
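A small check of this condition (my own sketch; the moment sequences below are illustrative):

```python
import numpy as np

# The Hamburger condition: a moment sequence {mu_j} comes from a positive
# pdf iff every Hankel moment matrix H_n = [mu_{i+j}], i, j = 0..n, is
# positive semi-definite (det H_n >= 0 for all n).

def hankel_ok(moments) -> bool:
    """moments = [mu_0, mu_1, ..., mu_{2n}]; checks det H_n >= 0 for all n."""
    n_max = (len(moments) - 1) // 2
    for n in range(n_max + 1):
        H = np.array([[moments[i + j] for j in range(n + 1)]
                      for i in range(n + 1)])
        if np.linalg.det(H) < 0:
            return False
    return True

ok = hankel_ok([1, 0, 1, 0, 3])      # standard normal moments: admissible
bad = hankel_ok([1, 0, 1, 0, 0.5])   # kurtosis below 1 with unit variance: not
print(ok, bad)
```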

25
Q

What are the maximal values of skewness and kurtosis ?

A

μ(3)² ≤ μ(4) − 1, with μ(4) ≥ 1 (this follows from the positivity of the Hankel determinant μ(4) − μ(3)² − 1 for a standardized variable)

26
Q

What are the several ways of dealing with asymmetry or fat tails in the distribution?

A
  • Non- or semi-parametric estimation → captures non-normality directly
  • Asymmetry introduced using an expansion about a symmetric distribution → can be applied to any symmetric distribution
  • Existing distributions with asymmetry or fat tails → skewed student-t distribution
27
Q

What does it mean to have a semi-parametric ARCH model?

A
  • 1st and 2nd moments given by an ARMA process and an ARCH model
  • Conditional density approximated by a non-parametric method

28
Q

What is the procedure to maximize the log-likelihood function with semi-parametric estimation?

A
  • Initial estimate of θ given by θ' (obtained by QMLE)
  • Fitted residuals and fitted variances used to compute standardized residuals with mean zero and unit variance
  • Density g(z(θ)) estimated using a non-parametric method to get the density g(t)
  • Compute the log-likelihood

→ maximize with g(t) fixed and iterate steps 2–4 until convergence
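A sketch of the density-estimation step (my own illustration; the residuals below are simulated, not from a fitted model): estimate the innovation density nonparametrically with a Gaussian kernel, then evaluate the log-likelihood that the final step maximizes with this density held fixed.

```python
import numpy as np

def kde_pdf(x: float, z: np.ndarray, h: float) -> float:
    """Gaussian kernel density estimate at x from residuals z, bandwidth h."""
    u = (x - z) / h
    return float(np.exp(-u**2 / 2).sum() / (len(z) * h * np.sqrt(2 * np.pi)))

rng = np.random.default_rng(1)
z = rng.standard_t(df=5, size=3000)   # hypothetical standardized residuals
z = (z - z.mean()) / z.std()          # re-impose mean zero, unit variance
h = 1.06 * len(z) ** (-1 / 5)         # Silverman's rule (unit variance)
log_lik = sum(np.log(kde_pdf(zi, z, h)) for zi in z)
print(log_lik)                        # the quantity the final step maximizes
```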

29
Q

What does the semi-parametric method avoid?

A

Problems of distribution mis-specification: using a non-normal distribution may lead to inconsistent parameter estimates if the distribution is incorrect

30
Q

What if we believe that the true pdf of the random variable Z is close to normal ?

A

Use an approximation of the pdf around the normal density: g(z|η) = φ(z)p(n)(z|η)

31
Q

What does φ(z) represent ?

A

standard normal density with mean 0 and unit variance

32
Q

How is p(n)(z|η) chosen ?

A

Such that g(z|η) has the same first moments as the pdf of z

33
Q

What is the special case for series expansion?

A

The Gram-Charlier type A expansion

→ describes deviations from normality of the innovations in a GARCH framework

34
Q

What are the properties of the Gram-Charlier distribution?

A

• GC expansions allow for additional flexibility over the normal distribution, since they introduce the skewness and kurtosis of the distribution as unknown parameters

• First four moments of Z:
      o E(Z) = 0
      o V(Z) = 1
      o Sk(Z) = m(3)
      o Ku(Z) = m(4)
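A minimal sketch of the type A density (my own implementation; the moment values below are illustrative):

```python
import math

# Gram-Charlier type A density around the standard normal, with skewness m3
# and kurtosis m4 as free parameters:
#   g(z) = phi(z) * [1 + m3/6 * He3(z) + (m4 - 3)/24 * He4(z)]
# where He3(z) = z^3 - 3z and He4(z) = z^4 - 6z^2 + 3 are Hermite polynomials.

def gram_charlier_pdf(z: float, m3: float, m4: float) -> float:
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    he3 = z**3 - 3 * z
    he4 = z**4 - 6 * z**2 + 3
    return phi * (1 + m3 / 6 * he3 + (m4 - 3) / 24 * he4)

g_normal = gram_charlier_pdf(0.0, 0.0, 3.0)  # m3 = 0, m4 = 3: standard normal
g_neg = gram_charlier_pdf(2.5, -1.5, 3.0)    # moments far from (0, 3): g < 0
print(g_normal, g_neg)
```

The negative value in the second call illustrates the drawback listed on the next card.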
35
Q

What are the drawbacks of GC expansions?

A
  • For moments (m3, m4) distant from normality (0, 3) → negative g(.) for some z
  • The pdf may be multimodal
  • The domain of definition is small
36
Q

What are the characteristics of the student t distribution ?

A
  • Captures the fat tails of financial returns
  • Symmetric distribution
  • Normalized (zero mean, unit variance)
37
Q

What are the characteristics of the skewed student t distribution ?

A
  • Captures both fat tails and asymmetry of the distribution
  • Introduces a generalized student-t distribution with asymmetry, zero mean, and unit variance
  • Parameters can depend on past realizations, so higher moments may be time-varying
38
Q

What is the shape parameter ?

A

η = (v, λ)' with

• v = degrees-of-freedom parameter
• λ = asymmetry parameter
  o λ = 0 → traditional student t
  o λ = 0 and v = ∞ → normal distribution
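A sketch of such a density. The lecture does not give the exact formulas, so this is my own implementation of one standard construction (Hansen's skewed t), normalized to zero mean and unit variance, with v > 2 and −1 < λ < 1:

```python
import math
import numpy as np

def skewed_t_pdf(z: float, v: float, lam: float) -> float:
    """Hansen-type skewed student-t density, zero mean and unit variance."""
    c = math.gamma((v + 1) / 2) / (math.sqrt(math.pi * (v - 2)) * math.gamma(v / 2))
    a = 4 * lam * c * (v - 2) / (v - 1)
    b = math.sqrt(1 + 3 * lam**2 - a**2)
    side = 1 - lam if z < -a / b else 1 + lam   # left / right branch
    kern = 1 + ((b * z + a) / side) ** 2 / (v - 2)
    return b * c * kern ** (-(v + 1) / 2)

# Numerical check of the normalization for v = 5, lam = 0.3:
grid = np.linspace(-30, 30, 120001)
dz = grid[1] - grid[0]
pdf = np.array([skewed_t_pdf(z, 5.0, 0.3) for z in grid])
total = float((pdf * dz).sum())          # ~1: integrates to one
mean = float((grid * pdf * dz).sum())    # ~0: zero mean
var = float((grid**2 * pdf * dz).sum())  # ~1: unit variance
print(total, mean, var)
```

Setting lam = 0 recovers the standardized (symmetric) student t.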

39
Q

When do the density and the various moments exist?

A
  • Density: only for v > 2 and −1 < λ < 1
  • Skewness exists if v > 3
  • Kurtosis exists if v > 4
40
Q

What are the constraints on the domain of definition of the distribution?

A

(v, λ) ∈ ]2, ∞[ × [−1, 1[

41
Q

On what are the adequacy tests based ?

A

Distance between empirical distribution and assumed distribution

42
Q

On which steps is the adequacy test based ?

A

• Test whether u(t) is serially correlated, using a standard LM test
  o Regress [u(t) − û]^i on k lags of the variable
  o LM statistic

• Test H0: u(t) is U(0,1)
  o Cut the empirical and theoretical distributions into N cells
  o Test whether the 2 distributions are the same using Pearson's test statistic
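A sketch of the second step (illustrative data; the uniform and beta samples stand in for well- and mis-specified probability transforms):

```python
import numpy as np

# Under H0 the probability transforms u(t) = G(z(t)) are U(0,1). Cut [0,1]
# into N cells and compare observed counts Fn with the expected T/N via
# Pearson's statistic, asymptotically chi-squared with N - 1 dof.

def pearson_stat(u: np.ndarray, n_cells: int) -> float:
    T = len(u)
    counts, _ = np.histogram(u, bins=n_cells, range=(0.0, 1.0))
    expected = T / n_cells
    return float(((counts - expected) ** 2 / expected).sum())

rng = np.random.default_rng(42)
stat_ok = pearson_stat(rng.uniform(size=2000), n_cells=20)      # H0 true: small
stat_bad = pearson_stat(rng.beta(2, 5, size=2000), n_cells=20)  # H0 false: large
print(stat_ok, stat_bad)
```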

43
Q

On what is the simple estimate of the standard error of the estimated number of observations based?

A

The binomial distribution

44
Q

What happens to Fn under Ho ?

A

Fn is drawn from a binomial distribution B(T, p), where T = number of observations and p = 1/N = probability of falling in cell n

45
Q

What are the expectation and the variance of Fn?

A
  • E[Fn] = Tp = T/N
  • V[Fn] = Tp(1−p)

46
Q

What is the confidence band for Fn?

A

E[Fn] ± 2√V[Fn]
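A minimal numerical sketch of this band (illustrative T and N, my own choice):

```python
# Two-standard-deviation band for each cell count Fn under H0, using
# E[Fn] = Tp and V[Fn] = Tp(1 - p) with p = 1/N.
def fn_band(T: int, N: int) -> tuple[float, float]:
    p = 1.0 / N
    mean = T * p
    sd = (T * p * (1 - p)) ** 0.5
    return mean - 2 * sd, mean + 2 * sd

# With T = 2000 observations and N = 20 cells, the expected count is 100:
lo, hi = fn_band(2000, 20)
print(lo, hi)
```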

47
Q

How is the binomial distribution approximated in Pearson's test statistic?

A

≅ N(Tp, Tp), since for small p the variance Tp(1−p) ≈ Tp