Chapter 3 Flashcards

1
Q

Why is it difficult to specify ignorance priors in multi-parameter problems?(1)

A

If we represent ignorance with a uniform (possibly improper) prior, the induced prior for a transformation g(θ) is in general not constant, so we are apparently not ignorant about g(θ). The same applies in multi-parameter problems.
Suppose we represent prior ignorance about θ = (θ1, θ2, . . . , θp)^T using π(θ) = constant. Let φi = gi(θ), i = 1, . . . , p, and let φ = (φ1, . . . , φp)^T be a 1–1 transformation. Then, in general, the prior density for φ is not constant, which suggests that we are not ignorant about φ. However, if we are ignorant about θ then we must also be ignorant about g(θ), so the uniform prior cannot truly represent ignorance.
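
A minimal numerical sketch (my own illustration, not from the source), using the hypothetical 1–1 transformation φ = θ² on (0, 1), shows that a flat prior on θ induces a non-flat prior on φ:

import numpy as np

# Sketch: a flat prior on theta does not stay flat under phi = g(theta) = theta^2.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 1.0, size=1_000_000)   # "ignorance" prior on theta
phi = theta**2                                  # 1-1 transformation on (0, 1)

# Empirical density of phi over five equal bins: clearly not constant.
hist, _ = np.histogram(phi, bins=5, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))  # roughly [2.24, 0.93, 0.71, 0.60, 0.53]

# Change of variables: pi(phi) = |d theta / d phi| = 1 / (2 sqrt(phi)),
# which is unbounded near 0 even though pi(theta) was constant.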

2
Q

Determining the posterior for large n.(2)

A
θ|x ∼ Np(θ̂, J(θ̂)^−1) approximately, where J(θ) is the observed information matrix, with (i, j)th element

Jij = −∂²/(∂θi ∂θj) log f(x|θ).
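
As a sketch of how this approximation can be computed in practice (an assumed normal model with θ = (μ, log σ), not an example from the source), one can find θ̂ by numerical optimisation and J(θ̂) by finite differences:

import numpy as np
from scipy.optimize import minimize

# Assumed example: x_i ~ N(mu, sigma^2) with theta = (mu, log sigma).
rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=500)

def neg_log_lik(theta):
    mu, log_sigma = theta
    # -log f(x|theta) up to an additive constant
    return 0.5 * np.sum((x - mu) ** 2) / np.exp(2 * log_sigma) + x.size * log_sigma

theta_hat = minimize(neg_log_lik, x0=np.zeros(2)).x  # posterior mode under a flat prior

def hessian(f, t, h=1e-4):
    # Central finite differences for J(theta_hat) = second derivatives of -log f.
    p = t.size
    H = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            ei, ej = np.eye(p)[i] * h, np.eye(p)[j] * h
            H[i, j] = (f(t + ei + ej) - f(t + ei - ej)
                       - f(t - ei + ej) + f(t - ei - ej)) / (4 * h ** 2)
    return H

J_hat = hessian(neg_log_lik, theta_hat)
print("theta_hat:", np.round(theta_hat, 3))
print("approx posterior covariance J(theta_hat)^-1:")
print(np.round(np.linalg.inv(J_hat), 5))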
3
Q

Explain the similarities and differences between the asymptotic posterior distribution
and the asymptotic distribution of the maximum likelihood estimator.(2)

A

This limiting result is similar to the one for the maximum likelihood estimator in frequentist statistics:

I(θ)^{1/2} (θ̂ − θ) →D Np(0, Ip) as n → ∞,

where I(θ) = E_{X|θ}[J(θ)] is Fisher's information matrix. Note that this is a statement about the distribution of θ̂ for fixed (unknown) θ, whereas the result above is a statement about the distribution of θ for fixed (known) θ̂.
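
A simulation sketch of the contrast (an assumed Bernoulli model, not from the source): the frequentist statement standardises θ̂ across repeated datasets with θ fixed, while the Bayesian statement standardises posterior draws of θ for one fixed dataset. For this model J(p̂) = n/(p̂(1 − p̂)) coincides with I(p̂).

import numpy as np

# Assumed example: n Bernoulli(p) trials, scalar theta = p.
rng = np.random.default_rng(2)
n, p_true = 2000, 0.3
info = lambda p: n / (p * (1 - p))  # Fisher information I(p); here J(p_hat) = I(p_hat)

# Frequentist: p fixed, p_hat varies over repeated datasets.
p_hats = rng.binomial(n, p_true, size=20_000) / n
z_freq = np.sqrt(info(p_true)) * (p_hats - p_true)

# Bayesian: one observed dataset, p varies; a uniform prior gives a Beta posterior.
x_obs = rng.binomial(n, p_true)
p_hat = x_obs / n
p_draws = rng.beta(x_obs + 1, n - x_obs + 1, size=20_000)
z_bayes = np.sqrt(info(p_hat)) * (p_draws - p_hat)

# Both standardised quantities are approximately N(0, 1).
print(np.round([z_freq.mean(), z_freq.std()], 2))
print(np.round([z_bayes.mean(), z_bayes.std()], 2))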

4
Q

Asymptotic posterior distribution.(1)

A

Suppose we have a statistical model for data with likelihood function f(x|θ), where x = (x1, x2, . . . , xn)^T and θ = (θ1, θ2, . . . , θp)^T, together with a prior distribution with density π(θ) for θ. Then

J(θ̂)^{1/2} (θ − θ̂) | x →D Np(0, Ip) as n → ∞.
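
A minimal check of the statement (hypothetical numbers for θ̂ and J(θ̂), my own illustration): drawing θ from Np(θ̂, J(θ̂)^−1) and standardising with a square root of J(θ̂) should give approximately Np(0, Ip) draws:

import numpy as np

# Hypothetical mode and observed information for a p = 2 problem.
rng = np.random.default_rng(3)
theta_hat = np.array([2.0, 0.4])
J_hat = np.array([[220.0, 15.0],
                  [15.0, 950.0]])

draws = rng.multivariate_normal(theta_hat, np.linalg.inv(J_hat), size=50_000)

# Using the Cholesky factor L (J = L L^T) in place of the symmetric square
# root: z = L^T (theta - theta_hat) should be approximately N_2(0, I_2).
L = np.linalg.cholesky(J_hat)
z = (draws - theta_hat) @ L
print(np.round(z.mean(axis=0), 2))  # approx (0, 0)
print(np.round(np.cov(z.T), 2))     # approx identity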

5
Q

Covariance calculation between parameters.(1)

A

The correlation between two parameters is their covariance divided by the product of their standard deviations:

Corr(θi, θj) = Cov(θi, θj) / (sd(θi) × sd(θj)).

Equivalently, Cov(θi, θj) = Corr(θi, θj) × sd(θi) × sd(θj).
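
For instance (hypothetical covariance matrix, such as might come from J(θ̂)^−1):

import numpy as np

# Correlation from a 2x2 covariance matrix of (theta_1, theta_2).
cov = np.array([[0.25, 0.06],
                [0.06, 0.16]])
corr = cov[0, 1] / (np.sqrt(cov[0, 0]) * np.sqrt(cov[1, 1]))
print(corr)  # 0.06 / (0.5 * 0.4) = 0.3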
