Week 11 Flashcards
Lecture 31
Bayesian point estimators
Steps in Bayesian point estimation
Estimating parameter h(θ) ∈ R^q
for θ ∈ R^p
1) Define loss function: L(θ,δ) = cost of using estimate δ when the parameter to be estimated is θ
(In the frequentist setting θ is fixed, so the loss is a constant; in the Bayesian setting θ is a random variable, so L(θ,δ) is too)
2) As L is a random variable it can't be minimised directly → define the conditional (posterior) risk: R(δ) = E_{π(θ|x)}[L(θ,δ)] = ∫ L(θ,δ) π(θ|x) dθ
3) Point estimator: θ̂_B = argmin_δ R(δ) (see the numerical sketch below)
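A minimal numerical sketch of steps 1)–3): draw posterior samples, estimate R(δ) by Monte Carlo on a grid of δ, and take the argmin. The Beta(5, 3) "posterior" and all numbers are illustrative assumptions, not from the lecture.

```python
# Numerical sketch of Bayesian point estimation (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

# Samples from an assumed posterior pi(theta | x) for a scalar theta.
theta = rng.beta(5.0, 3.0, size=100_000)

# 1) Loss functions L(theta, delta).
def quadratic(t, d):
    return (d - t) ** 2

def absolute(t, d):
    return np.abs(d - t)

# 2) Monte Carlo estimate of the conditional risk R(delta) on a grid.
grid = np.linspace(0.01, 0.99, 199)
risk_quad = np.array([quadratic(theta, d).mean() for d in grid])
risk_abs = np.array([absolute(theta, d).mean() for d in grid])

# 3) Point estimator = argmin of the risk over delta.
print("argmin, quadratic loss:", grid[risk_quad.argmin()])  # ~ posterior mean
print("posterior mean:        ", theta.mean())
print("argmin, absolute loss: ", grid[risk_abs.argmin()])   # ~ posterior median
print("posterior median:      ", np.median(theta))
```

The quadratic-loss argmin lands on the posterior mean and the absolute-loss argmin on the posterior median, matching the summary table further down.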
Def quadratic loss: L(θ,δ) = (δ − h(θ))², or ‖δ − h(θ)‖² for h(θ) ∈ R^q
Maximum a posteriori
MAP is defined as the mode of the posterior distribution: θ̂_MAP = argmax_θ π(θ|x)
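For instance (a standard conjugate illustration, assuming a Beta(a, b) posterior rather than anything specific from the lecture):
\[
\pi(\theta \mid x) = \mathrm{Beta}(a, b) \;\Longrightarrow\; \hat\theta_{\mathrm{MAP}} = \frac{a-1}{a+b-2}, \qquad a, b > 1 .
\]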
Absolute loss: L(θ,δ) = |δ − h(θ)|; its Bayes estimator is the posterior median
Construct Bayes’ estimator from quadratic loss
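A worked version of the construction (standard result, written for scalar h(θ); this is my expansion, not copied from the lecture):
\[
R(\delta) = \mathbb{E}_{\pi(\theta\mid x)}\!\big[(\delta - h(\theta))^2\big]
= \delta^2 - 2\delta\,\mathbb{E}[h(\theta)\mid x] + \mathbb{E}[h(\theta)^2\mid x],
\]
\[
\frac{dR}{d\delta} = 2\delta - 2\,\mathbb{E}[h(\theta)\mid x] = 0
\;\Longrightarrow\;
\hat\theta_B = \mathbb{E}[h(\theta)\mid x],
\]
i.e. the Bayes estimator under quadratic loss is the posterior mean.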
Summary table of Bayes' point estimators:
quadratic loss → posterior mean
absolute loss → posterior median
0–1 loss → posterior mode (MAP)
When n → ∞, where do the Bayes estimates go?
As n → ∞ the posterior variance → 0,
so the posterior distribution concentrates around a point
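A concrete instance (my illustration: Beta–Bernoulli with s successes in n trials, prior Beta(a, b)):
\[
\theta \mid x \sim \mathrm{Beta}(a+s,\; b+n-s), \qquad
\mathrm{Var}(\theta \mid x) = \frac{\bar\theta(1-\bar\theta)}{a+b+n+1} \longrightarrow 0 \text{ as } n \to \infty,
\]
where \(\bar\theta = (a+s)/(a+b+n)\) is the posterior mean.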
The last written page of lecture 31
is not examinable
Def the prior predictive dist: p(x) = ∫ f(x|θ) π(θ) dθ (the marginal distribution of the data, averaging the likelihood over the prior)
General distinction between (1) prior prediction and (2) posterior prediction
(1) No data seen yet: predict using only the prior guess about θ
(2) The conditional marginal given the data: predict a new observation, integrating over θ w.r.t. the posterior
In general, integrate the likelihood w.r.t. the prior/posterior respectively (worked example below)
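A worked conjugate example (my own illustration: Bernoulli likelihood, Beta(a, b) prior, n observations with s successes):
\[
\text{prior predictive: } P(X = 1) = \int_0^1 \theta\,\pi(\theta)\,d\theta = \frac{a}{a+b},
\qquad
\text{posterior predictive: } P(\tilde X = 1 \mid x) = \frac{a+s}{a+b+n}.
\]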
How to use a kernel
If you spot the kernel of a known distribution under an integral, the integral equals the inverse of that distribution's normalisation constant, because the full (normalised) density integrates to 1
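A standard illustration (not from the lecture): recognising a Gamma(a, b) kernel,
\[
\int_0^\infty \theta^{a-1} e^{-b\theta}\, d\theta = \frac{\Gamma(a)}{b^a},
\]
because the Gamma(a, b) density \(\frac{b^a}{\Gamma(a)}\,\theta^{a-1} e^{-b\theta}\) integrates to 1, so the bare kernel integrates to the inverse of the normalisation constant \(\frac{b^a}{\Gamma(a)}\).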
Lecture 32 ('a longer example', non-informative priors)
is not examinable
Posterior predictive dist: p(x̃|x) = ∫ f(x̃|θ) π(θ|x) dθ