Chapter 1 Flashcards

1
Q

Combining a random sample of size n from a normal N(mu, 1/tau) distribution (with known precision tau) with a normal N(b, 1/d) prior results in what posterior distribution? Outline any parameters. (2)

A

μ|x ∼ N(B, 1/D),
where B = (db + nτx̄)/(d + nτ) and D = d + nτ.
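
A minimal sketch of this update (Python with NumPy; the data and prior values below are made up for illustration):

import numpy as np

# Hypothetical random sample of size n from N(mu, 1/tau) with known precision tau
x = np.array([2.1, 1.8, 2.5, 2.0, 2.3])
n, xbar = len(x), x.mean()
tau = 4.0                      # known data precision

# Prior: mu ~ N(b, 1/d)
b, d = 0.0, 0.5

# Posterior: mu | x ~ N(B, 1/D)
D = d + n * tau
B = (d * b + n * tau * xbar) / D
print(f"B = {B:.3f}, D = {D:.3f}")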

2
Q

The general case of Poisson data Xi|theta ~ Po(theta) with a Gamma Ga(g, h) prior gives what posterior?

A

θ|x ∼ Ga(G = g + nx̄, H = h + n).
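
A matching sketch for the Poisson-Gamma pair (Python with NumPy; counts and prior values are made up):

import numpy as np

# Hypothetical Poisson counts
x = np.array([3, 1, 4, 2, 2, 5])
n, xbar = len(x), x.mean()

# Prior: theta ~ Ga(g, h)
g, h = 2.0, 1.0

# Posterior: theta | x ~ Ga(G, H) with G = g + n*xbar, H = h + n
G, H = g + n * xbar, h + n
print(f"Ga(G={G}, H={H}), posterior mean = {G / H:.3f}")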

3
Q

Issues with substantial prior knowledge. (2)

A

We have substantial prior information for θ when the prior distribution dominates the posterior distribution, that is π(θ|x) ∼ π(θ). The issues are:
1. The intractability of the mathematics in deriving the posterior distribution (though with modern computing facilities this is less of a problem);
2. The practical formulation of the prior distribution: coherently specifying prior beliefs in the form of a probability distribution is far from straightforward.

4
Q

Limited prior knowledge approach. (1)

A

Using conjugate pairs:
• Poisson random sample, Gamma prior distribution → Gamma posterior distribution
• Normal random sample (known variance), Normal prior distribution → Normal posterior distribution

5
Q

Define conjugacy. (1)

A

Suppose that data x are to be observed with distribution f(x|θ). A family F of prior distributions for θ is said to be conjugate to f(x|θ) if for every prior distribution π(θ) ∈ F, the posterior distribution π(θ|x) is also in F.
Notice that the conjugate family depends crucially on the model chosen for the data x. For example, the only family conjugate to the model "random sample from a Poisson distribution" is the Gamma family.

6
Q

Vague prior. (1)

A

We represent vague prior knowledge by using a prior distribution which is conjugate to the model for x and which is as diffuse as possible, that is, has as large a variance as possible.
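
A small sketch of what "as diffuse as possible" means in the Poisson-Gamma case (Python with NumPy; counts are made up): as the Ga(g, h) prior variance g/h² grows, the posterior mean approaches the sample mean.

import numpy as np

x = np.array([3, 1, 4, 2, 2, 5])   # hypothetical Poisson counts
n, xbar = len(x), x.mean()

for g, h in [(2.0, 1.0), (0.1, 0.1), (0.001, 0.001)]:
    prior_var = g / h**2                     # prior variance
    post_mean = (g + n * xbar) / (h + n)     # posterior mean
    print(f"Ga({g}, {h}): prior var = {prior_var:.0f}, posterior mean = {post_mean:.3f}, xbar = {xbar:.3f}")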

7
Q

Asymptotic posterior. (2)

A

The observed information is J(θ) = −∂²/∂θ² log f(x|θ). With increasing amounts of data, the posterior distribution looks more and more like a normal distribution. The result also gives us a useful approximation to the posterior distribution for θ when n is large:
θ|x ∼ N{θ̂, J(θ̂)⁻¹} approximately,
where θ̂ is the maximum likelihood estimate.
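
A sketch of the approximation for Poisson data (Python with NumPy/SciPy; counts are made up). For Po(θ) data, θ̂ = x̄ and J(θ̂) = n/x̄; the normal approximation is compared with the exact conjugate posterior under a very diffuse Gamma prior.

import numpy as np
from scipy import stats

x = np.array([3, 1, 4, 2, 2, 5, 3, 4, 2, 3])   # hypothetical Poisson counts
n, xbar = len(x), x.mean()

# MLE and observed information for the Poisson model
theta_hat = xbar
J = n / xbar

grid = np.linspace(1.5, 4.5, 5)
exact = stats.gamma.pdf(grid, a=0.001 + n * xbar, scale=1 / (0.001 + n))   # near-vague prior
approx = stats.norm.pdf(grid, loc=theta_hat, scale=np.sqrt(1 / J))
print(np.round(exact, 3), np.round(approx, 3))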

8
Q

What are Bayesian confidence intervals sometimes called? (1)

A

Bayesian confidence intervals are sometimes called credible regions or plausible regions. Clearly these intervals are not unique, since there will be many intervals with the correct probability coverage for a given posterior distribution.
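
A sketch of this non-uniqueness (SciPy; the posterior parameters are made up): two different intervals with the same 95% coverage for the same Gamma posterior.

from scipy import stats

post = stats.gamma(a=19, scale=1 / 7)   # hypothetical posterior theta | x ~ Ga(19, 7)

# Equal-tailed 95% interval: 2.5% in each tail
print(post.ppf(0.025), post.ppf(0.975))

# Another 95% interval: 1% in the lower tail, 4% in the upper tail
print(post.ppf(0.01), post.ppf(0.96))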

9
Q

Predictive distribution. (1)

A

Implicit in the Bayesian framework is the concept of the predictive distribution. This distribution describes how likely different outcomes of a future experiment are. The predictive probability (density) function is calculated as
f(y|x) = ∫Θ f(y|θ) π(θ|x) dθ.
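
A sketch (SciPy; posterior parameters are made up) evaluating the integral numerically for a single future Poisson observation y, with θ|x ∼ Ga(G, H):

import numpy as np
from scipy import stats, integrate

G, H = 19.0, 7.0
post = stats.gamma(a=G, scale=1 / H)

def predictive(y):
    # f(y|x) = integral of f(y|theta) * pi(theta|x) over theta
    integrand = lambda theta: stats.poisson.pmf(y, theta) * post.pdf(theta)
    return integrate.quad(integrand, 0, np.inf)[0]

print([round(predictive(y), 4) for y in range(6)])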

10
Q

Candidate's formula. (2)

A

When the past data x and future data y are independent (given θ) and we use a conjugate prior distribution, the predictive distribution can be calculated through Candidate's formula:
f(y|x) = f(y|θ) π(θ|x) / π(θ|x, y).
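
A sketch checking the formula pointwise for the Poisson-Gamma pair (SciPy; numbers are made up). With θ|x ∼ Ga(G, H), one further observation y gives θ|x, y ∼ Ga(G + y, H + 1), and the ratio should not depend on the θ at which it is evaluated.

from scipy import stats

G, H = 19.0, 7.0      # hypothetical posterior theta | x ~ Ga(G, H)
y = 3                 # a future observation

def candidates(theta):
    # f(y|x) = f(y|theta) * pi(theta|x) / pi(theta|x, y)
    num = stats.poisson.pmf(y, theta) * stats.gamma.pdf(theta, a=G, scale=1 / H)
    den = stats.gamma.pdf(theta, a=G + y, scale=1 / (H + 1))
    return num / den

print(candidates(2.0), candidates(3.5))   # same value whichever theta is used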

11
Q

Mixture density function definition. (1)

A

A mixture of the distributions πᵢ(θ) with weights pᵢ (i = 1, 2, …, m) has probability (density) function
π(θ) = ∑ᵢ pᵢ πᵢ(θ).
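
A short sketch (SciPy; weights and components are made up) evaluating a two-component mixture density:

import numpy as np
from scipy import stats

p = np.array([0.7, 0.3])                       # mixture weights
components = [stats.gamma(a=2, scale=1.0),     # pi_1(theta)
              stats.gamma(a=20, scale=0.25)]   # pi_2(theta)

def mixture_pdf(theta):
    return sum(pi * comp.pdf(theta) for pi, comp in zip(p, components))

print(mixture_pdf(1.0), mixture_pdf(5.0))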

12
Q

Mean and variance of a mixture distribution. (3)

A

The component means and variances are
Eᵢ(θ) = ∫Θ θ πᵢ(θ) dθ and Varᵢ(θ) = ∫Θ {θ − Eᵢ(θ)}² πᵢ(θ) dθ.
The mixture mean is the weighted sum of the component means:
E(θ) = ∑ᵢ pᵢ Eᵢ(θ).
For the mixture variance, first form the second moment
E(θ²) = ∑ᵢ pᵢ {Varᵢ(θ) + Eᵢ(θ)²},
and then Var(θ) = E(θ²) − E(θ)².
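
A sketch of these moment formulas (NumPy; the weights and component moments are made up, matching the Gamma components used in the previous sketch):

import numpy as np

p = np.array([0.7, 0.3])             # weights
means = np.array([2.0, 5.0])         # E_i(theta)
variances = np.array([2.0, 1.25])    # Var_i(theta)

mix_mean = np.sum(p * means)                          # E(theta) = sum p_i E_i(theta)
second_moment = np.sum(p * (variances + means**2))    # E(theta^2) = sum p_i {Var_i + E_i^2}
mix_var = second_moment - mix_mean**2                 # Var(theta) = E(theta^2) - E(theta)^2
print(mix_mean, mix_var)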

13
Q

Combining a mixture prior π(θ) with data x gives what posterior? What happens to the weights and component distributions?

A

π(θ|x) = π(θ) f(x|θ) / f(x)
= ∑ᵢ pᵢ πᵢ(θ) f(x|θ) / f(x)
= ∑ᵢ [pᵢ fᵢ(x) / f(x)] πᵢ(θ|x)
= ∑ᵢ p*ᵢ πᵢ(θ|x), where p*ᵢ = pᵢ fᵢ(x) / f(x) and fᵢ(x) is the marginal likelihood of x under component i.

Hence, combining data x with a mixture prior distribution (pᵢ, πᵢ(θ)) produces a posterior mixture distribution (p*ᵢ, πᵢ(θ|x)). The effect of introducing the data is to "update" the mixture weights (pᵢ → p*ᵢ) and the component distributions (πᵢ(θ) → πᵢ(θ|x)).
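
A sketch of the weight update for Poisson counts with a two-component Gamma mixture prior (NumPy/SciPy; all numbers are made up). Each fᵢ(x) is the marginal likelihood of the data under component i, available in closed form for the Poisson-Gamma pair; the common factor ∏ 1/xⱼ! cancels when the weights are normalised.

import numpy as np
from scipy.special import gammaln

x = np.array([3, 1, 4, 2, 2, 5])   # hypothetical Poisson counts
n, sx = len(x), x.sum()

p = np.array([0.6, 0.4])           # prior weights
g = np.array([2.0, 20.0])          # Ga(g_i, h_i) prior components
h = np.array([1.0, 5.0])

# log f_i(x), up to the common factor prod 1/x_j!:
# f_i(x) = const * h_i^g_i / Gamma(g_i) * Gamma(g_i + sum x) / (h_i + n)^(g_i + sum x)
log_fi = g * np.log(h) - gammaln(g) + gammaln(g + sx) - (g + sx) * np.log(h + n)

w = p * np.exp(log_fi - log_fi.max())
p_star = w / w.sum()               # updated weights p*_i = p_i f_i(x) / f(x)

print(np.round(p_star, 3))
print(list(zip(g + sx, h + n)))    # updated components Ga(g_i + sum x, h_i + n)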

14
Q

Bayes' theorem. (1)

A

P(A|B) = P(B|A) P(A) / P(B)
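
A tiny worked example of the formula (plain Python; the screening-test numbers are invented), with P(B) expanded by the law of total probability:

# Hypothetical events: A = "has condition", B = "tests positive"
p_A = 0.01              # P(A)
p_B_given_A = 0.95      # P(B|A)
p_B_given_notA = 0.05   # P(B|not A)

# P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 3))   # about 0.161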
