10) Bayesian learning in practice Flashcards

1
Q

What is the pdf of the beta distribution?

A

With hyperparameters Ξ±1, Ξ±2 > 0, the Beta pdf on [0, 1] is

Beta(ΞΈ | Ξ±1, Ξ±2) = Ξ“(Ξ±1 + Ξ±2) / (Ξ“(Ξ±1) Ξ“(Ξ±2)) Β· ΞΈ^(Ξ±1 βˆ’ 1) (1 βˆ’ ΞΈ)^(Ξ±2 βˆ’ 1)
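A minimal sketch of evaluating this density with the Python standard library; the helper name beta_pdf and the example values Ξ±1 = 2, Ξ±2 = 5 are illustrative, not from the card:

```python
from math import gamma

def beta_pdf(theta, alpha1, alpha2):
    """Beta(theta | alpha1, alpha2) density on [0, 1]."""
    norm = gamma(alpha1 + alpha2) / (gamma(alpha1) * gamma(alpha2))
    return norm * theta ** (alpha1 - 1) * (1 - theta) ** (alpha2 - 1)

print(beta_pdf(0.3, 2, 5))  # density of Beta(2, 5) at theta = 0.3, about 2.16
```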
2
Q

What are the parameters of the prior known as?

A

Hyperparameters, to distinguish them from the parameters of the likelihood function

3
Q

In terms of the mean parameterization Beta(πœ‡β‚€, π‘˜β‚€), how are the prior concentration parameter and the prior mean parameter set?

A
  • π‘˜0 = 𝛼1 + 𝛼2
  • πœ‡0 = 𝛼1/π‘˜0
4
Q

What are the prior mean and prior variance for the beta-binomial model?

A

For a Beta(Ξ±1, Ξ±2) prior:
  • Prior mean: E[ΞΈ] = Ξ±1 / (Ξ±1 + Ξ±2) = ΞΌ0
  • Prior variance: Var[ΞΈ] = Ξ±1 Ξ±2 / ((Ξ±1 + Ξ±2)Β² (Ξ±1 + Ξ±2 + 1)) = ΞΌ0 (1 βˆ’ ΞΌ0) / (k0 + 1)
5
Q

What is the posterior distribution for the beta-binomial model?

A

With a Beta(Ξ±1, Ξ±2) prior and s successes observed in n trials, the posterior is again a Beta distribution:

ΞΈ | data ~ Beta(Ξ±1 + s, Ξ±2 + (n βˆ’ s))
6
Q

What are the posterior updates in the beta-binomial model?

A

Only the hyperparameters change:
  • Ξ±1 β†’ Ξ±1 + s (add the observed successes)
  • Ξ±2 β†’ Ξ±2 + (n βˆ’ s) (add the observed failures)
  • Equivalently, kn = k0 + n and ΞΌn = (Ξ±1 + s) / (k0 + n)
7
Q

What does pseudodata mean in the context of Bayesian inference?

A

Pseudodata refers to treating the prior distribution as if it represented additional data points. Specifically:
  • Ξ±1 and Ξ±2 act as pseudocounts that influence both the posterior mean and variance
  • These pseudocounts enter the posterior in the same way as counts from real observed data
  • The prior adds β€œvirtual” observations, stabilizing estimates when actual data is sparse (see the sketch below)
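A short sketch of the pseudocount interpretation; the values Ξ±1 = Ξ±2 = 2 and the tiny sample are made up, just to show how the virtual observations keep the estimate away from the extremes when data are sparse:

```python
alpha1, alpha2 = 2.0, 2.0      # prior pseudocounts (illustrative values)
s, n = 0, 2                    # sparse, extreme data: 0 successes in 2 trials

ml_estimate = s / n                              # 0.0 from the data alone
post_a1, post_a2 = alpha1 + s, alpha2 + (n - s)  # posterior Beta parameters
post_mean = post_a1 / (post_a1 + post_a2)        # (2 + 0) / (4 + 2) = 0.333...

print(ml_estimate, post_mean)
```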

8
Q

What is shrinkage intensity in the Beta-Binomial model?

A

Shrinkage intensity, Ξ» = k0 / (k0 + n), is the factor in the Beta-Binomial model that determines the weight given to the prior mean in the posterior mean calculation.

9
Q

What does the shrinkage intensity indicate?

A

This factor indicates how much the prior information influences the posterior mean compared to the observed data (see the decomposition sketch below):
  • When Ξ» = 0, the posterior mean equals the ML estimate
  • When Ξ» β†’ 1, the posterior mean approaches the prior mean πœ‡0

10
Q

What is shrinkage?

A

The adjustment of the ML estimate towards the prior mean is called β€œshrinkage,” as πœƒΜ‚_ML is β€œshrunk” towards πœ‡0, which is often the target mean

11
Q

What is a conjugate prior?

A

If the prior and the posterior belong to the same distributional family, the prior is called a conjugate prior (with respect to that likelihood)

12
Q

Why are conjugate priors useful?

A
  • Conjugate priors are computationally convenient
  • They allow Bayesian learning by only updating the prior's parameters
  • This avoids computing the posterior by numerical integration (see the sketch below)
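A sketch of what this buys in practice; the uniform Beta(1, 1) prior and the data batches are made up. Bayesian updating reduces to adding counts to two numbers, with no integration:

```python
def update(alpha1, alpha2, successes, failures):
    """Conjugate Beta update: just add observed counts to the hyperparameters."""
    return alpha1 + successes, alpha2 + failures

a1, a2 = 1.0, 1.0                       # Beta(1, 1) prior (assumed)
for s, f in [(3, 7), (5, 5), (0, 10)]:  # made-up batches of (successes, failures)
    a1, a2 = update(a1, a2, s, f)

print(a1, a2)          # 9.0 23.0 -> posterior is Beta(9, 23)
print(a1 / (a1 + a2))  # posterior mean 0.28125
```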
13
Q

What happens to the posterior mean and variance in the Beta-Binomial model as the sample size n becomes very large?

A

  • The shrinkage intensity Ξ» = k0 / (k0 + n) goes to 0, so the posterior mean converges to the ML estimate s/n and the prior's influence vanishes
  • The posterior variance goes to 0, so the posterior distribution concentrates around the ML estimate (see the sketch below)
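An illustrative check with an assumed prior and a fixed 30% success rate at growing sample sizes:

```python
alpha1, alpha2 = 2.0, 5.0                    # assumed prior

for n in (10, 100, 10_000):
    s = 0.3 * n                              # data whose ML estimate is 0.3
    a1, a2 = alpha1 + s, alpha2 + (n - s)
    mean = a1 / (a1 + a2)
    var = a1 * a2 / ((a1 + a2) ** 2 * (a1 + a2 + 1))
    print(n, round(mean, 4), round(var, 6))  # mean -> 0.3, variance -> 0
```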
14
Q

What is the Bayesian Central Limit Theorem?

A

Under regularity conditions, as the sample size grows the posterior distribution becomes approximately normal, centered at the posterior mode (which approaches the ML estimate), with variance given by the inverse of the observed information.
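An illustrative large-sample comparison for the Beta-Binomial case, with made-up prior and data: the exact posterior standard deviation is close to the normal-approximation value √(πœƒΜ‚ (1 βˆ’ πœƒΜ‚) / n):

```python
from math import sqrt

alpha1, alpha2 = 2.0, 2.0
s, n = 3_000, 10_000

a1, a2 = alpha1 + s, alpha2 + (n - s)
post_mean = a1 / (a1 + a2)
post_sd = sqrt(a1 * a2 / ((a1 + a2) ** 2 * (a1 + a2 + 1)))

theta_hat = s / n
normal_sd = sqrt(theta_hat * (1 - theta_hat) / n)  # CLT-style approximation

print(post_mean, theta_hat)  # ~0.3001 vs 0.3
print(post_sd, normal_sd)    # ~0.00458 vs ~0.00458
```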