Week 7 - Bayesian inference Flashcards
Frequentist probability
Probability is based on how often something happens in the long run
Recall the Law of Large Numbers (LLN): observed relative frequencies converge to the true probability as the number of trials grows
Bayesian probability
Probability as a measure of belief or uncertainty about an event, based on both prior knowledge and new evidence
Bayesian inference
Approach that uses Bayes’ theorem to update the probability estimate for a hypothesis as more evidence or data becomes available
It provides a flexible way to model uncertainty by combining prior beliefs with observed data, giving a posterior distribution—the updated belief after observing data
P(θ∣data) = [P(data∣θ) ⋅ P(θ)] / P(data)
- P(θ∣data): Posterior – the updated belief about the parameter θ after observing the data
- P(data∣θ): Likelihood – the probability of the observed data given the parameter θ
- P(θ): Prior – the initial belief or information about the parameter θ before observing the data
- P(data): Evidence – the total probability of observing the data, used for normalisation
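The update rule above can be sketched numerically. This is a minimal illustration, assuming a hypothetical coin whose bias θ is restricted to three candidate values:

```python
# Bayes' theorem on a discrete parameter grid.
# Hypothetical example: a coin whose bias theta is one of three values,
# and we observe a single head.

# Prior P(theta): initial belief over each candidate bias
priors = {0.3: 1/3, 0.5: 1/3, 0.7: 1/3}

# Likelihood P(data | theta): probability of "heads" given bias theta
likelihood = {theta: theta for theta in priors}

# Evidence P(data): total probability of the data, used for normalisation
evidence = sum(likelihood[t] * priors[t] for t in priors)

# Posterior P(theta | data) = likelihood * prior / evidence
posterior = {t: likelihood[t] * priors[t] / evidence for t in priors}

print(posterior)  # belief shifts toward higher theta after seeing a head
```

Note how the evidence term only rescales the numerator so the posterior sums to 1.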
Bayesian inference steps
- Specify a Prior Distribution
- Collect Data and Specify the Likelihood
- Apply Bayes’ Theorem to Combine Prior and Likelihood
- Determine the conjugate pair
- Work out the posterior distribution using the conjugate pair
- Summarize the Posterior Distribution using the prior and data
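The steps above can be sketched with the Beta-Binomial conjugate pair. The numbers here (10 flips, 7 heads, uniform prior) are hypothetical:

```python
# Bayesian inference steps with the Beta-Binomial conjugate pair.

# Step 1: specify a prior — Beta(a, b); a = b = 1 is a uniform prior
a, b = 1, 1

# Step 2: collect data; the likelihood is Binomial(n, theta)
heads, n = 7, 10

# Steps 3-5: for this conjugate pair, applying Bayes' theorem reduces
# to a closed-form update of the Beta parameters
a_post = a + heads         # prior shape plus observed successes
b_post = b + (n - heads)   # prior shape plus observed failures

# Step 6: summarise the posterior, e.g. by its mean
posterior_mean = a_post / (a_post + b_post)
print(f"Posterior: Beta({a_post}, {b_post}), mean = {posterior_mean:.3f}")
```

The closed-form update is exactly why conjugate pairs are convenient: no integration is needed to obtain the posterior.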
Credible interval
An interval taken directly from the POSTERIOR distribution, which represents the uncertainty about the parameter
For a 95% credible interval, the posterior probability that the parameter lies in the interval is 95%
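A credible interval can be read straight off the posterior. This sketch assumes a hypothetical Beta(8, 4) posterior, discretised on a grid so no statistics library is needed:

```python
# Equal-tailed 95% credible interval from a discretised posterior.
# Hypothetical posterior: Beta(8, 4).

def beta_pdf(theta, a, b):
    """Unnormalised Beta(a, b) density — normalisation cancels below."""
    return theta ** (a - 1) * (1 - theta) ** (b - 1)

# Discretise the posterior on a fine grid of theta values
grid = [i / 1000 for i in range(1, 1000)]
weights = [beta_pdf(t, 8, 4) for t in grid]
total = sum(weights)

# Build the posterior CDF by accumulating normalised weights
cdf = []
running = 0.0
for w in weights:
    running += w / total
    cdf.append(running)

# Equal-tailed interval: 2.5th and 97.5th posterior percentiles
lower = next(t for t, c in zip(grid, cdf) if c >= 0.025)
upper = next(t for t, c in zip(grid, cdf) if c >= 0.975)
print(f"95% credible interval: [{lower:.3f}, {upper:.3f}]")
```

Unlike a frequentist confidence interval, this statement is directly about the parameter: given the data, it lies in the interval with 95% posterior probability.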
Conjugate prior
A prior distribution that, when combined with a specific type of likelihood function, results in a posterior distribution belonging to the same family as the prior
This pairing is known as a conjugate pair
Prior–likelihood conjugate pairs
Beta-Binomial
Beta-Bernoulli
Gamma-Poisson
Gamma-Exponential
Normal-Normal (mean)
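Each pair in the list has its own closed-form update. As one more illustration, here is the Gamma-Poisson pair with hypothetical event counts:

```python
# Gamma-Poisson conjugate update (hypothetical rate-estimation example).

# Prior on the Poisson rate lambda: Gamma(alpha, beta), shape-rate form
alpha, beta_rate = 2.0, 1.0

# Observed counts, e.g. events per hour over 5 hours
counts = [3, 4, 2, 5, 3]

# Conjugate update: alpha gains the total count, beta gains the number
# of observations — the posterior is again a Gamma distribution
alpha_post = alpha + sum(counts)
beta_post = beta_rate + len(counts)

posterior_mean = alpha_post / beta_post
print(f"Posterior: Gamma({alpha_post}, {beta_post}), mean = {posterior_mean:.3f}")
```

The same pattern holds for the other pairs: the data only shifts the parameters of the prior family.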