Bayesian Inference & Decision Theory Flashcards

1
Q

What is decision theory?

A

The study of making rational decisions under uncertainty

2
Q

What are the different perspectives on distribution parameters between Frequentist Statistics and Bayesian Statistics?

A

In Frequentist Statistics, the parameter theta is considered a fixed (but unknown) quantity, typically summarised by a point estimate. In Bayesian Statistics, theta is treated as a random quantity that varies according to some distribution (i.e. prior and posterior probabilities)

3
Q

What is the equation of P(A|B)?

A

[P(A and B)]/P(B)

or

[P(B|A).P(A)]/[P(B|A).P(A) + P(B|not A).P(not A)] (for 2 possible events)
*the denominator is the same as P(B)
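
A minimal numeric check of the two-event form, with invented probabilities:

    # Bayes' theorem: P(A|B) = P(B|A).P(A) / P(B), where
    # P(B) = P(B|A).P(A) + P(B|not A).P(not A) for two possible events.
    p_A = 0.01             # prior probability of A (hypothetical)
    p_B_given_A = 0.95     # P(B|A) (hypothetical)
    p_B_given_notA = 0.10  # P(B|not A) (hypothetical)

    p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)  # total probability
    p_A_given_B = p_B_given_A * p_A / p_B
    print(p_A_given_B)     # ~0.088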

4
Q

What is the equation for P(A and B)?

A

P(A|B).P(B)

5
Q

What is the equation for the expected value of a pmf?

A

summation(x.p(x)) through all outcomes
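
A one-line check of this formula, using a hypothetical pmf:

    # E[X] = sum over all outcomes x of x * p(x); the pmf below is invented.
    outcomes = [0, 1, 2, 3]
    probs = [0.1, 0.4, 0.3, 0.2]
    expected_value = sum(x * p for x, p in zip(outcomes, probs))
    print(expected_value)  # 1.6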

6
Q

What is the objective of Bayesian Inference?

A

To logically and coherently update state probabilities as new evidence becomes available

7
Q

Is it always logical to maximise profit or minimise loss?

A

No. We have to consider the scale of the downside. Sometimes risk tolerance makes us depart from strict expected-profit maximisation

8
Q

What is EVPI?

A

The expected value of perfect information is the difference between the best expected outcome under perfect information and the best expected outcome using only the prior information
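
A sketch of an EVPI calculation for a hypothetical two-action, two-state payoff table:

    # EVPI = (expected payoff acting with perfect information)
    #        - (best expected payoff acting on the prior alone).
    payoffs = {
        "expand": [100, -40],  # hypothetical payoff in state 0, state 1
        "hold":   [30, 10],
    }
    prior = [0.6, 0.4]         # hypothetical prior probabilities of the states

    # Best expected payoff using only the prior information.
    best_prior = max(sum(p * v for p, v in zip(prior, row))
                     for row in payoffs.values())

    # With perfect information we choose the best action in each state.
    perfect = sum(p * max(row[s] for row in payoffs.values())
                  for s, p in enumerate(prior))

    print(perfect - best_prior)  # EVPI = 64 - 44 = 20 here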

9
Q

What is predictive probability?

A

The overall (marginal) probability of observing some value X in the data, averaged over the possible states

10
Q

What is a posterior probability?

A

The probability that theta equals some value after factoring in additional information (i.e. observed evidence)

11
Q

What is the Excel function for weighted summation?

A

= SUMPRODUCT(number cells, weight cells)
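
For example, =SUMPRODUCT(A2:A6, B2:B6) multiplies each value in A2:A6 by the matching weight in B2:B6 and sums the products (the cell ranges here are just placeholders).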

12
Q

What is the equation for m(x(k))? (i.e. the predictive probability of x(k))

A

summation through states of [P(x(k) | theta).P(theta)]
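
A sketch of this summation for a hypothetical three-state prior with a binomial likelihood (scipy assumed available):

    # m(x) = sum over states theta of P(x|theta) * pi(theta)
    from scipy.stats import binom

    states = [0.2, 0.5, 0.8]   # hypothetical candidate values of theta
    prior = [0.3, 0.4, 0.3]    # hypothetical pi(theta) for each state
    n, x = 10, 6               # hypothetical sample size and successes

    m_x = sum(binom.pmf(x, n, theta) * p for theta, p in zip(states, prior))
    print(m_x)                 # predictive probability of x = 6 (~0.11)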

13
Q

What does the Latin term “a posteriori” mean?

A

Dependent on empirical evidence or experience

14
Q

What does the Latin term “a priori” mean?

A

Independent of experience

15
Q

What is EVSI?

A

The expected value of sample information is the difference between the best expected outcome using the posterior probabilities (i.e. after observing the sample information) and the best expected outcome using the prior probabilities.
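
A sketch of an EVSI calculation, reusing the hypothetical payoff table from the EVPI card and assuming a two-outcome sample signal with invented likelihoods:

    # EVSI = (expected payoff when we observe the sample, update to the
    # posterior, then act) - (best expected payoff under the prior alone).
    payoffs = [[100, -40],     # action 0 payoff in state 0, state 1
               [30, 10]]       # action 1 payoff in state 0, state 1
    prior = [0.6, 0.4]
    likelihood = [[0.8, 0.3],  # P(good signal | state 0), P(good | state 1)
                  [0.2, 0.7]]  # P(bad signal | state 0), P(bad | state 1)

    best_prior = max(sum(p * v for p, v in zip(prior, row)) for row in payoffs)

    evsi = -best_prior
    for lik in likelihood:                                   # each outcome
        m = sum(l * p for l, p in zip(lik, prior))           # predictive prob
        posterior = [l * p / m for l, p in zip(lik, prior)]  # Bayes' theorem
        best_post = max(sum(q * v for q, v in zip(posterior, row))
                        for row in payoffs)
        evsi += m * best_post                                # weight by m
    print(evsi)  # 49.6 - 44 = 5.6 here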

16
Q

What is the difference between subjective Bayesian Inference and more traditional objective Bayesian Inference?

A

Objective Bayesian Inference uses objective statistical inference to determine the distributions of the observations conditional on the underlying states (i.e. P(X|theta)), whereas subjective Bayesian Inference assesses these conditional distributions judgementally

17
Q

What are the conditions for a sample from binomial sampling to follow a binomial distribution?

A
  • The population must be so large that the sample does not disturb the population proportions
    or
  • We sample with replacement
18
Q

What is binomial sampling?

A

When a number of observations are sampled and grouped into one of two levels (traditionally success or failure)

19
Q

What is a subjective probability?

A

A probability indicating the current assessment of how likely it is that the true value of theta is some value. Normally shown as pi(theta=n)

20
Q

What is the Excel function for calculating binomial probabilities?

A

=BINOMDIST
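
For example, =BINOMDIST(5, 100, 0.05, FALSE) returns P(X = 5) for 100 trials with success probability 0.05; setting the last argument to TRUE gives the cumulative P(X <= 5). Newer versions of Excel call this =BINOM.DIST with the same arguments.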

21
Q

What would happen to our posterior distribution for theta if we observe an outcome of x=5 in our binomial sample of 100?

A

The posterior distribution of theta would be centered approximately on theta=0.05 (i.e. the proportion of successes in the sample [5/100])

22
Q

What do we do when setting up the prior distribution if we want to consider values over some continuous interval as opposed to some set of discrete values?

A

We assign a probability density function to theta

23
Q

How do we set up our prior distribution of theta if we want to consider all continuous values within some interval as equally likely a priori?

A

We use a uniform distribution for theta (i.e. pi(theta) = 1/(b-a) for a < theta < b)

24
Q

What is the posterior distribution of theta when using a uniform prior and binomial sampling?

A

pi(theta|x) = k(theta^x)[(1-theta)^(n-x)]

  • this is actually a beta distribution, where k = gamma(n+2)/[gamma(x+1).gamma(n-x+1)]
  • k is a normalizing constant that comes about from cancelling terms in the joint/predictive ratio; it scales the resulting distribution so that it integrates to 1
25
Q

Assume we use a beta distribution to model the prior distribution for theta. What is the posterior distribution of theta?

prior = k.[(theta^(a-1)).(1-theta)^(b-1)]

A

pi(theta|x) = k.[(theta^(a+x-1)).(1-theta)^(b+n-x-1)]
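
A sketch of this conjugate update (scipy assumed), covering both this card and the uniform-prior special case of the previous cards; the sample is hypothetical:

    # Beta(a, b) prior + x successes in n trials -> Beta(a+x, b+n-x) posterior.
    # With a = b = 1 (a uniform prior) this is Beta(x+1, n-x+1).
    from scipy.stats import beta

    a, b = 1, 1      # uniform prior
    n, x = 100, 5    # hypothetical binomial sample

    posterior = beta(a + x, b + n - x)
    print(posterior.mean())     # ~0.059, close to the sample proportion 0.05
    print(posterior.pdf(0.05))  # posterior density at theta = 0.05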

26
Q

What is the expected value of a theta which follows a beta distribution?

A

E[theta] = a/(a+b)

27
Q

What is the expected value of 1-theta from a beta distribution?

A

E[1-theta] = b/(a+b)

28
Q

What is the variance of a theta which follows a beta distribution?

A

var[theta] = ab/[(a+b)²(a+b+1)]
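
The moment formulas of cards 26-28 can be checked against scipy's beta distribution; the parameters below are arbitrary:

    from scipy.stats import beta

    a, b = 2.0, 8.0
    mean = a / (a + b)                           # E[theta] = 0.2
    var = a * b / ((a + b) ** 2 * (a + b + 1))   # 16/1100 ~ 0.0145

    dist = beta(a, b)
    assert abs(dist.mean() - mean) < 1e-12
    assert abs(dist.var() - var) < 1e-12
    print(mean, 1 - mean, var)                   # E[theta], E[1-theta], var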

29
Q

How do we calculate what the variance of a theta posterior distribution should be based on the prior expectations?

A

We divide the width of the plausible range for theta by the approximate number of standard deviations it spans, giving the quantity for a single standard deviation. We then square that and set it equal to var(theta), solving alongside the prior mean (see the sketch below).
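
A hypothetical sketch of this moment-matching: suppose we believe E[theta] = 0.2 and that the plausible range 0 to 0.4 spans roughly 4 standard deviations (all numbers invented):

    mean = 0.2
    sd = (0.4 - 0.0) / 4    # one standard deviation ~ 0.1
    var = sd ** 2           # target var(theta) = 0.01

    # From E[theta] = a/(a+b) and var = mean*(1-mean)/(a+b+1):
    s = mean * (1 - mean) / var - 1   # a + b
    a = mean * s
    b = (1 - mean) * s
    print(a, b)                       # a = 3.0, b = 12.0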

30
Q

Is the uniform distribution a special case of the beta distribution? If so, what are its parameters?

A

Yes. The uniform distribution is a beta distribution with alpha=beta=1.

31
Q

What can we say about the centrality of the posterior distribution if we use a relatively uninformative prior (like a uniform distribution)?

A

The posterior distribution will be more-or-less centered on the sample mean

32
Q

What are “iid” observations?

A

Observations that are independent and identically distributed

33
Q

What is a likelihood function?

A

A likelihood function is essentially the reverse of a probability density function. Instead of taking a parameter and giving the probability of x given that parameter (i.e. f(x|theta)), the likelihood function takes the data (x) as fixed and, viewed as a function of theta, measures how plausible each value of theta is given the data.

L(theta|x1,…,xn) = product over i = 1 to n of f(xi|theta)
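
A sketch of a likelihood function evaluated on a grid, assuming hypothetical iid Bernoulli data:

    import numpy as np

    data = np.array([1, 0, 1, 1, 0, 1])   # hypothetical iid Bernoulli draws
    thetas = np.linspace(0.01, 0.99, 99)

    # L(theta|data) = product over i of f(xi|theta), with the Bernoulli
    # density f(x|theta) = theta^x * (1-theta)^(1-x).
    likelihood = np.array([np.prod(t ** data * (1 - t) ** (1 - data))
                           for t in thetas])
    print(thetas[np.argmax(likelihood)])   # maximised near 4/6 ~ 0.67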

34
Q

Why is the requirement of a prior distribution seen as both a strength and a weakness of the Bayesian approach to inference?

A
  • We sometimes do have real and meaningful prior information which we can then exploit (especially if the results of the analysis are to be used as a basis for decision making)
  • Some argue that prior information introduces an element of subjectivity which may conflict with a desire for objectivity in the data analysis.
35
Q

What is the posterior probability density function for theta, expressed using a likelihood function?

A

pi(theta|x1,x2,…xn) = [L(theta|x1,x2,…xn).pi(theta)]/[m(x1,x2,…xn)]

  • where m is the predictive probability density function, given by the integral from -inf to inf of the numerator d(theta).
  • The role of the denominator is basically to scale/standardize the posterior distribution so that it integrates to 1.
  • Because of this, we often write the posterior simply as k.L(theta|x1,x2,…xn).pi(theta) and say the posterior distribution of theta is directly proportional to L(theta|x1,x2,…xn).pi(theta) (see the sketch below)
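
A sketch of this scaling done numerically on a grid, continuing the hypothetical Bernoulli data of card 33:

    import numpy as np

    data = np.array([1, 0, 1, 1, 0, 1])     # hypothetical data
    thetas = np.linspace(0.001, 0.999, 999)
    prior = np.ones_like(thetas)            # uniform pi(theta)

    likelihood = np.array([np.prod(t ** data * (1 - t) ** (1 - data))
                           for t in thetas])
    unnormalized = likelihood * prior                          # k absorbed here
    posterior = unnormalized / np.trapz(unnormalized, thetas)  # divide by m
    print(np.trapz(posterior, thetas))                         # ~1.0
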
36
Q

What is an advantage of using a normalization constant?

A

Any factors in L(theta|x1,x2…xn) or pi(theta) which do not depend on theta can be absorbed into the normalization constant and effectively ignored.

37
Q

What are the 3 main areas of statistical inference that we might use the posterior distribution of theta for?

A
  • Hypothesis Testing
  • Interval Estimation
  • Point Estimation
38
Q

What is a type-I error?

A

The rejection of H0 when it is in fact true

39
Q

What is a type-II error?

A

The acceptance of H0 when it is in fact false

40
Q

Given a null hypothesis of theta being some value (theta0) with prior probability pi0, what are the posterior odds of theta0 being the correct value?

A

It is the ratio of the posterior probabilities of theta being theta0 and theta being theta1:

= [L(theta0|data).pi0]/[L(theta1|data).(1-pi0)]

This represents the product of [the a priori odds] and a likelihood ratio, also known as a Bayes Factor.

41
Q

Assume that the cost of a type-I error is CI and that the cost of a type-II error is CII, what is the decision which minimizes expected cost?

A

Reject H0 if CI.pi(theta0|data) <= CII.pi(theta1|data)
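
A sketch combining the posterior odds of the previous card with this decision rule; the hypotheses, data and costs are all invented:

    from scipy.stats import binom

    theta0, theta1 = 0.5, 0.7
    pi0 = 0.5                  # prior probability of H0
    n, x = 20, 16              # hypothetical binomial data

    bayes_factor = binom.pmf(x, n, theta0) / binom.pmf(x, n, theta1)
    posterior_odds = bayes_factor * (pi0 / (1 - pi0))  # prior odds * BF

    p0 = posterior_odds / (1 + posterior_odds)         # pi(theta0|data)
    p1 = 1 - p0                                        # pi(theta1|data)

    C_I, C_II = 1.0, 2.0       # hypothetical costs of type-I/type-II errors
    decision = "reject H0" if C_I * p0 <= C_II * p1 else "accept H0"
    print(posterior_odds, decision)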

42
Q

How can we approximate values for L and U such that the probability of theta belonging to the interval [L,U] is equal to 100(1-alpha)%? (A.k.a. interval estimation, or calculating a credibility interval)

A

We set the integral of the posterior distribution for theta equal to 1-alpha with the bounds L and U then numerically approximate.
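
In practice the bounds are often read off the posterior quantile function rather than solved by hand. A sketch for the Beta(x+1, n-x+1) posterior of card 24, using an equal-tailed interval (scipy assumed):

    from scipy.stats import beta

    n, x, alpha = 100, 5, 0.05
    posterior = beta(x + 1, n - x + 1)
    L, U = posterior.ppf(alpha / 2), posterior.ppf(1 - alpha / 2)
    print(L, U)  # roughly 0.02 and 0.11; 95% of the posterior mass is in [L,U]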

43
Q

What is the Bayesian estimate?

A

The expected value of the posterior distribution. Under certain circumstances, other measures such as the mode or the median will be used as the estimator

44
Q

What is the likelihood function of lambda for a sample from the Poisson distribution?

A

L(lambda | data) is directly proportional to [lambda^(sum of xi over 1 to n)].[e^(-n.lambda)]

  • We know for sure that lambda is strictly positive so the prior distribution should be restricted to positive values only
45
Q

Assume the prior distribution of lambda (under Poisson sampling) to be a gamma distribution with parameters alpha and phi. What is the resulting posterior distribution?

A

Gamma distribution with parameters (alpha+sum of xis) and (phi + n)

46
Q

What is the posterior expectation of lambda given a gamma posterior distribution?

A

a/b, which for the posterior above is (alpha + sum of xis)/(phi + n)

*If you expand this, it equates to a weighted average of the prior and sample estimates
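
A sketch of the full Poisson-gamma update (cards 44-46) with hypothetical counts, including the weighted-average identity:

    # Gamma(alpha, phi) prior (phi = rate) + Poisson sample x1..xn
    # -> Gamma(alpha + sum(x), phi + n) posterior.
    alpha, phi = 2.0, 1.0
    data = [3, 5, 4, 6, 2]             # hypothetical Poisson counts
    n = len(data)

    a_post = alpha + sum(data)         # 22.0
    b_post = phi + n                   # 6.0
    posterior_mean = a_post / b_post   # ~3.67

    # Same value as a weighted average of prior mean and sample mean:
    weighted = (phi / b_post) * (alpha / phi) + (n / b_post) * (sum(data) / n)
    assert abs(weighted - posterior_mean) < 1e-12
    print(posterior_mean)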

47
Q

DO NORMAL SAMPLING

A

K