245 Proofs Flashcards

1
Q

Efficiency definition

A

Given two unbiased estimators θ̂1 and θ̂2 of a parameter θ, with variances V(θ̂1) and V(θ̂2) respectively, the efficiency of θ̂1 relative to θ̂2, denoted eff(θ̂1, θ̂2), is defined to be the ratio

eff(θ̂1, θ̂2) = V(θ̂2) / V(θ̂1).
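
A minimal simulation sketch of this definition (assuming normal data; the setup and numbers are illustrative, not from the cards), comparing θ̂1 = sample median with θ̂2 = sample mean:

import numpy as np

rng = np.random.default_rng(0)
n, reps = 25, 20000
samples = rng.normal(0.0, 1.0, size=(reps, n))

v_mean = samples.mean(axis=1).var()          # V(sample mean)
v_median = np.median(samples, axis=1).var()  # V(sample median)

# eff(median, mean) = V(mean) / V(median); tends to 2/pi ≈ 0.64 for normal data
print(v_mean / v_median)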

2
Q

Likelihood definition

A

Suppose that the likelihood function depends on k parameters θ1, θ2, …, θk.

Choose as estimates those values of the parameters that maximize the likelihood:

L(y1, y2,…, yn |θ1,θ2,…,θk).
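
A hedged numerical sketch of this method (an assumed exponential model, not one of the card's examples), maximizing the likelihood by direct optimization:

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=200)  # data with true rate 0.5

def neg_log_lik(lam):
    # log L(y | lam) = n log(lam) - lam * sum(y) for the exponential model
    return -(len(y) * np.log(lam) - lam * y.sum())

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")
print(res.x, 1 / y.mean())  # numerical MLE agrees with the closed form 1/ybar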

3
Q

Invariance Property of MLE

A

If θ is the parameter associated with the distribution of some random variable Y and θ̂ is the maximum likelihood estimator of θ, then for any function t(θ), the MLE of t(θ) is given by (t(θ))^ = t(θ̂).
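
A short sketch of the property (same assumed exponential model as above; illustrative only): the MLE of t(λ) = P(Y > 1) = e^(−λ) is obtained by plugging in λ̂:

import numpy as np

rng = np.random.default_rng(2)
y = rng.exponential(scale=2.0, size=200)  # true rate 0.5
lam_hat = 1 / y.mean()                    # MLE of the rate lam
t_hat = np.exp(-lam_hat)                  # MLE of t(lam) = P(Y > 1), by invariance
print(lam_hat, t_hat)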

4
Q

Power definition

A

Suppose that W is the test statistic and RR is the rejection region for a test of a hypothesis involving the value of a parameter θ. Then the power of the test, denoted by power(θ), is the probability that the test will lead to rejection of H0 when the actual parameter value is θ. That is,
power(θ) = P(W ∈ RR when the parameter value is θ).
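
A hedged sketch computing power for a one-sided z-test (an assumed N(μ, 1) model at level 0.05; not from the cards):

import numpy as np
from scipy.stats import norm

alpha, n = 0.05, 25
crit = norm.ppf(1 - alpha)  # reject H0: mu = 0 when Z = sqrt(n) * ybar > crit

def power(mu):
    # P(W in RR when the true mean is mu); here Z ~ N(sqrt(n) * mu, 1)
    return 1 - norm.cdf(crit - np.sqrt(n) * mu)

print(power(0.0), power(0.5))  # equals alpha at the null, grows away from it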

5
Q

Neyman-Pearson Lemma

A

Suppose that we wish to test the simple null hypothesis H0 : θ = θ0 versus the simple alternative hypothesis Ha : θ = θa , based on a random sample Y1,Y2,…,Yn from a distribution with parameter θ.

Let L(θ) denote the likelihood of the sample when the value of the parameter is θ. Then, for a given α, the test that maximizes the power at θa has a rejection region, RR, determined by

LR = L(θ0)/L(θa) < k.

Any other test with significance level less than or equal to α will have power at θa less than or equal to that of this likelihood ratio test.
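
A hedged sketch of the lemma in the normal case (assumed model Yi ~ N(μ, 1), H0: μ = 0 vs Ha: μ = 1; illustrative only), where the condition L(θ0)/L(θa) < k reduces to ȳ > c:

import numpy as np
from scipy.stats import norm

alpha, n = 0.05, 16
c = norm.ppf(1 - alpha) / np.sqrt(n)  # P(ybar > c | mu = 0) = alpha

def reject(y):
    # most powerful alpha-level test of mu = 0 against mu = 1
    return y.mean() > c

rng = np.random.default_rng(3)
print(reject(rng.normal(1.0, 1.0, n)))  # usually True when Ha holds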

6
Q

Joint Distribution for n=2

A

Proof

7
Q

Generalised Likelihood Ratio test

A

Definition
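
A hedged statement of the standard form of this definition (the notes' exact wording may differ): with Θ0 the parameter space under H0 and Θ the full parameter space,

λ = (sup over θ in Θ0 of L(θ)) / (sup over θ in Θ of L(θ)), 0 ≤ λ ≤ 1,

and H0 is rejected for small values of λ, say λ ≤ k, with k chosen to give significance level α.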

8
Q

Conjugate Prior Distributions

A

Let L(θ|x) be a likelihood function. A class Π of prior distributions is said to form a conjugate family if the posterior density

p(θ|x) ∝ p(θ) L(θ|x)

is in the class Π for all x whenever the prior density is in Π.
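
A minimal sketch of conjugacy (assuming the familiar Beta-binomial pair; illustrative, not the card's proof): a Beta prior on a binomial proportion yields a Beta posterior, so the Beta class is a conjugate family here:

from scipy.stats import beta

a, b = 2.0, 3.0                      # prior Beta(a, b)
n, x = 20, 13                        # x successes in n trials
posterior = beta(a + x, b + n - x)   # posterior is again Beta
print(posterior.mean())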

9
Q

Suppose that θ ~ N(θ0, Φ0) and that X = (X1,…,Xn), where the Xi are independent and Xi | θ ~ N(θ, Φ).

Theorem 2.2

A

Definition
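
The standard conjugate update for this setting (hedged: this is the textbook result, which may differ in notation from the statement of Thm 2.2) gives θ | x ~ N(θ1, Φ1) with 1/Φ1 = 1/Φ0 + n/Φ and θ1 = Φ1(θ0/Φ0 + n x̄/Φ); a minimal numerical sketch:

import numpy as np

theta0, phi0, phi = 0.0, 4.0, 1.0     # assumed prior and sampling variances
x = np.array([1.2, 0.8, 1.5, 0.9])
n, xbar = len(x), x.mean()

phi1 = 1 / (1 / phi0 + n / phi)
theta1 = phi1 * (theta0 / phi0 + n * xbar / phi)
print(theta1, phi1)  # posterior mean sits between theta0 and xbar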

10
Q

Remark 1 of Thm 2.2

A

Proof

11
Q

Remark 2 of Thm 2.2

A

Proof

12
Q

Remark 3 of Thm 2.2

A

Proof

13
Q

Remark 4 of Thm 2.2

A

The posterior distribution can be used to calculate probabilities concerning θ. For a given probability 1 − α, an interval A = [a1, a2] can be calculated such that P(θ ∈ A | x) = 1 − α. Alternatively, for a given interval B = [b1, b2], the probability β = P(θ ∈ B | x) can be found.
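
A minimal sketch of both calculations (assuming an illustrative normal posterior, not the one from Thm 2.2):

from scipy.stats import norm

posterior = norm(loc=1.1, scale=0.4)

# interval A = [a1, a2] with P(theta in A | x) = 0.95 (equal tails)
a1, a2 = posterior.ppf(0.025), posterior.ppf(0.975)

# probability beta = P(theta in B | x) for a given B = [0, 2]
beta_prob = posterior.cdf(2.0) - posterior.cdf(0.0)
print((a1, a2), beta_prob)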

14
Q

Suppose that Φ ~ Inv-G(a,b) and that X = (X1,…,Xn), where the Xi’s are independent and Xi |Φ ~ N(μ,Φ).

Theorem 2.3

A

Proof and Definition
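
The standard conjugate update for this setting (hedged: the textbook result, possibly differing in parameterization from Thm 2.3) gives Φ | x ~ Inv-G(a + n/2, b + S/2) with S = Σ(xi − μ)^2; a minimal numerical sketch:

import numpy as np

a, b, mu = 3.0, 2.0, 0.0              # assumed prior parameters and known mean
x = np.array([0.5, -1.2, 0.3, 0.9, -0.4])
S = ((x - mu) ** 2).sum()

a_post, b_post = a + len(x) / 2, b + S / 2
print(a_post, b_post, b_post / (a_post - 1))  # posterior mean of phi (a_post > 1)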

15
Q

Remark 1 Thm 2.3

A

Proof

16
Q

Remark 2 Thm 2.3

A

Definition

17
Q

Suppose that π ~ Beta(a,b) and that you have an observation X such that X ~ Bin(n, π).

Theorem 3.3

A

Definition and Proof

18
Q

Remark 1 Thm 3.3

A

Statement

19
Q

Remark 2 Thm 3.3

A

Proof

20
Q

Remark 3 Thm 3.3

A

Proof

21
Q

Remark 4 Thm 3.3

A

Proof

22
Q

Suppose that λ ~ G(a, 1/b) and that X = (X1,…,Xn), where the Xi’s are independent and Xi | λ ~ Poiss(λ).

Theorem 3.4

A

Definition and Proof
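
Hedged sketch of the standard conjugate update, reading G(a, 1/b) as shape a and scale 1/b (i.e. rate b): the textbook result is λ | x ~ G(a + Σxi, 1/(b + n)). Parameterizations vary, so check against the notes.

import numpy as np

a, b = 2.0, 1.0                     # assumed prior shape and rate
x = np.array([3, 1, 4, 2, 2])
a_post, b_post = a + x.sum(), b + len(x)
print(a_post / b_post)              # posterior mean of lambda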

23
Q

Remark 1 of 3.4

A

Definition

24
Q

Suppose that θ ~ Pa(ξ, γ) and that X = (X1,…,Xn), where the Xi’s are independent and Xi | θ ~ U(0, θ).

3.5

A

Definition and Proof
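
The standard conjugate update for this setting (hedged: with Pa(ξ, γ) having density proportional to θ^(−(γ+1)) for θ ≥ ξ, the textbook result is θ | x ~ Pa(max(ξ, max(x)), γ + n); conventions vary); a minimal sketch:

import numpy as np

xi, gamma = 1.0, 2.0                # assumed prior parameters
x = np.array([0.7, 2.3, 1.1, 1.9])
xi_post, gamma_post = max(xi, x.max()), gamma + len(x)
print(xi_post, gamma_post)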

25
Q

Suppose that p(θ) = c, −∞ < θ < ∞, and that X = (X1,…,Xn), where the Xi’s are independent and Xi | θ ~ N(θ, Φ).

Theorem 4.1

A

Definition
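
Under the flat prior the posterior is proportional to the likelihood, and the standard result (hedged; it should match Thm 4.1 up to notation) is θ | x ~ N(x̄, Φ/n); a minimal sketch:

import numpy as np

phi = 1.0                           # assumed known sampling variance
x = np.array([1.2, 0.8, 1.5, 0.9])
print(x.mean(), phi / len(x))       # posterior mean and variance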

26
Q

Suppose that p(Φ) = 1/Φ and that X = (X1,…,Xn), where the Xi’s are independent and Xi | Φ ~ N(μ, Φ).

Theorem 4.2

A

Definition

27
Q

Suppose that π ~ Beta(0,0) and that you have an observation X such that X ~ Bin(n, π).

Theorem 4.4

A

Definition

28
Q

Suppose that p(λ) ∝ λ^(−1/2) and that X = (X1,…,Xn), where the Xi’s are independent and Xi | λ ~ Poiss(λ).

Theorem 4.5

A

Definition

29
Q

Suppose that θ ~ Pa(0,0) and that X = (X1,…,Xn), where the Xi are independent and Xi | θ ~ U (0, θ).

Theorem 4.6

A

Definition

30
Q

Loss Function without observations

A

Definition
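
A hedged sketch of the standard definition (the wording here may differ from the notes): a loss function l(θ, a) gives the loss incurred when the true parameter value is θ and a is used as the estimate; with no observations available, a is chosen to minimize the expected loss with respect to the prior,

E[l(θ, a)] = ∫ l(θ, a) p(θ) dθ.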

31
Q

Loss functions with observations

A

Definition

32
Q

Bayes Estimator

A

Suppose now that the value x of the random vector X can be observed before estimating θ, and let p(θ | x) denote the posterior pf or pdf of θ. For any estimate a, the expected loss in this case will be

E[l(θ, a) | x] = ∫ l(θ, a) p(θ | x) dθ.

An estimate a should now be chosen for which this expectation is a minimum.

For each possible value x of the random vector X, let δ(x) denote a value of the estimate a for which the expected loss in equation 4.1 is a minimum. Then the function δ(X) whose values are specified in this way is an estimator of θ, called a Bayes estimator of θ. In words, for each possible value x of X, the value δ(x) of the Bayes estimator is chosen so that

E[l(θ, δ(x)) | x] = min over a of E[l(θ, a) | x].
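
A minimal numerical sketch (assuming squared-error loss and an illustrative Beta posterior): minimizing the expected loss over a grid recovers the posterior mean, anticipating card 34:

import numpy as np
from scipy.stats import beta

posterior = beta(5.0, 3.0)
grid = np.linspace(0.001, 0.999, 999)
dx = grid[1] - grid[0]

def expected_loss(a):
    # E[(theta - a)^2 | x], approximated by a Riemann sum over the grid
    return np.sum((grid - a) ** 2 * posterior.pdf(grid)) * dx

a_star = grid[np.argmin([expected_loss(a) for a in grid])]
print(a_star, posterior.mean())  # minimizer is (numerically) the posterior mean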

33
Q

Squared Error Loss Function

A

Definition
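
Presumably the standard form, as used in card 34 below (hedged): l(θ, a) = (θ − a)^2.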

34
Q

E[(θ − a)^2 | x] is a minimum when a is chosen to be equal to the mean of the posterior distribution

A

Proof
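
A one-line version of the standard argument (hedged; the course proof may differ): writing m = E[θ | x],

E[(θ − a)^2 | x] = E[(θ − m)^2 | x] + (m − a)^2,

because the cross term 2(m − a)E[θ − m | x] vanishes; the right-hand side is minimized by taking a = m, the posterior mean.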

35
Q

Absolute Error Loss Function

A

Definition
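
Presumably the standard form, as used in card 36 below (hedged): l(θ, a) = |θ − a|.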

36
Q

E[|θ − a| | x] is a minimum when a is chosen to be equal to the median of the posterior distribution, i.e. a = F^(−1)(1/2), where F(θ | x) is the posterior distribution function.

A

Proof
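
A sketch of the standard argument (hedged; the course proof may differ): for a continuous posterior with distribution function F(θ | x),

E[|θ − a| | x] = ∫ from −∞ to a of (a − θ) p(θ | x) dθ + ∫ from a to ∞ of (θ − a) p(θ | x) dθ,

and differentiating with respect to a gives F(a | x) − (1 − F(a | x)), which is zero when F(a | x) = 1/2, i.e. at the posterior median.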