SU3 - Elements of Finite Sample Properties, Asymptotic Theory, Confidence Intervals and Hypothesis Testing Flashcards
An estimate is a ?, but an estimator is a ?
An estimate is a number, but an estimator is a random variable
What are finite sample properties?
Properties of an estimator when the sample size is not arbitrarily large
What is an unbiased estimator?
An estimator whose expected value equals the true population parameter, i.e. E(θn) = θ, so on average it does not over- or under-estimate it.
If we could draw repeated random samples on Y from the population and compute the estimate each time, the average of these estimates would be close to θ.
What is i.i.d?
Independently and identically distributed
Var(Ȳ) = σ²/n, where σ² is the population variance of Y
What happens to Var (Y bar) when sample size (n) increases?
Var(Ȳ) goes towards 0.
Increasing sample size will lead to a decrease in sampling variance of the sample mean
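A minimal simulation sketch of this card (assuming NumPy is available; the normal population, its parameters, and the sample sizes are illustrative, not from the flashcards), showing that the sampling variance of Ȳ is roughly σ²/n and shrinks as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0  # illustrative population variance

for n in (10, 100, 1000):
    # Draw many i.i.d. samples of size n and compute the sample mean of each
    ybars = rng.normal(loc=5.0, scale=np.sqrt(sigma2), size=(10_000, n)).mean(axis=1)
    print(n, ybars.var(), sigma2 / n)  # empirical Var(Ybar) vs theoretical sigma^2 / n
```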
If there are two unbiased estimators, how to see which is more efficient?
Relative efficiency is used. Smaller variance = more efficient
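As a hedged illustration (an example of my own, not from the flashcards): for normal data both the sample mean and the sample median are unbiased estimators of μ, and the mean has the smaller sampling variance, so it is the relatively more efficient of the two:

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(loc=0.0, scale=1.0, size=(20_000, 50))  # 20,000 samples of size n = 50

means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

# Both are unbiased for mu = 0; the estimator with the smaller variance is more efficient
print("Var(mean):  ", means.var())    # close to 1/50 = 0.02
print("Var(median):", medians.var())  # close to pi/(2*50), about 0.031
```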
What is used to compare estimators that are biased?
Using Mean Squared Error.
Large MSE means variance or bias is large.
What are asymptotic properties?
Statistical properties of an estimator when the sample size n is arbitrarily large.
It enables us to answer the following questions:
Does the variance of some unbiased estimator decrease as n↑?
Does the estimator become more precise as n↑?
Does the bias of an estimator shrink towards zero as n↑?
What is the distribution of the estimator when n is large?
What is consistency?
θn is consistent if P(|θn − θ| > ε) → 0 as n → ∞ for every ε > 0.
In other words, if θn is consistent, it becomes ever more improbable for the error |θn − θ| to exceed ε as n grows.
If θn is unbiased and Var(θn) → 0 as n → ∞, then θn is consistent.
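A small sketch (assumptions of mine: i.i.d. normal draws and ε = 0.1) showing that P(|Ȳn − μ| > ε) shrinks towards zero as n grows, which is exactly what consistency of the sample mean says:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, eps = 5.0, 0.1

for n in (10, 100, 1_000, 10_000):
    ybars = rng.normal(loc=mu, scale=2.0, size=(2_000, n)).mean(axis=1)
    prob = np.mean(np.abs(ybars - mu) > eps)  # estimate of P(|Ybar_n - mu| > eps)
    print(n, prob)
```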
what is plim?
“probability limit” of an estimator. It is the value that the estimator converges to in probability when the sample size becomes arbitrarily large.
What is Slutsky's theorem?
If plim(Tn) = α and plim(Un) = β, then
– plim(Tn + Un) = plim(Tn) + plim(Un) = α + β
– plim(Tn · Un) = αβ
– plim(Tn / Un) = α/β, provided β ≠ 0.
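A hedged numerical sketch (the distributions and the values α = 2, β = 4 are illustrative): taking Tn and Un as sample means with plims α and β, their sum, product and ratio settle at α + β, αβ and α/β as n grows:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 2.0, 4.0

for n in (100, 10_000, 1_000_000):
    Tn = rng.normal(alpha, 1.0, size=n).mean()  # plim(Tn) = alpha
    Un = rng.normal(beta, 1.0, size=n).mean()   # plim(Un) = beta
    print(n, Tn + Un, Tn * Un, Tn / Un)         # -> 6, 8, 0.5
```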
What is the central limit theorem?
the sample average, when standardized, has an asymptotically standard normal distribution, provided the population mean and variance exist
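A minimal sketch (assuming NumPy; the skewed Exponential(1) population and n = 200 are illustrative choices) showing that the standardized sample mean √n(Ȳ − μ)/σ looks standard normal even though the underlying data are not normal:

```python
import numpy as np

rng = np.random.default_rng(4)
mu = sigma = 1.0  # for Exponential(1), mean = sd = 1
n = 200

ybars = rng.exponential(scale=1.0, size=(20_000, n)).mean(axis=1)
z = np.sqrt(n) * (ybars - mu) / sigma  # standardized sample means

# If Z is roughly N(0, 1), these should be close to 0, 1 and 0.95 respectively
print(z.mean(), z.std(), np.mean(np.abs(z) < 1.96))
```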
What is the weak law of large numbers?
plim(Ȳn) = μ
In other words, the LLN states that the sample average converges in probability to its expectation.
Difference between consistency and asymptotic normality?
Consistency refers to the convergence of an estimator in probability to a single point. Asymptotic normality refers to the convergence of the distribution of an estimator to the normal distribution.
What is an estimator?
An estimator of θ is a rule that assigns a value of θ to each possible outcome of the sample. It is a function of random variables. Plugging the sample into an estimator gives us an estimate.
If it’s unbiased, must it be consistent as well?
Unbiased estimators are not necessarily consistent. Likewise, biased estimators are not necessarily inconsistent.
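A hedged sketch of the classic textbook examples (my own illustration, not from the flashcards): the "first observation" estimator Y1 is unbiased for μ but not consistent (its variance never shrinks), while Ȳ + 1/n is biased but consistent (both the bias and the variance vanish as n → ∞):

```python
import numpy as np

rng = np.random.default_rng(5)
mu = 3.0

for n in (10, 1_000, 10_000):
    samples = rng.normal(mu, 2.0, size=(2_000, n))
    first_obs = samples[:, 0]                  # unbiased, but Var stays at 4: not consistent
    shifted_mean = samples.mean(axis=1) + 1/n  # biased by 1/n, but consistent
    print(n, first_obs.var(), abs(shifted_mean.mean() - mu), shifted_mean.var())
```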
What is asymptotic normality?
If an estimator does not have a normal distribution when the sample size is finite, but has a distribution that resembles the normal as the sample size increases, we say that it is asymptotically normal.
Between unbiased estimators, which is most efficient?
One with smallest variance or one with biggest variance?
Smallest
Reasons why MSE can be large?
1) large variance
2) large bias
A positively biased estimator always produces estimates that are greater than the parameter that we are interested in estimating. Yes or no?
Not always, only on average
As the sample size increases, the variance shrinks towards zero. When the sample size is infinitely large, the variance is zero. Why?
This makes sense because if the sample covers the whole population, the estimator has no uncertainty left.
What is the sampling distribution?
the distribution of the estimator
Why is an estimator a random variable?
This is because it can take on a range of values depending on the sample.
What is Chebyshev’s inequality?
P(|X − E(X)| ≥ ε) ≤ Var(X)/ε² for any ε > 0. It is used to show that if θn is unbiased and Var(θn) → 0 as n → ∞, then θn is consistent.
What is the formula for MSE?
MSE(Y) = Bias(Y)^2 + Var(Y)
Bias(Y) = E(Y) − θ, i.e. the estimator's expected value minus the true parameter
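A short sketch verifying the decomposition numerically (the shrinkage-style estimator 0.9·Ȳ and the normal data are illustrative assumptions of mine):

```python
import numpy as np

rng = np.random.default_rng(6)
theta, n = 5.0, 30

# A deliberately biased estimator of theta: 0.9 times the sample mean
estimates = 0.9 * rng.normal(theta, 2.0, size=(50_000, n)).mean(axis=1)

bias = estimates.mean() - theta
var = estimates.var()
mse = np.mean((estimates - theta) ** 2)

print(mse, bias**2 + var)  # the two numbers agree (up to floating-point error)
```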
Three things to construct a confidence interval?
- Point estimate,
- Standard deviation (or standard error, which is an estimate of the standard deviation),
- Critical values.
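A minimal sketch (assuming SciPy is available; the data are illustrative) that builds a 95% confidence interval from exactly these three ingredients: the point estimate Ȳ, its standard error s/√n, and a t critical value:

```python
import numpy as np
from scipy import stats

y = np.array([4.1, 5.3, 4.8, 5.9, 5.1, 4.4, 5.6, 5.0])  # illustrative sample
n = len(y)

point_estimate = y.mean()
std_error = y.std(ddof=1) / np.sqrt(n)         # estimate of the sd of Ybar
critical_value = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value

ci = (point_estimate - critical_value * std_error,
      point_estimate + critical_value * std_error)
print(ci)
```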
If our test statistic has a p-value of 0.02, what does it mean?
The p-value is the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true; here that probability is only 2%, so the data are unlikely under the null. The smaller the p-value, the stronger the evidence against the null.
Equivalently, 0.02 is the smallest level of significance at which we would reject the null.
If p > α, do we reject the null?
we do not reject the null hypothesis based on our chosen level of significance 𝛼
When is the t-test/z test used to find a confidence interval?
Population standard deviation known > z test
Population standard deviation unknown (estimated by the sample standard deviation) > t test
What are the two types of errors in hypothesis testing?
Type 1: rejecting null hypothesis H0 when H0 is true
Type 2: failing to reject the null hypothesis H0 when H0 is false (i.e. when H1 is true)
What is the power of the test?
Probability of rejecting H0 when H1 is true, i.e. 1 − P(Type 2 error)
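A hedged simulation sketch (illustrative setup of mine: testing H0: μ = 0 when the true μ = 0.5, at α = 0.05) that estimates the power as the fraction of samples in which H0 is correctly rejected:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_mu, n, alpha = 0.5, 30, 0.05

rejections = 0
for _ in range(5_000):
    sample = rng.normal(true_mu, 1.0, size=n)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)  # test H0: mu = 0
    rejections += p_value < alpha

print("estimated power:", rejections / 5_000)
```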
What are the steps to conduct a hypothesis test?
1) specify the null and alternative hypothesis
2) construct the test statistic
3) specify the level of significance
4) based on 3, state the critical values or rejection region
5) reject the null hypothesis if p < α (equivalently, if the test statistic falls in the rejection region)
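A short end-to-end sketch of these steps (illustrative data and hypotheses of mine: H0: μ = 5 vs H1: μ ≠ 5 at α = 0.05), using a one-sample t test from SciPy:

```python
import numpy as np
from scipy import stats

y = np.array([5.6, 6.1, 4.9, 5.8, 6.3, 5.4, 6.0, 5.7])  # illustrative sample
alpha = 0.05  # step 3: level of significance

# Steps 1-2: H0: mu = 5 vs H1: mu != 5; construct the t statistic
t_stat, p_value = stats.ttest_1samp(y, popmean=5.0)

# Step 4: critical value / rejection region for a two-sided test
t_crit = stats.t.ppf(1 - alpha / 2, df=len(y) - 1)

# Step 5: reject H0 if p < alpha (equivalently, if |t| > t_crit)
print(t_stat, p_value, t_crit, p_value < alpha)
```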