Hypothesis Testing, Statistical Inference and Estimation Flashcards

1
Q

What does a Test of Significance determine?

A

It determines the probability that the observed difference between a sample statistic value and the parameter value stipulated by the Null Hypothesis (e.g., p hat versus the hypothesized population proportion) is due only to random sampling variation.

2
Q

Is the test of significance based on the sampling distribution for the statistic of interest?

A

Yes, it is. This is why the test statistic formula is (sample statistic - hypothesized population parameter) / (standard deviation of the sample statistic).
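
As a minimal sketch of this formula for a sample proportion (the function name and example numbers are hypothetical, not from the cards):

```python
import math

def z_test_statistic(p_hat, p0, n):
    """Standardize a sample proportion against the null value p0,
    using the standard deviation of p-hat computed under H0."""
    sd = math.sqrt(p0 * (1 - p0) / n)  # SD of the sample statistic under H0
    return (p_hat - p0) / sd

# Hypothetical example: 60 successes in 100 trials against H0: p = 0.5
z = z_test_statistic(0.60, 0.5, 100)  # (0.60 - 0.50) / 0.05 = 2.0
```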

3
Q

What is the center (expected value) of the distribution of the sample statistic (statistic of interest)?

A

The center is assumed to equal the null hypothesis value for the true population parameter (i.e., no real difference).

4
Q

How do you compute the spread of the sampling distribution of the statistic of interest?

A

The spread is computed differently depending on your statistic of interest. For the sample proportion, the standard deviation (sigma of p hat) is calculated as the square root of [Po(1 - Po)/n].

5
Q

If we assume the null hypothesis is true (which we do when conducting all tests of significance), the shape of the sampling distribution of the statistic of interest will be approximately normal if the following conditions are met:

A
  • n*Po is greater than or equal to 10
  • n*(1-Po) is greater than or equal to 10
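
These two conditions can be checked in one line (a hedged sketch; the function name is made up):

```python
def normal_approx_ok(n, p0):
    """Rule-of-thumb check that the sampling distribution of p-hat
    is approximately normal under H0: both expected counts >= 10."""
    return n * p0 >= 10 and n * (1 - p0) >= 10

normal_approx_ok(100, 0.5)  # True: expected counts are 50 and 50
normal_approx_ok(30, 0.1)   # False: only 3 expected successes
```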
6
Q

What is the overall output from a test of significance and how do you get it?

A

The p-value! After completing the test of significance formula, you get a test statistic. Test statistics always have a known probability distribution that allows you to assign a probability to the observed value of the sample statistic if the value of the population parameter stated in the Null Hypothesis is true.
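
For a z test statistic, turning the statistic into a p-value can be sketched with only the standard library (the function name is hypothetical; math.erf is used to build the normal CDF):

```python
import math

def two_tailed_p_value(z):
    """Two-tailed p-value for a standard-normal test statistic.
    The normal CDF is written via math.erf, so no SciPy is needed."""
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - cdf)

two_tailed_p_value(1.96)  # roughly 0.05, the conventional cutoff
```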

7
Q

What is the p-value?

A

It is the probability that the observed difference between the sample statistic value and the value of the population parameter stipulated by the Null Hypothesis is due only to random sampling variation, assuming the Null Hypothesis is really true.

8
Q

What are One-tailed tests of significance and when are they appropriate to use over Two-tailed tests?

A

These tests evaluate sample data as evidence for a difference in a specific direction. These tests are appropriate only if there is a theory-based reason, well documented in the scientific literature, for expecting a specific kind of difference.

9
Q

What are some additional interpretations/definitions of the p-value?

A
  • It is a measure of the strength of the evidence against the Null Hypothesis
  • It is also the probability that a conclusion to reject the Null Hypothesis value is incorrect
10
Q

Is the difference between statistically significant and practically important relevant?

A

Yes! Even though a difference between an (observed) sample statistic and a hypothesized population parameter (the null hypothesis value) may be statistically significant, the magnitude of the difference must also be big enough to have real-world consequences in order to be practically important.

11
Q

What is Effect Size?

A

It is the difference between an observed sample statistic and the hypothesized population parameter. It measures the magnitude of the effect (the difference) documented by a study.

12
Q

Are the p-values of One-tailed tests smaller or larger than that of Two-tailed tests? And will your overall conclusion be affected between which one you pick?

A

They are smaller, which also makes them more likely to support a conclusion that a significant difference exists. In most cases, picking a one-tailed versus a two-tailed test won’t affect the nature of your final conclusion as to whether there is a significant difference; the choice matters only when the evidence against the null hypothesis is borderline (the p-value is close to 0.05).

13
Q

When do you usually use Two-tailed tests of significance?

A

When there is insufficient basis to predict the direction of the difference prior to the study.

14
Q

What does it mean when a sample statistic is “significant”?

A

It means that the more likely explanation for the observed difference between the statistic value and the null hypothesis value is that the true population parameter differs from the null hypothesis value.

15
Q

What are the only two possible explanations for why the value of a sample statistic differs from the hypothesized parameter value?

A
  1. Difference due to Random Sampling Variation only (Null Hypothesis)
  2. Real Difference (Alternative Hypothesis, supported when the p-value falls below 0.05)
16
Q

What does the Type 1 Error Rate (alpha) give you?

A

It gives you the probability that a conclusion to reject the null hypothesis value of the true population parameter is actually incorrect. It is quantified by the p-value.

17
Q

When is the maximum allowable Type 1 Error rate (alpha) fixed?

A

When the investigator decides how small the p-value must be before he or she will conclude that there is sufficient evidence to reject the Null hypothesis value for the population parameter.

18
Q

What is the Power of a test of significance (1 - beta)?

A

The power is the probability that a test of significance will correctly reject the Null hypothesis value for the population parameter when there actually is a specified real difference between the Null hypothesis value and the true population parameter value.

19
Q

What is the specified real difference based on when discussing Power?

A

It is based on the investigator’s judgment about what would constitute the minimum important difference.

20
Q

What is the Type II error rate (beta)?

A

It is the probability that a test of significance will fail to reject the null hypothesis value when there really is a difference. In other words, the test will fail to detect a real difference.

21
Q

What are the two possible reasons for why a test of significance fails to reject the null hypothesis?

A
  1. There really is no difference.
  2. The test of significance had insufficient power to detect it, usually due to inadequate sample size.
22
Q

Absolute Effect Size vs Relative Effect Size

A

The first one is the actual difference between the observed sample statistic and the null hypothesis value of the population parameter; the second one is the same difference, but expressed as a percent (relative to the null hypothesis value).

24
Q

What two steps you need to complete in order to calculate Power? Explain them.

A
  1. Determine the MSD: the minimum significant difference refers to the rejection region, marked by the critical value of the sample statistic, which defines the range of values for the sample statistic that would provide sufficient evidence to reject the Null hypothesis. This first step only establishes that such a difference is unlikely to be due only to random sampling variation.
  2. Determine the probability of rejecting the Null hypothesis value if a specified MID exists: the minimum important difference is defined by the investigator and refers to a difference that is sufficiently large and interesting to prompt a change in theory or standard procedures.
25
Q

Write the mathematical formula for the two-step process of computing power.

A
  1. Critical value of the sample statistic = (Z or t critical value, based on the Type I error rate and whether the test is one-tailed or two-tailed) * (standard deviation of the sample statistic) + (the null hypothesis value)
  2. Power = P[Z or t is greater than or equal to (or less than or equal to) (critical value of the sample statistic - the alternative hypothesis value) / (standard deviation of the sample statistic, recomputed using the alternative hypothesis value)]
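
The two steps can be sketched in Python for an upper one-tailed test of a proportion (assumed setup, not from the cards: normal approximation and alpha = 0.05, so z_alpha = 1.645; all names are hypothetical):

```python
import math

def power_one_tailed(p0, p_alt, n, z_alpha=1.645):
    """Power of an upper one-tailed test of a proportion,
    following the two-step recipe (normal approximation assumed)."""
    # Step 1: critical value of p-hat, computed under the null value p0
    sd0 = math.sqrt(p0 * (1 - p0) / n)
    p_crit = z_alpha * sd0 + p0
    # Step 2: P(p-hat >= p_crit) when the true proportion is p_alt,
    # with the standard deviation recomputed under the alternative
    sd1 = math.sqrt(p_alt * (1 - p_alt) / n)
    z = (p_crit - p_alt) / sd1
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

With p0 = 0.5, an alternative of 0.6, and n = 100, this gives power of roughly 0.64; increasing n raises it.
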
26
Q

What is a confidence interval?

A

It is an interval that specifies a range of values, within which the true value of the population parameter will occur with a specified probability (confidence level)

27
Q

What is the confidence level?

A

It is the probability that confidence intervals produced by the specified method will include the true population parameter value.

28
Q

What is the error rate of a confidence interval (alpha)?

A

The error rate (alpha) is the probability that a confidence interval produced by the specified method will NOT include the true population parameter value.

29
Q

What is the confidence level [100*(1 - alpha)] and the error rate (alpha) fixed on?

A

They are fixed by the investigator’s decision regarding how “confident” that person wants to be that the interval will include the true parameter value. The investigator’s choice of error rate determines the critical value used to calculate the interval.

30
Q

Are the confidence level and the error rate affected by the sample size or by the size of random sampling variation in the statistic?

A

No, they are not affected by either of these things.

31
Q

How are 100*(1 - alpha)% confidence interval computed?

A
  1. Compute the standard error of the sample statistic (the estimated standard deviation of the sample statistic)
  2. Compute the margin of error = (Z_alpha/2) * (standard error of the sample statistic)
  3. Compute the confidence interval for the true population parameter value = observed sample statistic ± the margin of error.
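
The three steps, sketched for a sample proportion (the function name and numbers are illustrative; z = 1.96 corresponds to 95% confidence):

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% confidence interval for a population proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # step 1: standard error
    moe = z * se                             # step 2: margin of error
    return p_hat - moe, p_hat + moe          # step 3: statistic +/- margin

lo, hi = proportion_ci(0.60, 100)  # roughly (0.504, 0.696)
```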
32
Q

What happens if we want to be more confident that the interval will include the true value of the population parameter?

A

The interval widens, which then makes the estimate of the parameter value less precise. This happens because higher confidence levels have lower error rates, but larger critical Z-values (which you use to compute the margin of error). In other words, higher confidence reduces precision.

33
Q

How can you compute the sample size required to obtain a confidence interval with a desired margin of error (aka m* aka desired precision)?

A

Two ways, depending on whether a prior estimate of the sample statistic for the population parameter is available:

1. If a prior estimate is available:

n = (Z_alpha/2 divided by m*)^2 * [sample statistic * (1 - sample statistic)]

2. If no prior estimate is available:

n = (Z_alpha/2 divided by m*)^2 * [0.5 * (1 - 0.5)]
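
Both cases fit one hedged sketch (z = 1.96 for 95% confidence; the default p = 0.5 is the no-prior-estimate case; the function name is made up):

```python
import math

def sample_size(m_star, z=1.96, p=0.5):
    """Sample size needed for a desired margin of error m_star.
    Pass a prior estimate as p; p = 0.5 is the conservative default."""
    n = (z / m_star) ** 2 * p * (1 - p)
    return math.ceil(n)  # round up so the margin is not exceeded

sample_size(0.05)         # 385 with the conservative p = 0.5
sample_size(0.05, p=0.3)  # smaller, since 0.3 * 0.7 < 0.25
```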

34
Q

When there is no prior estimate of the sample statistic for the parameter value, what do you do?

A

You use the sample statistic = 0.5. Doing this will produce confidence intervals with a margin of error that is always less than or equal to the specified value of m*.

For example, a margin of error of plus or minus 0.05 is commonly paired with the 95% confidence level.

35
Q

What happens to the p-value when you increase the effect size (the difference between the sample statistic and the null hypothesis value)?

A

The p-value from a test of significance decreases. Large effects are unlikely to be due to random sampling variation only.

36
Q

What happens to the p-value when you increase the sample size?

A

Since increasing the sample size decreases the random sampling variation (e.g., the standard deviation of the sample statistic), the p-value also decreases (all other things being held constant). Remember that a smaller p-value indicates that the difference between a sample statistic and the hypothesized population parameter is less likely due to random sampling variation.

37
Q

What happens to power when you increase the sample size?

A

Random sampling variation is reduced (e.g., the standard deviation of the sample statistic); hence, the power of a test of significance increases.

38
Q

Do one-tailed tests have greater power than two-tailed tests of significance?

A

Yes, since they have a higher probability of rejecting the Null hypothesis (smaller p-values).

39
Q

What happens to the power of a test of significance when you specify a smaller minimum important difference (MID)?

A

It decreases since small differences are less likely to be detected.

40
Q

What happens to the power of a test of significance and the Type II error rate (beta) when you decrease the allowable Type I error rate (alpha)?

A

It sets a more rigorous standard of evidence for rejecting the Null hypothesis. By making it more difficult to reject the null hypothesis, the power is reduced and the Type II error rate (beta) is increased.

41
Q

What happens to the width of confidence intervals when you increase sample size?

A

The width decreases, since the standard error of the sample statistic decreases as the sample size increases.

42
Q

What happens to the error rate (alpha) when you increase the desired confidence level?

A

It decreases the error rate (alpha) of confidence intervals.