Lecture 2 Flashcards
Hypothesis testing and its implications
What parameters define a normal distribution?
1. Standard deviation and z test
2. Median and Mean
3. Mean and standard deviation
4. Mean and Z test
Mean and standard deviation
What’s the shape of a normal distribution?
1. Linear
2. Bell
3. Horizontal
4. Upside-down bell
Bell
What does the z score reflect?
1. The probability of an event occurring in a normal distribution.
2. The distance of a data point from the mean, in standard deviation units.
3. The frequency of a particular value in a dataset.
4. The range of values in a sample.
The distance of a data point from the mean, in standard deviation units.
In other words, the z score is the number of standard deviations above or below the mean that a particular score lies.
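A minimal Python sketch of the z-score idea; the score, mean, and standard deviation below are made-up illustrative numbers:

```python
# z score: how many standard deviations a score lies above (+) or below (-) the mean
def z_score(x, mean, sd):
    return (x - mean) / sd

# Hypothetical example: an exam score of 70 in a distribution with mean 60 and SD 5
print(z_score(70, mean=60, sd=5))  # 2.0 -> two standard deviations above the mean
```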
What does the standard error of the mean (SEM) represent?
1. The average value in the sample.
2. The variability of individual data points.
3. The precision of the sample mean estimate.
4. The range of values in the population.
The precision of the sample mean estimate.
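A small sketch of how the SEM is computed from a sample, assuming the usual formula SEM = s / sqrt(n); the data values are invented:

```python
import numpy as np

scores = np.array([4.1, 5.3, 6.0, 4.8, 5.5, 5.9, 4.4, 5.1])  # hypothetical sample
s = scores.std(ddof=1)            # sample standard deviation
sem = s / np.sqrt(len(scores))    # standard error of the mean; scipy.stats.sem(scores) gives the same value
print(f"mean = {scores.mean():.2f}, SD = {s:.2f}, SEM = {sem:.2f}")
```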
If the standard error of the mean (SEM) is large, what does it suggest about the sample?
1. The sample mean is likely accurate.
2. The sample size is small.
3. The sample is highly variable.
4. The standard deviation is small.
The sample is highly variable.
A large SEM indicates that sample means would vary considerably from sample to sample, so the sample mean is an imprecise estimate of the population mean.
How is the standard error of the mean (SEM) affected by an increase in sample size?
- Increases
- Decreases
- Remains constant
- Becomes negative
Decreases
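A quick illustration of how the SEM shrinks as the sample size grows, holding the standard deviation fixed (SD = 10 is an arbitrary choice):

```python
import numpy as np

sd = 10.0  # assumed fixed standard deviation
for n in [10, 40, 160, 640]:
    print(f"n = {n:4d}  SEM = {sd / np.sqrt(n):.2f}")
# The SEM halves each time n is quadrupled: 3.16, 1.58, 0.79, 0.40
```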
If two samples have the same standard deviation but different sample sizes, how does their standard error of the mean (SEM) compare?
- The one with the larger sample size has a smaller SEM.
- The one with the smaller sample size has a smaller SEM.
- Both have the same SEM.
- The SEM is unrelated to sample size.
The one with the larger sample size has a smaller SEM.
When constructing a 99% confidence interval for the mean, how would the width of the interval change compared to a 95% confidence interval?
- The 99% interval will be wider.
- The 99% interval will be narrower.
- The widths will be the same.
- It depends on the sample size.
The 99% interval will be wider.
A higher confidence level requires a wider interval so that it captures the true population mean in a larger proportion of samples.
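A sketch comparing the widths of 95% and 99% confidence intervals for the same made-up sample summary, using normal critical values (a t critical value would be the more precise choice for small samples):

```python
import numpy as np
from scipy import stats

mean, sd, n = 50.0, 8.0, 30        # hypothetical sample summary
sem = sd / np.sqrt(n)

for conf in (0.95, 0.99):
    z_crit = stats.norm.ppf(1 - (1 - conf) / 2)   # about 1.96 for 95%, 2.58 for 99%
    half_width = z_crit * sem
    print(f"{conf:.0%} CI: {mean - half_width:.2f} to {mean + half_width:.2f} "
          f"(width = {2 * half_width:.2f})")
```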
If the standard deviation of a sample increases, what happens to the width of the 95% confidence interval for the mean?
- The interval becomes narrower.
- The interval becomes wider.
- The interval remains unchanged.
- The width depends on the sample size.
The interval becomes wider.
A larger standard deviation increases the uncertainty, leading to a wider confidence interval.
How does a smaller sample size affect the width of a confidence interval?
- The interval becomes wider.
- The interval becomes narrower.
- The width remains the same.
- It depends on the confidence level.
The interval becomes wider.
Smaller sample sizes lead to less precision, resulting in wider confidence intervals.
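A short sketch showing both effects from the last two cards at once: a larger SD widens the 95% interval, while a larger n narrows it (all numbers are illustrative):

```python
import numpy as np
from scipy import stats

def ci_width(sd, n, conf=0.95):
    """Total width of a confidence interval for the mean (normal approximation)."""
    z_crit = stats.norm.ppf(1 - (1 - conf) / 2)
    return 2 * z_crit * sd / np.sqrt(n)

print(f"baseline (SD = 8,  n = 30):  {ci_width(8, 30):.2f}")
print(f"double the SD (SD = 16):     {ci_width(16, 30):.2f}")   # width doubles
print(f"quadruple n (n = 120):       {ci_width(8, 120):.2f}")   # width halves
```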
You have two 95% confidence intervals, one for the mean of Sample A and one for the mean of Sample B. If the intervals do not overlap, what can you conclude?
- The means of Sample A and Sample B are significantly different.
- The sample sizes for A and B are different.
- Both samples come from the same population.
- The confidence level is too low.
The means of Sample A and Sample B are significantly different.
If the intervals do not overlap, the plausible range of values for one sample's mean does not include any value in the plausible range for the other. This suggests the population means are likely to be significantly different; in NHST terms, there is evidence to reject the null hypothesis that the population means are equal.
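A rough sketch of the overlap check described above, using two hypothetical 95% intervals:

```python
def intervals_overlap(ci_a, ci_b):
    """Each interval is (lower, upper). Returns True if the intervals share any values."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

ci_a = (12.1, 15.3)   # hypothetical 95% CI for Sample A's mean
ci_b = (16.0, 19.4)   # hypothetical 95% CI for Sample B's mean

if not intervals_overlap(ci_a, ci_b):
    print("No overlap: the means are likely significantly different.")
else:
    print("Overlap: the intervals alone do not show a clear difference.")
```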
A researcher is interested in estimating the average height of two different groups of plants, Group X and Group Y. After collecting the necessary data, the researcher constructs 95% confidence intervals for the average heights. The confidence interval for Group X is wider than the confidence interval for Group Y. What does this difference in width suggest about the precision of the height estimates?
- The height estimates for both groups are equally precise.
- The height estimate for Group X is more precise than for Group Y.
- The height estimate for Group Y is more precise than for Group X.
- The precision of height estimates cannot be determined without additional information.
The height estimate for Group Y is more precise than for Group X.
The width of a confidence interval is inversely related to precision: a narrower interval indicates a more precise estimate, while a wider interval indicates a less precise one. Since the confidence interval for Group Y is narrower than that for Group X, the height estimate for Group Y is more precise.
A researcher is investigating the average scores of two groups of students on a challenging exam. After collecting data, the researcher constructs 95% confidence intervals for both groups. If the researcher wants to increase the precision of the confidence intervals without changing the confidence level, what strategy could be employed?
- Increase the sample size for both groups.
- Choose a lower confidence level.
- Use a different formula for calculating the margin of error.
- Select a smaller critical value from the standard normal distribution.
Increase the sample size for both groups.
By increasing the sample size (n), the standard error in the formula for the margin of error decreases. As a result, the margin of error becomes smaller, leading to a more precise confidence interval. This strategy is commonly used to improve the precision of estimates without changing the confidence level.
In Null Hypothesis Significance Testing, if the p-value is extremely small (close to 0), what conclusion can be drawn regarding the fit of the model to the data?
- The model does not fit the data well, and the null hypothesis is rejected.
- The model fits the data well, and the null hypothesis is accepted.
- The model fits the data well, and the alternative hypothesis is rejected.
- The model does not fit the data well, and the alternative hypothesis is rejected.
The model does not fit the data well, and the null hypothesis is rejected.
A very small p-value suggests that the observed data is unlikely under the assumption that the null hypothesis is true, leading to the rejection of the null hypothesis.
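A minimal sketch of the NHST decision rule on simulated data, using an independent-samples t-test from scipy (the group means, SDs, and sizes are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=100, scale=15, size=40)   # simulated control group
group_b = rng.normal(loc=110, scale=15, size=40)   # simulated treatment group

t_stat, p_value = stats.ttest_ind(group_a, group_b)
alpha = 0.05
print(f"p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```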
If a researcher selects a significance level of 0.01, what does this mean in the context of hypothesis testing?
- There is a 1% chance of Type I error.
- There is a 1% chance of Type II error.
- The probability of obtaining a significant result is 0.01.
- The null hypothesis will be accepted 99% of the time.
There is a 1% chance of Type I error.
The significance level (0.01) is the probability of making a Type I error: rejecting the null hypothesis when it is in fact true.
What does it mean if the probability value (p-value) is greater than the chosen significance level?
- The null hypothesis is rejected.
- The null hypothesis is not rejected.
- The alternative hypothesis is accepted.
- The alternative hypothesis is rejected.
The null hypothesis is not rejected.
If the p-value is greater than the significance level, there is not enough evidence to reject the null hypothesis.
What mistake are we making if we believe there is a statistically significant effect, but in reality, there isn’t?
- Type I error
- Type II error
- Fisher’s error
- Variance illusion error
Type I error
A Type I error occurs when we mistakenly conclude there is a significant effect (reject the null hypothesis) when, in reality, there isn't one.
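A small simulation of the Type I error rate: when the null hypothesis is actually true (both groups drawn from the same distribution), roughly 5% of tests at α = 0.05 still come out "significant" by chance. The number of simulations and group sizes below are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_sims, false_positives = 0.05, 2000, 0

for _ in range(n_sims):
    a = rng.normal(0, 1, size=30)   # both groups come from the same population,
    b = rng.normal(0, 1, size=30)   # so any "significant" result is a Type I error
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

print(f"Type I error rate is roughly {false_positives / n_sims:.3f}")  # close to 0.05
```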
According to Fisher’s criterion, what is the “acceptable” level of Type II error?
- p < 0.05
- p = 0.2
- α-level = 0.05
- β-level = 0.2
p = 0.2
OR
β-level = 0.2
A Type II error occurs when we conclude there is no effect in the population when, in reality, there is one. The conventionally accepted level of Type II error is a β-level of 0.2, i.e., a 20% chance of missing a true effect.
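A companion simulation for Type II error: here the groups really do differ (by 0.5 SD, an arbitrary choice), so every non-significant test is a Type II error. β is the proportion of missed effects, and power = 1 − β.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, n_sims, misses = 0.05, 2000, 0

for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, size=30)   # there IS a real difference of 0.5 SD here,
    b = rng.normal(0.5, 1.0, size=30)   # so failing to reject H0 is a Type II error
    if stats.ttest_ind(a, b).pvalue >= alpha:
        misses += 1

beta = misses / n_sims
print(f"Type II error rate (beta) is roughly {beta:.2f}, power roughly {1 - beta:.2f}")
```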
What distinguishes Bayesian estimation from frequentist statistics?
1. Bayesian estimation provides point estimates, while frequentist statistics offer probability distributions.
2. Bayesian estimation incorporates prior knowledge and beliefs, unlike frequentist statistics.
3. Bayesian estimation relies on p-values for hypothesis testing, whereas frequentist statistics use posterior probabilities.
4. Bayesian estimation exclusively deals with meta-analysis, while frequentist statistics focus on effect sizes.
Bayesian estimation incorporates prior knowledge and beliefs, unlike frequentist statistics.
Bayesian estimation involves updating prior beliefs with observed data, allowing researchers to incorporate existing knowledge into their analyses.
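A minimal sketch of Bayesian updating using a conjugate Beta-Binomial model: a prior belief about a proportion is combined with observed data to give a posterior. The prior and the data below are invented for illustration.

```python
from scipy import stats

# Prior belief about a success probability, expressed as a Beta(a, b) distribution
a_prior, b_prior = 2, 2          # mildly informative prior centred on 0.5

# Observed (hypothetical) data: 14 successes in 20 trials
successes, trials = 14, 20

# With a conjugate prior, the posterior is also Beta: prior counts + observed counts
a_post = a_prior + successes
b_post = b_prior + (trials - successes)

posterior = stats.beta(a_post, b_post)
print(f"posterior mean = {posterior.mean():.2f}")
print("95% credible interval =", posterior.interval(0.95))
```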
One of the key benefits of incorporating meta-analysis, as highlighted by EMBeRS, is:
- Reducing the need for registration in research studies.
- Enhancing the reliability of overall effect size estimates.
- Focusing exclusively on p-values in hypothesis testing.
- Increasing the likelihood of Type I errors.
Enhancing the reliability of overall effect size estimates.
Meta-analysis combines results from multiple studies, providing a more robust and reliable estimate of the overall effect size.
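A rough sketch of a fixed-effect (inverse-variance) meta-analysis, one common way of combining effect sizes across studies; the three study estimates and standard errors are invented:

```python
import numpy as np

# Hypothetical effect size estimates and their standard errors from three studies
estimates = np.array([0.30, 0.45, 0.25])
std_errors = np.array([0.15, 0.10, 0.20])

weights = 1 / std_errors**2                       # more precise studies get more weight
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```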
If the power of a statistical test is 0.90, what does this indicate?
1. There is a 90% chance of making a Type I error.
2. There is a 10% chance of making a Type II error.
3. The test has a 90% chance of detecting a true effect if it exists.
4. The p-value is 0.90.
The test has a 90% chance of detecting a true effect if it exists.
Power is the probability of correctly detecting a true effect (power = 1 − β).
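A sketch of an analytic power calculation for a two-group comparison, using a normal approximation; the effect size (Cohen's d), per-group n, and α are assumptions chosen for illustration:

```python
import numpy as np
from scipy import stats

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test via the normal approximation."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    noncentrality = d * np.sqrt(n_per_group / 2)
    return 1 - stats.norm.cdf(z_crit - noncentrality)

# Roughly 0.90 power for a medium effect (d = 0.5) with about 85 participants per group
print(f"power = {approx_power(d=0.5, n_per_group=85):.2f}")
```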
How does increasing the sample size affect the power of a statistical test?
1. Increases power.
2. Decreases power.
3. Has no effect on power.
4. Makes the test more sensitive to Type I errors.
Increases power.
A larger sample size generally increases the power of a statistical test.
In the context of hypothesis testing, what is the primary purpose of the alpha-level?
- To control for Type II errors.
- To determine effect size.
- To set the threshold for statistical significance.
- To calculate power.
To set the threshold for statistical significance.
The alpha-level determines when we consider an effect statistically significant.
What role does effect size play in the power of a statistical test?
- Larger effect sizes decrease power.
- Smaller effect sizes increase power.
- Effect size has no impact on power.
- Larger effect sizes increase power.
Larger effect sizes increase power.
A larger effect size makes it easier to detect a true effect, thus increasing power.
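A final sketch tying the sample-size and effect-size cards together: with the same normal-approximation formula, power rises as either the per-group n or the effect size grows (all values are illustrative):

```python
import numpy as np
from scipy import stats

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test via the normal approximation."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return 1 - stats.norm.cdf(z_crit - d * np.sqrt(n_per_group / 2))

for d in (0.2, 0.5, 0.8):                 # small, medium, large effects (Cohen's benchmarks)
    for n in (20, 50, 100):
        print(f"d = {d}, n per group = {n:3d}, power = {approx_power(d, n):.2f}")
```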