1.6 Tests Concerning a Single Mean Flashcards
what is a t-test commonly used for?
commonly used to test the value of an underlying population mean
The t-distribution is similar to the standard normal distribution, but it is impacted by the degrees of freedom
Smaller degrees of freedom produce fatter tails
The z-test can be used for what?
It can be used for a large sample, even if the population variance is unknown
what happens to the t- and z-tests as the number of observations increases?
They converge: as the sample size increases, the t-distribution approaches the standard normal distribution
a z-test rejection point of z0.10 is equal to what?
1.28
a z-test rejection point of z0.05 is equal to what?
1.645
a z-test rejection point of z0.025 is equal to what?
1.96
a z-test rejection point of z0.01 is equal to what?
2.33
a z-test rejection point of z0.005 is equal to what?
2.575
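These rejection points can be checked numerically. A minimal sketch in Python using the standard library's `statistics.NormalDist` (an illustrative check, not part of the original cards; the inverse CDF at 1 − α gives the one-tailed rejection point):

```python
from statistics import NormalDist

# Standard normal distribution (mean 0, standard deviation 1)
z = NormalDist()

# inv_cdf(1 - alpha) is the rejection point z_alpha for a one-tailed test
for alpha, tabulated in [(0.10, 1.28), (0.05, 1.645),
                         (0.025, 1.96), (0.01, 2.33), (0.005, 2.575)]:
    critical = z.inv_cdf(1 - alpha)
    print(f"z_{alpha}: {critical:.3f} (tabulated: {tabulated})")
```

The computed values match the tabulated rejection points to rounding precision.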
Difference in means test
First way (two-sided test):
H0: μ1 - μ2 = 0
Ha: μ1 - μ2 ≠ 0
Difference in means test
Second way (one-sided test):
H0: μ1 - μ2 ≤ 0
Ha: μ1 - μ2 > 0
Difference in means test
Third way (one-sided test):
H0: μ1 - μ2 ≥ 0
Ha: μ1 - μ2 < 0
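A test statistic for these hypotheses, when the two population variances are assumed equal, is the pooled two-sample t statistic. A minimal standard-library sketch (the sample data are hypothetical, chosen only for illustration):

```python
import math
import statistics

def pooled_t_statistic(x, y):
    """Two-sample t statistic for H0: mu1 - mu2 = 0, pooling the
    sample variances (assumes equal population variances)."""
    n1, n2 = len(x), len(y)
    s1_sq = statistics.variance(x)  # sample variance of x
    s2_sq = statistics.variance(y)
    # Pooled variance estimate with n1 + n2 - 2 degrees of freedom
    sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
    se = math.sqrt(sp_sq * (1 / n1 + 1 / n2))
    return (statistics.mean(x) - statistics.mean(y)) / se

# Hypothetical samples; compare the result against a t critical value
# with n1 + n2 - 2 = 8 degrees of freedom.
print(pooled_t_statistic([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]))  # -1.0
```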
A paired comparison test
a statistical test for differences between dependent (paired) samples
The difference between two random variables taken from dependent samples, denoted di, is calculated. Then, the list of differences is statistically analyzed.
Paired comparison test
First way (two-sided test)
H0: μd = μd0
Ha: μd ≠ μd0
Paired comparison test
Second way (one-sided test)
H0: μd ≤ μd0
Ha: μd > μd0
Paired comparison test
Third way (one-sided test)
H0: μd ≥ μd0
Ha: μd < μd0
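The paired comparison statistic is a one-sample t statistic computed on the per-pair differences. A minimal standard-library sketch (the paired observations are hypothetical):

```python
import math
import statistics

def paired_t_statistic(before, after, mu_d0=0.0):
    """t statistic for a paired comparison test of H0: mu_d = mu_d0,
    computed from the per-pair differences d_i = before_i - after_i."""
    d = [b - a for b, a in zip(before, after)]
    n = len(d)
    d_bar = statistics.mean(d)
    s_d = statistics.stdev(d)  # sample standard deviation of the differences
    return (d_bar - mu_d0) / (s_d / math.sqrt(n))

# Hypothetical paired observations (e.g., the same items measured in
# two periods); compare against a t critical value with n - 1 = 4 df.
before = [10, 12, 9, 11, 14]
after = [9, 10, 6, 7, 9]
print(paired_t_statistic(before, after))  # about 4.243
```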
Other than the population mean, analysts are also often interested in performing hypothesis tests on the population variance.
what can we use for this?
Chi-square test
can be used to test the relationship between an observed sample variance and its hypothesized value
Chi-square test
First way (two-sided test)
H0: σ^2 = σ0^2
Ha: σ^2 ≠ σ0^2
Chi-square test
Second way (one-sided test)
H0: σ^2 ≤ σ0^2
Ha: σ^2 > σ0^2
Chi-square test
Third way (one-sided test)
H0: σ^2 ≥ σ0^2
Ha: σ^2 < σ0^2
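The chi-square test statistic for a single variance is (n − 1)s² / σ0². A minimal standard-library sketch (the sample is hypothetical):

```python
import statistics

def chi_square_statistic(sample, sigma0_sq):
    """Chi-square test statistic (n - 1) * s^2 / sigma0^2 for a
    hypothesis about a single population variance."""
    n = len(sample)
    s_sq = statistics.variance(sample)  # sample variance
    return (n - 1) * s_sq / sigma0_sq

# Hypothetical sample; compare against a chi-square critical value
# with n - 1 = 4 degrees of freedom.
print(chi_square_statistic([1, 2, 3, 4, 5], 1.0))  # 4 * 2.5 / 1 = 10.0
```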
why do we use the F-test?
to examine the equality/inequality of two population variances
The F-test statistic is based on the ratio of the two sample variances.
The F-distribution is bounded below by 0 and defined by two degrees-of-freedom values: one for the numerator and one for the denominator
F test
First way (two-sided test)
H0: σ1^2 = σ2^2
Ha: σ1^2 ≠ σ2^2
F test
Second way (one-sided test)
H0: σ1^2 ≤ σ2^2
Ha: σ1^2 > σ2^2
F test
Third way (one-sided test)
H0: σ1^2 ≥ σ2^2
Ha: σ1^2 < σ2^2
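The F statistic is simply the ratio of the two sample variances; by convention the larger variance goes in the numerator so the statistic is at least 1. A minimal standard-library sketch (samples are hypothetical):

```python
import statistics

def f_statistic(x, y):
    """F test statistic: ratio of the sample variances, with the
    larger variance conventionally placed in the numerator."""
    s1_sq, s2_sq = statistics.variance(x), statistics.variance(y)
    num, den = max(s1_sq, s2_sq), min(s1_sq, s2_sq)
    return num / den

# Hypothetical samples; degrees of freedom are n1 - 1 = 4 (numerator)
# and n2 - 1 = 4 (denominator).
print(f_statistic([2, 4, 6, 8, 10], [1, 2, 3, 4, 5]))  # 10.0 / 2.5 = 4.0
```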
a parametric test
a hypothesis testing procedure that concerns the value of a parameter and depends on assumptions about the underlying population
Parametric tests allow more precise conclusions but are limited by their underlying assumptions
a nonparametric test
a test that either is not concerned with a parameter or makes minimal assumptions about the underlying population
Nonparametric procedures are used when:
- The data do not meet distributional assumptions
- The data are subject to outliers
- The data are given in ranks or use an ordinal scale
- The hypotheses do not concern a parameter.
correlation coefficient
a measure of the strength of the linear relationship between two variables
If there is no linear relationship between the two variables, the correlation coefficient will be
0
Assuming the correlation coefficient is denoted ρ:
First way (two-sided test)
H0: ρ = 0
Ha: ρ ≠ 0
Assuming the correlation coefficient is denoted ρ:
Second way (one-sided test)
H0: ρ ≤ 0
Ha: ρ > 0
Assuming the correlation coefficient is denoted ρ:
Third way (one-sided test)
H0: ρ ≥ 0
Ha: ρ < 0
The parametric correlation coefficient (a.k.a. the Pearson correlation or bivariate correlation) of two variables can be tested based on what?
the sample correlation
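The test statistic is t = r√(n − 2) / √(1 − r²), with n − 2 degrees of freedom, where r is the sample correlation. A minimal standard-library sketch (the data are hypothetical):

```python
import math

def pearson_r(x, y):
    """Sample (Pearson) correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_t_statistic(x, y):
    """t statistic for H0: rho = 0, with n - 2 degrees of freedom:
    t = r * sqrt(n - 2) / sqrt(1 - r^2)."""
    r = pearson_r(x, y)
    n = len(x)
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Hypothetical data: r = 0.8, t is about 2.309 with 3 degrees of freedom.
x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
print(pearson_r(x, y), correlation_t_statistic(x, y))
```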
A sample is taken from a normally distributed population with known variance. The observations in this sample are sorted according to an ordinal scale. To test a hypothesis regarding the sample mean, an analyst would most likely use a:
A. t-test.
B. z-test.
C. nonparametric test.
Answer: C (nonparametric test). The observations are ranked on an ordinal scale, so the distributional assumptions behind the t- and z-tests do not hold.
The Spearman rank correlation coefficient
similar to the Pearson correlation coefficient but computed on the ranks of the data, so it does not rely on the same underlying distributional assumptions
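For data with no ties, the Spearman coefficient can be computed from the squared rank differences as 1 − 6Σd² / (n(n² − 1)). A minimal standard-library sketch (the data are hypothetical):

```python
def ranks(values):
    """Rank each value from 1 (smallest) upward; assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6 * sum(d^2) / (n(n^2 - 1)),
    valid when there are no tied values."""
    n = len(x)
    d_sq = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - 6 * d_sq / (n * (n * n - 1))

# Hypothetical data with no ties.
print(spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]))  # 0.8
```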
the standardized residual (a.k.a. Pearson residual)
used to identify cells whose observed frequencies deviate significantly from their expected frequencies; for each cell it is computed as (observed − expected) / √expected