Tests Flashcards
How do you Conduct a Grouping of Signs Test?
- Count the number of positive deviations (n1) of observed values against expected values - i.e. where (Oi - Ei) > 0
- Count the number of negative deviations (n2)
- Count the number of runs (groups) of consecutive positive deviations **(t)**
- Look up n1 and n2 in the standard table, which gives the largest number of runs ncritical for which
p(t) < 0.05 (i.e. 5% significance),
if t <= ncritical, reject H0,
if t > ncritical, do not reject H0
More runs = better fit to data!
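A minimal Python sketch of the counting step, assuming observed and expected values are held in arrays and that the critical number of runs has already been read from the standard table (passed in here as n_critical; the function name is illustrative):

```python
import numpy as np

def grouping_of_signs_test(observed, expected, n_critical):
    """Count positive/negative deviations and runs of positive deviations,
    then compare the number of runs with the table critical value."""
    signs = np.sign(np.asarray(observed, dtype=float) - np.asarray(expected, dtype=float))
    n1 = int(np.sum(signs > 0))   # number of positive deviations
    n2 = int(np.sum(signs < 0))   # number of negative deviations
    pos = (signs > 0).astype(int)
    # a run starts wherever the sign flips from non-positive to positive
    t = int(np.sum(np.diff(np.concatenate(([0], pos))) == 1))
    return n1, n2, t, t <= n_critical   # True -> reject H0
```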
How do you Conduct an Individual Standardised Deviations Test on a Graduation?
The Individual Standardised Deviations test looks for individual large deviations at particular ages.
If the graduated rates were the true rates underlying the observed rates, we would expect the individual standardised deviations zx to be distributed as Normal(0, 1) (e.g. zx = (Ox - Ex)/√Ex under a Poisson model), and therefore:
- EITHER only 1 in 20 zx should lie above 1.96 in absolute value
- OR none should lie above 3 in absolute value
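A short Python sketch of this check, assuming a Poisson model so that zx = (Ox - Ex)/√Ex; it simply reports how many standardised deviations breach 1.96 and 3 in absolute value (the function name is illustrative):

```python
import numpy as np

def individual_standardised_deviations(observed, expected):
    """Standardised deviations z_x = (O_x - E_x) / sqrt(E_x) (Poisson model)
    and counts of |z_x| exceeding 1.96 and 3."""
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    z = (o - e) / np.sqrt(e)
    over_196 = int(np.sum(np.abs(z) > 1.96))   # expect roughly 1 in 20
    over_3 = int(np.sum(np.abs(z) > 3))        # expect none
    return z, over_196, over_3
```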
Least Significant Difference (Fisher’s LSD Test)
Fisher’s LSD is a method for comparing treatment group means after the ANOVA null hypothesis of equal means has been rejected using the ANOVA F-test.
Pairwise comparison of the means
The LSD test involves pairwise testing of means, i.e. in turn testing hypotheses of the sort H0 : µA = µB, H0 : µA = µC, etc. The LSD method for testing the hypothesis H0 : µA = µB proceeds as follows:
Comparing µA and µB
Group A contains nA members,
Group B contains nB members
MSW = Mean Square Within (mean residual sum of squares) = RSS / d.f.(within), as calculated for the ANOVA
LSDAB = t(0.025, DFW) × √[ MSW × (1/nA + 1/nB) ]
(two-sided t-test at 5% significance, with DFW degrees of freedom, where DFW is the degrees of freedom for the within-groups sum of squares - i.e. total number of observations minus total number of groups)
If the observed difference in sample means satisfies | x̄A - x̄B | > LSDAB,
reject H0, i.e. conclude µA ≠ µB
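A hedged Python sketch of the pairwise comparison, assuming the groups are given as sequences of observations and that the ANOVA F-test has already rejected equal means; MSW and its degrees of freedom are recomputed here for completeness (function and argument names are illustrative):

```python
import numpy as np
from scipy import stats

def fisher_lsd(group_a, group_b, all_groups, alpha=0.05):
    """Compare two group means with Fisher's LSD, using MSW pooled over all groups."""
    groups = [np.asarray(g, dtype=float) for g in all_groups]
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    dfw = n_total - k                                   # within-groups degrees of freedom
    rss = sum(np.sum((g - g.mean()) ** 2) for g in groups)
    msw = rss / dfw                                     # mean square within
    a, b = np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)
    lsd = stats.t.ppf(1 - alpha / 2, dfw) * np.sqrt(msw * (1 / len(a) + 1 / len(b)))
    return lsd, abs(a.mean() - b.mean()) > lsd          # True -> reject H0
```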
What is a Z-Test?
A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution.
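For instance, a minimal sketch of a one-sample z-test of a mean when the population standard deviation is known (the function name and inputs are illustrative assumptions, not part of the card):

```python
import numpy as np
from scipy import stats

def one_sample_z_test(sample, mu0, sigma):
    """Two-sided z-test of H0: mean = mu0, with known population sd sigma."""
    x = np.asarray(sample, dtype=float)
    z = (x.mean() - mu0) / (sigma / np.sqrt(len(x)))
    p_value = 2 * stats.norm.sf(abs(z))   # two-sided p-value
    return z, p_value
```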
How do you Do a One Way Analysis of Variance (ANOVA)?
In order to compare the means of k groups of observations of random variables with a common variance, create an ANOVA table and then:
Calculate the F ratio and compare it with the critical value at significance level α given by:
F0 = F(k-1, n-k, α)
where k is the number of groups and n is the total number of observations.
F is calculated as the ratio of the between-groups mean square to the within-groups mean square:
F = [ SSB / (k - 1) ] / [ SSW / (n - k) ]
where SSB is the between-groups sum of squares and SSW is the within-groups (residual) sum of squares.
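A minimal Python sketch under those definitions; the group data would be supplied by the user, and the function name is illustrative:

```python
import numpy as np
from scipy import stats

def one_way_anova(groups, alpha=0.05):
    """One-way ANOVA: F = (SSB / (k-1)) / (SSW / (n-k)), compared with F(k-1, n-k; alpha)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)   # between groups
    ssw = sum(np.sum((g - g.mean()) ** 2) for g in groups)             # within groups
    f_stat = (ssb / (k - 1)) / (ssw / (n - k))
    f_crit = stats.f.ppf(1 - alpha, k - 1, n - k)
    return f_stat, f_crit, f_stat > f_crit   # True -> reject H0 of equal means
```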
What is a t-test?
A t-test is any statistical hypothesis test in which the test statistic follows a Student’s t distribution if the null hypothesis is supported.
It can be used to determine if two sets of data are significantly different from each other, and is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known.
This typically arises when the population standard deviation is unknown and the sample is too small to rely on the normal approximation.
What is a chi-squared goodness-of-fit test, and when should it be used?
The chi-square goodness-of-fit test is used to test whether a set of data (a sample) fits a stated probability distribution.
The null hypothesis H0 is that the data fits the distribution, H1 that it does not. H0 is rejected if the chi-square score exceeds the critical value from the table for degrees of freedom ν, given by:
ν = c - 1 - k
where c is the number of categories for data, k is the number of population parameters estimated to calculate frequency.
If the data have been graduated by reference to a standard table, deduct a few degrees of freedom overall and state that this has been done.
The chi-square statistic is the sum over categories of the squared difference between observed and expected values, divided by the expected value:
χ2 = Σ (Oi - Ei)2 / Ei
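A small Python sketch of the test; the observed and expected counts would come from the sample and the fitted distribution, and ddof-style adjustment for the k estimated parameters is handled explicitly (function name is illustrative):

```python
import numpy as np
from scipy import stats

def chi_square_gof(observed, expected, n_params_estimated=0, alpha=0.05):
    """Chi-square goodness-of-fit: sum (O - E)^2 / E with c - 1 - k degrees of freedom."""
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    chi2_stat = np.sum((o - e) ** 2 / e)
    dof = len(o) - 1 - n_params_estimated
    chi2_crit = stats.chi2.ppf(1 - alpha, dof)
    return chi2_stat, chi2_crit, chi2_stat > chi2_crit   # True -> reject H0
```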
When are you Not Allowed to do a Chi-Square Test?
- When more than 20% of the expected counts are less than 5 or
- when any expected count is less than 1.
In other words, you can only do a chi-square test when:
“No more than 20% of the expected counts are less than 5 and all individual expected counts are 1 or greater” (Yates, Moore & McCabe, 1999, p. 734).
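As a quick pre-check before running the test, a sketch of that rule in code (thresholds as quoted above; the helper name is my own):

```python
import numpy as np

def chi_square_assumptions_ok(expected):
    """Validity rule: no more than 20% of expected counts below 5, none below 1."""
    e = np.asarray(expected, dtype=float)
    return np.mean(e < 5) <= 0.20 and np.all(e >= 1)
```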
How do you Test Whether the Variances of Two Samples are Equal?
If two samples, A and B, have nA and nB observations, with sample variances SA2 and SB2, then it is possible to test the null hypothesis H0 (variances equal) with a two-sided F-test:
F = SA2 / SB2 (conventionally with the larger sample variance in the numerator)
Under H0, F follows an F distribution with degrees of freedom (numerator sample size - 1, denominator sample size - 1); reject H0 if F exceeds the critical value for the two-sided test.
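A Python sketch of this variance-ratio test, with the larger variance placed in the numerator as above (sample inputs and the function name are illustrative):

```python
import numpy as np
from scipy import stats

def variance_ratio_test(sample_a, sample_b, alpha=0.05):
    """Two-sided F-test of equal variances: F = larger sample variance / smaller."""
    a = np.asarray(sample_a, dtype=float)
    b = np.asarray(sample_b, dtype=float)
    va, vb = a.var(ddof=1), b.var(ddof=1)
    if va >= vb:
        f_stat, df1, df2 = va / vb, len(a) - 1, len(b) - 1
    else:
        f_stat, df1, df2 = vb / va, len(b) - 1, len(a) - 1
    f_crit = stats.f.ppf(1 - alpha / 2, df1, df2)   # two-sided via alpha/2 in the upper tail
    return f_stat, f_crit, f_stat > f_crit          # True -> reject H0 of equal variances
```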
What is the Cumulative Deviations Test?
The cumulative deviations test is a test for overall bias in a graduation. It compares the total deviation Σ(Ox - Ex) with its standard error: under H0 (and a Poisson model) the statistic Σ(Ox - Ex) / √(ΣEx) is approximately Normal(0, 1), so H0 is rejected at 5% significance if it lies outside ±1.96.
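A short Python sketch of that statistic, again assuming a Poisson model for the standard error (function name is illustrative):

```python
import numpy as np
from scipy import stats

def cumulative_deviations_test(observed, expected, alpha=0.05):
    """Cumulative deviations test: sum(O - E) / sqrt(sum E) ~ N(0, 1) under H0 (Poisson model)."""
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    z = np.sum(o - e) / np.sqrt(np.sum(e))
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return z, abs(z) > z_crit   # True -> reject H0, i.e. the graduation is biased
```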
What is the t-test for comparing the means of two samples of unequal size from distributions with equal variance?
t-test for samples of unequal size from distributions with equal variance:
t = (x̄A - x̄B) / [ sp √(1/nA + 1/nB) ]
where sp2 = [ (nA - 1)SA2 + (nB - 1)SB2 ] / (nA + nB - 2) is the pooled sample variance; under H0 : µA = µB the statistic follows a t distribution with nA + nB - 2 degrees of freedom.
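A Python sketch of the pooled two-sample t-test using those formulas (sample inputs and the function name are illustrative; scipy.stats.ttest_ind with equal_var=True gives the same result):

```python
import numpy as np
from scipy import stats

def pooled_t_test(sample_a, sample_b):
    """Two-sample t-test assuming equal variances, using the pooled variance sp^2."""
    a, b = np.asarray(sample_a, dtype=float), np.asarray(sample_b, dtype=float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    t_stat = (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))
    p_value = 2 * stats.t.sf(abs(t_stat), na + nb - 2)   # two-sided p-value
    return t_stat, p_value   # reject H0 of equal means if p_value < significance level
```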
What is the t-test for comparing the means of two samples from distributions whose variances cannot be assumed to be equal?
AKA Welch’s t-test:
t = (x̄A - x̄B) / √( SA2/nA + SB2/nB )
Under H0 the statistic is approximately t-distributed, with degrees of freedom given by the Welch-Satterthwaite approximation.
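A Python sketch of Welch's t-test with the Welch-Satterthwaite degrees of freedom written out (function name is illustrative; scipy.stats.ttest_ind with equal_var=False is equivalent):

```python
import numpy as np
from scipy import stats

def welch_t_test(sample_a, sample_b):
    """Welch's t-test: t = (mean_a - mean_b) / sqrt(sA^2/nA + sB^2/nB)."""
    a, b = np.asarray(sample_a, dtype=float), np.asarray(sample_b, dtype=float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t_stat = (a.mean() - b.mean()) / np.sqrt(va + vb)
    # Welch-Satterthwaite approximation to the degrees of freedom
    dof = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    p_value = 2 * stats.t.sf(abs(t_stat), dof)
    return t_stat, dof, p_value   # reject H0 of equal means if p_value < significance level
```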
What is a Signs Test?
A signs test is used to determine whether a data graduation is biased - i.e. systematically higher or lower than the data it is graduating.
In order to do the test:
- Compare Oi with Ei (observed value less expected value); if the difference (Oi - Ei) is not zero, record its sign (positive / negative)
- Count the number of non-zero deviations (m) and the number of positive signs (p)
- Test p against a Binomial(m, 0.5) distribution (from tables in the back of the formula book)
*for large samples (m > 25), the normal approximation to the binomial can be used*
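A Python sketch of the signs test as described above, computing an exact two-sided binomial p-value rather than reading it from tables (function name is illustrative):

```python
import numpy as np
from scipy import stats

def signs_test(observed, expected, alpha=0.05):
    """Signs test for bias: count positive deviations p among the m non-zero
    deviations and test p against Binomial(m, 0.5)."""
    d = np.asarray(observed, dtype=float) - np.asarray(expected, dtype=float)
    d = d[d != 0]                        # keep only non-zero deviations
    m, p = len(d), int(np.sum(d > 0))
    # two-sided p-value: double the smaller tail probability, capped at 1
    tail = min(stats.binom.cdf(p, m, 0.5), stats.binom.sf(p - 1, m, 0.5))
    p_value = min(1.0, 2 * tail)
    return p, m, p_value                 # reject H0 (no bias) if p_value < alpha
```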