Selecting the most appropriate statistical test Flashcards
Variance
the average squared difference between each individual measurement value and the group's mean
standard deviation
the square root of the variance; describes how closely individual values cluster around the mean
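A minimal sketch of both definitions in Python, using only the standard library (the readings are hypothetical systolic blood pressures in mmHg):

```python
# Hypothetical sample of five measurements.
values = [120, 125, 130, 118, 127]

# The group mean.
mean = sum(values) / len(values)

# Variance: average squared difference from the mean
# (sample variance divides by n - 1).
variance = sum((x - mean) ** 2 for x in values) / (len(values) - 1)

# Standard deviation: square root of the variance.
std_dev = variance ** 0.5
```

A small standard deviation relative to the mean means "everyone is close to it"; a large one means the values are widely spread.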
Normally distributed
symmetrical distribution; parametric tests apply. The mean, median, and mode are equal or nearly equal.
Positively Skewed
asymmetrical distribution with one tail longer than the other; the median differs from the mean. When the mean is higher than the median, a relatively long upper (right) tail results, representing a positive skew: mean > median
Example: blood pressure is not normally distributed, so it does NOT meet the interval-data assumptions for a parametric test.
Negatively skewed
a distribution is skewed any time the median differs from the mean. Here the mean is LOWER than the median: mean < median
skewness
a measure of the asymmetry of a distribution. A perfectly normal distribution is symmetric and has a skewness value of zero. A skewness value greater than 2 is generally needed before the distribution is considered truly skewed.
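A short sketch of skewness in practice, assuming SciPy is available (the data sets are made up for illustration):

```python
from scipy.stats import skew

# Perfectly symmetric data: skewness is zero.
symmetric = [1, 2, 3, 4, 5]

# A long upper (right) tail pulls the mean above the median: positive skew.
right_tailed = [1, 2, 2, 3, 3, 4, 20]

sym_skew = skew(symmetric)
pos_skew = skew(right_tailed)
```

The single large value (20) in the second list produces the long right tail described on the positively-skewed card.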
Required assumptions of interval data (for proper selection of a parametric test)
Normally distributed, equal variances, randomly derived, and independent
3 tests that assess for equal variances between groups
Bartlett's test / Levene's test / Kolmogorov-Smirnov test
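Two of these are available directly in SciPy; a hedged sketch with hypothetical group data (the group values are invented for illustration):

```python
from scipy.stats import bartlett, levene

# Two hypothetical treatment groups.
group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8]
group_b = [5.0, 5.4, 4.7, 5.1, 5.3, 4.9]

# Null hypothesis for both tests: the group variances are equal.
# A large p value means we cannot reject equal variances, so the
# equal-variance assumption for a parametric test is reasonable.
_, p_bartlett = bartlett(group_a, group_b)
_, p_levene = levene(group_a, group_b)
```

Bartlett's test assumes the data themselves are normal; Levene's test is less sensitive to departures from normality.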
Errors
an error can be made by incorrectly accepting or rejecting the null hypothesis
Type 1
Rejecting the null hypothesis when it is actually TRUE and you should have accepted it… like a false positive (FP)
Type 2
Not rejecting the null hypothesis when it is actually false and you should have rejected it… thinking there is no difference when there really is one… like a false negative (FN)
p value
statistical tests compare differences in variables or evaluate relationships between them…
1. A test statistic value is calculated.
2. The test statistic value is compared to the appropriate table of probabilities for that test.
3. A probability value is obtained, based on the probability of observing, due to chance alone, a test statistic value as extreme as or more extreme than the one actually observed if the groups were equal (not different).
If the p value is lower than the pre-selected a priori value (commonly 5%), we say the result IS statistically significant: it is unlikely that the test statistic value could be as large as it is by chance alone if the groups were similar. We reject the null hypothesis if p < 5%, and the risk of experiencing a type 1 error is low.
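The three steps above collapse into a single call in SciPy; a sketch with two hypothetical groups whose means clearly differ:

```python
from scipy.stats import ttest_ind

# Hypothetical measurements for a control and a treated group.
control = [10, 11, 9, 10, 12, 10, 11]
treated = [14, 15, 13, 16, 14, 15, 13]

# Steps 1-3 in one call: the t statistic is calculated and compared
# against the t distribution, yielding a p value.
t_stat, p_value = ttest_ind(control, treated)

# Compare to the a priori alpha of 5%.
significant = p_value < 0.05
```

Because the group means differ by several standard deviations here, the p value comes out far below 0.05 and the null hypothesis is rejected.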
interpretation of p value
The probability of making a type 1 error if the null hypothesis is rejected ***
Power
the ability of a study design, its methodology, and the selected test statistic to detect a true difference if one truly exists; therefore, the level of accuracy in correctly accepting or rejecting the null hypothesis
sample size
the larger the sample size, the greater the ability to detect a difference if one truly exists… power increases
difference between groups deemed significant
the smaller the difference between groups necessary to be considered significant, the greater the number of subjects needed… how big a difference must there be before we care?
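The sample-size/effect-size trade-off can be sketched with statsmodels (an assumption of this example; effect sizes and targets below are arbitrary illustrative choices):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Subjects needed per group for 80% power at alpha = 0.05.
# A smaller detectable difference (effect size) requires more subjects.
n_small_effect = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
n_large_effect = analysis.solve_power(effect_size=0.8, alpha=0.05, power=0.8)
```

Detecting a small effect (0.2) demands a far larger sample than detecting a large effect (0.8), which is exactly the relationship the card describes.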
baseline rate of outcome
(known/estimated)
alpha error rate (a type 1 error is expressed as an alpha error)
the confidence of the test, or the level of significance of the test… the "size" of the test, equal to 1 minus the specificity of the test
beta error rate (a type 2 error is expressed as a beta error)
equal to 1 minus the power, or 1 minus the sensitivity of the test
Point estimate
a single best guess as to the relationship
confidence interval (most common: 95%)
how the data spread around the point estimate… a range of plausible values rather than a single point estimate
95% interpretation
We are 95% confident that the real group difference lies between these two interval numbers. *****
interpretation of a 95% CI without a p value
for ratio measures: if the interval crosses 1.0, the result is NOT significant
for non-ratio measures (e.g., absolute differences): if the interval crosses 0, the result is not significant
Will have a p value of less than 0.05 if…
the interval does NOT cross 1.0 (for ratio measures)
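A sketch of reading significance straight off a 95% CI for a non-ratio measure, assuming SciPy is available (the point estimate and standard error are hypothetical):

```python
from scipy import stats

# Hypothetical mean difference (point estimate) and its standard error.
point_estimate = 2.4
standard_error = 0.9

# 95% CI for a normally distributed estimate:
# point estimate +/- 1.96 * standard error.
z = stats.norm.ppf(0.975)  # about 1.96
lower = point_estimate - z * standard_error
upper = point_estimate + z * standard_error

# For a difference (not a ratio): significant if the CI excludes 0.
significant = not (lower <= 0 <= upper)
```

Here the interval lies entirely above 0, so the difference would be statistically significant at p < 0.05 even without a reported p value; for a ratio measure the same logic applies with 1.0 in place of 0.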
Does statistical significance confer meaningful clinical significance?
ALWAYS ask this question when reviewing findings