Evidence Based Medicine Flashcards
Allows us to draw conclusions about the general population from a sample
Statistics
An efficient way to draw conclusions when the cost of gathering all of the data is impractical
Taking Samples
Assume that an infinitely large population of values exists and that your sample was randomly selected from a large subset of that population. Now use the rules of probability to
Make inferences about the general population
States that the sampling distribution of the mean of any independent, random variable will be normal or nearly normal, if the sample size is large enough
The Central limit theorem
What does the Central Limit Theorem say?
The sampling distribution of the mean of any independent, random variable will be normal or nearly normal, if the sample size is large enough
If samples are large enough, the sampling distribution of the mean will be
Bell shaped (Gaussian)
Statistics come in what two basic flavors?
Parametric and Non-parametric
A class of statistical procedures that rely on assumptions about the shape of the distribution (i.e. normal distribution) in the underlying population and about the form or parameters (i.e. mean and std. dev) of the assumed distribution
Parametric Statistics
A class of statistical procedures that does not rely on assumptions about the shape or form of the probability distribution from which the data were drawn
Non-parametric Statistics
Summarize the main features of the data without testing hypotheses or making any predictions
Descriptive statistics
Descriptive statistics can be divided into what two classes?
Measures of location and measures of dispersion
A typical or central value that best describes the data
Measures of location
What are the measures of location?
1.) Mean
2.) Median
3.) Mode
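The three measures of location can be computed with Python's standard `statistics` module (the data set here is made up for illustration):

```python
import statistics

data = [2, 3, 3, 5, 7, 10]

mean = statistics.mean(data)      # sum / count = 30 / 6 = 5.0
median = statistics.median(data)  # middle of the sorted data = 4.0
mode = statistics.mode(data)      # most frequent value = 3

print(mean, median, mode)
```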
Describe spread (variation) of the data around that central value
Measures of dispersion
What are the measures of dispersion?
1.) Range
2.) Variance
3.) Std. Dev
4.) Std. Error
5.) Confidence Interval
No single parameter can fully describe the distribution of data in the
Sample
The sum of the data points divided by the number of data points
- More commonly referred to as “the average”
- Data must show a normal distribution
Mean
What are often better measures of location if the data is not normally distributed?
Median and Mode
The value which has half the data smaller than that point and half the data larger
Median
When choosing the median for an odd number of data points, you first
Rank the data in order, then pick the middle number
When choosing the median for an even number of data points, you
1.) Rank the numbers
2.) Find the middle two numbers
3.) Add the two middle numbers and divide by 2
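The two procedures above can be sketched as a single function (illustrative, standard library only):

```python
def median(values):
    """Median via the rank-and-pick procedure described above."""
    ranked = sorted(values)            # 1.) rank the numbers
    n = len(ranked)
    mid = n // 2
    if n % 2 == 1:                     # odd: pick the middle number
        return ranked[mid]
    # even: add the two middle numbers and divide by 2
    return (ranked[mid - 1] + ranked[mid]) / 2

print(median([7, 1, 3]))     # odd count  -> 3
print(median([7, 1, 3, 9]))  # even count -> (3 + 7) / 2 = 5.0
```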
Less sensitive for extreme data points and is thus useful for skewed data
Median
The value of the sample which occurs most frequently
Mode
The mode is a good measure of
Central Tendency
Not all data sets have a single mode, some data sets can be
bi-modal
On a box plot, 50% of the data falls between Q1 (25th percentile) and Q3 (75th percentile), the area encompassing this 50% is called the
Interquartile range (= Q3-Q1)
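Quartiles and the interquartile range can be sketched with `statistics.quantiles`; note that different quartile conventions (the `method` argument) give slightly different cut points:

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7, 8]

# 'inclusive' interpolates within the observed data range
q1, q2, q3 = statistics.quantiles(data, n=4, method='inclusive')
iqr = q3 - q1  # the middle 50% of the data

print(q1, q2, q3, iqr)  # 2.75 4.5 6.25 3.5 (Q2 equals the median)
```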
Used to display summary statistics
Box plots
To find the quartiles, put the list of numbers in order, then cut the list into four equal parts, the quartiles are at the
Cuts
The second quartile is equal to the
Median
Do not provide information on the spread or variability of the data
Measures of location
Describe the spread or variability within the data
Measures of dispersion
Two distinct samples can have the same mean but completely different levels of
Variability
The difference between the largest and the smallest sample values
-Depends only on extreme values and provides no information about how the remaining data is distributed
Range
Is the range a reliable measure of the dispersion of the whole data set?
No
The average of the square distance of each value from the mean
Variance
Makes the bigger differences stand out and makes all of the numbers positive, eliminating negatives that would otherwise cancel out and reduce the variance
Squaring the deviations from the mean
When calculating the variance, what is the difference between using N vs. N-1 as the denominator?
N gives a biased estimate of variance, whereas (N-1) gives an unbiased estimate
In the calculation for variance, what does N represent?
N = size of population (biased)
In the calculation for variance, what does (N-1) represent?
(N-1) = size of the sample (unbiased)
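The N versus N-1 distinction corresponds to `statistics.pvariance` versus `statistics.variance` in Python (made-up data):

```python
import statistics

sample = [4, 8, 6, 2]
# mean = 5; squared deviations: 1, 9, 1, 9 -> sum = 20

biased = statistics.pvariance(sample)   # divide by N:     20 / 4 = 5.0
unbiased = statistics.variance(sample)  # divide by N - 1: 20 / 3 ≈ 6.67

print(biased, unbiased)
```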
The most common and useful measure of dispersion
Standard deviation
Tells us how tightly each sample is clustered around the mean
Standard deviation
When samples are tightly bunched together, the Gaussian curve is narrow and the standard deviation is
Small
When the samples are spread apart, the Gaussian curve is flat and the standard deviation is
Large
Means and standard deviations should ONLY be used when data are
Normally distributed
How can we determine if the data are normally distributed?
Calculate the mean plus or minus twice the standard deviation. If either value is outside of the possible range, then the data are unlikely to be normally distributed
Approximately what percentage of data lies within:
1.) 1 standard deviation of the mean
2.) 2 standard deviations of the mean
3.) 3 standard deviations of the mean
1.) 68.3%
2.) 95.4%
3.) 99.7%
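For a normal distribution these percentages follow from the error function: the fraction within k standard deviations of the mean is erf(k/√2), as this quick check shows:

```python
import math

# Fraction of a normal distribution lying within k SD of the mean
for k in (1, 2, 3):
    frac = math.erf(k / math.sqrt(2))
    print(f"{k} SD: {frac:.1%}")  # 68.3%, 95.4%, 99.7%
```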
If data is skewed, we should use
Median
What are two more sophisticated, yet more complex, methods of determining normality?
D’Agostino & Pearson omnibus and Shapiro-Wilk Normality tests
D’Agostino & Pearson omnibus and Shapiro-Wilk Normality tests are not very
Useful
What we want is a test that tells us whether the deviations from the Gaussian ideal are severe enough to invalidate statistical methods that assume a
-Normality tests don’t do this
Gaussian distribution
How can we determine whether our mean is precise?
Find the Standard Error
A measure of how far the sample mean is likely to be from the population mean
Standard error
The standard error of the mean (SEM) gets smaller as
Sample size gets larger
If the scatter in data is caused by biological variability and you want to show that variability, use
Standard Deviation (SD)
If the variability is caused by experimental imprecision and you want to show the precision of the calculated mean, use
Standard Error of the mean (SEM)
Say we aliquot 10 plates each with a different cell line and measure the integrin expression of each, would we want to use SD or SEM?
SD
Say we aliquot 10 plates of the same cell line and measure the integrin expression of each, would we want to use SD or SEM?
SEM
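Numerically, the SEM is just the SD divided by √n, so it shrinks as the sample grows; a minimal sketch with hypothetical replicate measurements:

```python
import math
import statistics

# Hypothetical replicate measurements of the same quantity
measurements = [17.46, 17.42, 17.44, 17.45, 17.43]

sd = statistics.stdev(measurements)      # spread of the individual values
sem = sd / math.sqrt(len(measurements))  # precision of the mean

print(sd, sem)  # sem is always smaller than sd for n > 1
```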
An estimate of the range that is likely to contain the true population mean
-combine the scatter in any given population with the size of that population
Confidence intervals
Generates an interval in which the probability that the sample mean reflects the population mean is high
Confidence intervals
Means that there is a 95% chance that the confidence interval you calculated contains the true population mean
95% confidence interval
If zero is included in a confidence interval for a change in a disease due to a drug, then it means we can not exclude the possibility that
There was no true change
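A large-sample 95% confidence interval is the mean ± 1.96 × SEM; this sketch uses made-up data, and for a sample this small a t critical value would be more appropriate than 1.96:

```python
import math
import statistics

data = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2]
mean = statistics.mean(data)
sem = statistics.stdev(data) / math.sqrt(len(data))

# Normal-approximation 95% CI; a t-based interval would be slightly wider
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"95% CI: ({low:.2f}, {high:.2f})")
```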
An observation that is numerically distant from the rest of the data
An outlier
Can be caused by systematic error, flaw in the theory that generated the data point, or by natural variability
An outlier
What is one popular method to test for an outlier?
The Grubbs test
How do we use the Z value obtained by the Grubbs test to test for an outlier?
Compare the Grubbs test Z with a table listing the critical value of Z at the 95% probability level. If the Grubbs Z is greater than the value from the table, then you can delete the outlier
To test for an outlier, we compare the Grubbs test Z with a table listing the critical value of Z at the 95% probability level. If the Grubbs Z is greater than the value from the table, then the P value is
Less than 5% and we can delete the outlier
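The Grubbs statistic itself is easy to compute: the distance of the most extreme point from the mean, in units of SD. The critical value (≈1.887 for n = 6 at the 95% level) must still come from a table:

```python
import statistics

def grubbs_z(values):
    """Grubbs statistic: |most extreme value - mean| / SD."""
    m = statistics.mean(values)
    sd = statistics.stdev(values)
    return max(abs(x - m) for x in values) / sd

# Made-up data with one suspiciously high reading
z = grubbs_z([9.8, 10.1, 10.0, 9.9, 10.2, 14.0])
print(round(z, 3))  # compare against the tabulated critical value
```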
What constitutes “good quality” data
Data must be: reliable and valid
What measurements assess data reliability?
Precision, accuracy, repeatability, and reproducibility
In order for the data to be valid, it must be
Compared to a “gold standard,” generalisable, and credible
The degree to which repeated measurements under unchanged conditions show the same results
Precision
High precision results in lower
SD
The degree of closeness of measurements of a quantity to that quantity’s true value
Accuracy
High accuracy reflects the true
Population mean
Repeatability is the same as
Precision
The ability of an entire experiment or study to be duplicated either by the same researcher or by someone else working independently
-The cornerstone of research
Reproducibility
The extent to which a concept, conclusion, or measurement is well-founded and corresponds accurately to the real world
Validity
Assuming that data collected on small samples are indicative of the population, sampling errors (bias, size, etc), and instrument errors are all threats to
Validity
The generalizability of a study is called its
External validity
Thalidomide was tested on rodents and showed no effects on limb malformations. However, the effects on humans were very pronounced. This is an error in
External validity
Many studies using single cell lines are no longer
Acceptable
Are the methodologies acceptable? Do the investigators have the required expertise? Who paid for the research? What is the reputation of the investigators and the institution? These are all questions that challenge
Credibility
Caused by inherently unpredictable fluctuations in the readings of a measurement apparatus or in the experimenter’s interpretation of the instrumental reading
Random Error
Random error can occur in either
Direction
Error that is predictable, and typically constant or proportional to the true value
Systematic Errors
Caused by imperfect calibration of measurement instruments or imperfect methods of observation
Systematic Error
Systematic error typically occurs only in one
Direction
Say you measure the mass of a ring three times using the same balance and get slightly different values of 17.46 g, 17.42 g, and 17.44 g. This is an example of
Random error
-can be minimized by taking more data
Say the electronic scale you use reads 0.05 g too high for all of your measurements because it was improperly tared throughout your experiment. This is an example of?
Systematic error
If the sample size is too low, the experiment will lack
Precision
Time and resources will be wasted, often for minimal gain, if the sample size is
Too large
Calculates how many samples are enough
Power analysis
The calculation of power requires which three pieces of information?
1.) A research hypothesis
2.) The variability of the outcomes measured
3.) An estimate of the clinically relevant difference
Will determine how many control and treatment groups are required
A research hypothesis
What is the best option for showing the variability of the outcomes measured?
SD
A difference between groups that is large enough to be considered important
Clinically relevant difference
-often set at 0.8 SD
What is the effect on sample size (n) for the following scenarios:
1.) More variability in the data
2.) Less variability in the data
3.) To detect small differences between groups
1.) Higher n required
2.) Fewer n required
3.) Higher n required
What is the effect on sample size (n) for the following scenarios:
1.) To detect large differences between groups
2.) Smaller α used
3.) Less power (larger β)
1.) Fewer n required
2.) Higher n required
3.) Fewer n required
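These rules fall out of the standard normal-approximation sample-size formula, sketched below; the default z values assume a two-sided α of 0.05 and 80% power, and exact t-based calculations give slightly larger n:

```python
import math

def n_per_group(delta, sd, z_alpha=1.96, z_beta=0.8416):
    """Approximate n per group for comparing two means.

    delta: clinically relevant difference; sd: outcome SD.
    n = 2 * (z_alpha + z_beta)^2 / (delta / sd)^2
    """
    d = delta / sd  # standardized effect size
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(delta=0.8, sd=1.0))  # 0.8 SD difference -> 25 per group
print(n_per_group(delta=0.4, sd=1.0))  # smaller difference -> more samples
```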
An important part of the study design
Statistics
What is the null hypothesis (Ho)
Ho: µ1 = µ2
Ho = null hypothesis; µ1 = mean of population 1; µ2 = mean of population 2
Is presumed true until statistical evidence in the form of a hypothesis test proves otherwise
Null hypothesis
We want to compare our null hypothesis to the alternative hypothesis being tested. To do this, we must select the probability threshold, below which the null hypothesis will be rejected. This is called the
Significance level (α)
-Common values are 0.05 and 0.01
Once our significance level has been selected, we need to compute from the observations the
Observed value (tobs) of the test statistic (T)
Once we have calculated tobs, we need to decide whether to
Reject Null hypothesis in favor of alternative or not
The incorrect rejection of a true null hypothesis (false positive)
Type I error
Incorrectly retaining a false null hypothesis (false negative)
Type II error
What are the two ways to compare a sample mean to a population mean?
1.) z statistic: used for large samples (n > 30)
2.) t statistic: used for small samples (n < 30)
Any statistical test for which the distribution of the test statistic can be approximated by a normal distribution.
z statistic
Because of the central limit theorem (CLT), many test statistics are approximately normally distributed for
Large samples (n > 30)
Very similar to the z statistic and uses the same formula
t statistic
When a statistic is significant, it simply means that the statistic is
-does not mean it is biologically important or interesting
Reliable
Indicates strong evidence against the null hypothesis
-so we reject the null hypothesis
A small p-value (typically p ≤ 0.05)
Indicates weak evidence against the null hypothesis
-so we fail to reject the null hypothesis
A large p-value (typically p > 0.05)
P-values close to the cutoff (0.05) are considered to be marginal (could go either way), thus we should always
Report our p-value so readers can draw their own conclusions
Can strongly influence whether the means are different
Variability
Most useful when comparing two means and N is small
Students t-test
The degrees of freedom are very important in a
Students t-test
Given two data sets, each characterized by its mean, SD, and number of samples, we can determine whether the means are significantly different by using a
t-test
A t-test is nothing more than a
Signal-to-noise ratio
The degree of freedom is important in a t-test. How do we find degrees of freedom?
For one sample, d.o.f. = N - 1, but a t-test compares two samples, so d.o.f. = 2N - 2
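For two equal-sized groups, the pooled t statistic and its 2N - 2 degrees of freedom can be sketched as follows (made-up data):

```python
import math
import statistics

def t_unpaired(a, b):
    """Pooled two-sample t statistic for equal group sizes:
    signal (difference of means) over noise (pooled standard error)."""
    n = len(a)  # assumes len(a) == len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled_se = math.sqrt((va + vb) / n)
    t = (statistics.mean(a) - statistics.mean(b)) / pooled_se
    dof = 2 * n - 2
    return t, dof

t, dof = t_unpaired([5.1, 4.8, 5.3, 5.0], [4.2, 4.5, 4.1, 4.4])
print(round(t, 2), dof)  # t ≈ 5.42 with 6 d.o.f.
```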
Will test either if the mean is significantly greater than x or if the mean is significantly less than x, but not both
One-tailed t-test
Provides more power to detect an effect in one direction by not testing the effect in the other direction
One-tailed t-test
Will test both if the mean is significantly greater than x and if the mean is significantly less than x
Two-tailed t-test
In a one-tailed t-test, the mean is considered significantly different from x if the test statistic is in either the
Top 5% or the bottom 5%, resulting in a p-value of less than 0.05
In a two-tailed t-test, the mean is considered significantly different from x if the test statistic is in the
Top 2.5% or bottom 2.5%, resulting in a p-value less than 0.05
If tcalc > ttable, then we must
Reject the null hypothesis and conclude that the sample means are significantly different
We must reject the null hypothesis and conclude that the sample means are significantly different if
tcalc > ttable
The observed data are from the same subject or from a matched subject and are drawn from a population with a normal distribution
Paired t-test
The observed data are from two independent, random samples from a population with a normal distribution
Unpaired t-test
If we are measuring glucose concentration in diabetic patients before and after insulin injection, we perform a
Paired t-test
If we are measuring the glucose concentration of diabetic patients versus non-diabetic patients, we perform an
Unpaired t-test
If you have more than two groups, then you must make more than two
Comparisons
If you set a significance level at 5% and do repeated t-tests on more than 2 groups, you will eventually get a
Type I error
-i.e. reject the null hypothesis when you should not have
The more comparisons we have to make, the higher the
Chance of a Type I error, so the α value for each comparison must be corrected (lowered)
Instead of doing multiple t-tests when we have more than two means to compare, we can do an
Analysis of Variance (ANOVA)
To compare three or more means, we must use an
Analysis of Variance (ANOVA)
In ANOVA, we don’t actually measure variance, we measure a term called
“Sum of squares”
For ANOVA, what are the three sum of squares that we need to measure?
1.) Total sum of squares
2.) Between-group sum of squares
3.) Within-group sum of squares
Total scatter around the grand mean
Total sum of squares
Total scatter of the group means with respect to the grand mean
Between-group sum of squares
The scatter of the scores
Within-group sum of squares
ANOVA and t-test are both essentially just
Signal-to-noise ratios
To calculate the sums of squares, we first need to calculate
1.) Group means
2.) Grand mean
If Fcalc > Ftable,
We must reject the null hypothesis and conclude that the sample means are significantly different
We must reject the null hypothesis and conclude that the sample means are significantly different if
Fcalc > Ftable
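The one-way ANOVA F ratio can be computed directly from the sums of squares described above (illustrative data):

```python
import statistics

def one_way_f(*groups):
    """F = between-group mean square / within-group mean square."""
    grand_mean = statistics.mean([x for g in groups for x in g])
    k = len(groups)
    n_total = sum(len(g) for g in groups)

    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2
                    for g in groups for x in g)

    ms_between = ss_between / (k - 1)      # the "signal"
    ms_within = ss_within / (n_total - k)  # the "noise"
    return ms_between / ms_within

f = one_way_f([3, 4, 5], [6, 7, 8], [9, 10, 11])
print(f)  # 27.0 -- compare against Ftable for (2, 6) d.o.f.
```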
When we have one measurement variable and one nominal variable, we use
One-Way ANOVA
When we have one measurement variable and two nominal variables, we use
Two-way ANOVA
If we measure glycogen content for multiple samples of the heart, lungs, liver, etc. We perform a
One-way ANOVA
If we measure a response to three different drugs in both men and women, we use a
Two-way ANOVA
Only tells us that at least one group mean differs from the others, not which
ANOVA
ANOVA only tells us that at least one group mean differs from the others; if we want to know which means differ, we have to run
Post hoc multiple comparisons tests
Post hoc tests are only used if the null hypothesis is
Rejected
Test whether any of the group means differ significantly
Post hoc tests
Don’t suffer from the same issues as performing multiple t-tests. They all apply different corrections to account for the multiple comparisons
Post hoc tests
When normal distributions can not be assumed, we must consider using a
Non-parametric test
Make fewer assumptions about the distribution of the data
Non-parametric tests
Less powerful, meaning it is more difficult to detect small differences
Non-parametric tests
Useful when the outcome variable is a rank score, one or a few variables are off-scale, or you’re sure that the data is non Gaussian (ex: response to drugs)
Non-parametric tests
What is the non-parametric alternative to the two-sample t-test?
Mann-Whitney U test
The Mann-Whitney U test does not use actual measurements, but rather it uses
Ranks of the measurements used
In the Mann-Whitney U test, data can be ranked from
Highest to lowest, or lowest to highest
Let’s say we want to test the two-tailed null hypothesis that there is no difference between the heights of male and female students. What is
1.) Ho
2.) Ha
3.) U1
4.) U2
1.) male and female students are the same height
2.) male and female students are not the same height
3.) U statistic for men
4.) U statistic for women
How do we analyze the Mann-Whitney U test?
Compare the smaller of the two U statistics to a table of U values. If Ucalc is less than Utable, then we reject the null hypothesis
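A sketch of the U computation from ranks (ties are ignored here for simplicity; real implementations average tied ranks):

```python
def mann_whitney_u(a, b):
    """U statistics from the rank sums (no tied values assumed)."""
    combined = sorted(a + b)
    rank = {v: i + 1 for i, v in enumerate(combined)}  # rank 1 = smallest
    n1, n2 = len(a), len(b)
    r1 = sum(rank[x] for x in a)           # rank sum of the first sample
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
    u2 = n1 * n2 - u1
    return u1, u2

u1, u2 = mann_whitney_u([1.2, 2.3, 3.1], [3.5, 4.0, 4.8, 5.2])
# Compare the smaller U to the tabulated critical value
print(min(u1, u2))  # 0.0 -- the two samples do not overlap at all
```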
The extent to which two variables have a linear relationship with each other
Correlation
Useful because they can indicate a predictive relationship that can be exploited in practice
Correlations
Correlation is used to understand which two things?
1.) Whether the relationship is positive or negative
2.) The strength of the relationship
A measure of the linear correlation between two variables, X and Y, which has a value between +1 and -1, where +1 is total positive correlation, -1 is total negative correlation, and 0 is no correlation
Pearson Correlation Coefficient (r)
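Pearson's r can be computed from the deviations of X and Y about their means (a minimal sketch):

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0, total positive
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 6))  # -1.0, total negative
```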
The goal of linear regression is to adjust the values of slope and intercept to find the line that best predicts
Y from X
Its goal is to minimize the sum of the squares of the vertical distances of the points from the line
Linear Regression
Linear regression does not test whether your data are
Linear
A unitless fraction between 0.0 and 1.0 that measures the goodness-of-fit of your linear regression
-only useful in the positive direction
r^2
An r^2 value of zero means that
Knowing X did not help you predict Y (and vice-versa)
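Least-squares slope, intercept, and r^2 can be sketched in a few lines (made-up, nearly linear data):

```python
import statistics

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
mx, my = statistics.mean(x), statistics.mean(y)

# Least-squares fit: minimizes the vertical squared distances from the line
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
intercept = my - slope * mx

# r^2: fraction of the variation in y explained by the fitted line
ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
ss_tot = sum((b - my) ** 2 for b in y)
r_squared = 1 - ss_res / ss_tot

print(round(slope, 2), round(r_squared, 3))  # 1.95 0.998
```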