Standard errors, standard deviations and confidence intervals Flashcards
What is a standard error?
A measure of how uncertain we are about a statistic (like the mean) when we calculate it based on a sample rather than the entire population.
What is a standard deviation?
Tells you how spread out the numbers in a data set are.
A measure of variability - how much the numbers differ from the average.
Why are SDs useful?
They help you understand how consistent or variable the data are.
What does a small SD mean?
The numbers are close to the average.
What does a large SD mean?
The numbers are more spread out.
What does a large standard error mean?
The sample means are widely spread around the population mean - the sample mean is not a precise estimate of the population mean.
What does a small standard error mean?
The sample means are closely spread around the population mean - the sample mean is a precise estimate of the population mean.
What is the difference between standard deviation and standard error?
Standard deviation measures how spread out the values in a dataset are.
Standard error measures how much uncertainty there is in a sample's mean when it is used to estimate the population mean.
What is the connection between the standard error and the standard deviation?
Standard error is calculated using the standard deviation.
SE = SD/√n
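A minimal Python sketch of that formula, using a small made-up sample of exam scores (the data and variable names are purely illustrative):

```python
import numpy as np

# Hypothetical sample of exam scores (illustrative data only)
sample = np.array([62, 70, 75, 68, 80, 74, 66, 71, 77, 69])

n = len(sample)
sd = sample.std(ddof=1)   # sample standard deviation (n - 1 in the denominator)
se = sd / np.sqrt(n)      # standard error of the mean: SE = SD / sqrt(n)

print(f"mean = {sample.mean():.2f}, SD = {sd:.2f}, SE = {se:.2f}")
```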
Does a large sample size increase or decrease the SE?
It decreases, because larger samples give more precise estimates of the population mean (see the sketch after the next card).
Does the standard deviation increase or decrease with a larger sample size?
Neither - it stays roughly the same.
Because it describes the spread of the data itself, not the precision of an estimate.
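A short sketch, with simulated data rather than anything from these cards, illustrating the last two answers: as the sample size grows the SE shrinks, while the SD stays roughly the same because it describes the spread of the underlying data:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the illustration is reproducible

# Draw samples of increasing size from the same population (mean 100, SD 15)
for n in (10, 100, 1000, 10000):
    sample = rng.normal(loc=100, scale=15, size=n)
    sd = sample.std(ddof=1)
    se = sd / np.sqrt(n)
    print(f"n = {n:5d}:  SD ~ {sd:5.2f}   SE ~ {se:6.3f}")

# SD hovers around 15 at every n; SE keeps getting smaller as n grows.
```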
What is a confidence interval?
A range of values within which the true population value (for example, the true mean) is likely to lie.
How are standard errors and confidence intervals connected?
Standard error is used to calculate the confidence interval; for example, an approximate 95% confidence interval for the mean is mean ± 1.96 × SE.
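A minimal sketch of that 95% confidence interval, assuming the sampling distribution of the mean is approximately normal (the data below are made up for illustration):

```python
import numpy as np

# Hypothetical sample of measurements (illustrative data only)
sample = np.array([4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0, 5.1, 4.9])

mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))

# Approximate 95% CI: mean ± 1.96 × SE (normal / large-sample approximation)
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI ~ ({lower:.2f}, {upper:.2f})")
```

For small samples, the 1.96 multiplier is usually replaced by the appropriate value from the t-distribution, which widens the interval slightly.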