Stats Exam 3 Flashcards
Central Limit Theorem
- Sample means drawn from a normally distributed population are normally distributed
- As n increases (> 30), sample means drawn from skewed distributions become approximately normally distributed
- The mean of all sample means is the population mean (also true for proportions)
- The standard deviations of normally distributed sample means and proportions are σ/√n and √(p(1-p)/n), respectively
What can Z scores tell you about sample means
Z scores can also tell us how far a sample mean is from the population mean, and therefore how likely or unlikely a given sample mean is
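A minimal sketch of that idea, with hypothetical numbers (μ = 100, σ = 15, n = 36, x̄ = 105):

```python
import math

# Z score for a sample mean: how many standard errors the sample mean
# lies from the population mean. All numbers here are hypothetical.
def z_for_sample_mean(xbar, mu, sigma, n):
    standard_error = sigma / math.sqrt(n)
    return (xbar - mu) / standard_error

z = z_for_sample_mean(105, 100, 15, 36)
print(round(z, 2))  # 2.0
```

A z of 2.0 says this sample mean sits two standard errors above μ, which is fairly unlikely by chance alone.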
What happens to Z as n increases
The standard error σ/√n shrinks toward zero, so for a fixed difference between x̄ and μ, Z grows in magnitude (the difference becomes less likely to be due to chance)
Central Limit Theorem also applies to…
CLT also applies to proportions which are used for categorical data
3 forms of inference
- Point estimation
- Confidence intervals
- Hypothesis Testing
Point Estimation
using a single value from a sample to estimate a population parameter
Confidence Intervals
(Interval Estimation)
- Using a range of values to estimate a parameter
- Stating our confidence that an interval captures a parameter
* smaller interval/range → less confidence
Hypothesis Testing
using samples and probability to support or reject assumptions about population parameters
sampling error
random sampling produces samples that aren’t exactly like the population
interval estimation
incorporates the likely size of the sampling error associated with the point estimate
weakness of point estimation
without quantifying the likely amount of estimation error, point estimates are of limited use (sampling error)
Confidence Interval (CI)
a range of plausible values for a parameter in addition to the level of confidence that the parameter is included within the interval
2 components of CI
- Interval: a range of values that is likely to include μ
- Confidence: how likely it is that the interval captures μ
principle of “confidence” is the same in both scenarios
margin of error for means
- an absolute quantity: the size of an interval (precision)
- partly chosen → z score
- partly natural → standard deviation
- partly experimentally determined → n
confidence intervals downside
- created when we do NOT know the population mean
- Establish a range of values that “probably” includes the population mean
- how probable depends entirely on the choice of a z score
steps to find z score?
- Find the Z score for the chosen confidence level
- Calculate Standard Error
- Calculate Margin of Error
- Apply Margin of Error to Point Estimate
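The four steps above, sketched for a 95% CI on a mean (xbar, sigma, and n are hypothetical; 1.96 is the z score for 95% confidence):

```python
import math

xbar, sigma, n = 50.0, 10.0, 25

z = 1.96                                # step 1: z score for a 95% CI
standard_error = sigma / math.sqrt(n)   # step 2: sigma / sqrt(n)
margin_of_error = z * standard_error    # step 3
ci = (xbar - margin_of_error, xbar + margin_of_error)  # step 4

print(round(ci[0], 2), round(ci[1], 2))  # 46.08 53.92
```

We are 95% confident the interval (46.08, 53.92) captures μ.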
hypotheses
claims or statements about population parameters (never about samples)
null hypothesis
the "no effect, no difference, nothing special" hypothesis (Ho)
generally does not reflect the researcher's belief
alternative hypotheses
Ha
3 possible forms:
1. a parameter is greater than some value: Ha: μ > #
2. a parameter is less than some value: Ha: μ < #
3. a parameter is not equal to some value: Ha: μ ≠ #
one tailed Ha
can be supported by sample statistics from only one tail of a distribution (less/greater)
two tailed Ha
can be supported by sample statistics from both tails (different)
critical regions
tail regions of sampling distributions that contain unlikely values that, when observed, lead us to reject Ho
critical values
specific standardized scores (like Z scores) that separate critical regions from the rest of the curve
alpha
=significance level
- α = the area under a normal curve containing unlikely (extreme) observations, such that when one is observed, we reject the null hypothesis and support Ha
- α = the acceptable rate of a type 1 error (mistakenly rejecting the null)
- α & Ha determine the critical values
One tailed Ha & alpha
C.V puts alpha in one tail
Two tailed Ha & alpha
2 C.V.s that split alpha into 2 tails
Hypothesis Testing Steps
- Write Ho and Ha in statistical terms
- Choose alpha and determine critical values
- Calculate a test statistic (for tests with one sample, 3 choices: z for a mean with σ known, t for a mean with σ unknown, z for a proportion)
- Compare the test statistic to a critical value, or calculate a p value
- State conclusions in context
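The steps above, sketched as a one-tailed, one-sample z test (all numbers hypothetical; 1.645 is the critical z for α = 0.05, one-tailed):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Ho: mu = 100   Ha: mu > 100   alpha = 0.05 -> critical value z = 1.645
mu0, sigma, n, xbar = 100.0, 15.0, 36, 104.0

z = (xbar - mu0) / (sigma / math.sqrt(n))   # test statistic
p_value = 1 - phi(z)                        # right-tailed p value

print(round(z, 2), round(p_value, 4))  # 1.6 0.0548
print("reject Ho" if z > 1.645 else "fail to reject Ho")  # fail to reject Ho
```

Here z = 1.6 falls short of the critical value (equivalently, p = 0.0548 > 0.05), so we fail to reject Ho.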
p value
the probability of obtaining a difference between the sample mean and the population mean at least as large as the one observed, by chance alone (i.e., assuming Ho is true)
T scores for sample means
- almost identical to Z scores (same assumptions)
- used when we don’t know the population standard deviation
- substitute the sample standard deviation into the standard error expression
degrees of freedom
df=n-1
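A minimal sketch of the substitution: the sample standard deviation s replaces σ in the standard error, giving a t score with n − 1 degrees of freedom (data below are hypothetical).

```python
import math
from statistics import mean, stdev

data = [12.0, 14.0, 11.0, 15.0, 13.0, 16.0, 10.0, 13.0]
mu0 = 12.0                  # hypothesized population mean

n = len(data)
df = n - 1                  # degrees of freedom
# stdev() is the sample standard deviation (n - 1 denominator)
t = (mean(data) - mu0) / (stdev(data) / math.sqrt(n))

print(df, round(t, 3))  # 7 1.414
```

The t statistic is then compared against a t distribution with df = 7 rather than the standard normal.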
choose between t score and z score?
use the z score when the population standard deviation is known — it is more accurate; when σ must be estimated from the sample, use the t score
Type 1 error
rejecting the null, when the null is actually true
- occurs when we get an extreme test statistic by chance alone
- p(type 1 error) = alpha
- alpha is chosen in advance
Type 2 error
failing to reject the null, when the null is false
- must be calculated
- p(type 2 error)= beta
Power
the ability to reject a false null hypothesis
Power calculations
are used to determine the sample size needed to reveal the smallest difference that is actually interesting between two hypothesized values of a parameter
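One common form of that calculation, for a one-tailed z test: n = ((z_α + z_β)·σ/δ)², where δ is the smallest interesting difference. The σ and δ values below are hypothetical; 1.645 and 0.84 are the standard z scores for α = 0.05 (one-tailed) and power = 0.80.

```python
import math

# Sample size needed for a one-tailed z test to detect a difference
# `delta` with a given power: n = ((z_alpha + z_beta) * sigma / delta)^2.
def sample_size(z_alpha, z_beta, sigma, delta):
    return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

n = sample_size(1.645, 0.84, sigma=10.0, delta=5.0)
print(n)  # 25
```

Smaller interesting differences (or smaller α, or higher power) all push n upward.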
ANOVA
Analysis of Variance
-used to compare >2 sample means
ANOVA hypotheses
Ho: the means all come from the same population
Ha: the means do not all come from the same population
why not multiple t-test?
the total risk of a type 1 error for a group of related tests is more important than the type 1 error risk for any one test
- multiple tests increase the total risk of a type 1 error
familywise error rate
the probability of observing at least one type 1 error across a group of related tests
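For k independent tests each run at level α, the familywise rate is 1 − (1 − α)^k; e.g. comparing 4 group means pairwise takes 6 t tests:

```python
# Familywise error rate for k independent tests at level alpha:
# P(at least one type 1 error) = 1 - (1 - alpha)^k
def familywise(alpha, k):
    return 1 - (1 - alpha) ** k

print(round(familywise(0.05, 6), 3))  # 0.265
```

So six "5%" tests carry roughly a 26.5% chance of at least one false rejection, which is why ANOVA is preferred.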
ANOVA Ho
differences in sample means are explained by random sampling variation within groups
ANOVA Ha
differences in sample means are due in part to real variation between groups, because at least one group comes from a different population
F test
variation between groups/ variation within groups
F test statistic
- a ratio of any two variances
- F = 1 means the variances are no different
- F ≠ 1 means the variances are different
If ANOVA Ho is true we usually see…
F ≈ 1 (or only slightly > 1)
If ANOVA Ho is false we usually see..
F > 1, often F ≫ 1
How much bigger must F be for an ANOVA?
- ANOVA is always a right-tailed test
- Like the t statistic, different F distributions and critical values exist for different degrees of freedom
F distribution degrees of freedom
numerator: k-1
denominator: N-k
Assumptions for ANOVA to be valid
- Normal distributions
- Sample standard deviations are roughly equal: largest sd / smallest sd < 2
SS(total) total sum of squares
is a measure of the total variation (around the grand mean) in all the sample data combined
SS(model)
=SS(groups) the variation between sample means, weighted by sample size
SS(within groups)
=SS(error) =SS(residuals)
variability common to all the populations being considered
SS(total) =
SS(model) + SS(residuals)
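The decomposition above can be checked by hand with a one-way ANOVA on hypothetical data, which also forms F = MS(model)/MS(error):

```python
from statistics import mean

groups = [[1, 2, 3], [2, 3, 4], [6, 7, 8]]   # hypothetical samples

all_data = [x for g in groups for x in g]
grand_mean = mean(all_data)
k, N = len(groups), len(all_data)

# total variation around the grand mean
ss_total = sum((x - grand_mean) ** 2 for x in all_data)
# between-group variation, weighted by sample size
ss_model = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
# within-group (residual) variation
ss_error = sum((x - mean(g)) ** 2 for g in groups for x in g)

ms_model = ss_model / (k - 1)   # numerator df: k - 1
ms_error = ss_error / (N - k)   # denominator df: N - k
F = ms_model / ms_error

print(ss_total, ss_model + ss_error, round(F, 2))  # 48 48 21.0
```

SS(total) = 48 = 42 + 6 = SS(model) + SS(error), and the large F signals real between-group variation.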
MS (mean square) =
SS / df. Average variation per degree of freedom; gives us a measure of relative variation, allowing us to compare variation in different parts of a model
Post hoc tests (multiple comparisons)
are mostly modified t tests that control the familywise type 1 error rate
sampling variability
sample results change from sample to sample
parameter
a number that describes the population
statistic
a number that is computed from a sample
statistical inference
inferring something about the population based on what is measured in the sample
unbiased estimator
x̄ can be an unbiased estimator for μ, and p̂ an unbiased estimator for p, because the distribution of sample means (or proportions) is centered exactly at the value of the population parameter
census
sample the whole population
margin of error (m)
represents the maximum estimation error for a given level of confidence
statistical hypothesis testing
assessing evidence provided by the data in favor or against some claim about the population
p value ≤ 0.05
reject Ho and support Ha
results are statistically significant
p value > 0.05
fail to reject Ho (cannot support Ha)
results are not statistically significant
test statistic
a measure of how far the sample proportion is from the null value p0, the value that the null hypothesis claims is the value of p
two independent samples
comparing two means
matched pairs
paired t test
samples are dependent
1 way ANOVA
comparing more than two means that are independent of each other
repeated measures ANOVA
comparing more than two means that are dependent on each other
three types of t-tests
1-sample
2-independent samples
2-dependent samples
independent samples
the individuals of one sample are not meaningfully connected to those of another sample
-two randomly selected groups
matched pairs (dependent) samples
the observations of one sample are somehow paired or related to those of another sample
-pre/post -twins -parents &children
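Matched pairs reduce to a one-sample t test on the differences; a sketch with hypothetical pre/post scores for 5 subjects:

```python
import math
from statistics import mean, stdev

pre  = [70.0, 68.0, 75.0, 80.0, 72.0]
post = [74.0, 71.0, 76.0, 85.0, 74.0]

diffs = [b - a for a, b in zip(pre, post)]       # post - pre
n = len(diffs)
# Ho: mean difference = 0; one-sample t on the differences
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
df = n - 1

print(df, round(t, 2))  # 4 4.24
```

Pairing removes subject-to-subject variation, which is why the differences (not the raw samples) carry the test.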
homoscedasticity
assumes equal variance in our 2 samples
2 scenarios for dependent sample
- repeated measures
- matched pairs
parameter
A characteristic of a population
statistic
A characteristic of a sample
sampling variability
Multiple samples taken from the same population will vary from each other due to chance events.
sampling distribution
The shape, center and spread of the values taken by all of the samples of a certain size taken from a single population
Proportion
A probability that a member from a population takes on a certain characteristic. Proportions are based on countable, categorical data = the number of individuals that display a characteristic / the total number of individuals.
Standard Deviation of Sample Proportions (AKA standard error)
When sample proportions are normally distributed, they vary from the population proportion with standard error √(p(1-p)/n)
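A minimal sketch of that standard error, with a hypothetical p and n:

```python
import math

# Standard error of a sample proportion: sqrt(p * (1 - p) / n)
def se_proportion(p, n):
    return math.sqrt(p * (1 - p) / n)

print(round(se_proportion(0.4, 100), 4))  # 0.049
```

Larger n shrinks the standard error, and p near 0.5 maximizes it.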
Statistical Inference
The Process of Inferring something about a population based on something known about a sample
Confidence
In the context of interval estimation, "confidence" is the probability that our interval actually captures the population parameter. Confidence is, unfortunately, inversely related to precision: higher confidence requires a wider interval.
Margin of Error
The maximum amount of error we give to a point estimate, and which is used to build a confidence interval. The margin of error size is influenced by two things: 1) our confidence, which in turn determines a z score, and 2) the sampling variation, which is determined by a standard error
Estimating the sample size needed to create a specific CI
This is something researchers do in the planning stages of research, and it can be used as justification for a certain amount of money in a research proposal. Simply solve the margin of error equation for n. The researcher must also decide on a confidence level (see above), choose an acceptable margin of error, and plug in a best-guess standard deviation taken from previous research; in the case of proportions, 0.25 can be used as a conservative estimate of p(1-p). Note: whenever n is not a whole number, round up.
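Solving the margin-of-error equation for n, as described above (z and m below are hypothetical planning choices; 1.96 is the 95% z score):

```python
import math

# For a mean:       n = (z * sd / m)^2
# For a proportion: n = z^2 * p * (1 - p) / m^2, using 0.25 as the
# conservative worst case for p * (1 - p). Always round up.
def n_for_mean(z, sd, m):
    return math.ceil((z * sd / m) ** 2)

def n_for_proportion(z, m, p_times_q=0.25):
    return math.ceil(z ** 2 * p_times_q / m ** 2)

print(n_for_mean(1.96, 15.0, 3.0))    # 97
print(n_for_proportion(1.96, 0.05))   # 385
```

Halving the margin of error roughly quadruples the required sample size, since m appears squared in the denominator.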
statistical hypothesis testing
Assessing evidence provided by the data in favor of, or against some claim about the population.
4 steps to hypothesis testing
1) Stating the claims
a. Claim 1 (the null hypothesis, Ho): The mean or proportion is equal to some value.
b. Claim 2 (alternate hypothesis, Ha): The mean or proportion is less than, is greater than, or is not equal to some value.
2) Choosing a sample and collecting data
3) Assessing the evidence. Calculating the probability of observing a sample statistic at least as extreme as the one observed, if the null hypothesis (claim 1) is true and the alternate hypothesis is false.
4) Making conclusions. Choosing whether to reject, or fail to reject the null hypothesis (claim 1), or to support, or fail to support the alternate claim (claim 2)
assumptions of the independent sample t-test
- The 2 samples are independent
- The distribution of the response Y in both populations is normal
- Both samples are random
- The two populations being compared should have similar variances. If the sample sizes of the 2 groups are equal, the t-test is robust to the presence of unequal variances.
Assumptions of the matched pairs t-test
- The sample data consist of matched pairs
- Both samples are simple random samples
- The number of matched pairs is > 30 and/or the pairs of the values have differences that are normally distributed.