9 Significance Tests Flashcards
What is the main point of doing a significance test?
To see if we have convincing evidence against a claim (H0) or in support of a counter claim (Ha).
Interpret the p-value.
You only need to use this cookie cutter when asked to interpret the p-value OR “what does .03 mean in the context of this problem?”
Assuming that __________ (H0 is true, with the parameter written out), there is a ___ (p-value) probability of getting a sample ______ (mean or proportion) of _______ (x̄ or p̂) or _______ (more or less, depending on Ha) just by chance in a random sample of ___ (n units).
Basically, what’s a p-value?
The probability of getting evidence for the alternative hypothesis Hₐ as strong as or stronger than the observed evidence when the null hypothesis H₀ is true. The smaller the P-value, the stronger the evidence against H₀ and in favor of Hₐ provided by the data.
What is a standardized test statistic?
Value that measures how far a sample statistic is from what we would expect if the null hypothesis H₀ were true, in standard deviation units. So…the z-score of your sample compared to the null.
Formula for the test statistic in one sample proportion test
z = (p̂ − p₀) / √(p₀(1 − p₀)/n)
Note: use the p from the null (p₀) on the bottom, not the p̂ from the sample. This was the most missed MC last year.
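A quick sketch of that calculation in Python, with made-up numbers (H0: p = 0.5 and 57 successes in 100 trials are hypothetical, chosen to echo the NBA-bubble example):

```python
import math

# Hypothetical numbers: H0: p = 0.5, and 57 home-team wins in n = 100 games.
p0 = 0.5            # the p from the NULL goes in the denominator
n = 100
p_hat = 57 / 100    # the sample p-hat only goes on top

z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
print(round(z, 2))  # 1.4
```

Notice the denominator uses p0, not p_hat — exactly the mistake the note above warns about.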
Formula for the test statistic in two sample proportion test
z = (p̂₁ − p̂₂) / √(p̂c(1 − p̂c)(1/n₁ + 1/n₂))
Note: use the combined p̂c (all successes over all trials) for all the parts on the bottom, not the 2 different p̂'s.
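A sketch of the two-sample version, again with made-up counts (45/80 and 30/70 are hypothetical):

```python
import math

# Hypothetical counts: group 1 has 45 successes out of 80, group 2 has 30 out of 70.
x1, n1 = 45, 80
x2, n2 = 30, 70
p1_hat, p2_hat = x1 / n1, x2 / n2

# The combined p-hat pools ALL successes over ALL trials:
p_c = (x1 + x2) / (n1 + n2)

# The combined p-hat appears in every part of the denominator:
se = math.sqrt(p_c * (1 - p_c) * (1 / n1 + 1 / n2))
z = (p1_hat - p2_hat) / se
```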
How do you find a p-value?
1 or 2 prop z-test on the calculator. OR it's just the probability of getting your z-score or more (or less) on a standard normal curve: normalcdf(lower: z-score, upper: 9999, μ = 0, σ = 1). OR approximate it using a simulation by counting what fraction of the dots are at or beyond your observed result, assuming the claim (H0) is true.
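The normalcdf route can be mimicked with nothing but the standard library, since the normal CDF can be written with the error function. The z = 1.4 here is a hypothetical test statistic:

```python
import math

def normal_cdf(z):
    # Standard normal CDF, Phi(z), via the complementary error function.
    return 0.5 * math.erfc(-z / math.sqrt(2))

z = 1.4  # hypothetical test statistic

# Right tail (Ha: p > p0) -- same idea as normalcdf(1.4, 9999, 0, 1):
p_right = 1 - normal_cdf(z)   # about 0.08

# A two-sided Ha doubles the tail area:
p_two = 2 * p_right
```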
What symbols are never allowed in hypotheses?
p̂ or x̄. Also, never use numbers from observed counts. In the NBA bubble example, don't use the fact that the home teams won about 57% of the games anywhere in the problem except when finding the z-score (or typing it into the calculator to find the z-score).
How do you tell which error Type could have happened?
Did you “fail to” reject? “Fail To” goes with Type 2. So if you didn’t “fail to” reject, then it must be a possible Type 1 error.
How do you explain an error in context?
Type 1: We were convinced that (HA in context), BUT actually the (HA) is not true (context)
Ex: We were convinced that the water was unsafe, but actually it was safe.
Type 2: (fail to) We didn’t find evidence that (Ha in context), BUT actually (it was true with context).
Ex: We weren’t convinced that the water was unsafe, but it actually was unsafe.
How do you describe a consequence of an error?
Figure out what would happen if you reject and find enough evidence. Ex: switch to bottled water, sue the company for discrimination, use more coupons, etc. Then, Type 1: we switched but we shouldn’t have so now… Type 2: We didn’t switch but we should have and now…
How do you find the probability of a Type 1 error?
Since a Type 1 error is when you reject but shouldn't, it will happen with probability alpha (so usually 0.05, i.e., 5% of the time). This is because we say that alpha is our standard of rare enough. Things that are 5% rare still happen 5% of the time.
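You can check the "alpha of the time" idea with a quick simulation (all numbers here are made up): make H0 true on purpose, run the test over and over, and count how often it rejects anyway.

```python
import math
import random

random.seed(1)

# H0: p = 0.5 is actually TRUE here. See how often a right-tailed
# test at alpha = 0.05 rejects anyway -- each rejection is a Type 1 error.
n, trials, z_crit = 100, 20000, 1.6449   # z_crit ~ one-sided 5% cutoff
type1 = 0
for _ in range(trials):
    successes = sum(random.random() < 0.5 for _ in range(n))
    z = (successes / n - 0.5) / math.sqrt(0.25 / n)
    if z > z_crit:
        type1 += 1
rate = type1 / trials   # lands close to alpha = 0.05
```

The rate comes out a touch under 5% because counts are whole numbers, but it hovers near alpha, just like the card says.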
How do you find the probability of a Type 2 error?
If they give you the power, just subtract it from 1. Otherwise, you can’t really calculate it because it depends on what the real parameter is (which we don’t know).
How can you reduce Type 2 errors?
Reducing a Type 2 error is the same as increasing power. So: increase n, increase alpha, or increase the distance between the real parameter and the null claim (if, say, the null claims a 70% free-throw shooter, we are less likely to make a Type 2 error when the real free throw % is only 50% vs someone who is 60%, since 50% is farther from the claim).
How do you increase the power of a test?
Increase n, increase alpha, or increase the distance of the parameter in question (if, say, the null claims a 70% free-throw shooter, there is more power when the real free throw % is only 50% vs someone who is 60%, since 50% is farther from the claim).
More generally, power goes up if spread (standard error) goes down, but other than increasing sample size, we can’t control spread much.