OBJ - Statistical Inference Flashcards

1
Q

Hypothesis Testing

A

Done to account for chance & determine statistical significance

  • Estimate the probability of obtaining the
    observed result, or one more extreme, if
    there is no true difference.
  • Use this information to make decisions
    about the population

STEPS:
1. State hypotheses (null and alternative)
NULL (H0) - the opposite of what you want to show -> states there is NO difference/NO EFFECT
e.g. H0: μplacebo - μsimvastatin = 0
ALTERNATIVE (HA) - what you want to show/support
e.g. HA: μplacebo - μsimvastatin ≠ 0

2. Choose significance level (α)
3. Calculate appropriate test statistic and corresponding p value
-> is the p value less than alpha?

if p < α (typically p < 0.05)
• Observed value of test statistic is unlikely if null hypothesis is true, so
• Null hypothesis is likely false
• Reject null hypothesis
• Conclude that the difference is not due to chance
• Difference is “STATISTICALLY SIGNIFICANT”

if p ≥ α
• Difference is not statistically significant
• Cannot conclude that there is a true difference
• Evidence consistent with EITHER
– Null hypothesis is true (no difference) OR
– Not enough evidence to reject null hypothesis

4. Decide whether evidence is sufficient to reject the null hypothesis (see the sketch below)
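
A minimal Python sketch of these steps, assuming two hypothetical groups (placebo vs. simvastatin) with made-up data and using SciPy's two-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
placebo = rng.normal(loc=5.9, scale=1.0, size=50)       # hypothetical outcome values
simvastatin = rng.normal(loc=5.2, scale=1.0, size=50)   # hypothetical outcome values

alpha = 0.05                                              # step 2: significance level
t_stat, p_value = stats.ttest_ind(placebo, simvastatin)  # step 3: test statistic and p value

# Step 4: compare p to alpha and decide
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0 -> difference is statistically significant")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0")
```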
2
Q

Relationship between hypothesis tests and confidence intervals

A

Hypothesis test - tells us whether to reject the null hypothesis (is the difference statistically significant?)

Confidence interval - tells us how precise our estimate is (range of plausible values)

RULE:
95% CI contains “null value” ↔ p > 0.05
95% CI does not contain “null value” ↔ p < 0.05
(likewise, a 99% CI corresponds to comparing p with 0.01)
where p is the p-value from the corresponding hypothesis test
->“Null value” usually represents no
difference between groups

For ratios, “null value” = 1
(no difference in ratio -> 1)

For differences, “null value” = 0
(no difference)
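
A small sketch of this rule with an assumed difference in means and standard error (numbers are made up): the 95% CI excludes the null value exactly when the two-sided p-value is below 0.05.

```python
from scipy import stats

diff = 0.7        # assumed observed difference in means
se = 0.25         # assumed standard error of the difference
null_value = 0.0  # "null value" for a difference

z = (diff - null_value) / se
p = 2 * stats.norm.sf(abs(z))                          # two-sided p-value
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se   # 95% CI

contains_null = ci_low <= null_value <= ci_high
print(f"95% CI: ({ci_low:.2f}, {ci_high:.2f}), p = {p:.4f}")
print("CI contains null value" if contains_null else "CI excludes null value")
```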

3
Q

Standard Deviation vs. Standard Error of the Mean

A

Standard Deviation:
- measures the amount of variation or dispersion from the average

Standard Error of the Mean:
= Standard deviation of sample means
= population standard deviation ÷ square root of sample size
= σ / √n
(always smaller than the sample’s standard deviation)
**a measure of the precision of your sample estimate; smaller SEM = more precise estimate

Population mean (μ) = mean of the sample means

The standard error of the mean:
  • Estimates the standard deviation of all sample means
  • Describes the precision of the sample estimate
  • Measures “how far off” our estimate is likely to be from the population mean

Standard Error of the estimate
Standard error for any estimate is calculated from:
- Measure of variability
- Sample size

Large standard error → imprecise estimate
Small standard error → precise estimate

Increase sample size
→ decrease standard error
→ increase precision of estimate
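
A quick sketch (with made-up normal data) showing that the SD describes the spread of individual observations and barely changes with n, while the SEM = SD / √n shrinks as the sample grows:

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (25, 100, 400):
    sample = rng.normal(loc=100, scale=15, size=n)  # hypothetical measurements
    sd = sample.std(ddof=1)        # spread of individual observations
    sem = sd / np.sqrt(n)          # precision of the sample mean
    print(f"n={n:4d}  SD={sd:5.2f}  SEM={sem:5.2f}")
```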

4
Q

Confidence intervals

A

Provides range of plausible values

confidence interval =
estimate ± critical value × standard error

critical value ≈ 2 for a 95% CI

standard error depends on variability and sample size

Critical value (95%) depends on sample size and 
confidence

Estimate a range of values that is likely to
include the true (population) value

• In repeated samples, 95% of intervals
constructed this way will contain the population value
• We can be 95% confident that a particular 95% confidence interval contains the population value
• Confidence interval provides a range of
“plausible values” for the true population value

Difference of the means
= mean(placebo) − mean(treatment); its CI uses the standard error of the difference

To get a narrow CI (you want):
Critical Value:
↓ confidence → ↓ critical value

Standard error:
↓ variability (standard deviation)
↑ sample size (n)

**Take-home message: a larger sample size gives a narrower CI (see the sketch below)
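
A minimal sketch of “estimate ± critical value × standard error” for a sample mean, using the t critical value (roughly 2 for a 95% CI); the data are assumed for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.normal(loc=10, scale=3, size=40)     # hypothetical sample

estimate = sample.mean()
se = sample.std(ddof=1) / np.sqrt(sample.size)    # standard error of the mean
crit = stats.t.ppf(0.975, df=sample.size - 1)     # critical value, approximately 2

ci_low, ci_high = estimate - crit * se, estimate + crit * se
print(f"estimate = {estimate:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```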

5
Q

Statistical significance

A

A result is statistically significant when the probability of obtaining a result at least as extreme, given that the null hypothesis is true, is low

Helps decide whether the null hypothesis can be rejected

typically p < 0.05

6
Q

Type I error

A

alpha = probability of a Type I error

Concluding there is a difference when there is not

**You don’t know whether you have made a Type I error

False positive

7
Q

Type II error

A

beta = probability of type II error

Conclude there is no difference when there is

False negative

PROBABILITY of saying there is NO effect when there is one

Power of the study = 1-Beta

A well-powered study is less likely to “miss”
important differences

Power depends on:
1. Type I error rate α
(stricter significance, i.e. lower alpha -> fewer Type I errors, but more Type II errors)
2. Effect size (e.g. difference in means or proportions)
3. Variability of outcome measure
4. Sample size

• Usually 1-3 are fixed and sample size is increased to achieve >80% power (see the sketch below)
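
A simulation sketch of how power grows with sample size when the effect size, variability, and alpha are held fixed (all numbers are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, effect, sd, n_sims = 0.05, 0.5, 1.0, 2000

for n in (20, 50, 100):
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, sd, size=n)
        treated = rng.normal(effect, sd, size=n)   # a true difference exists
        _, p = stats.ttest_ind(control, treated)
        rejections += p < alpha                    # count correct rejections of H0
    print(f"n = {n:3d}  estimated power = {rejections / n_sims:.2f}")
```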

8
Q

P values

A

Probability of obtaining the observed test statistic, or one more extreme, IF the null hypothesis is true (there is no effect of the treatment)

(NOT the probability that the null hypothesis is true)

small p value = strong evidence against the null hypothesis (your treatment appears to have an effect)
-> typically p value < 0.05

A p value is always between 0 and 1 (see the sketch below)
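
A sketch of the definition: simulate the test statistic many times with the null hypothesis forced to be true, then count how often it is at least as extreme as an assumed observed value; that proportion is the two-sided p-value and is necessarily between 0 and 1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
observed_t = 2.1          # assumed observed test statistic
n_sims, n = 10_000, 30

null_ts = []
for _ in range(n_sims):
    a = rng.normal(0, 1, size=n)   # both groups drawn from the same distribution,
    b = rng.normal(0, 1, size=n)   # i.e. the null hypothesis is true
    t, _ = stats.ttest_ind(a, b)
    null_ts.append(abs(t))

p_sim = np.mean(np.array(null_ts) >= abs(observed_t))
print(f"simulated two-sided p-value = {p_sim:.3f}")
```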

9
Q

Weird circumstances

A

Small sample size:
Need to account for “extra” imprecision
Use the t distribution
“Critical value” > 1.96 for a 95% CI (see the sketch below)

Not Normal distribution:
• Use “traditional” confidence interval for
proportion only when N is large AND p is
not close to 0 or 1
• Interpretations are the same
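
A short sketch of the small-sample adjustment: the 95% t critical value is larger than 1.96 and approaches it as the sample size grows.

```python
from scipy import stats

for n in (5, 10, 30, 100, 1000):
    crit = stats.t.ppf(0.975, df=n - 1)   # 95% critical value from the t distribution
    print(f"n = {n:4d}  critical value = {crit:.3f}")   # always > 1.96, approaching 1.96
```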

10
Q

Significance level (alpha)

A

α =
• P(Type I error)
• probability of rejecting the null hypothesis when it is true
• probability of concluding there is a difference when no
difference exists

(Note: this is the Type I error rate; β, not α, is the Type II error probability.)

11
Q

One sided/tailed Hypothesis

vs.

Two sided/tailed Hypothesis

A

One-sided test:
Hypothesis test where the values for which we can reject the null hypothesis, H0, are located entirely in one tail of the probability distribution
i.e. treatment worked better than placebo

Two-sided test:
Hypothesis test where the values for which we can reject the null hypothesis, H0, are located in both tails of the probability distribution
i.e. treatment worked better OR worse than placebo

Under the null hypothesis (no effect), the test statistic is centered at the peak of the distribution
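
A sketch contrasting the two tests on the same made-up data; this assumes a SciPy version new enough to support the `alternative` keyword in `ttest_ind`.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
placebo = rng.normal(5.9, 1.0, size=40)      # hypothetical outcome values
treatment = rng.normal(5.4, 1.0, size=40)

_, p_two = stats.ttest_ind(placebo, treatment, alternative="two-sided")
_, p_one = stats.ttest_ind(placebo, treatment, alternative="greater")   # HA: placebo mean > treatment mean
print(f"two-sided p = {p_two:.4f}, one-sided p = {p_one:.4f}")
```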

12
Q

Test statistic & p value

A

Test statistic =
(Observed value – Hypothesized value)/
Standard error of observed value

similar in form to the confidence interval calculation

If the null hypothesis is true (no effect), the test statistic is expected to be near 0 (see the sketch below)
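
A worked sketch of the formula with assumed numbers, converting the test statistic to a two-sided p-value using the normal distribution:

```python
from scipy import stats

observed, hypothesized, se = 5.4, 5.0, 0.15   # assumed values for illustration
z = (observed - hypothesized) / se            # equals 0 when the observed value matches the null value
p = 2 * stats.norm.sf(abs(z))                 # two-sided p-value
print(f"test statistic z = {z:.2f}, p = {p:.4f}")
```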

13
Q

Statistical Power

A

Power = 1 − β, where β is the probability of a Type II error

probability of rejecting null hypothesis when it’s false (probability of detecting an effect when it’s there)
