Hypothesis Testing and Statistical Inference EXAM 3 Flashcards

1
Q

What are the variables used in inferential statistics?

A

-Inferential means to make conclusions (decisions) based on evidence (data)

Variables: P-Value, Confidence Interval, alpha + beta error, Power

2
Q

Examples of tools to present descriptive statistics

A

Median, IQR, Box Plots

3
Q

What does the p-value represent under the curve?

A

An area under the curve: the probability of obtaining one outcome (at least as extreme as the observed one) vs. any other outcome under the curve

4
Q

How much % of the data can be determined with an SD of 1, 2, and 3?

A

1 SD: 68% of the data
1.96 SD (approx. 2): 95% of the data
3 SD: 99.7% of the data
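This empirical (68-95-99.7) rule can be checked from the normal CDF: the fraction of a normal distribution within k SD of the mean is erf(k/√2). A minimal sketch in Python:

```python
import math

def fraction_within(k: float) -> float:
    """Fraction of a normal distribution lying within k standard
    deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

for k in (1, 1.96, 3):
    print(f"{k} SD: {fraction_within(k):.1%}")
# -> 1 SD: 68.3%, 1.96 SD: 95.0%, 3 SD: 99.7%
```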

5
Q

QUESTION

A

Does the bell-shaped curve only contain data points of the mean? Could a data point also be a point estimate?

Can you tell what the CI is just by looking at the p-value? E.g. p=0.05 -> so the CI is 95%?
p=0.02 -> so the CI is 98%?

How is the SD different from the CI?
-> Maybe the CI is only used when comparing the drug group with the placebo group, to see if it crosses 0 or 1

Isn't there always the possibility of committing a Type I error, even with very small p-values?

If I refuse a new drug (e.g. because p=0.07), and the drug actually works (Type II error), how will I know that I've committed a Type II error?

What happens if the p-value of a finding crosses the beta (e.g. beta is set to 0.1 (10%) and the finding has a p-value of 0.11 (11%))?

6
Q

What does the H0 state?

A

-The H0 states that the observed difference between the means is due to chance

7
Q

When can the H0 be rejected?

A

As the H0 states that the differences are due to chance,
the findings have to differ to an extent that makes it permissible to reject the H0, even though there is a slim probability that the result is still due to chance (so the result of the experimental group has to be found in the tails)

-> it has to differ at least to a p-value of 0.05

8
Q

What is the definition of the P-value?

A

The probability of finding an effect as big as (or bigger than) the observed one if the null were true

-> So not different (enough) from null (placebo)
-> p=0.05 -> probability of 5% that the finding is due to chance, i.e. not different from placebo
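This "due to chance" reading can be sketched with a permutation test: under the H0 the group labels are interchangeable, so the p-value is the fraction of random relabelings that produce a difference at least as large as the one observed. All numbers below are invented for illustration:

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical outcome scores (invented for illustration)
drug    = [8.1, 7.9, 9.2, 8.6, 8.8, 7.7, 9.0, 8.4]
placebo = [7.2, 7.8, 6.9, 7.5, 8.0, 7.1, 7.4, 7.6]

observed = mean(drug) - mean(placebo)
pooled = drug + placebo

# Under H0 the labels are interchangeable: reshuffle them many times
# and count how often chance alone gives a difference at least as
# large as the observed one.
n_iter, extreme = 10_000, 0
for _ in range(n_iter):
    random.shuffle(pooled)
    if mean(pooled[:8]) - mean(pooled[8:]) >= observed:
        extreme += 1

p_value = extreme / n_iter
print(f"observed difference: {observed:.3f}, p ≈ {p_value:.4f}")
```

A small p here means: if the drug were no different from placebo, a difference this large would almost never arise by shuffling alone.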

9
Q

What to look out for in p-values related to the validity of the research study!

A

The magnitude (strength) of the p-value only reflects the confidence with which the null can be rejected (i.e. the certainty that the effect is different)

-It doesn't tell how clinically significant the findings are

10
Q

What is a Type I and Type II error?

A

Type 1 (alpha) error: we claim a drug works but it actually doesn’t -> the findings were due to chance, and are not different from placebo -> NULL
FALSE POSITIVE

Type 2 (beta) error: we claim the drug does not work, but it actually works -> we falsely accepted the H0 (FALSE NEGATIVE)
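The Type I error rate can be illustrated by simulation: when the drug truly has no effect (H0 true), about alpha = 5% of experiments still come out "significant". A sketch using a simple z-test with known SD (a textbook simplification; real trials would use a t-test):

```python
import math
import random

random.seed(1)

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value of a one-sample z-test with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Simulate many trials in which H0 is TRUE (true effect = 0):
# every "significant" result is a false positive.
trials = 2000
false_positives = sum(
    z_test_p([random.gauss(0, 1) for _ in range(30)]) < 0.05
    for _ in range(trials)
)
print(f"false-positive rate: {false_positives / trials:.3f}")  # ≈ 0.05
```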

11
Q

What are the clinical consequences of Type I and Type II errors?

A

Type I: the patient uses a drug that is believed to work, but it doesn't -> the patient is exposed to the risk of the disease w/o appropriate treatment, and to potential toxicities of the drug
-> The FDA requires multiple studies to prove a drug's efficacy

Type II: the drug (e.g. w/ p=0.55) is abandoned, but it actually works

12
Q

When is it likely to commit a Type II error?

Possible Exam Question: Out of the p-values choose the one that is most likely associated with a Type II error

A

-With p-values slightly above 0.05

13
Q

Type II error

A

E.g.: the alpha is set to 0.05 and the beta is set to 0.2 -> the findings have a p-value of 0.1
we accept the H0 (refuse the drug)
-> At this position we could have committed a Type II error
-> and we willingly accept the risk that we have committed a possible Type II error

14
Q

What is the POWER of a study?

A

-The ability of a study to detect a difference when there TRULY is a difference
->so to NOT refuse the drug falsely (prevent Type II error)

1 - beta = POWER
100% - 20% (beta) = 80% POWER
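The power = 1 - beta idea can be sketched with the standard power approximation for a two-sided z-test (the effect size, SD, and sample sizes below are hypothetical):

```python
import math

def normal_cdf(x: float) -> float:
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(effect: float, sigma: float, n: int) -> float:
    """Approximate power of a two-sided one-sample z-test at alpha = 0.05.
    power = P(reject H0 | the effect is real) = 1 - beta."""
    z_crit = 1.96  # critical value fixed for alpha = 0.05
    shift = effect * math.sqrt(n) / sigma
    return normal_cdf(shift - z_crit) + normal_cdf(-shift - z_crit)

# With effect 0.5 and sigma 1, roughly n = 32 reaches the usual 80% target
for n in (8, 16, 32, 64):
    print(f"n={n:3d}: power = {power(0.5, 1.0, n):.2f}")
```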

15
Q

When is it appropriate to assess the power of a study?

A

-Only when the P-value of a finding is bigger than 0.05

-if the p-value is 0.05 or less we have detected the difference already

16
Q

What is the minimum required POWER of a study?

A

Maximum allowable beta = 20% -> so the minimum POWER is 80%

17
Q

What are the factors affecting the statistical Power?

A

-Alpha error (significance criteria)
-Variance of the data
-Sample Size
-Effect Size

18
Q

How is the Alpha Error related to the statistical Power?

A

The lower (tighter) the alpha, the lower my statistical power (the more likely I am to make a beta error)

19
Q

How is Variance related to the statistical Power?

A

If the confidence intervals of the drug group and the placebo group overlap, we can't declare a difference -> because a data point of the drug group could coincide with a data point of the placebo group

As the variance is reduced, the ability to detect a difference is enhanced -> POWER increases

20
Q

How is the Effect size related to the statistical Power?

A

The bigger the effect size, the easier it is to detect the difference from placebo -> the more POWER

21
Q

How is Power related to sample size?

A

The greater the sample size the greater the ability to pick up the TRUE difference

-greater sample size -> greater POWER
-> because a great sample size shrinks the Confidence Interval and reduces the likelihood of crossing 0 or 1
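Why a bigger sample shrinks the CI: the 95% half-width is 1.96 × SD / √n, so quadrupling n halves the interval. A quick sketch (the SD below is hypothetical):

```python
import math

sd = 10.0  # hypothetical standard deviation of the outcome
for n in (25, 100, 400):
    half_width = 1.96 * sd / math.sqrt(n)
    print(f"n={n:3d}: 95% CI = mean ± {half_width:.2f}")
# n=25 -> ±3.92, n=100 -> ±1.96, n=400 -> ±0.98
```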

22
Q

How is the Effect size related to sample size?

A

As the effect size rises, fewer patients need to be enrolled in the study (a smaller sample size is needed)