Clinical Trials: Hypotheses, P Values, Errors Flashcards

1
Q

Clinical Trials & Chance results

A

A clinical trial attempts to tell you something about the general population. Its results are therefore only an estimate for that population: if the trial were conducted multiple times, the results would not be exactly the same

**must include this uncertainty about the tested treatment when reporting the results
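
A minimal Python sketch of this idea, using made-up numbers and numpy: the estimated treatment effect changes every time the "same" trial is re-run, and that spread is the uncertainty that has to be reported.

```python
# Minimal sketch (made-up numbers): "repeat" the same trial many times and
# observe that the estimated treatment effect varies from run to run.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 5.0            # assumed true mean difference in the population
sd, n = 10.0, 30             # assumed outcome SD and per-arm sample size

estimates = []
for _ in range(1000):        # 1000 hypothetical repeats of the same trial
    control = rng.normal(0.0, sd, n)
    treated = rng.normal(true_effect, sd, n)
    estimates.append(treated.mean() - control.mean())

print(f"average estimate: {np.mean(estimates):.2f}")          # close to the true effect
print(f"spread (SD) of estimates: {np.std(estimates):.2f}")   # the chance variation
```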

2
Q

Null hypothesis

A

The hypothesis that there is “no difference in the population” between the treatments being tested

**Rejection of Ho = one treatment is significantly different from the other
>you can never actually prove or accept that the null hypothesis is true, only reject it (or fail to reject it)

3
Q

Null hypothesis and chance

A

There is always some chance that an observed difference has arisen by chance alone, leading Ho to be rejected incorrectly. However, as the observed difference in the outcome measures becomes larger, it becomes more likely that it is not just due to chance
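
A minimal sketch with made-up summary data (scipy assumed): keeping the SD and sample size fixed, a larger observed difference is harder to explain by chance alone, so the P value shrinks.

```python
# Minimal sketch (made-up summary data): same SD and sample size, increasing
# observed difference -> smaller P value (less plausible as pure chance).
from scipy import stats

sd, n = 10.0, 30
for diff in (1.0, 3.0, 6.0):
    res = stats.ttest_ind_from_stats(mean1=diff, std1=sd, nobs1=n,
                                     mean2=0.0,  std2=sd, nobs2=n)
    print(f"observed difference = {diff}: P = {res.pvalue:.3f}")
```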

4
Q

Alternative Hypothesis

A

Alternative hypothesis is always that there is a difference between the treatments being tested

Accepted by default if the null hypothesis is rejected

5
Q

Two tailed vs. one tailed test

A

Refers to the direction(s) in which the test looks for a difference, i.e. better or worse.

Two tailed = both directions (better or worse)
One tailed = one direction only
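
A minimal sketch with made-up data: the same data analysed two-tailed vs one-tailed, using the `alternative` argument of scipy's t-test (available in scipy 1.6 and later).

```python
# Minimal sketch (made-up data): identical data, two-tailed vs one-tailed test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(0.0, 10.0, 30)
treated = rng.normal(4.0, 10.0, 30)

two_sided = stats.ttest_ind(treated, control, alternative="two-sided")
one_sided = stats.ttest_ind(treated, control, alternative="greater")

print(f"two-tailed P = {two_sided.pvalue:.3f}")  # difference in either direction
print(f"one-tailed P = {one_sided.pvalue:.3f}")  # about half, when the effect is in the hypothesised direction
```

Note how the one-tailed P value is roughly half the two-tailed one when the effect goes in the hypothesised direction, which is why a one-tailed test reaches significance more easily (see the cards below).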

6
Q

Which tail test should be used in a clinical trial?

A

A two tailed test should always be used to determine differences. This ensures that you are looking for the difference in both directions, i.e. the treatment being worse as well as better

7
Q

Why would someone use a one tailed test?

A

-Allows the researcher to get a statistically significant result with fewer animals (i.e. more power for the same sample size)

**but there are very few situations where it would be considered valid

8
Q

Hypothesis testing with statistical significance

A

Need a test of statistical significance AND a way to quantify the degree to which sampling variability may account for the results observed in a certain study

Ex. P value (standard distribution)

9
Q

P-value

A

The probability of getting the observed effect (or one more extreme) in the outcomes measured, i.e. the likelihood of seeing these results by chance alone
**if the null hypothesis is true
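
A minimal sketch with made-up data: compute a P value with scipy, then check its meaning by simulating many trials in which the null hypothesis really is true (the simulated proportion should be roughly the P value, up to simulation and approximation error).

```python
# Minimal sketch (made-up data): a P value, and its "chance alone" interpretation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control = rng.normal(0.0, 10.0, 30)
treated = rng.normal(5.0, 10.0, 30)

observed_diff = treated.mean() - control.mean()
p = stats.ttest_ind(treated, control).pvalue
print(f"observed difference = {observed_diff:.2f}, P = {p:.3f}")

# Under a true null (both arms from the same population), how often does chance
# alone produce a difference at least this large? Roughly the P value.
extreme = 0
for _ in range(10_000):
    a = rng.normal(0.0, 10.0, 30)
    b = rng.normal(0.0, 10.0, 30)
    extreme += abs(a.mean() - b.mean()) >= abs(observed_diff)
print(f"chance of a difference this large under the null ≈ {extreme / 10_000:.3f}")
```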

10
Q

Very small P-value

A

Means that it is unlikely that we could have obtained the observed results if the null hypothesis was true
**Therefore reject Ho

11
Q

Very large P-value

A

Means that there is a higher probability that we could have obtained the observed results if the null hypothesis were true

12
Q

How small of a P-value is often needed for clinical trial?

A

-Needs to be decided prior to the start of the trial
-Typically the 5% level (P=0.05) is used as the standard, BUT this is arbitrary. That leads to the question: why make it a yes/no cut-off at all? Maybe the answer is that it should be treated as a continuum

13
Q

Does significance mean the same as effect?

A

No. Just because there is no significance does not mean there is no effect, AND if significance is present, it does not indicate anything about the magnitude of the effect

**many articles misinterpret non-significant data and assume there is no effect at all
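
A minimal sketch with made-up summary data, illustrating the second point: with a very large sample, even a tiny (clinically trivial) difference can come out "statistically significant".

```python
# Minimal sketch (made-up summary data): significance says nothing about magnitude.
from scipy import stats

res = stats.ttest_ind_from_stats(mean1=0.5, std1=10.0, nobs1=20_000,
                                 mean2=0.0, std2=10.0, nobs2=20_000)
print(f"observed difference = 0.5, P = {res.pvalue:.6f}")  # significant, yet a very small effect
```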

14
Q

Comparing studies that are called statistically significant and statistically non-significant

A

A difference in significance does not necessarily mean the studies/results contradict each other.
>the observed effect in two separate papers can be exactly the same BUT one can be significant and the other non-significant (for example, simply because of different sample sizes)
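
A minimal sketch with made-up summary data: the identical observed effect is non-significant in a small study but significant in a large one.

```python
# Minimal sketch (made-up summary data): same observed effect, different verdicts.
from scipy import stats

effect, sd = 4.0, 10.0           # same observed mean difference and SD in both studies
for n in (15, 150):              # per-arm sample size: small study vs large study
    res = stats.ttest_ind_from_stats(mean1=effect, std1=sd, nobs1=n,
                                     mean2=0.0,    std2=sd, nobs2=n)
    print(f"n per arm = {n:3d}: observed difference = {effect}, P = {res.pvalue:.4f}")
```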

15
Q

Types of errors in clinical trial data

A

Type I (alpha)

Type II (beta)

16
Q

Type I error

A

FALSE CLAIM
-occurs if a study finds a treatment difference when in fact there is no difference
>alpha is the level of P value (i.e. the risk of a false claim) that one is willing to accept

ex. alpha = 5% (P<0.05): 5% of the time we will declare a difference when one does not actually exist, i.e. we accept a 5% risk that a declared difference is due to random chance alone
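
A minimal sketch: simulate trials where the null hypothesis is true; with alpha = 0.05, about 5% of them still "find" a difference (Type I errors). Numbers and sample sizes are made up.

```python
# Minimal sketch: Type I error rate under a true null with alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, n_trials = 0.05, 2000

false_claims = 0
for _ in range(n_trials):
    control = rng.normal(0.0, 10.0, 30)   # both arms drawn from the same population
    treated = rng.normal(0.0, 10.0, 30)   # i.e. no true treatment difference
    if stats.ttest_ind(treated, control).pvalue < alpha:
        false_claims += 1

print(f"Type I error rate ≈ {false_claims / n_trials:.3f}")   # close to 0.05
```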

17
Q

Type II error

A

MISSED OPPORTUNITY
-occurs if a study fails to find a treatment difference when in fact there is a difference
>less likely to occur in larger trials
>the beta level is usually set to 20% (0.20), i.e. 20% of the time the trial will be unable to detect a true difference

18
Q

Type II errors and power

A

-A Type II error can only occur if no significant difference is detected between treatments and controls

Question: if there is no statistically significant difference, did the trial have enough power to detect a difference if one truly exists?

Power = 1 - beta
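
A minimal sketch with made-up design values: estimate power by simulating trials in which a true difference exists and counting how often it is detected; the undetected fraction is beta.

```python
# Minimal sketch (made-up design values): power = 1 - beta, estimated by simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
alpha = 0.05
true_effect, sd, n = 5.0, 10.0, 64    # assumed effect, SD, and per-arm sample size

n_trials, detected = 2000, 0
for _ in range(n_trials):
    control = rng.normal(0.0, sd, n)
    treated = rng.normal(true_effect, sd, n)
    if stats.ttest_ind(treated, control).pvalue < alpha:
        detected += 1

power = detected / n_trials
print(f"estimated power ≈ {power:.2f}  (beta ≈ {1 - power:.2f})")
```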