Real and Illusory Relationships Flashcards

1
Q

Understand the factors that help us decide whether a phenomenon we see is likely produced by chance, when there is really no phenomenon out there: clarity of results, size of sample, and prior ideas

A

What are the odds?
At one in a million we believe it must be real, but if you take a million chances, a one-in-a-million outcome is almost certain to turn up somewhere, so chance explains more than we think

Clarity of image
Signal to noise, e.g. how much of the image is “signal” and how much is interference or blanks (“noise”)

Size of sample
Is this a one-off instance, or is it something we are repeatedly seeing?

Prior ideas
Our own biases influence what we see and what we believe about the results
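
A quick sketch, assuming a million independent opportunities, of why a one-in-a-million coincidence stops being surprising once enough chances are taken (numbers purely illustrative):

# Probability that a one-in-a-million event happens at least once
# across a million independent opportunities.
p_single = 1e-6        # chance on any single trial
n_trials = 1_000_000   # number of opportunities taken

p_at_least_once = 1 - (1 - p_single) ** n_trials
print(p_at_least_once)  # ~0.63, so the "miracle" is more likely than not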

2
Q

Understand the bias we have for seeing results in chance, and how this relates to our mistaken idea that chance processes almost never throw up interpretable patterns

A

We expect chance to look less patterned than it really is: chance processes throw up interpretable-looking patterns (streaks, clusters) far more often than we assume, so when we see such a pattern we are biased to treat it as a real result rather than chance
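
A minimal simulation sketch (not part of the card) of how readily pure chance produces a streaky-looking pattern, here the longest run of identical outcomes in 100 coin flips; the run length of 6 is just an illustrative threshold:

import random

def longest_run(flips):
    # length of the longest streak of identical outcomes
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(1)
runs = [longest_run([random.choice("HT") for _ in range(100)]) for _ in range(10_000)]
print(sum(r >= 6 for r in runs) / len(runs))  # roughly 0.8: most sequences contain a run of 6+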

3
Q

Be clear on what a p-value is (the chance of observing your data if there is no such effect in the population) and what it is not (it is not the truth value, i.e. not the probability that the effect is real), know the difference between these two, and connect this to the affirming-the-consequent fallacy

A

Affirming the consequent fallacy: concluding that if the consequent is true then the antecedent must be true (‘If A, then C’; ‘C, therefore A’), e.g. ‘If it rained, the street is wet; the street is wet, therefore it rained’. Reading a p-value in reverse, as the probability of the hypothesis given the data rather than the probability of the data given no effect, flips a conditional in the same way
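
A tiny numerical sketch (all numbers invented for illustration) of why a conditional probability cannot simply be read in reverse:

# 'If it rained, the street is wet' is not the same claim as
# 'the street is wet, therefore it rained'.
p_rain = 0.01                # assumed base rate of rain
p_wet_given_rain = 0.99      # rain almost always wets the street
p_wet_given_no_rain = 0.20   # sprinklers, street cleaning, etc.

p_wet = p_rain * p_wet_given_rain + (1 - p_rain) * p_wet_given_no_rain
p_rain_given_wet = p_rain * p_wet_given_rain / p_wet
print(p_rain_given_wet)      # ~0.05: the street is wet, yet rain is still unlikely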

4
Q

Be aware that (and why) you need two additional things beyond the p-value to estimate the truth value: the prior probability that the hypothesis is true, and the statistical power of the significance test

A

Prior probability: needed so you know how likely you were to be in the null-hypothesis world in the first place

Statistical power: needed so you know how likely the test is to detect a significant effect, given the world where the null hypothesis is false
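
A minimal sketch (illustrative numbers only) of how the prior probability and the power combine with the significance level to give the chance that a significant result reflects a real effect:

def prob_real_given_significant(prior, power, alpha=0.05):
    # P(effect is real | significant result), via Bayes' rule
    true_pos = prior * power
    false_pos = (1 - prior) * alpha
    return true_pos / (true_pos + false_pos)

print(prob_real_given_significant(prior=0.5, power=0.8))   # ~0.94
print(prob_real_given_significant(prior=0.1, power=0.8))   # ~0.64
print(prob_real_given_significant(prior=0.01, power=0.8))  # ~0.14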

5
Q

Know how significance testing on events with a very low probability can be misleading

A

Base rate fallacy: judging only from the low chance that a random process could create a result like this, while ignoring how rare the thing being tested for is. Bomb-sniffing dog example: the dog is wrong only 1 time in 1,000, so an alert sounds conclusive, but the chance of any given person actually carrying a bomb is far lower still, so most alerts are false alarms
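
A hedged sketch of the dog example: the 1-in-1,000 error rate comes from the card, but the base rate of bomb carriers and the dog's hit rate are assumptions made up for illustration:

p_bomb = 1e-6                   # assumed base rate of actually carrying a bomb
p_alert_given_no_bomb = 1/1000  # dog's false-alarm rate (from the card)
p_alert_given_bomb = 0.999      # assume the dog almost always catches a real bomb

p_alert = p_bomb * p_alert_given_bomb + (1 - p_bomb) * p_alert_given_no_bomb
p_bomb_given_alert = p_bomb * p_alert_given_bomb / p_alert
print(p_bomb_given_alert)       # ~0.001: even after an alert, a real bomb is very unlikely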

6
Q

How can repeated-measures testing get statistically valid results from relatively few participants?

A

Because results are obtained from the same participants repeatedly, again and again: if you have 20 participants and you test each of them a second time, you now have 40 data points
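
A minimal sketch (simulated data; numpy and scipy assumed available) of a repeated-measures comparison using only 20 participants, each measured in two conditions:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20                                      # participants
baseline = rng.normal(100, 15, n)           # each person's first measurement
followup = baseline + rng.normal(3, 5, n)   # second measurement with a small true change

# Paired (repeated-measures) t-test: 20 people contribute 40 measurements
t, p = stats.ttest_rel(followup, baseline)
print(t, p)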

7
Q

Be aware of how very high power can mean that a statistically significant result has a very small effect size in practical terms

A

This occurs because with very high power (e.g. a very large sample) even a tiny effect will come out statistically significant, so the significant result can correspond to an effect size that is negligible in practical terms
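
A minimal sketch (simulated data; numpy and scipy assumed) of a practically trivial effect that is still highly significant once the sample is enormous:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000                     # huge sample per group -> very high power
a = rng.normal(0.00, 1.0, n)
b = rng.normal(0.02, 1.0, n)      # true difference of 0.02 SD: trivial in practice

t, p = stats.ttest_ind(a, b)
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(p, d)                       # p is tiny, yet Cohen's d is only ~0.02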

8
Q

In interpreting non-significant results, be aware of the wrong interpretation, the right interpretation, and what is needed to apply the equivalence testing solution

A

The wrong interpretation of a non-significant result is that it is proof the effect does not exist

The correct interpretation is that we may simply have failed to detect the effect, for example because power was too low (absence of evidence is not evidence of absence)

To apply the equivalence testing solution, as sketched below, you need to specify the smallest effect size of practical interest (the equivalence bounds) and show that the observed effect falls significantly inside those bounds
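
A minimal sketch of the two one-sided tests (TOST) approach to equivalence, using scipy on simulated data; the ±0.3 bounds are an arbitrary illustrative choice of smallest effect of interest:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 200)
b = rng.normal(0.0, 1.0, 200)   # simulated groups with no true difference

low, high = -0.3, 0.3           # equivalence bounds: smallest difference we care about
diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
df = len(a) + len(b) - 2

# Two one-sided tests: is the difference reliably above 'low' AND below 'high'?
p_lower = 1 - stats.t.cdf((diff - low) / se, df)
p_upper = stats.t.cdf((diff - high) / se, df)
print(max(p_lower, p_upper))    # small value -> the groups are statistically equivalent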

9
Q

Know how the prosecutor’s fallacy relates, both to errors from incomplete 2 x 2 information and to the correct interpretation of p-values

A

Prosecutor’s fallacy: a logical error involving conditional probabilities (a measure of the chance of X when Y has happened, with Y being the thing that modifies the chance). It occurs when the probability of innocence, given the evidence, is wrongly assumed to equal the infinitesimally small probability that the evidence would occur if the defendant were innocent, e.g. the courtroom fingerprint example.

This is why you need a complete 2 x 2 contingency table, so the conditional probability can be worked out in the correct direction; the same reversal error appears when a p-value (the probability of the data given no effect) is misread as the probability that there is no effect given the data. A worked sketch follows.
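
A sketch of the fingerprint example as complete 2 x 2 information, with invented numbers: a city of 1,000,000 people, exactly one true source of the print, and a matching test that is wrong 1 time in 100,000:

population = 1_000_000
p_match_if_innocent = 1 / 100_000   # assumed false-match rate of the fingerprint test

guilty_matches = 1                                          # the true source matches
innocent_matches = (population - 1) * p_match_if_innocent   # ~10 coincidental matches

# Completing the table: of everyone who matches, how many are innocent?
p_innocent_given_match = innocent_matches / (innocent_matches + guilty_matches)
print(p_innocent_given_match)       # ~0.91, nothing like the 1-in-100,000 quoted in court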

10
Q

What is the basis of Ioannidis’ claim that most published results are false, and how can that claim be criticised for the assumption it rests on?

A

He assumes that, due to publication bias, most negative findings go unpublished, so the literature mainly comprises positive results; the argument is that because most studies test improbable hypotheses, most of those positive results are false positives

Criticised: the assumption is questionable. For most published findings to be false, the tested hypotheses would have to be like screening for a rare disease, i.e. so unlikely to be true that they almost never generate a true positive. Science does not generally work like this, because we can choose which hypotheses we test and tend to test reasonably plausible ones
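
A hedged sketch (illustrative numbers only; the same prior-and-power arithmetic as in card 4) of how the share of false positives among significant, publishable results depends on how plausible the tested hypotheses are:

def false_positive_share(prior, power=0.8, alpha=0.05):
    # Among significant results, what fraction are false positives?
    false_pos = (1 - prior) * alpha
    true_pos = prior * power
    return false_pos / (false_pos + true_pos)

print(false_positive_share(prior=0.01))  # ~0.86: rare-disease-style hypotheses -> mostly false
print(false_positive_share(prior=0.5))   # ~0.06: plausible hypotheses -> mostly true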
