Articles 2 Flashcards

1
Q

Sir Ronald Fisher (1890-1962) introduced the terms 'null hypothesis' and 'significance', urged the systematic distinction between sample and population, suggested 0.05 as an arbitrary but convenient level for judging a result significant, and proposed many techniques, including the analysis of variance.

A

OK.

2
Q

Sometimes people say we only need to use probabilities in situations where we are ignorant; if we could know enough to make exact predictions, we would not need to talk in terms of probabilities. Popper argued that such subjective interpretations of probability did not apply in science. According to the objective interpretation of probability, probabilities exist in the world, independent of our states of knowledge. They are discovered by examining the world, not by reflecting on what we know or how much we believe.

A

OK.

3
Q

Von Mises’s long-run relative frequency interpretation of probability says that the probability of a head coming up is the proportion of times the coin produces heads in a hypothetical infinite number of tosses. Because the long-run relative frequency is a property of all the events in the collective, the probability of the next toss being a head applies to the collective, not to any single event.

A

OK.

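The long-run relative frequency idea can be illustrated with a small simulation (a minimal sketch only: von Mises's collective is a hypothetical *infinite* sequence of tosses, so a finite run can merely suggest the limit). The running proportion of heads wanders early on and settles near the probability, which belongs to the collective as a whole, not to any single toss.

```python
import numpy as np

rng = np.random.default_rng(1)
tosses = rng.integers(0, 2, size=100_000)   # 1 = heads, 0 = tails

# Running proportion of heads after each toss.
running_prop = np.cumsum(tosses) / np.arange(1, tosses.size + 1)

for n in (10, 100, 1_000, 100_000):
    print(f"after {n:>6} tosses: proportion of heads = {running_prop[n - 1]:.4f}")
```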
4
Q

Sometimes significance is defined simply as 'the probability of a Type I error', but this is wrong. The probability of a Type I error given that the null hypothesis is true is 0.05, but the probability of a Type I error given that we have rejected the null hypothesis is, in the example considered, 29%.

A

OK.

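A simulation makes the distinction concrete. All the numbers below are assumptions for illustration (H0 true in half of the experiments, an effect of 0.5 SD when it is false, n = 20 per group), not the figures behind the card's 29%. Alpha fixes P(reject | H0 true), but P(H0 true | rejected) depends on the base rate of true nulls and on power, so it generally differs from 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_exp, n, effect = 20_000, 20, 0.5     # all assumed, for illustration only
h0_true = rng.random(n_exp) < 0.5      # assume H0 really true in half the experiments

rejected = np.empty(n_exp, dtype=bool)
for i in range(n_exp):
    mu = 0.0 if h0_true[i] else effect
    group_a = rng.normal(0.0, 1.0, n)
    group_b = rng.normal(mu, 1.0, n)
    rejected[i] = stats.ttest_ind(group_a, group_b).pvalue < 0.05

print("P(reject | H0 true) =", round(rejected[h0_true].mean(), 3))   # ~0.05 by design
print("P(H0 true | reject) =", round(h0_true[rejected].mean(), 3))   # generally NOT 0.05
```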
5
Q

Power is defined as …

A

the probability of detecting an effect given that the effect really exists in the population.

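Power can be estimated directly by simulation: generate data in which the effect really exists, test many times, and count how often the test detects it. The true effect (0.5 SD) and n = 30 per group are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, effect, alpha, reps = 30, 0.5, 0.05, 10_000   # assumed design and true effect

# Fraction of simulated experiments in which the real effect is detected.
hits = sum(
    stats.ttest_ind(rng.normal(0.0, 1.0, n),
                    rng.normal(effect, 1.0, n)).pvalue < alpha
    for _ in range(reps)
)
print(f"estimated power: {hits / reps:.2f}")   # about 0.48 under these assumptions
```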
6
Q

In order to control β (the Type II error rate), you need to … (two things!)

A

(1) Estimate the size of the effect (e.g. the mean difference) you think is interesting, given that your theory is true.

(2) Estimate the amount of noise your data will have.

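Those two estimates are exactly the inputs a sample-size calculation needs. A minimal sketch using the standard normal-approximation formula for a two-group comparison; the interesting effect (5 units) and the noise (SD of 10) are assumed values.

```python
from math import ceil
from scipy.stats import norm

delta = 5.0     # (1) smallest mean difference you find interesting (assumed)
sigma = 10.0    # (2) expected noise, as a standard deviation (assumed)
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n_per_group = ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)
print(n_per_group)   # 63 per group under these assumptions
```

The formula also makes the stakes explicit: halve the interesting effect size and the required n quadruples.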
7
Q

You can estimate the amount of noise in your data by looking at past similar studies, or by running a pilot study.

A

OK.

8
Q

Most studies do not systematically use power calculations to determine the number of participants, but they should. Ignoring the systematic control of Type II errors leads to inappropriate judgments about what results mean.

A

OK.

9
Q

What is very important when you are going to replicate a study?

A

You usually need more participants, because power is reduced: the replication uses a different sample, so the mean, SD, etc. will differ.

But assuming the population effect was estimated exactly by the American study, and the within-group variance was exactly as estimated by the American study, the power of replicating with the same number of subjects as the original study was about 0.67.

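The 0.67 in the card comes from the specific American study's numbers, which are not given here; the sketch below only shows the logic with placeholder values (effect of 0.7 SD, n = 25 per group). Treat the original estimates as if they were the exact population values, then simulate same-n replications.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
d_orig, n, reps = 0.7, 25, 10_000   # placeholder values, not the study's data

# Pretend the original estimates are the exact population values,
# then replicate with the same per-group n many times.
hits = sum(
    stats.ttest_ind(rng.normal(0.0, 1.0, n),
                    rng.normal(d_orig, 1.0, n)).pvalue < 0.05
    for _ in range(reps)
)
print(f"power of an exact same-n replication: {hits / reps:.2f}")   # ~0.67 here
```

Even under this generous assumption the replication can easily fail; with a genuinely different sample the power is lower still, hence the need for more participants.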
10
Q

Some reviewers will be tempted to think that the result is thrown into doubt by all the non-significant studies, but if the null hypothesis were true, one would expect as many studies showing a significant effect in one direction as in the other.

A

OK.

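The direction counts make this checkable: if H0 were true, each significant result would be equally likely to land in either direction, so the split follows a Binomial(k, 0.5). A sketch with hypothetical counts:

```python
from scipy.stats import binomtest

n_sig, n_one_direction = 9, 9   # hypothetical: 9 significant studies, all one direction
result = binomtest(n_one_direction, n_sig, p=0.5)
print(f"P(a split this lopsided | H0): {result.pvalue:.4f}")   # about 0.004
```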
11
Q

If your study has low power, a null result tells you nothing. If you set power at a high level in designing the experiment, you are entitled to accept the null hypothesis.

A

OK.

12
Q

You can deduce the probability of the experimental hypothesis being true.

A

THIS IS A DESCRIPTION OF POWER, NOT OF THE P-VALUE OR OF THE TRUTH OF H0!

13
Q

Sensitivity can be determined in three ways:

A

power, confidence intervals, and finding an effect significantly different from a reference effect.

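A sketch of the confidence-interval route to sensitivity: a non-significant result with a narrow interval around 0 rules out interesting effect sizes, while a wide interval means the study was insensitive. The difference scores below are made up.

```python
import numpy as np
from scipy import stats

diffs = np.array([1.2, -0.8, 0.3, 2.1, -1.5, 0.9, -0.2, 1.1])  # hypothetical data
n = diffs.size
mean = diffs.mean()
se = diffs.std(ddof=1) / np.sqrt(n)

t_crit = stats.t.ppf(0.975, df=n - 1)
print(f"95% CI for the mean difference: [{mean - t_crit * se:.2f}, {mean + t_crit * se:.2f}]")
# If only effects of at least 1.0 were interesting, this interval (about
# [-0.60, 1.37]) still includes them, so this null result is insensitive.
```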
14
Q

A stopping rule is a condition under which you will stop collecting data for a study. Running until you have a significant result is not a good stopping rule because you are guaranteed to obtain a significant result eventually.

A

OK.

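A sketch of why this stopping rule fails: with H0 true, test after each batch of 10 participants and stop at the first p < .05. The batch size and the cap of 500 are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(14)
reps, batch, max_n = 2_000, 10, 500   # arbitrary illustration values

false_positives = 0
for _ in range(reps):
    data = rng.normal(0.0, 1.0, batch)          # H0 is true: the mean really is 0
    while True:
        if stats.ttest_1samp(data, 0.0).pvalue < 0.05:
            false_positives += 1                # stopped early with a "significant" result
            break
        if data.size >= max_n:
            break                               # gave up at the cap
        data = np.append(data, rng.normal(0.0, 1.0, batch))

print(f"false-positive rate with this stopping rule: {false_positives / reps:.2f}")
# Far above the nominal 0.05, and it keeps climbing as the cap is raised.
```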
15
Q

A researcher might mainly want to look at one particular comparison, but throw in some other conditions out of curiosity. If she planned one particular comparison in advance, she can test it at the 0.05 level, but the other tests must involve a correction, such as Bonferroni.

A

OK.

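A minimal numeric sketch of the rule in the card, with hypothetical p-values: the planned contrast is tested at .05, and each of the m unplanned comparisons is held to .05 / m (Bonferroni).

```python
alpha, m = 0.05, 3                 # m unplanned comparisons (hypothetical)
p_planned = 0.03                   # planned in advance: tested at alpha
p_unplanned = [0.01, 0.04, 0.30]   # thrown in out of curiosity: tested at alpha / m

print("planned comparison significant:", p_planned < alpha)             # True
for p in p_unplanned:
    print(f"p = {p}: significant after Bonferroni: {p < alpha / m}")    # only 0.01 survives
```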
16
Q

The Neyman-Pearson approach viewed significance as a property of the testing procedure, not of the sample, whereas Fisher wanted a smaller p-value to indicate stronger evidence against the null hypothesis.

A

OK.

17
Q

A frequentist notion of probability does not follow from the p-value, because the relevant probabilities are the long-run error rates you decide are acceptable. If you obtain a p of 0.009 and report it as p < 0.01, that is misleading.

A

OK.

18
Q

The null hypothesis is never proved or established, but is possibly disproved in the course of experimentation.

A

Important!

19
Q

Decision rules are laid down before data are collected: the significance level is decided in advance, and so is the sample size. A result is significant or not, full stop. If you run enough participants, a difference of 0.001 ms between two conditions could be detected, so a p-value of 0.10 does not mean the effect was smallish, despite what papers sometimes say.

A

OK.
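The 0.001 ms claim follows from how required sample size scales with the effect: n grows as 1/delta², so any nonzero difference becomes detectable with enough participants. A sketch using the same normal-approximation formula as in the earlier sample-size example; the noise SD of 100 ms is an assumption.

```python
from math import ceil
from scipy.stats import norm

sigma = 100.0                            # assumed noise (SD) in ms
z = norm.ppf(0.975) + norm.ppf(0.80)     # alpha = .05 two-sided, power = .80

for delta_ms in (10.0, 1.0, 0.001):
    n = ceil(2 * ((z * sigma) / delta_ms) ** 2)
    print(f"to detect {delta_ms} ms: about {n:,} participants per group")
# A 0.001 ms difference needs ~157 billion per group: with enough participants
# any nonzero difference is "significant", so p does not index effect size.
```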
