Articles Flashcards

1
Q

Baron (2020) ch 3 in Sternberg & Halpern (2020).pdf:

people ignore important psychological findings, such as that children need to be taught how to write English, and that computers can make much better decisions than people

A

OK

2
Q

what kinds of studies are hit especially hard by the replication crisis?

A

priming studies (moral disgust and cleanliness; people primed with words related to old age walked more slowly) and studies of ego depletion

3
Q

studies of ego depletion =

A

idea: doing mentally tiring tasks reduces mental resources and causes a lack of self-control, so that subjects are more impulsive.

In another example, initial studies found that preventing subjects from thinking about a complex decision task resulted in better decisions than if the subjects were allowed to think, an "unconscious thought advantage."

4
Q

what do Baron et al. say is the tricky thing about psychology?

A

we study ourselves: as a result of being human, and existing in a social context of other humans, we already know a lot about why we do things, how we feel, and what we believe. Although it is possible to find errors in our understanding of our own psychology, most of it is pretty accurate.

but… science is focused on teaching us things that we did not already know, and surprising results tend to be false.

+ small samples -> low power

5
Q

what do we know by means of NHST? (formula)

A

p(D|H0)

6
Q

and what do we want to know? (formula)

A

p(H1|D)

7
Q

what do we need to know to work out p(H1|D)?

A

p(H1) and p(D|H1)
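
Written out via Bayes' rule (a standard identity, not spelled out on the card):

p(H1|D) = p(D|H1) p(H1) / p(D), where p(D) = p(D|H1) p(H1) + p(D|H0) p(H0)

so the p(D|H0) that NHST gives us is only one ingredient: we also need the prior p(H1) and the likelihood p(D|H1).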

8
Q

The hypothesis of interest is less likely to be true, given the data, if it was less likely to be true from the outset. Surprising results are less likely to be real even when the significance level (.05) is held constant.

A

OK

9
Q

small samples lead to…

A

lower power, i.e. a lower p(D|H1), and thus a lower p(H1|D)
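
A minimal numerical sketch (my own illustration, with made-up prior and power values) of why an underpowered study drags down p(H1|D) for a significant result:

# Probability that H1 is true given a significant result,
# from the prior p(H1) and the power p(significant | H1).
def p_h1_given_significant(prior, power, alpha=0.05):
    return power * prior / (power * prior + alpha * (1 - prior))

# Surprising hypothesis (prior 0.1) tested with a small, underpowered study:
print(p_h1_given_significant(prior=0.1, power=0.2))  # ~0.31
# The same hypothesis tested with a well-powered study:
print(p_h1_given_significant(prior=0.1, power=0.9))  # ~0.67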

10
Q

(1) these fields put a premium on surprising results, and (2) experiments use small samples with high variability.

A

OK

11
Q

p-hacking

A

A second reason has to do with the behavior of researchers. Notice that the criterion for statistical significance is p(D|H0). If we look at all the studies with that number (often called just p) equal to .05, and in which H0 is true, we can expect 1 out of 20 to be "significant." If you keep doing the same experiment over and over, eventually you are likely to get a significant result, just by chance. The same thing happens if you test a lot of hypotheses on the same study, or test them in different ways. Thus, the significance level is distorted, because researchers do not take into account the other things they did that didn't work.
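
A quick sketch of the arithmetic behind "keep trying and something will turn up": with k independent tests at .05 and H0 true throughout, the chance of at least one "significant" result grows fast (the k values are just an illustration):

# Chance of at least one false positive among k independent tests at alpha = .05
for k in (1, 5, 10, 20):
    print(k, round(1 - 0.95 ** k, 2))  # 0.05, 0.23, 0.4, 0.64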

12
Q

file drawer effect

A

One form of statistical distortion can happen when several studies, undertaken by the same researcher or different researchers, test the same hypothesis. When one of them doesn't work, the researcher sighs and puts the data in a "file drawer", never to be published. Perhaps she tries again, a different way, or some other researcher tries, with or without knowing about the initial failure. Eventually, even if the hypothesis is false, one of these studies will probably be significant. At that point, whoever does it will exclaim "Eureka!" (or words to that effect) and send the result to a journal to be published (she hopes). The result is that the published studies may be a small sample of those that have been done, and the published p-values may not truly reflect what they are supposed to reflect. This is called the "file drawer effect."
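
A toy simulation (my own construction, not from Baron) of that selection effect: many attempts at a true-null effect, with only the "lucky" ones reaching a journal:

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

published = []
for study in range(100):        # 100 attempts to show the same (null) effect
    a = rng.normal(size=20)     # both groups drawn from the same population
    b = rng.normal(size=20)
    p = ttest_ind(a, b).pvalue
    if p < 0.05:                # non-significant attempts go in the file drawer
        published.append(p)

print(len(published), "of 100 studies published")  # expect about 5, all "significant"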

13
Q

Bible code

A

where researchers convinced themselves (and others) that the Hebrew Bible contained hidden messages if you read it from top to bottom instead of right to left (as Hebrew is written).

-> it is easy for researchers to convince themselves that the method that happened to yield a significant result was really the most obvious method to use, even if this was not immediately apparent

14
Q

cognitive dissonance

A

once scientists have started to defend a certain point, they cannot stop, even though they might not have defended it if they had never advocated it at all

15
Q

what is tricky about the relation between getting vaccinated and dying later?

A
  • causation or correlation?
    -The problem here is that the measures of the relevant aspects of health are not perfect. Even a standard full physical examination does not pick up all the relevant risk factors. When you include an imperfect measure in a regression, you fail to eliminate the effect you are trying to control for.
16
Q

what was wrong in the study of children's error rates?

A

But what is really happening is that the problems were too easy for the older children, so they made very few errors.

17
Q

what sucks about the media?

A
  • they only report surprising findings: usually wrong
  • they only give the positive results, not replications that found no effect
  • they report correlation as causation
  • news reports may emphasize agreement or disagreement, without explaining where either one comes from. Some reports (e.g., about climate change) emphasize a "consensus of scientists," without telling us how this consensus was achieved.
18
Q

actively open-minded thinking (AOT)

A

thinkers must not only be open to challenges that come their way but must also seek such challenges actively, by thinking of alternative possible conclusions, by looking for reasons that favor the alternatives or impugn their favored conclusions, and by asking questions about ultimate criteria for evaluating conclusions.

19
Q

what bias do you prevent by means of AOT?

A

myside bias

20
Q

first step of AOT

A

searching: possibilities, evidence and goals

21
Q

3 general properties of AOT

A
  • search must be sufficient
  • search and inference must be fair
  • confidence should be responsive to the thinking done so far (lots of thinking, good evidence and inference = high confidence)
22
Q

Karl Popper

A

· Confirmations can be obtained for almost any theory, if one looks for them.
· Valid confirmation is only confirmation of a forecast that involves risk.
· A good scientific theory is one that forbids the occurrence of certain things.
· A theory that no event can refute is not scientific.
· A real test of a theory is an attempt to disprove or contradict it.
· Confirmatory evidence is only that obtained in an unsuccessful attempt to disprove the theory.
· Adding an ad-hoc reinterpretation to a theory in order to succeed in confirming it destroys its scientific status.

23
Q

African traditional thought…

A

had no method for setting itself straight when it was wrong (which science does have)

24
Q

Karl Popper (1962), an influential philosopher of science, argued that theories are never "confirmed." Rather, successful theories are those that survive repeated attempts to disconfirm them.

A

OK

25
Q

other explanations are always possible!!!!

A

OK

26
Q

Five years after he co-authored a paper showing that Democratic candidates could get more votes by moving slightly to the right on economic policy, Gelman discovered that he had got the sign wrong on one of the variables.

A

OK

27
Q

Gelman traces his error back to the fallibility of the human brain. He says that the results seemed perfectly reasonable, so he missed the error.

A

OK

28
Q

Even an honest person is a master of self-deception. This talent makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept ‘reasonable’ outcomes without question.

A

OK

29
Q

Failure to understand our own biases has led to a crisis of confidence about the reproducibility of published results, says statistician John Ioannidis.

A

OK

30
Q

Although it is impossible to document how often researchers fool themselves in data analysis, the study of 100 psychology papers shows that a large proportion of the problems can be explained only by unconscious biases.

A

OK

31
Q

When crises like this issue of reproducibility come along, scientists respond by upgrading their tools, for example by implementing the double-blind standard.

A

OK

32
Q

Researchers are trying a variety of creative ways to debias data analysis, including collaborating with academic rivals and getting papers accepted before the study has even been conducted.

A

OK

33
Q

what are 2 reasons for concern about cognitive bias?

A
  • The human brain and its cognitive biases have been the same for as long as we have been doing science, but today's academic environment is more competitive than ever, so researchers are motivated to produce statistically significant results.
  • Another reason for concern about cognitive bias is the advent of staggeringly large multivariate data sets, where our intuitions are even worse, says Keith Baggerly. Andrew King says that researchers are using point-and-click data-analysis software to sift through massive data sets without fully understanding the methods, and are finding small p-values that may not actually mean anything.
34
Q

hypothesis myopia=

A

Researchers often focus on collecting evidence to support one hypothesis, neglecting to look for evidence against it, and failing to consider other explanations. This is called hypothesis myopia, and can lead to missing the real story entirely.

35
Q

example of hypothesis myopia

A

In 1999, a woman was found guilty of murdering two of her sons; statistical evidence that a double SIDS death was extremely unlikely was a factor in her conviction. However, considering just one hypothesis leaves out an important part of the story: mathematician Ray Hill estimated that a double SIDS death would occur in roughly 1 out of 297,000 families, whereas two children being murdered by a parent would occur in roughly 1 out of 2.7 million families.
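
Making the comparison explicit with the card's own figures: given two unexplained infant deaths,

p(double SIDS) / p(double murder) ≈ (1/297,000) / (1/2,700,000) ≈ 9

so the SIDS explanation is roughly nine times more likely than the murder hypothesis the court focused on.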

36
Q

Psychologist Uri Simonsohn at the University of Pennsylvania defined p-hacking as exploiting researcher degrees of freedom until p < 0.05. A study of more than 2,000 US psychologists suggested how common p-hacking is: many researchers admitted to these practices.

A

OK

37
Q

A journalist and a documentary filmmaker gathered 18 different measurements on 15 people and used creative p-hacking to ‘prove’ that eating chocolate leads to weight loss, reduced cholesterol levels and improved well-being.

A

OK

38
Q

asymmetric attention=

A

This happens when we give expected results a relatively free pass, but rigorously check non-intuitive results.

The evidence suggests that scientists are more prone to this than one would think: in 88% of cases, scientists blamed inconsistent results on how the experiments were conducted, while consistent results were given little or no scrutiny.

39
Q

Researchers often fall prey to just-so storytelling, which is a fallacy named after the Rudyard Kipling tales that give whimsical explanations for things such as how the leopard got its spots.

A

OK

40
Q

examples of just-so storytelling

A

Researchers use creative phrases to convince readers that their non-significant results are worthy of attention, including “flirting with conventional levels of significance” and “on the very fringes of significance”.

41
Q

Open science is a philosophy that encourages researchers to share their methods, data, computer code and results, and allows them to choose to make various parts of the project subject to outside scrutiny.

A

OK

42
Q

A more radical idea is the introduction of registered reports, in which scientists present their research plans for peer review before they even do the experiment. This should reduce the unconscious temptation to warp the data analysis.

A

OK

43
Q

team of rivals =

A

When it comes to replications and controversial topics, invite your academic rivals to work with you. This will help you spot flaws in your own hypotheses and theories, since your slant is cancelled out by a similar slant favouring the other side.

Psychologist Eric-Jan Wagenmakers teamed up with another group in an attempt to replicate their research suggesting that horizontal eye movements help people to retrieve events from their memory. The results were not replicated, but the collaboration generated new testable ideas and brought the two parties slightly closer.

44
Q

blind data analysis =

A

Researchers create alternative data sets by adding random noise or a hidden offset, moving participants to different experimental groups, or hiding demographic categories.

The idea is that researchers who do not know how close they are to desired results will be less likely to find what they are unconsciously looking for.

Perlmutter says that data blinding appeals to young researchers because of the sense of suspense it gives. A recent graduate student presented all her analyses and said that she was ready to unblind if everyone agreed; when she did, the results looked great.
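
A minimal sketch of the hidden-offset variant described above (function name and numbers are my own illustration, not from the article):

import numpy as np

rng = np.random.default_rng()

def blind(outcomes):
    # Shift every outcome by a secret constant so the analysis pipeline can be
    # built and debugged without anyone seeing how close the results are to
    # the hoped-for effect; the offset is revealed only once analyses are frozen.
    secret_offset = rng.uniform(-5.0, 5.0)
    return outcomes + secret_offset, secret_offset

outcomes = np.array([1.2, 0.8, 1.5, 0.9])
blinded, offset = blind(outcomes)
# ... build and finalize the full analysis on `blinded` ...
unblinded = blinded - offset  # the "unblinding" step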

45
Q

According to Bayes’ theorem, the likelihood tells you everything you need to know about the data. The likelihood principle is controversial, as Neyman-Pearson inference violates it.

A

OK

46
Q

Men can respond in one of two ways to people telling them about their problems: They can offer a solution or they can provide empathy. Five out of five men in the sample were solvers.

A

OK

47
Q

The likelihood is the probability of obtaining the data given a hypothesis. The data are most probable for a population proportion of 1, and the hypothesis that 'the population proportion is 1' has the highest likelihood.

A

OK
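
Worked out with the five-out-of-five "solvers" sample from card 46 (θ is my notation for the candidate population proportion): the likelihood is

p(D|θ) = θ^5

which keeps increasing all the way up to θ = 1, so "the population proportion is 1" maximizes the likelihood, even though a sensible prior would make that hypothesis implausible (see the next card).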

48
Q

The likelihood of a hypothesis is not the same as the probability of the hypothesis in the light of the data. The hypothesis with the highest likelihood may not have the highest posterior probability.

A

OK

49
Q

The observed sample mean could be obtained from a population with a mean exactly the same as the sample mean, but it is also possible that the observed sample mean could be obtained from a population with a mean slightly different from the sample mean.

A

OK

50
Q

difference between likelihood and probability

A

likelihood = p(D|H)
probability = p(H|D)

51
Q

what does the fact that a hypothesis has the highest likelihood mean

A

that the data support this hypothesis the most, but the prior probability may be lower, which means that this hypothesis may not have the highest posterior probability!!!

52
Q

what is the difference between likelihood (Bayesian) and probability in NHST significance testing?

A

likelihood = height of the curve; it concerns p(D|H)

probability = area under the curve: the chance of getting data as extreme as or more extreme than those observed
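
A small illustration of height versus area (my own sketch, assuming a standard normal curve for the test statistic):

from scipy.stats import norm

z = 1.96                   # observed test statistic
height = norm.pdf(z)       # likelihood-style quantity: height of the curve at the data
tail_area = norm.sf(z)     # NHST-style quantity: area as or more extreme (one tail)
print(round(height, 3), round(tail_area, 3))  # 0.058 0.025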

53
Q

Figure 4.3 illustrates the difference between a likelihood curve and a probability curve: the likelihood is the height of the curve at the obtained data, while the significance-test probability is the area under the curve beyond the obtained data.

A

OK

54
Q

Likelihood analysis considers the data as fixed while the hypotheses vary; significance testing fixes the (null) hypothesis and considers other data that might have been obtained. Likelihoods give a continuous graded measure of support for different hypotheses, whereas significance tests give a black-and-white decision.

A

OK

55
Q

Bayesian statistics is insensitive to multiple testing, stopping rules, and timing of explanation, whereas classical statistics regards more aspects of the data than just the likelihood as relevant to inference.

A

OK
