Week 5 Flashcards

1
Q

Conditional probability

A

the probability of one event occurring given that another condition is true

2
Q

P(a|b)

A

probability of a given b

3
Q

P(b|a)

A

probability of b given a

4
Q

prior probability

A

the probability of a hypothesis before the evidence or outcome is observed

5
Q

posterior probability

A

the probability of a hypothesis, updated after evidence is collected

6
Q

sensitivity

A

the probability of a positive test result given that the condition is present (the true positive rate)

7
Q

specificity

A

the probability of a negative test result given that the condition is absent (the true negative rate)

8
Q

Fallacy of transposed conditional

A

flipping the conditions in a probability statement
we have an intuitive but incorrect tendency to think we know P(B|A) when we have actually been presented with the other conditional probability, P(A|B)

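The fallacy can be made concrete with a small worked example using Bayes' rule and the sensitivity/specificity terms from the earlier cards. The numbers below (sensitivity, specificity, prevalence) are made up purely for illustration:

```python
# Bayes' rule: P(disease | positive) =
#   P(positive | disease) * P(disease) / P(positive)
# Illustrative, made-up numbers for a diagnostic test.
sensitivity = 0.99   # P(positive | disease): true positive rate
specificity = 0.95   # P(negative | no disease): true negative rate
prior = 0.001        # P(disease): prevalence before testing

# Total probability of a positive result (true positives + false positives)
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)

posterior = sensitivity * prior / p_positive
print(f"P(positive | disease) = {sensitivity:.3f}")
print(f"P(disease | positive) = {posterior:.3f}")
```

Even though P(positive | disease) is 0.99, P(disease | positive) works out to only about 0.02 here, because the prior is so low; treating the two as interchangeable is exactly the fallacy of the transposed conditional.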
9
Q

low prior probability

A

tends to lead to a low posterior probability

10
Q

high prior probability

A

tends to lead to a high posterior probability

11
Q

p-value

A

conditional probability
probability of an event (observing data like ours) given a condition (null hypothesis is true)

12
Q

not all hypotheses

A

are equally plausible

13
Q

to evaluate plausibility

A

consider whether there is a strong theoretical basis for the hypothesis, and whether there are plausible mechanisms by which the hypothesis could be true

14
Q

what do we need to consider when determining the likelihood of b?

A

the prior probability of b being true

15
Q

when is a statistically significant outcome more convincing?

A

If the prior probability of the hypothesis is high rather than low

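This card can be illustrated numerically. In the sketch below, the false positive rate (alpha) and power are assumed values, not figures from the cards; the point is that the same significant result yields very different posteriors under a high versus a low prior:

```python
# How convincing is a significant result? Posterior probability that the
# hypothesis is true, given significance, under a high vs a low prior.
# alpha and power are illustrative assumptions.
alpha = 0.05   # P(significant | hypothesis false)
power = 0.80   # P(significant | hypothesis true)

def posterior_given_significant(prior):
    # Bayes' rule over the two ways a significant result can arise
    return power * prior / (power * prior + alpha * (1 - prior))

high = posterior_given_significant(0.50)  # plausible hypothesis
low = posterior_given_significant(0.01)   # implausible hypothesis
print(f"prior 0.50 -> posterior {high:.2f}")
print(f"prior 0.01 -> posterior {low:.2f}")
```

With these assumed numbers, the plausible hypothesis ends up around 0.94 while the implausible one stays below 0.15, echoing the Bem 2011 card that follows: an implausible hypothesis remains implausible even after p < .05.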
16
Q

Bem 2011

A

claimed to find evidence that information can travel back in time to affect our cognition and emotion

An implausible hypothesis with no plausible mechanism by which it could work, and the effects have been difficult to replicate

17
Q

what shouldn’t p-values be the only thing used for?

A

evidence as to whether to accept or reject a hypothesis

18
Q

sniff test

A

is the effect strong?
does it generalise to other populations? other situations?
do other researchers find these results?

19
Q

what could be alternative explanations?

A

experimental error
inappropriate methods or analysis techniques

20
Q

Straightforward replication

A

An identical copy of a previous study, carried out to see if the study’s findings are valid and reliable

21
Q

Different participant population replication

A

Do these findings extend to other populations?

22
Q

Replications with different experimenters

A

This leads to less bias from preconceived ideas that experimenters may hold

23
Q

Previous studies may have been carried out a long time ago

A

replicate to see if results are still relevant

24
Q

“good science”

A

findings that prove accurate and reliable when replicated in a separate study

25
Q

Psychology and the complexity of human behaviour make replications hard

A

large variability

26
2015
replications of 100 psychology studies, only 39 were replicated
27
questionable research practises (QRP)
Due to publication bias pressure to publish significant findings
28
Simmons et al
Research question: does listening to children songs induce age contrast making people feel older? 2 songs were compared People fell older after listening to the child song compared to the control p=.033 showing an age contrast effect However, the actual study had three conditions/songs, older age contrast Multiple questions asked, 10 in total None of this was mentioned in the report
29
issues with QRP
Make replications difficult False positives are far more likely leading to pointless further studies and in effective treatment or policies The field of psychology and it's credibilities questioned
30
4 common QRPs 'degrees of freedom'
1. observing among DV 66.5% 2. choosing sample size 3. using covariates 4. reporting subsets of experimental conditions
31
why use QRPs?
Pressure to publish significant findings Ambiguity about decisions such as the exclusion of outliers Confirmation bias such as preset ideas
32
Lindsay 2015 troubling trio to help identify QRPs
1. low statistical power 2. surprising results 3. p value only slightly less than .05
33
tackling QRPs
pre-registration of studies - writing down your hypothesis, study design, and analysis the plan before you collect any data replication of ones one findings - reliability open access data - allowing other researchers access to everything allows them to reproduce analyses, check analyses and conduct additional analyses reduce publication bias against null findings registered reports - methods and analysis is peer-reviewed before study is conducted
34
individual differences to measure and control
personality experience physiological state psychological state genetics
35
enviromental influences to control
temperature context lighting time of day social influence
36
too much control
lowers ecological validity - findings don't appear in real life
37
questionable research practises
there are lots of decisions to make as a researcher about how best to conduct studies and analyse our data the flexibility leaves room for QRPs to appear, we can make little changes to our study or analysis to make data 'neater'
38
Common QRPs
collecting data from more participants and then checking the results again cherry-picking particular comparisons that "work" and discarding those that don't adding or removing 'covariates' running lots of studies and only reporting those that "work" deciding which "outliers" to discard after looking at the data
39
p-hacking
exploiting, perhaps unconsciously, researcher degrees of freedom until p<.05
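Why exploiting researcher degrees of freedom inflates false positives can be shown with a short simulation. Under a true null hypothesis a p-value is uniformly distributed, so each extra dependent variable a researcher is free to "check" adds another 5% chance of a spurious hit; the numbers of DVs and trials below are arbitrary illustrative choices:

```python
import random

# Under a true null, each test has a 5% false positive chance at alpha = .05.
# If a researcher checks several DVs and reports whichever "works",
# the real false positive rate is much higher than 5%.
random.seed(1)

def false_positive_rate(n_dvs, trials=100_000, alpha=0.05):
    hits = 0
    for _ in range(trials):
        # simulate one study: one null p-value per dependent variable
        p_values = [random.random() for _ in range(n_dvs)]
        if min(p_values) < alpha:  # report the best-looking DV
            hits += 1
    return hits / trials

print(f"1 DV : {false_positive_rate(1):.3f}")   # ~0.05
print(f"4 DVs: {false_positive_rate(4):.3f}")   # ~1 - 0.95**4, about 0.19
```

This sketch assumes the DVs are independent; with correlated DVs the inflation is smaller but still present, which is why reporting only the comparisons that "work" is so misleading.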
40
Q

open science

A

open data
open source
open educational resources
open methodology
open access
open peer review