L5 Flashcards

1
Q

Are p values good at estimating magnitude?

A

No, only significance

2
Q

In psychology, when should we reject the null hypothesis?

A

p < .05

3
Q

Name the 4 possible outcomes of a test using signal detection theory

A

Hit

False Alarm

Miss

Correct Rejection
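The four outcomes correspond to (reality, decision) pairs; a minimal Python sketch (the outcome names are from the card, the dictionary itself is just an illustration):

```python
# The four signal-detection outcomes, keyed by
# (effect really exists?, did we detect an effect?):
outcomes = {
    (True,  True):  "Hit",               # effect exists, we detect it
    (True,  False): "Miss",              # effect exists, we fail to detect it (Type 2)
    (False, True):  "False Alarm",       # no effect, we 'detect' one anyway (Type 1)
    (False, False): "Correct Rejection", # no effect, we correctly find none
}

print(outcomes[(False, True)])  # → False Alarm
```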

4
Q

What is it called when you want to decrease the number of type 1 errors in your experiment?

A

Conservatism

5
Q

How might you make your experiment more conservative?

A

Lower the response criterion below p < .05 (e.g. require p < .01 instead)

6
Q

Why might you not want to make an experiment more conservative?

A

Because you increase the number of ‘miss’ (Type 2) errors, and the tradeoff seems to be non-linear

(You are much more likely to get Type 2 errors if you are more conservative)

Seemingly small shifts in our response criterion can decrease the sensitivity of our experiment, because we don’t know how often genuine effects occur in reality.

7
Q

What is the difference between the null hypothesis and the research (alternative) hypothesis?

A
The null hypothesis says there is no effect (any difference in our sample is due to chance); the research (alternative) hypothesis says there is a genuine effect (our sample comes from the alternative distribution).
8
Q

When we run a significance test, we are measuring how likely it is that our effect has come from the ‘chance distribution’ instead of the alternative distribution.

If we get a mean of 3 in our sample data below, how do we determine if it is from the chance or the alternative distribution?

A
We can’t know for certain from the sample alone; we calculate how likely a mean at least that extreme would be if the sample came from the chance distribution (the p-value), and if that probability is below .05 we reject the chance (null) explanation.
9
Q

Does the p-value tell us the

probability of the data given the hypothesis

or

the probability of the hypothesis given the data?

A

probability of the data given the hypothesis

10
Q

What is the 5% cutoff (.05) we use to reject the null hypothesis called?

A
The alpha level (α), also called the significance level.
11
Q

The p-value is defined as:

A

The probability of getting an effect at least as big as the one we got if the null hypothesis is true

12
Q

With an alpha of .05, how often would we get a Type 1 error?

A

5%

13
Q

Do significance tests tell us the probability that the effect exists in the wild?

What are these tests called?

A

No

This is why they are called inferential tests

We are inferring that our small sample of observations is representative of the population

14
Q

Significance tests give us the probability of the data, given the null hypothesis that our sample came from a ‘chance distribution’

True or False

A

True

15
Q

p-values are a good measure of the magnitude of the effect

True or False

A

False

They can only tell us if the effect is significant.

16
Q

If there is no effect, what will the distribution of p-values look like?

A

Spread out evenly (a flat, uniform distribution between 0 and 1)
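The flat pattern can be checked by simulation; a sketch assuming Python with numpy and scipy (not part of the lecture): run many experiments where the null is true and look at the spread of p-values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate many experiments in which the null is TRUE:
# both groups are drawn from the same normal distribution.
p_values = []
for _ in range(5000):
    a = rng.normal(0, 1, size=30)
    b = rng.normal(0, 1, size=30)
    p_values.append(stats.ttest_ind(a, b).pvalue)

p_values = np.array(p_values)

# Under a true null, p-values are uniform on [0, 1]:
# roughly 10% of them fall into each tenth of the range.
counts, _ = np.histogram(p_values, bins=10, range=(0, 1))
print(counts / len(p_values))  # each entry close to 0.10
```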

17
Q

How might we check whether researchers have manipulated their results to get just under the p < .05 significance cutoff?

What is this called?

A

Analyse the distribution of p-values across the results; if there is a spike just under .05, the results have likely been manipulated

This is called a p-curve analysis

(In the example graph, the red line is a sign of manipulation. The top graph shows no difference between the groups (the null is true); the bottom shows a genuine difference, with more p-values trending towards 0)

18
Q

What is an effect size?

A

A measure of how big a difference is or how strong a relationship is

19
Q

Is it better to use effect sizes than p-values?

A

Yes; a p-value can tell you whether there is a significant difference, but it can’t tell you the magnitude (size) of the effect

20
Q

What is a raw effect size dependent on?

A

The scale or measure being used in the experiment.

21
Q

What are the disadvantages of using the raw effect size?

A
  1. You need to be familiar with the DV and its scale to make sense of the size of the effect, and whether it is big compared to other experiments in the field
  2. When we look across studies, it is difficult to estimate the contribution of a particular variable because we are comparing apples and oranges
    * In order to compare across studies, we need to compare on the same scale
22
Q

Why are standardised effect sizes useful?

A

Independent of the scale (DV) being used in the experiment.

Therefore you can compare effect sizes across different experiments. (Good for meta-analysis)

23
Q

What type of effect size is Cohen’s d?

A

Standardised effect size

24
Q

What is the formula for Cohen’s d?

A

The difference between the two group means (the two levels of the IV of interest), divided by the population SD:

d = (M1 − M2) / SD

We don’t actually know the population SD, so instead we compute the pooled standard deviation (SDpooled):

d = (M1 − M2) / SDpooled
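The formula above can be sketched in Python (numpy only; the reaction-time data are made up purely for illustration):

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled SD."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    # Pooled variance: weighted average of the two sample variances
    # (ddof=1 gives the unbiased sample variance).
    pooled_var = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    return (g1.mean() - g2.mean()) / np.sqrt(pooled_var)

# Hypothetical reaction times (ms) for two conditions:
control = [510, 495, 520, 530, 505, 515]
treatment = [480, 470, 495, 485, 475, 490]
print(round(cohens_d(control, treatment), 2))  # → 2.77 (a large effect)
```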

25
Q

What are small, medium, and large Cohen’s d effect sizes?

A

Small ≈ .2, Medium ≈ .5, Large ≈ .8

Somewhat arbitrary though

26
Q

What standardised effect size is used for correlational research?

A

Pearson’s r

27
Q

What does Pearson’s r tell us?

A

The strength of the relationship between two paired variables
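Pearson’s r can be computed directly with numpy; a sketch with made-up paired data (hours studied vs. exam score, purely illustrative):

```python
import numpy as np

# Hypothetical paired data: hours studied and exam score.
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
score = np.array([52, 55, 61, 60, 68, 70, 75, 80])

# Correlation matrix; the off-diagonal entry is Pearson's r.
r = np.corrcoef(hours, score)[0, 1]
print(round(r, 2))  # strong positive relationship (near +1)
```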

28
Q

What are the Pearson’s r interpretation conventions?

A

Small ≈ .10

Medium ≈ .30

Large ≈ .50

29
Q

What is statistical power?

A

The probability of rejecting the null hypothesis if the null hypothesis is false (1 − β)

30
Q

What is statistical power dependent on?

A

1) Number of observations
2) Precision
3) Effect size
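The dependence on the number of observations can be estimated by simulation; a sketch (assuming Python with numpy/scipy and a simple two-group t-test design, not part of the lecture):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def estimated_power(n, effect_size, sims=2000, alpha=0.05):
    """Estimate power by simulation: the fraction of experiments
    with a genuine effect in which p falls below alpha."""
    hits = 0
    for _ in range(sims):
        control = rng.normal(0, 1, size=n)              # no effect
        treatment = rng.normal(effect_size, 1, size=n)  # shifted by d SDs
        if stats.ttest_ind(control, treatment).pvalue < alpha:
            hits += 1  # correctly rejected the null (a 'hit')
    return hits / sims

# More observations -> more power (d = 0.5, a medium effect):
for n in (20, 50, 100):
    print(n, estimated_power(n, 0.5))
```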

31
Q

What is the formula for statistical power?

A

1 − β (β, beta, is the probability of a miss, i.e. a Type 2 error; 1 − β is the hit rate)

32
Q

Increasing statistical power is the same thing as reducing ___

A

β

33
Q

What are 8 things you can do to increase your chance of detecting an effect when it exists in reality?

A
34
Q

Most of the things you can alter to increase your chance of detecting an effect occur at what stage of the research process?

A

Design stage

35
Q

What power do most people believe we should aim for in an experiment?

A

80% power

1 − β = .8

36
Q

We usually want to estimate statistical power before we do our experiment.

What is the drawback of doing this?

A

We might not have a good idea of the effect size we are looking for (if we had an idea, we probably wouldn’t be running the test in the first place)

37
Q

What calculation can we use to estimate our effect size beforehand?

A

Smallest Effect Size of Interest (SESOI)

38
Q

With statistical power, if the null hypothesis is false, what are the two possible outcomes that together equal 100%?

A

Make a mistake and fail to reject it (probability = β)

Correctly reject the null (probability = 1 − β)

Total probability = 100%

Increasing statistical power is the same thing as reducing β