L9 - Statistical Power 2 Flashcards

1
Q

How would we plan an a priori power analysis?

(4 steps)

A

Use the relationship of sample size (N), effect size (ES), and our specified alpha (α) to calculate how many participants we need:

1. Decide on your α-level (Type I error rate)

2. Decide on an acceptable level of power (1 − the Type II error rate)

3. Figure out the effect size (ES) you are looking for

4. Calculate N
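The four steps can be sketched numerically. This is a minimal sketch using a normal approximation for a two-sided, two-sample comparison; the helper name and the d = .5 example are illustrative, not from the lecture:

```python
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    test (normal approximation; a hypothetical helper, not an exact
    formula from these notes)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # step 1: choose alpha
    z_power = z.inv_cdf(power)          # step 2: choose power
    # step 3 is choosing d; step 4 solves for N:
    return 2 * ((z_alpha + z_power) / d) ** 2

# Medium effect (d = .5), alpha = .05, power = .8:
print(round(n_per_group(0.5)))  # ~63 per group (an exact t-test gives ~64)
```

The normal approximation slightly underestimates the exact t-test answer, but it makes the relationship among α, power, ES, and N explicit.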

2
Q

Why would we not set our level of power at .99 when designing our study?

A

Our alpha (α) level is fixed and our effect size (ES) is fixed, which means we would need an incredibly large sample size (N).

The higher our desired power level, the more participants we need.

3
Q

If we want higher power and have fixed effect size and fixed alpha, what would this mean for our sample size (N)?

A

We would need a larger sample size

4
Q

We don’t actually know the effect size before we run the study, so how can we estimate it in advance? (3 ways)

A

1. Base it on substantive knowledge

i.e. What you know about the situation and scale of measurement

2. Base it on previous research

What have others in your field used?

3. Use conventions

i.e. what is usually used

5
Q

What is the difference between a two-tailed and a one-tailed test?

A

Two-tailed: the alpha is split between the two tails (2.5% in each).

One-tailed: all of the alpha (5%) sits in one tail, so we only test for an effect in one direction.
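The 2.5%/2.5% versus 5% split corresponds to different critical values. A quick illustration using the standard normal distribution (stdlib only; the variable names are mine):

```python
from statistics import NormalDist

z = NormalDist()
# Two-tailed: alpha = .05 is split, with 2.5% in each tail
two_tailed_cutoff = z.inv_cdf(1 - 0.05 / 2)  # about 1.96
# One-tailed: all 5% sits in a single tail
one_tailed_cutoff = z.inv_cdf(1 - 0.05)      # about 1.64

print(round(two_tailed_cutoff, 2), round(one_tailed_cutoff, 2))  # 1.96 1.64
```

The one-tailed cutoff is lower, which is exactly why switching tails after seeing the data makes a borderline result easier to call "significant".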

6
Q

If we are doing a two-tailed test and the results fall only just outside the significance range, can we change to a one-tailed test post hoc, since we could have done so at the beginning anyway?

A

No, it is bad science and a type of p-hacking

7
Q

Cohen believes that an acceptable level of power is ___. This means we are happy to accept a type 2 error rate of ___.

This is a convention for an acceptable level of power that many use.

A

Power = .8

Type 2 = .2

8
Q

Which of these is the null distribution and which is the alternative distribution and why?

A

The null is on the left, because the significance (rejection) region falls in the alpha (α) zone.

The alternative is on the right, because the type 2 error zone is denoted by β.

9
Q

What 3 statistics do we need to know before we can conduct a post hoc power analysis?

A
  1. What was the sample size (N)?
  2. What was the α level (type 1 error rate)?
  3. What was the effect size (ES)?

*With these 3, we can determine the power of the study.*

*Example in notes and W9 powerpoint.*
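The three inputs can be turned into a power estimate directly. A rough sketch using a normal approximation for a two-sided, two-sample design (a hypothetical helper; the d = .5, n = 30 numbers are just an example):

```python
from statistics import NormalDist

def posthoc_power(d, n_per_group, alpha=0.05):
    """Approximate power of a completed two-sided, two-sample study
    (normal approximation; a sketch, not the exact t-test value)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # input 2: the alpha level
    ncp = d * (n_per_group / 2) ** 0.5   # inputs 1 and 3: N and ES
    return 1 - z.cdf(z_alpha - ncp)

# e.g. observed d = .5 with 30 participants per group at alpha = .05:
print(round(posthoc_power(0.5, 30), 2))  # roughly .49 -- underpowered
```

With the same inputs, about 64 per group would be needed to reach the conventional .8.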
10
Q

When writing up your discussion and you didn’t have enough power, should you write that

  1. You needed more power or
  2. How big a study you would need to have acceptable power?
A
  1. How big a study you would need to have in order to have acceptable power.
11
Q

What does this graph tell us about the level of power and the correlation significance?

A

There is an assumed power of .8

With a sample size of 20, we can detect correlations of about .6

With a sample size of 40, correlations of about .4

and so on.
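Those detectable-correlation figures can be roughly reproduced with the Fisher z approximation for testing r = 0 (stdlib only; a sketch under that approximation, not the exact curve from the graph):

```python
import math
from statistics import NormalDist

def corr_power(r, n, alpha=0.05):
    """Approximate power to detect a population correlation r with
    sample size n, via the Fisher z transform (two-sided test of r = 0)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    test_stat = math.atanh(abs(r)) * math.sqrt(n - 3)
    return 1 - z.cdf(z_alpha - test_stat)

print(round(corr_power(0.6, 20), 2))  # about 0.82: n = 20 for r near .6
print(round(corr_power(0.4, 40), 2))  # about 0.73: n = 40 for r near .4
```

Both sit near the assumed .8 power (the second a little under it), matching the reading of the graph.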

12
Q

It is really important to think of statistical power in terms of the ____ to the things that determine it.

A

Relationships

13
Q

The more stringent the significance level the ____ the necessary sample size

A

Greater

All other figures being equal

14
Q

The smaller the effect size the ____ the necessary sample size

A

Larger

15
Q

Why do we need bigger samples to detect smaller effects?

A

Because the four statistics (N, ES, alpha, power) are all related to one another.

- If power is held constant at .8 and alpha at .05, specifying a small effect means we need a bigger sample than if we specified a big effect.
- It’s an “all else being equal” scenario, as all the quantities are related to each other.
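Using a simple normal-approximation sample-size formula (my sketch; the d values follow Cohen's small/large conventions, which are not spelled out in this card):

```python
from statistics import NormalDist

def n_needed(d, alpha=0.05, power=0.80):
    """Approximate n per group (two-sided, two-sample test, normal
    approximation; a hypothetical helper for illustration)."""
    z = NormalDist()
    return 2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / d) ** 2

# Hold power at .8 and alpha at .05, vary only the effect size:
print(round(n_needed(0.2)))  # small effect: ~392 per group
print(round(n_needed(0.8)))  # large effect: ~25 per group
```

Halving the effect size quadruples the required N, because N scales with 1/d².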
16
Q

The higher the power required the ___ the necessary sample size

A

Larger

17
Q

The smaller the sample size the ___ the power

A

Lower

18
Q

What is the risk of having journals that demand statistically significant results and low powered studies?

A

Published articles being type 1 errors.

This contributes to the replication crisis.

19
Q

What are the 4 relationships that are all related to each other in power analysis?

A

Power

Effect Size

Sample Size

Significance level

20
Q

What is power analysis good for?

A

1. Sample size calculation

   - Before you begin your study

2. Evaluation of study results

   - Published literature
     - Useful when planning studies – ES estimates
     - Replicability?
   - Your own results
     - What power did your study have? (be careful here)
     - Given the observed ES, what size study would reliably detect it? (more useful)
   - Was my study too small? Or too large?
     - Ethical implications?
21
Q

Why can having a sample size that is too small be considered unethical?

A

A too-small sample size means that you will never have enough power for your results to be relevant.

This means you have wasted the participants’ time, which is unethical.

22
Q

One limitation of power analysis is that it uses the frequentist NHST decision making framework.

Why is this a limitation?

A

If you’re using power analysis outside this framework, none of it makes sense

23
Q

The concepts of Type I and Type II error at the heart of power analysis depend on what many consider to be the pathological nature of NHST.

Why is this so?

A

Because we are basing it on an arbitrary level of alpha. This is an arbitrary cutoff point.

- Why should we treat p = .051 differently from p = .049?
- One implies that one study is meaningful, whereas the other is meaningless.
24
Q

What is more important for significance, effect size or p values?

A

Effect size

p-values are arbitrary, whereas effect size is much more meaningful, particularly when standardised.

25
Q

What is the weakness of using the decision making NHST model?

A

It was meant for quality control in industry, not necessarily scientific modelling

Just because our study meets the alpha level doesn’t necessarily mean that we should reject the null, as alpha is arbitrary.

Conversely, just because we didn’t find significant results doesn’t necessarily mean we’re certain there is no effect.

26
Q

When we find significant or non-significant results in a power analysis, do we need to make a decision on whether our results are accurate?

A

No

All we need to do is report our results, then we can let others continue to build on those results in order to determine if it is accurate

27
Q

According to Burns, are 1-tailed p value statistical tests illogical?

A

Yes.

Switching changes nothing about the data: it only makes the result more likely to cross the arbitrary p-value cutoff, so you are kidding yourself if the result only just falls within the p < .05 range.

The effect size hasn’t changed; only how you view it has.

28
Q

Is statistical significance the same as scientific significance?

A

No

Just because something is statistically significant based on an arbitrary p-value doesn’t necessarily mean it’s scientifically valid.

29
Q

A non-significant result means that we should always retain the null.

A

Not necessarily

Instead, suspend judgement, because it could be a type 2 error.

30
Q

Rejecting H0 because we found significant results means that our hypothesis is correct.

(True or False)

A

False

Rejecting H0 implies either a positive difference or a negative difference; the direction of the difference cannot be confidently stated, so it does not confirm that our specific hypothesis is correct.

31
Q

If we have statistically significant results and reject the null, we can confidently say which direction the results were in.

A

Not necessarily.

Rejecting H0 implies either a positive difference or a negative difference; the direction of the difference cannot be confidently stated.