Effect size and power Flashcards

Understand the concepts and calculations involved

1
Q

What is very important to calculate when designing an experiment?

A

The minimum sample size needed to obtain a significant result, bearing in mind that smaller samples are more influenced by outliers, while larger samples expose a greater number of people to potential deception or risk - this is where POWER becomes important

2
Q

What is statistical significance dependent on?

A

Sample size - a trivially small effect may reach significance purely because a very large sample was used, while a genuine effect may fail to reach significance because the sample was not big enough

3
Q

What is effect size and why is it important?

A

The size of the difference between the null and alternative hypotheses, i.e. the true strength of the relationship between the IV and DV
Reflects the true extent of the IV's influence on the DV IRRESPECTIVE OF SAMPLE SIZE

4
Q

How can effect size be graphically represented?

A

A small effect size would be illustrated by a considerable amount of overlap between null and alternative hypothesis distributions
For a large effect size, overlap would be much smaller

5
Q

How can we get an estimate of effect size prior to a study?

A

Look at prior literature addressing a similar question, or if the question is particularly novel a researcher may collect some pilot data

6
Q

What are meta-analyses and why are they necessary?

A

A quantitative technique for synthesising the results of multiple studies of a phenomenon into a single result - the effect sizes from each included study are combined into one overall effect size, i.e. an unbiased estimate of the TRUE size of that phenomenon
Literature reviews, by contrast, are inherently selective and vulnerable to bias, and do not reflect effect size
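The combining step can be sketched as a simple fixed-effect (inverse-variance weighted) average - the per-study effect sizes and variances below are made-up illustrative numbers, not real data:

```python
# Hypothetical per-study results: (effect size d, variance of that estimate)
studies = [
    (0.45, 0.04),
    (0.30, 0.02),
    (0.55, 0.08),
]

# Fixed-effect meta-analysis: weight each study by the inverse of its variance,
# so more precise (lower-variance) studies pull the pooled estimate harder
weights = [1 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
print(round(pooled, 3))
```

Note that real meta-analyses often use a random-effects model instead, which also allows for variation in the true effect between studies.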

7
Q

How can we calculate effect size?

A

There is no single formula, but one commonly used measure is Cohen's d = (mean of population 1 - mean of population 2) / common population SD (or alternatively use the sample statistic equivalents)
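As a sketch, the sample-statistic version of Cohen's d (using a pooled SD) could be computed like this - the two groups' scores are purely illustrative:

```python
from statistics import mean, stdev

# Hypothetical scores for two groups (illustrative data only)
group1 = [5.1, 6.0, 5.5, 6.2, 5.8, 6.4]
group2 = [4.2, 4.9, 4.5, 5.1, 4.4, 4.8]

def cohens_d(x, y):
    """Cohen's d using the pooled sample standard deviation."""
    nx, ny = len(x), len(y)
    # Pooled variance: weighted average of the two sample variances
    pooled_var = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / pooled_var ** 0.5

print(round(cohens_d(group1, group2), 2))
```

By convention (Cohen, 1988), d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large.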

8
Q

Why is choice of alpha-level a tricky balancing act?

A

If the alpha threshold is set too high (e.g. .10) we increase the risk of Type 1 errors, but if it is set too low (e.g. .001) we increase the risk of Type 2 errors

9
Q

What is the sensitivity of a significance test influenced by?

A

The alpha cut-off value, the sample and effect sizes

10
Q

Left of the alpha cut off point on the H1 curve is the region labelled as Beta. If this region makes up 20% of the distribution area, what does this mean?

A

Whenever the same experiment is run with sample size N, there will be a 20% chance of making a Type 2 error
The remaining 80% of the distribution is the POWER of the significance test, i.e. the probability that a genuinely existing effect will be detected

11
Q

What is the commonly accepted power level that all studies should aim to achieve?

A

0.8

12
Q

What are the 4 ways to increase power?

A

1) Increase sample size - in simple terms, with a larger sample random errors get drowned out by the actual effect
2) Increase alpha level - this reduces the risk of Type 2 errors but increases the risk of Type 1 errors, and usually means findings are not taken seriously
3) Increase effect size - e.g. change the IV levels to make conditions more distinct
4) Decrease population variance - narrower distributions mean less overlap; can be achieved e.g. by allowing participants to practise

13
Q

What calculation can be used to find power simply from sample size and effect size?

A

It is a calculation to find delta, done by multiplying Cohen's d by the square root of the sample size
The delta value can then be looked up in the relevant table to find the power value
This applies to related/single-sample t-tests
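A minimal sketch of the delta method, substituting the normal approximation for the printed power tables (the t-distribution tables give slightly different values at small N); the example values of d and N are just illustrative:

```python
from statistics import NormalDist

def power_from_delta(d, n, alpha=0.05):
    """Approximate power for a related/single-sample t-test:
    delta = d * sqrt(n), then the normal approximation stands in
    for looking delta up in the power tables."""
    delta = d * n ** 0.5
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    # Probability the test statistic lands beyond the critical value
    # (the negligible opposite-tail contribution is ignored)
    return 1 - NormalDist().cdf(z_crit - delta)

# e.g. a medium effect (d = 0.5) with n = 30 gives delta of about 2.74
print(round(power_from_delta(0.5, 30), 2))
```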

14
Q

What are the steps involved in calculating power the long way?

A

1) Calculate d (the value represents the number of SDs difference between the means)
2) Assume that the alternative hypothesis is true; we need to calculate the value of Beta
3) Calculate the standard error and use this to find the boundary of the right-hand rejection region (H0 mean + 1.96 x SE, for alpha = .05)
4) Find the difference between the cut-off value on the H0 distribution and the H1 mean, then calculate how many SEs this represents by dividing it by the SE
5) This value is your z-score, which can be looked up in the reference tables - the small segment value is your Beta
6) Power is then 1 - Beta
(Return to example in folder for full work-through)
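The steps above can be sketched in code for a one-sample z-test; the means, SD, and N below are hypothetical numbers chosen only to illustrate the calculation:

```python
from statistics import NormalDist

def power_long_way(mu0, mu1, sd, n, alpha=0.05):
    """Manual power calculation, assuming mu1 > mu0 so only the
    right-hand rejection region matters."""
    se = sd / n ** 0.5                            # step 3: standard error
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    cutoff = mu0 + z_crit * se                    # boundary of rejection region on H0
    z = (cutoff - mu1) / se                       # step 4: distance from H1 mean, in SEs
    beta = NormalDist().cdf(z)                    # step 5: P(Type 2 error) under H1
    return 1 - beta                               # step 6

# Hypothetical example: H0 mean 100, true (H1) mean 105, SD 15, n = 50
print(round(power_long_way(100, 105, 15, 50), 2))
```

`NormalDist().cdf` plays the role of the printed z-tables in step 5.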

15
Q

How can you calculate sample size using power?

A

Cohen produced a table giving the minimum N required for power of 0.8 for a variety of tests and possible effect sizes
Software such as G*Power can also be used when the effect size is known
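The same table lookup can be approximated by inverting the power calculation - this sketch uses the normal approximation, so it returns a slightly smaller N than Cohen's t-test tables at small samples:

```python
from math import ceil
from statistics import NormalDist

def n_for_power(d, power=0.8, alpha=0.05):
    """Approximate minimum N for a related/single-sample t-test,
    via the normal approximation: N = ((z_alpha + z_power) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    z_power = NormalDist().inv_cdf(power)          # about 0.84 for power = .8
    return ceil(((z_alpha + z_power) / d) ** 2)    # round up to a whole participant

# A medium effect (d = 0.5) at the conventional 0.8 power level
print(n_for_power(0.5))
```

Smaller expected effects demand much larger samples, since N grows with 1/d squared.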
