lecture 2- effect size and power 2 Flashcards

1
Q

strength of association

A

How much of the variation in the dependent variable can be
explained by the independent variable

2
Q

partial eta squared
-when is it used
-what does it measure
-is partial or classical eta squared better

A

 Can be used for a factorial design (ANOVA)
 Measures linear and nonlinear association (in contrast to correlation)
 There is discussion in the literature about whether partial or classical eta squared is better

3
Q

what does classical eta squared measure
vs partial eta squared

A

Classical eta squared measures the proportion of the total variance in a dependent variable that is associated with the variance of a given factor in an ANOVA model.

Partial eta squared is a similar measure in which the effects of other independent variables and interactions are partialled out.
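
For reference, in sums-of-squares terms (standard ANOVA notation; these formulas are not stated on the card itself):

    \eta^2 = \frac{SS_{effect}}{SS_{total}} \qquad \eta^2_p = \frac{SS_{effect}}{SS_{effect} + SS_{error}}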

4
Q

classical vs partial eta squared - advantages of each

A

 Classical η² sums to no more than one across factors: advantageous for describing variance within one experiment

 Partial η² values can sum to more than one. Because variance due to other factors is removed, partial η² supports comparisons of effect sizes across experiments with different factor structures.
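
A worked illustration with made-up sums of squares (two factors A and B, SS_A = 40, SS_B = 40, SS_error = 20, SS_total = 100, no interaction): the classical values are 40/100 = 0.40 for each factor, summing to 0.80 (≤ 1), while the partial values are 40/(40 + 20) ≈ 0.67 for each factor, summing to ≈ 1.33 (> 1).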

5
Q

three risk estimates

A

relative risk
odds ratio
risk difference

6
Q

relative risk

A

Relative risk (RR) is a measure used to compare the risk of a certain outcome (such as developing a condition or experiencing an event) between two groups: it is the ratio of the risk in one group to the risk in the other.
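
A worked example with made-up numbers: if 20 of 100 treated participants develop the disease (risk = 0.20) and 40 of 100 controls do (risk = 0.40), then RR = 0.20 / 0.40 = 0.5, i.e. the treated group has half the risk of the control group.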

7
Q

odds ratio

A

An odds ratio (OR) is a measure used to compare the odds of a particular event occurring in one group relative to the odds of it occurring in another group.
- It is commonly used in case-control studies or in situations where you’re comparing the odds of an outcome between two different groups (like those exposed to a factor vs. those not exposed).

When the risk is small, the odds ratio approximates the relative risk
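
Continuing the made-up example above (20/100 treated vs 40/100 controls affected): the odds in the treated group are 20/80 = 0.25 and in the controls 40/60 ≈ 0.67, so OR = 0.25 / 0.67 ≈ 0.375 while the RR is 0.5. With rarer outcomes (e.g. 2/100 vs 4/100 affected) the odds are 2/98 ≈ 0.020 and 4/96 ≈ 0.042, giving OR ≈ 0.49, close to the RR of 0.5.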

8
Q

risk difference

A

The absolute difference in risk (probability) of an event between two groups.
- A measure that quantifies the difference in the risk of an outcome between two groups: it shows how much the risk changes between the exposed group (e.g., people who have a particular characteristic or intervention) and the unexposed group.

 The difference between the proportion of the treatment group that contracts the disease and the proportion of controls that contract the disease
 Can be used to estimate the number of cases avoided by a treatment (if the population size is known)
 Reflects the overall probability of getting the disease
 Easy to understand, also for non-experts.
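
Continuing the made-up example (risk 0.20 in the treated group vs 0.40 in the controls): risk difference = 0.40 − 0.20 = 0.20, so treating 1,000 people would avoid roughly 0.20 × 1,000 = 200 cases, and the number needed to treat to prevent one case is 1 / 0.20 = 5.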

9
Q

recap
type 1 error
type 2 error

A

Type 1 Error (False Positive):
A Type 1 error occurs when you incorrectly reject a true null hypothesis. In other words, you conclude that there is an effect or a difference when, in fact, there isn’t one.

Type 2 Error (False Negative):
A Type 2 error occurs when you fail to reject a false null hypothesis. In other words, you conclude that there is no effect or difference when, in reality, there is one.

10
Q

what is power

A

Power is the probability of correctly rejecting the null hypothesis (i.e. rejecting H0 when H0 is false)
-In other words, power is the ability of a test to detect an effect if there is one.
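-Equivalently: power = 1 − β, where β is the probability of a Type 2 error.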

11
Q

what factors influence power

A

 Effect size
- Note: unreliable measures reduce the effect size by inflating the estimate of sigma (e.g., Cohen’s d = (M1 – M2) / SDpooled)

 Alpha level (e.g., α = 0.05 or α = 0.01)
- A one-tailed test has higher power than a two-tailed test

 Sample size
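
A minimal sketch (not from the lecture) of how each of these factors moves power, assuming an independent-samples t-test and Python's statsmodels package; the specific numbers are only illustrative:

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Baseline: medium effect (d = 0.5), alpha = .05, two-tailed, 30 per group
    base = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05,
                          ratio=1.0, alternative='two-sided')

    # Larger effect size -> higher power
    larger_effect = analysis.power(effect_size=0.8, nobs1=30, alpha=0.05,
                                   ratio=1.0, alternative='two-sided')

    # Stricter alpha level -> lower power
    stricter_alpha = analysis.power(effect_size=0.5, nobs1=30, alpha=0.01,
                                    ratio=1.0, alternative='two-sided')

    # One-tailed test -> higher power than the two-tailed baseline
    one_tailed = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05,
                                ratio=1.0, alternative='larger')

    # Larger sample -> higher power
    larger_n = analysis.power(effect_size=0.5, nobs1=100, alpha=0.05,
                              ratio=1.0, alternative='two-sided')

    print(base, larger_effect, stricter_alpha, one_tailed, larger_n)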

12
Q

what is prospective power
-what are the steps

A

Computed before the study’s data are collected

Three steps for calculating prospective power:
o Hypothesize an effect size
o Choose an alpha level
o Decide on the planned sample size

Ways to get an estimate of the effect size:
o Do a pilot experiment and compute the effect size
o Do a meta-analysis and compute a weighted effect size
o Use Cohen’s estimates for small, medium and large effect sizes
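
A minimal sketch of these three steps (hypothetical numbers, again assuming an independent-samples t-test and Python's statsmodels package):

    from statsmodels.stats.power import TTestIndPower

    prospective_power = TTestIndPower().power(
        effect_size=0.5,   # step 1: hypothesized effect size (Cohen's d)
        alpha=0.05,        # step 2: alpha level
        nobs1=50,          # step 3: planned sample size per group
        ratio=1.0,
        alternative='two-sided')

    print(prospective_power)   # about 0.70 here, so a larger sample would be
                               # needed to reach the conventional 80% target

Swapping power() for solve_power(effect_size=0.5, alpha=0.05, power=0.80) instead solves for the sample size needed per group (roughly 64 in this case).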

13
Q

explain the importance of power

A

 Power determines how likely it is that an effect is detected (assuming there really is an effect)

 Suppose a drug is highly effective, but the researcher tests only 5 participants per group (treatment, control): the drug may never get onto the market

14
Q

a power of at least _____ is usually considered acceptable

A

A power of at least 80% is usually considered acceptable
* Underpowered (power < 80%) studies are useless and unethical (a waste of resources and people’s time)

15
Q

computing power - software

A

 G*Power 3
 Free to download
 Runs on different operating systems

16
Q

what is observed power
-is it useful

A

Computed after the study is completed
 Assumes the effect size in the sample equals the effect size in the population
 Generally not very useful

17
Q

example of when observed power is useful

A

Exception:
In a meta-analysis, observed power is useful
 It provides an indication of which results should be assigned a higher weight

18
Q

increasing the power

A

 Adding participants
- Add participants to groups that are cheaper to run

 Choose a less stringent significance level (usually not an option)

 Increase the hypothesized effect size
- Strengthen the intervention/manipulation (in experiments)
- ‘Throw out’ the middle part of the distribution
- Improve the reliability of measurement

 Use as few groups as possible
 Use covariate variables
 Use a repeated measures design
 Use measures sensitive to change