Research methods Flashcards

1
Q

The misuse of NHST (Null Hypothesis Significance Testing)

A
  • The American Statistical Association (2016) outlined principles on the misuse of p-values in significance testing
    1. P-values do not measure the probability that the results arose by chance alone, or the probability that a specific hypothesis is true
    2. Statistical significance is not the same as practical importance
    3. The p-value alone is not a good measure of evidence regarding a model or hypothesis
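A minimal sketch of the second principle, using made-up simulated data (numpy and scipy are assumed to be available): with a very large sample, a trivially small difference can still give p < .05, so the p-value alone says nothing about practical importance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two hypothetical groups whose true means differ by a trivial 0.02 SD
a = rng.normal(loc=0.00, scale=1.0, size=100_000)
b = rng.normal(loc=0.02, scale=1.0, size=100_000)

t, p = stats.ttest_ind(a, b)
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)

print(f"p = {p:.4g}")          # typically well below .05 at this sample size
print(f"Cohen's d = {d:.3f}")  # yet the effect is negligible in practical terms
```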
2
Q

Type 1 and Type 2 errors

A
  1. Type 1 = incorrectly accepting the alternative hypothesis when the null is true (a false positive)
  2. Type 2 = incorrectly accepting the null hypothesis when an effect exists (a false negative)
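A small simulation sketch (made-up data, numpy/scipy assumed) of what the Type 1 error rate means: when the null hypothesis is actually true, about 5% of tests will still come out 'significant' at α = .05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_experiments = 5_000

false_positives = 0
for _ in range(n_experiments):
    # Both groups come from the same population, so the null is true
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1  # a Type 1 error

print(false_positives / n_experiments)  # ≈ 0.05, i.e. the alpha level
```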
3
Q

Power

A
  • The probability of finding an effect, assuming one exists in the population
  • Calculated as 1 − β
  • β is the probability of not finding the effect (the Type 2 error rate); Cohen suggests setting β at 0.2, which gives power of 0.8
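A short sketch of the 1 − β relationship, plus an achieved-power calculation using statsmodels (the effect size and sample size below are illustrative assumptions, not values from the cards):

```python
from statsmodels.stats.power import TTestIndPower

beta = 0.2        # Cohen's conventional Type 2 error rate
power = 1 - beta
print(power)      # 0.8

# Achieved power for an assumed medium effect (d = 0.5) with 50 participants per group
analysis = TTestIndPower()
print(analysis.power(effect_size=0.5, nobs1=50, alpha=0.05, alternative='two-sided'))
```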
4
Q

What affects power? 3 factors

A
  1. Effect size: an objective and standardised measure of the magnitude of an effect (larger value = bigger effect)
    Depends on the test conducted – Cohen's d (t-test), Pearson's r (correlation), partial eta squared (ANOVA)
  2. Number of participants: more participants = more 'signal', less 'noise'. Choose the sample size based on the expected effect size (larger effect size = fewer participants needed, smaller effect size = more participants needed)
  3. Alpha level: the probability of obtaining a Type 1 error. We compare our p-value to this criterion when testing significance
    - Other factors: variability, design, test choice
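A sketch showing how each of the three factors moves power, using statsmodels' independent-samples t-test power calculation (all effect sizes, sample sizes and alphas below are arbitrary illustrative values):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# 1) Bigger effect size -> more power (n and alpha held constant)
for d in (0.2, 0.5, 0.8):
    print("d =", d, "power =", round(analysis.power(effect_size=d, nobs1=40, alpha=0.05), 2))

# 2) More participants -> more power (d and alpha held constant)
for n in (20, 40, 80):
    print("n =", n, "power =", round(analysis.power(effect_size=0.5, nobs1=n, alpha=0.05), 2))

# 3) Larger alpha -> more power (d and n held constant), at the cost of more Type 1 errors
for a in (0.01, 0.05, 0.10):
    print("alpha =", a, "power =", round(analysis.power(effect_size=0.5, nobs1=40, alpha=a), 2))
```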
5
Q

Problems with alpha testing

A
  • If we run multiple tests, the rate at which we might make a Type 1 error increases (the familywise error rate)
  • We can account for this by limiting the number of tests or by applying corrections such as the Bonferroni correction (but this reduces statistical power)
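A quick numeric sketch of the familywise error rate and the Bonferroni correction (the number of tests is an arbitrary example):

```python
alpha = 0.05
m = 10  # number of tests run on the same data

# Probability of at least one Type 1 error across m independent tests
familywise = 1 - (1 - alpha) ** m
print(round(familywise, 3))   # ≈ 0.401 rather than 0.05

# Bonferroni correction: test each comparison at alpha / m
bonferroni_alpha = alpha / m
print(bonferroni_alpha)       # 0.005 (a stricter criterion, hence lower power)
```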
6
Q

What is the difference between one and two-tailed tests

A
  • One-tailed – we hypothesise there will be a difference in scores, and we are specific about which score will be higher (α = .05 in one tail)
  • Two-tailed – we hypothesise there will be a difference in scores, but it could be in either direction (α = .025 in each tail)
  • For a one-tailed test, our p-value is half of the two-tailed p-value (provided the difference is in the hypothesised direction)
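A sketch with made-up data (scipy assumed) showing the halving relationship: when the observed difference is in the hypothesised direction, the one-tailed p-value is half the two-tailed one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(0.0, 1.0, 30)
treatment = rng.normal(0.5, 1.0, 30)   # hypothesised to score higher

p_two = stats.ttest_ind(treatment, control, alternative='two-sided').pvalue
p_one = stats.ttest_ind(treatment, control, alternative='greater').pvalue

print(p_two, p_one)  # p_one is half of p_two, given the difference is in the predicted direction
```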
7
Q

Which type of test do I run?

A
  • One-tailed tests are more powerful because all of α is placed in the hypothesised tail
  • However, there are several caveats and considerations, so in most cases it is recommended that you run a two-tailed test
8
Q

Power and study design:

A
  • Within-subjects studies are more powerful than between-subjects studies
  • Example: to run a t-test with a two-tailed design, a medium effect size, an α level of 0.05 and a power level of 0.8, power analysis lets us:
  • 1) Calculate the power we have obtained in a study post hoc
  • 2) Calculate how many participants we need to collect for a study a priori (this can be done using statistical programs such as G*Power)
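A sketch of the a priori calculation described above, using statsmodels rather than G*Power, with the same assumed inputs (two-tailed independent-samples t-test, medium effect d = 0.5, α = 0.05, power = 0.8):

```python
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,           # medium effect (Cohen's d)
    alpha=0.05,                # two-tailed alpha level
    power=0.8,                 # desired power (1 - beta)
    alternative='two-sided',
)
print(math.ceil(n_per_group))  # ≈ 64 participants per group
```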
9
Q

What is analysis of variance?

A
  • Analysis of variance (ANOVA) is an extension of the t-test
  • It allows us to test whether 3 or more population means are the same, without reducing power
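A minimal one-way ANOVA sketch on made-up data for three conditions (scipy assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group_a = rng.normal(10, 2, 25)
group_b = rng.normal(11, 2, 25)
group_c = rng.normal(13, 2, 25)

# One test across all three means, rather than three separate t-tests
f, p = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f:.2f}, p = {p:.4f}")
```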
10
Q

Assumptions of ANOVA

A
  • The scores were sampled randomly and are independent
  • Roughly normal distribution
  • Roughly equal numbers of participants in the groups
  • Roughly equal variance for each condition
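A sketch of some quick checks for these assumptions on hypothetical group data (scipy assumed); these are rough diagnostics rather than a substitute for inspecting the data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
groups = [rng.normal(10, 2, 25), rng.normal(11, 2, 25), rng.normal(13, 2, 25)]

# Roughly normal distribution in each group (Shapiro-Wilk)
for i, g in enumerate(groups):
    print("group", i, "Shapiro p =", round(stats.shapiro(g).pvalue, 3))

# Roughly equal group sizes and variances
print("sizes:", [len(g) for g in groups])
print("variances:", [round(g.var(ddof=1), 2) for g in groups])

# Homogeneity of variance across conditions (Levene's test)
print("Levene p =", round(stats.levene(*groups).pvalue, 3))
```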
11
Q

The basis of the ANOVA test

A
  • Analysis of variance is a way to compare multiple conditions in a single, powerful test
  • It was invented by Fisher (so its test statistic is F)
  • It compares the amount of variance explained by our experiment with the variance that is unexplained
12
Q

Between-groups ANOVA

A
  • The aim of ANOVA is to compare the ‘amount of variance explained by our experiment with the variance that is unexplained’
  • For between-group designs:
  • A) the explained variance is the variance between groups
  • B) the unexplained variance is the variance within groups
  • Each variance estimate is calculated as a mean square (MS) – a sum of squares divided by its degrees of freedom – and F = MS between / MS within (the latter is the MS error)
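A sketch of the between-groups calculation done by hand on made-up data, to make the mean-square logic concrete (numpy/scipy assumed); scipy's f_oneway gives the same F.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
groups = [rng.normal(10, 2, 20), rng.normal(11, 2, 20), rng.normal(13, 2, 20)]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k, n_total = len(groups), len(all_scores)

# Explained: variation of the group means around the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Unexplained: variation of scores around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)       # mean square between
ms_within = ss_within / (n_total - k)   # mean square within (error)

print(ms_between / ms_within)             # F ratio
print(stats.f_oneway(*groups).statistic)  # matches
```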
13
Q

Degrees of freedom

A
  • There are degrees of freedom associated with both variance values:
  • A) degrees of freedom between conditions
  • B) residual degrees of freedom (within conditions)
  • ANOVA critical values require two d.f. values, one for each aspect of the variance
  • We must report both
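A small sketch of the two d.f. values for a hypothetical design with 3 conditions and 10 participants per condition, plus the critical F they index (scipy assumed):

```python
from scipy import stats

k = 3          # number of conditions
n_total = 30   # total number of scores (10 per condition)

df_between = k - 1          # d.f. between conditions
df_residual = n_total - k   # residual (within-conditions) d.f.

# The critical value of F at alpha = .05 needs both d.f. values
critical_f = stats.f.ppf(0.95, dfn=df_between, dfd=df_residual)
print(df_between, df_residual, round(critical_f, 2))  # 2, 27, ≈ 3.35
```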
14
Q

Pair-wise comparisons

A
  • ANOVA tells us whether the groups differ or not
  • How do we know which particular conditions differ?
  • Run the multiple pairwise comparisons (those we were trying to avoid)
  • Some of these are ‘planned comparisons’, some are ‘post hoc’
  • Correct for multiple comparisons
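A sketch of Bonferroni-corrected pairwise comparisons after a significant ANOVA, on made-up data for three conditions (scipy assumed); statsmodels' pairwise_tukeyhsd is a common post-hoc alternative.

```python
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
groups = {"a": rng.normal(10, 2, 25),
          "b": rng.normal(11, 2, 25),
          "c": rng.normal(13, 2, 25)}

pairs = list(combinations(groups, 2))
for name1, name2 in pairs:
    p = stats.ttest_ind(groups[name1], groups[name2]).pvalue
    # Bonferroni: multiply each p-value by the number of comparisons (capped at 1)
    p_adj = min(p * len(pairs), 1.0)
    print(f"{name1} vs {name2}: p = {p:.4f}, Bonferroni-adjusted p = {p_adj:.4f}")
```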
15
Q

Versions of ANOVA

A
  1. Analysis of variance (ANOVA) – one-factor ANOVA and multifactor ANOVA
  2. Multivariate analysis of variance (MANOVA) – extension of ANOVA for multiple dependent variables
  3. Analysis of covariance (ANCOVA) – extension of ANOVA to handle continuous covariates (i.e. additional continuous variables that correlate with the dependent variable)