Week 1 (Introduction) Flashcards
Survey Vs Experiment
Survey
-Measures variables as they naturally occur
-Variables measured, not manipulated
-Depending on the sampling, findings may be generalised to the wider population
Experiment
-Manipulates variables to isolate their effects
-Aims to establish causal relationships
-Randomisation used (participants equally likely to receive treatment A or B)
-Other factors held constant, so any differences can be attributed to the experimental manipulation
-Can be within subjects or between subjects, or a mixture of both (mixed designs)
Karl Popper: Hypothetico-Deductive Method
- Theory
- Hypothesis
- Operationalisation of Concepts
- Selection of participants
- Survey Studies / Experimental Design
- Data Collection
- Data Analysis
- Findings
Systematic Vs Unsystematic Variation
Systematic Variation: The variation that can be explained by the model (the effect predicted by the alternative hypothesis, H1)
Unsystematic Variation: The variation that cannot be explained by the model (error; all we would expect under the null hypothesis, H0)
Test statistic equation
Variance explained by the model ÷ Variance not explained by the model
= Effect ÷ Error
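For illustration only, a minimal Python sketch (group names and scores are made up, assuming NumPy is available) of how an independent-samples t-statistic follows this effect ÷ error logic:

    import numpy as np

    # Hypothetical scores for two equally sized groups
    group_a = np.array([5.1, 6.2, 5.8, 6.0, 5.5])
    group_b = np.array([4.2, 4.8, 5.0, 4.5, 4.9])

    effect = group_a.mean() - group_b.mean()                          # variation explained by the model
    pooled_var = (group_a.var(ddof=1) + group_b.var(ddof=1)) / 2      # pooled variance (equal group sizes)
    error = np.sqrt(pooled_var * (1/len(group_a) + 1/len(group_b)))   # unexplained variation (standard error)

    t_statistic = effect / error
    print(t_statistic)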
One tailed vs Two Tailed Hypothesis
-One tailed is directional
-Two tailed is non-directional
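A rough sketch of the difference in practice, assuming SciPy (1.6+ for the alternative argument) and the same hypothetical groups as above:

    from scipy import stats

    group_a = [5.1, 6.2, 5.8, 6.0, 5.5]
    group_b = [4.2, 4.8, 5.0, 4.5, 4.9]

    # Two-tailed (non-directional): is there any difference between the groups?
    t, p_two = stats.ttest_ind(group_a, group_b, alternative="two-sided")

    # One-tailed (directional): is group_a specifically greater than group_b?
    t, p_one = stats.ttest_ind(group_a, group_b, alternative="greater")

    print(p_two, p_one)   # when the effect is in the predicted direction, p_one is half of p_two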
Statistical Significance
If the probability of obtaining the result, assuming the null hypothesis is true, is less than alpha (typically 0.05), then we infer that the result is statistically significant
Type I Error
-Hasty rejection of the null hypothesis (a false positive)
-Concluding there is an effect when in fact there is not one
Type II Error
-Hasty rejection of the alternative hypothesis (a false negative)
-Conclude there is no effect when there in fact is one
What are the limitations of relying on hypothesis testing?
-Focuses on whether or not the result is statistically significant, but not necessarily significant in the broader, practical sense
-Does not give any indication of the size of the statistical effect
Alpha
-Typical value in psychology is 0.05
-The probability of making a Type I Error (Chance of saying there is an effect when there isn’t one)
Beta
-Typical value in psychology is 0.20
-The probability of making a Type II Error (Chance of saying there is NO effect when there is one)
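A small simulation sketch (hypothetical values, assuming NumPy and SciPy) of why alpha is the Type I error rate: when the null hypothesis is true, roughly 5% of tests still come out significant at alpha = 0.05:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha = 0.05
    n_simulations = 10_000
    false_positives = 0

    # Both groups are drawn from the same population, so the null hypothesis is true
    for _ in range(n_simulations):
        a = rng.normal(loc=0, scale=1, size=20)
        b = rng.normal(loc=0, scale=1, size=20)
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_positives += 1

    print(false_positives / n_simulations)   # roughly 0.05, the Type I error rate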
Importance of Effect Sizes
-An attempt to address Type I errors
-If result is significant, gives indication of the size of the effect
-An effect can be significant but too small to be practically meaningful
-AKA the magnitude of the statistical effect found
-Variety of effect sizes used
Types of effect size values
-Cohen’s d (d)
-Pearson’s r (r)
-Omega (ω)
-Eta-Squared (η2)
Cohen’s d effect sizes
Small effect: 0.2
Medium effect: 0.5
Large effect: 0.8
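A minimal sketch of how Cohen's d could be computed for two independent groups (the pooled-SD version; the data are made up), assuming NumPy:

    import numpy as np

    def cohens_d(x, y):
        # Cohen's d for two independent groups, using the pooled standard deviation
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        nx, ny = len(x), len(y)
        pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
        return (x.mean() - y.mean()) / pooled_sd

    # Hypothetical scores; compare the result against the 0.2 / 0.5 / 0.8 benchmarks
    print(cohens_d([5.1, 6.2, 5.8, 6.0, 5.5], [4.2, 4.8, 5.0, 4.5, 4.9]))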
Pearson’s r effect sizes
Small effect: 0.1
Medium effect: 0.3
Large effect: 0.5
Omega
Small effect: 0.1
Medium effect: 0.3
Large effect: 0.5
Eta-Squared
Small effect: 0.01
Medium effect: 0.059
Large effect: 0.138
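Eta-squared is the proportion of total variance explained by the group effect (SS between ÷ SS total). A hypothetical one-way sketch, assuming NumPy:

    import numpy as np

    def eta_squared(*groups):
        # One-way eta-squared: SS_between / SS_total
        all_scores = np.concatenate([np.asarray(g, dtype=float) for g in groups])
        grand_mean = all_scores.mean()
        ss_total = ((all_scores - grand_mean) ** 2).sum()
        ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
        return ss_between / ss_total

    # Hypothetical data for three groups; compare against the 0.01 / 0.059 / 0.138 benchmarks
    print(eta_squared([5, 6, 7], [4, 5, 5], [7, 8, 9]))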
Power analysis
-Attempt to control for Type II errors
-Tells us the statistical power associated with a particular test (the ability of the statistical test to find an effect if there is one to find)
Two approaches to running a power analysis
A priori
-Before you collect the data & do the analysis
-The result of the power analysis determines the sample size needed to reliably find an effect if there is one to find
Post-Hoc
-Power estimated after collecting the data and calculating the inferential statistics (e.g., t-tests, correlations, ANOVA)
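A sketch of both approaches using statsmodels (assuming it is installed; the effect size and group size below are made up):

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # A priori: participants needed per group to detect a medium effect (d = 0.5)
    # with alpha = .05 and power = .80
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                       alternative="two-sided")
    print(round(n_per_group))   # roughly 64 per group

    # Post-hoc: the power a study with 30 participants per group and d = 0.5 actually had
    achieved_power = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05,
                                          alternative="two-sided")
    print(achieved_power)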
Mean
𝑥̅ = Σx/N
-Add up all the scores and divide by the number of scores/number of participants (N)
-Gives an indication of the central tendency of the data set
Variance
s² = Σd² / (N − 1)
-The sum of all the squared differences (Σd²) divided by the number of scores/number of participants minus 1 (N − 1)
-This provides an indication of the spread in the data. The higher the variance, the more spread, i.e., the mean may be a poor representation of the data. The smaller the variance, the less spread, so the mean is more representative.
Standard Deviation
s = √(Σd² / (N − 1))
-The sum of all the squared differences (Σd²) divided by the number of scores/number of participants minus 1 (N − 1), then square rooted
-The same as the variance, except we take the square root. This returns the value to the original units of the data, making it easier to interpret.
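A quick sketch tying the three formulas together (hypothetical scores, assuming NumPy; ddof=1 gives the N − 1 denominator):

    import numpy as np

    scores = np.array([4, 5, 5, 6, 7, 8, 8, 9], dtype=float)   # hypothetical scores

    mean = scores.sum() / len(scores)                          # x̄ = Σx / N
    deviations = scores - mean
    variance = (deviations ** 2).sum() / (len(scores) - 1)     # s² = Σd² / (N − 1)
    sd = np.sqrt(variance)                                     # s = √(Σd² / (N − 1))

    # np.var / np.std give the same values when ddof=1 (the N − 1 denominator)
    assert np.isclose(variance, scores.var(ddof=1))
    assert np.isclose(sd, scores.std(ddof=1))
    print(mean, variance, sd)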
Parametric VS Non-Parametric statistics
Parametric
-Make assumptions about the data
-Normally distributed
-Homogeneity of variance
-Usually only for ratio/interval data
-Usually used for, e.g., group differences between equally sized groups
Non-Parametric
-Make no assumptions about the distribution of the data
-Used when the normality assumption is violated (e.g., if data are very skewed or sparse)
-Used if you have ordinal data
-Or sometimes where you have small group sizes
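A hypothetical sketch of a parametric test alongside its non-parametric counterpart, assuming SciPy (the Mann-Whitney U is the usual non-parametric stand-in for an independent-samples t-test):

    from scipy import stats

    # Hypothetical scores for two independent groups
    group_a = [5.1, 6.2, 5.8, 6.0, 5.5, 6.4]
    group_b = [4.2, 4.8, 5.0, 4.5, 4.9, 5.2]

    # Parametric: independent-samples t-test (assumes normality and homogeneity of variance)
    t, p_parametric = stats.ttest_ind(group_a, group_b)

    # Non-parametric: Mann-Whitney U works on ranks, so it suits ordinal or skewed data
    u, p_nonparametric = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

    print(p_parametric, p_nonparametric)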
Four core principles of research integrity
-Honesty in all aspects of research
-Accountability in the conduct of research
-Professional courtesy and fairness in working with others
-Good stewardship of research on behalf of others
Replication Crisis
-A large-scale project attempted to replicate 100 psychological studies and found that:
-Only 36% of the studies could be replicated
-The average effect size of the replications was smaller than in the original studies
-More surprising findings were less likely to be successfully replicated.
-Social psychology findings were less likely to replicate than those in cognitive psychology.
Embracing principles of open science
Making all elements of the research process freely and openly available
P-hacking
Repeatedly re-analysing or selectively tweaking your data and analyses until a non-significant result becomes significant.
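A small simulation sketch of one form of p-hacking, optional stopping (all values hypothetical, assuming NumPy and SciPy): both groups come from the same population, yet peeking and re-testing as data accumulate pushes the false-positive rate well above 5%:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha = 0.05
    n_simulations = 2_000
    hacked_hits = 0

    for _ in range(n_simulations):
        a = list(rng.normal(size=10))
        b = list(rng.normal(size=10))
        for _ in range(10):                        # up to 10 rounds of "just a few more participants"
            if stats.ttest_ind(a, b).pvalue < alpha:
                hacked_hits += 1                   # false positive: the null is true by construction
                break
            a.extend(rng.normal(size=10))
            b.extend(rng.normal(size=10))

    print(hacked_hits / n_simulations)             # well above the nominal 5% Type I error rate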
Pre-registration
Publicly registering your hypotheses, design, and analysis plan before collecting data, so they cannot be quietly changed after seeing the results.