Week 1 Flashcards
Karl Popper: hypothetico-deductive method
Theory -> hypothesis -> operationalisation of concepts -> selection of participants -> survey/experimental designs -> data collection -> data analysis -> findings
Experimental and survey research methods
Surveys measure variables as they naturally occur
Experiments manipulate variables to isolate their effects
Statistical test
Test statistic = variance explained by the model / variance not explained by the model = effect / error
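A minimal Python sketch of this ratio, computed by hand on two small made-up groups (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical scores for two groups (simulated data, for illustration only)
group_a = np.array([5.0, 6.0, 7.0, 6.5, 5.5])
group_b = np.array([7.0, 8.0, 9.0, 8.5, 7.5])
scores = np.concatenate([group_a, group_b])
grand_mean = scores.mean()

# Variance explained by the model (effect): between-group sum of squares
ss_model = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (group_a, group_b))

# Variance not explained by the model (error): within-group sum of squares
ss_error = sum(((g - g.mean()) ** 2).sum() for g in (group_a, group_b))

# Test statistic = effect / error (here scaled by degrees of freedom -> an F ratio)
df_model, df_error = 1, len(scores) - 2
f_stat = (ss_model / df_model) / (ss_error / df_error)
print(f"F = {f_stat:.2f}")
```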
Testing hypotheses
We pose a hypothesis
Analyse our data
Calculate the probability of getting the result (or one more extreme) if the null hypothesis is true
We then reject or fail to reject the null hypothesis
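These steps can be sketched in Python with scipy; the data here are simulated, so the numbers are purely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Step 1: pose hypothesis - H1: the two groups differ; H0: they do not
# Step 2: analyse the data (simulated here purely for illustration)
control = rng.normal(loc=50, scale=10, size=30)
treatment = rng.normal(loc=56, scale=10, size=30)

# Step 3: probability of a result at least this extreme if H0 is true (the p-value)
t_stat, p_value = stats.ttest_ind(treatment, control)

# Step 4: reject or fail to reject H0 at alpha = .05
alpha = 0.05
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(f"t = {t_stat:.2f}, p = {p_value:.3f} -> {decision}")
```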
Errors
Type 1 - rejecting the null hypothesis and saying there is an effect when there isn't one
Type 2 - failing to reject the null hypothesis and saying there is no effect when there actually is one
Alpha and beta
Alpha = 0.05 - the conventional probability of making a Type 1 error
Beta = 0.20 - the conventional probability of making a Type 2 error
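A small simulation (illustrative only) showing why alpha = 0.05 is called the Type 1 error rate: when the null hypothesis is true, roughly 5% of tests still come out significant:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_sims, false_positives = 0.05, 10_000, 0

# Simulate many studies where H0 is TRUE (both groups drawn from the same population)
for _ in range(n_sims):
    a = rng.normal(0, 1, 25)
    b = rng.normal(0, 1, 25)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1  # significant despite no real effect = Type 1 error

# The long-run Type 1 error rate should sit close to alpha (~0.05)
print(f"Type 1 error rate: {false_positives / n_sims:.3f}")
```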
Effect size
An attempt to address Type 1 errors; refers to the magnitude of the statistical effect found. Many different effect-size measures are in use (e.g., Cohen's d, Pearson's r).
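As one example, a sketch of Cohen's d for two independent groups (the scores are made up for illustration):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

# Hypothetical groups (simulated numbers, for illustration only)
treatment = np.array([58, 62, 55, 60, 63, 59])
control = np.array([50, 53, 49, 52, 54, 51])
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
# Conventional benchmarks: ~0.2 small, ~0.5 medium, ~0.8 large
```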
Power analysis
An attempt to control Type 2 errors
Tells us the statistical power associated with a particular test
Two approaches to running power analysis: a priori (before data collection, to determine the required sample size) and post hoc (afterwards, to estimate the power achieved) - see the sketch below
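A sketch of both approaches using statsmodels' TTestIndPower (the effect size, alpha, and power values are the conventional defaults, chosen here for illustration):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A priori: required sample size per group for a medium effect (d = 0.5),
# alpha = .05 and desired power = .80
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"A priori: n per group = {n_per_group:.0f}")  # ~64

# Post hoc: the power actually achieved with the sample you ended up with
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=30)
print(f"Post hoc: achieved power = {achieved:.2f}")
```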
Model building (sketched in full after this list)
Pose hypotheses
A priori power analysis
Collect data
Analyse the data:
Calculate descriptive statistics
Identify whether the result is significant
Calculate the effect size
Reject or fail to reject the null hypothesis
Write up
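A compact, illustrative Python sketch of the whole sequence (the "collected" data are simulated, so every number is invented):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(1)

# 1. Pose hypothesis: H1 = the groups differ; H0 = no difference
# 2. A priori power analysis -> required n per group (d = 0.5, alpha = .05, power = .80)
n = int(np.ceil(TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)))

# 3. Collect data (simulated here in place of a real study)
group_a = rng.normal(100, 15, n)
group_b = rng.normal(108, 15, n)

# 4. Descriptive statistics
for name, g in (("A", group_a), ("B", group_b)):
    print(f"Group {name}: M = {g.mean():.1f}, SD = {g.std(ddof=1):.1f}")

# 5. Significance test, then effect size (Cohen's d; pooled SD is simple with equal n)
t, p = stats.ttest_ind(group_a, group_b)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
d = (group_b.mean() - group_a.mean()) / pooled_sd

# 6. Decision, ready to write up
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f} ->",
      "reject H0" if p < 0.05 else "fail to reject H0")
```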
Parametric vs Non-Parametric Statistics
Parametric Statistics
Make assumptions about the data (checked in the sketch below):
Data are normally distributed
Homogeneity of variance
Usually only for ratio/interval data
Used for e.g., group differences in equally sized groups
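A sketch of how these assumptions might be checked with scipy (Shapiro-Wilk for normality, Levene for homogeneity of variance; data simulated for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Two equally sized groups of interval data (simulated for illustration)
group_a = rng.normal(50, 10, 40)
group_b = rng.normal(55, 10, 40)

# Normality: Shapiro-Wilk (p > .05 -> no evidence of departure from normality)
for name, g in (("A", group_a), ("B", group_b)):
    stat, p = stats.shapiro(g)
    print(f"Shapiro-Wilk group {name}: W = {stat:.3f}, p = {p:.3f}")

# Homogeneity of variance: Levene's test (p > .05 -> variances look similar)
stat, p = stats.levene(group_a, group_b)
print(f"Levene: W = {stat:.3f}, p = {p:.3f}")
```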
Non-Parametric Statistics
Make no (or minimal) assumptions about the distribution of the data
Used where the normality assumption is violated (e.g., if data are very skewed or sparse)
Used if you have ordinal data (as in the example below)
Or sometimes where you have small group sizes
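For example, a Mann-Whitney U test (a non-parametric alternative to the independent t-test) on hypothetical ordinal ratings:

```python
from scipy import stats

# Hypothetical ordinal ratings (e.g., 1-5 Likert responses) for two small groups;
# made-up values, for illustration only
group_a = [2, 3, 3, 4, 2, 3]
group_b = [4, 5, 4, 3, 5, 4]

# Mann-Whitney U compares ranks, so it needs no normality assumption
u_stat, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat}, p = {p:.3f}")
```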
Research integrity
Underpinned by four core principles: honesty, accountability, professional courtesy and fairness, and good stewardship
Replication crisis
A report on an attempt to replicate 100 psychological studies found…
Only 36% of studies could be replicated (i.e., found significant results as reported in the original study).
The average effect size of the replications was smaller than the original study (on average half the size of the original effect).
More ‘surprising’ findings were less likely to be successfully replicated.
Social psychology findings were less likely to replicate than those in cognitive psychology.
Embrace the principles of Open Science:
An umbrella term for making all elements of the research process freely and openly available
P-hacking and HARKing (hypothesising after the results are known)
Set out your data analysis plan in advance, and stick to it. If your result is non-significant, don't 'fiddle' with your data to see if that changes the result (see the simulation below)
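An illustrative simulation of one form of p-hacking: when there is no real effect, testing many outcome variables and reporting whichever is significant inflates the false-positive rate far above 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, n_outcomes, hits = 2_000, 10, 0

# Simulate studies with NO real effect, where the analyst "fiddles": tests 10
# outcome variables and reports any that comes out significant
for _ in range(n_sims):
    pvals = [stats.ttest_ind(rng.normal(0, 1, 20), rng.normal(0, 1, 20)).pvalue
             for _ in range(n_outcomes)]
    if min(pvals) < 0.05:
        hits += 1

# With 10 tests per study, expect roughly 1 - 0.95**10 ~ 40% "significant" studies
print(f"Studies reporting a 'significant' effect: {hits / n_sims:.0%}")
```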
Low statistical power
Before starting data collection, use a priori power analysis to determine the required sample size
Replication studies
Given concerns about the reliability of effects reported in the literature, there is a growing move to promote replication studies.