Week 1 (Introduction) Flashcards

1
Q

Survey Vs Experiment

A

Survey
-Measures variables as they naturally occur
-Variables are measured, not manipulated
-Depending on the sampling, findings may be generalised to the wider population

Experiment
-Manipulates variables to isolate their effects and establish causal relationships
-Uses randomisation (participants are equally likely to be given treatment A or B)
-Holds other factors constant, so any differences can be attributed to the experimental manipulation
-Can be within subjects, between subjects, or a mixture of both (mixed designs)

2
Q

Karl Popper: Hypothetico-Deductive Method

A
  1. Theory
  2. Hypothesis
  3. Operationalisation of Concepts
  4. Selection of participants
  5. Survey Studies / Experimental Design
  6. Data Collection
  7. Data Analysis
  8. Findings
3
Q

Systematic Vs Unsystematic Variation

A

Systematic Variation: the variation that can be explained by the model (the statistic); corresponds to the alternative hypothesis (H1)

Unsystematic Variation: the variation that cannot be explained by the model (the statistic); corresponds to the null hypothesis (H0)

4
Q

Test statistic equation

A

Variance explained by the model ÷ Variance not explained by the model

= Effect ÷ Error
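
One familiar instance of this general form is the F-ratio used in ANOVA:

  F = MS(model) ÷ MS(residual) = effect ÷ error

i.e., the mean squares explained by the model divided by the mean squares left unexplained.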

5
Q

One tailed vs Two Tailed Hypothesis

A

-One-tailed: directional (predicts the direction of the effect)
-Two-tailed: non-directional (predicts an effect, but not its direction)

6
Q

Statistical Significance

A

If the probability of obtaining the result, assuming the null hypothesis is true, is less than the alpha level (typically 0.05), then we infer that the result is statistically significant.
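
A minimal sketch of this decision rule in practice, assuming Python with SciPy (the data and variable names are made up for illustration):

  from scipy import stats

  group_a = [5.1, 6.3, 5.8, 7.0, 6.1, 5.5]  # hypothetical scores, condition A
  group_b = [4.2, 4.9, 5.0, 4.4, 5.3, 4.1]  # hypothetical scores, condition B

  # Independent-samples t-test: p_value is the probability of a result at least
  # this extreme if the null hypothesis is true
  t_stat, p_value = stats.ttest_ind(group_a, group_b)

  alpha = 0.05
  if p_value < alpha:
      print(f"p = {p_value:.3f} < {alpha}: significant, reject the null hypothesis")
  else:
      print(f"p = {p_value:.3f} >= {alpha}: not significant, retain the null hypothesis")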

7
Q

Type I Error

A

-Hasty rejection of the null hypothesis (a false positive)
-Concluding there is an effect when in fact there is not one

8
Q

Type II Error

A

-Hasty rejection of the alternative hypothesis, i.e., retention of the null (a false negative)
-Concluding there is no effect when in fact there is one

9
Q

What are the limitations of relying on hypothesis testing?

A

-Focuses on whether the result is statistically significant, but not necessarily on whether it is significant in a broader, practical sense
-Does not give any indication of the size of the statistical effect

10
Q

Alpha

A

-Typical value in psychology is 0.05
-The probability of making a Type I Error (Chance of saying there is an effect when there isn’t one)

11
Q

Beta

A

-Typical value in psychology is 0.20
-The probability of making a Type II Error (Chance of saying there is NO effect when there is one)

12
Q

Importance of Effect Sizes

A

-An attempt to address Type I errors
-If a result is significant, the effect size indicates how large the effect actually is
-An effect can be statistically significant but too small to be practically meaningful
-AKA the magnitude of the statistical effect found
-A variety of effect size measures are used

13
Q

Types of effect size values

A

-Cohen’s d (d)
-Pearson’s r (r)
-Omega (ω)
-Eta-squared (η²)

14
Q

Cohen’s d effect sizes

A

Small effect: 0.2
Medium effect: 0.5
Large effect: 0.8
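
A minimal sketch of how Cohen's d can be computed for two independent groups using the pooled standard deviation, assuming Python (the function name and data are illustrative, not from the course):

  import math

  def cohens_d(group1, group2):
      n1, n2 = len(group1), len(group2)
      mean1, mean2 = sum(group1) / n1, sum(group2) / n2
      # sample variances (N - 1 in the denominator)
      var1 = sum((x - mean1) ** 2 for x in group1) / (n1 - 1)
      var2 = sum((x - mean2) ** 2 for x in group2) / (n2 - 1)
      # pooled standard deviation across the two groups
      pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
      return (mean1 - mean2) / pooled_sd

  d = cohens_d([5.1, 6.3, 5.8, 7.0], [4.2, 4.9, 5.0, 4.4])
  print(round(d, 2))  # compare against the 0.2 / 0.5 / 0.8 benchmarks above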

15
Q

Pearson’s r effect sizes

A

Small effect: 0.1
Medium effect: 0.3
Large effect: 0.5

16
Q

Omega

A

Small effect: 0.1
Medium effect: 0.3
Large effect: 0.5

17
Q

Eta-Squared

A

Small effect: 0.01
Medium effect: 0.059
Large effect: 0.138

18
Q

Power analysis

A

-An attempt to control for Type II errors
-Tells us the statistical power associated with a particular test (the ability of the statistical test to find an effect if there is one to find)

19
Q

Two approaches to running a power analysis

A

A priori
-Conducted before you collect the data and run the analysis
-The result of the power analysis determines the sample size needed to reliably find an effect if there is one to find (see the sketch below)

Post-hoc
-Power is estimated after collecting the data and calculating the inferential statistics (e.g., t-tests, correlations, ANOVA)
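
A minimal sketch of an a priori power analysis for an independent-samples t-test, assuming Python with the statsmodels library; the inputs (d = 0.5, alpha = .05, power = .80) are illustrative:

  from statsmodels.stats.power import TTestIndPower

  analysis = TTestIndPower()
  # Solve for the sample size per group needed to detect a medium effect (d = 0.5)
  # with alpha = .05 and power = .80 (i.e., beta = .20)
  n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                     alternative='two-sided')
  print(round(n_per_group))  # roughly 64 participants per group for these inputs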

20
Q

Mean

A

𝑥̅ = Σx/N
-Add up all the scores and divide by the number of scores/participants
-Gives an indication of the central tendency of the data set
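
A quick worked example with made-up scores:

  Scores: 4, 6, 8, 10 (N = 4)
  𝑥̅ = Σx / N = (4 + 6 + 8 + 10) / 4 = 28 / 4 = 7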

21
Q

Variance

A

s² = Σd² / (N − 1)
-The sum of all the squared differences from the mean (Σd²) divided by the number of scores/participants minus 1 (N − 1)
-This provides an indication of the spread in the data: the higher the variance, the more spread, and the mean may be a poor representation of the data; the smaller the variance, the less spread, so the mean is more reliable

22
Q

Standard Deviation

A

s = √(Σd² / (N − 1))
-The sum of all the squared differences from the mean (Σd²) divided by the number of scores/participants minus 1 (N − 1), then square-rooted
-The square root of the variance; this brings the value back to the original units of measurement, making it easier to interpret
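
A minimal sketch of the mean, variance, and standard deviation in Python, assuming NumPy (ddof=1 gives the N − 1 denominator used above; the scores are made up):

  import numpy as np

  scores = np.array([4.0, 6.0, 8.0, 10.0])

  mean = scores.mean()           # 𝑥̅ = Σx / N = 7.0
  variance = scores.var(ddof=1)  # s² = Σd² / (N − 1) ≈ 6.67
  sd = scores.std(ddof=1)        # s = √(Σd² / (N − 1)) ≈ 2.58

  print(mean, variance, sd)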

23
Q

Parametric VS Non-Parametric statistics

A

Parametric
-Make assumptions about the data
-Normally distributed
-Homogeneity of variance
-Usually only for ratio/interval data
-Typically used for, e.g., group differences in equally sized groups

Non-Parametric
-Make no assumptions about the data
-Used when the normality assumption is violated (e.g., if data are very skewed/sparse)
-Used if you have ordinal data
-Or sometimes where you have small group sizes (see the sketch below)
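
A minimal sketch contrasting a parametric and a non-parametric test of the same group difference, assuming Python with SciPy (the data are made up for illustration):

  from scipy import stats

  group_a = [3, 5, 4, 6, 5, 7]
  group_b = [2, 3, 2, 4, 3, 3]

  # Parametric: independent-samples t-test (assumes normality, homogeneity of
  # variance, interval/ratio data)
  t_stat, p_parametric = stats.ttest_ind(group_a, group_b)

  # Non-parametric alternative: Mann-Whitney U test (rank-based, suitable for
  # ordinal or non-normal data)
  u_stat, p_nonparametric = stats.mannwhitneyu(group_a, group_b, alternative='two-sided')

  print(p_parametric, p_nonparametric)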

24
Q

Four core principles of research integrity

A

-Honesty in all aspects of research
-Accountability in the conduct of research
-Professional courtesy and fairness in working with others
-Good stewardship of research on behalf of others

25
Q

Replication Crisis

A

-A large-scale project attempted to replicate 100 psychological studies and found that:
-Only 36% of the studies could be replicated
-The average effect size of the replications was smaller than in the original studies
-More surprising findings were less likely to be successfully replicated
-Social psychology findings were less likely to replicate than those in cognitive psychology

26
Q

Embracing principles of open science

A

Making all elements of the research process freely and openly available

27
Q

P-hacking

A

Manipulating or repeatedly re-analysing your data to see whether doing so turns a non-significant result into a significant one.

28
Q

Pre-registration