LESSON 3 Flashcards
what are random errors?
Random error – chance fluctuations in our measurements. They obscure results
what are constant/systematic errors?
Constant/systematic error – a bias that influences measurements consistently in one direction. They create bias in results.
what are the main 5 threats to internal validity?
selection, history, maturation, instrumentation, reactivity
IV threats: what is selection?
Bias arising from how ppt are assigned to the different conditions of the IV
Means that ppt assigned to different levels of the IV may differ systematically from ppt in other conditions, and these differences (rather than the IV) may influence the measurement of the DV
This is a key issue for quasi-experiments
IV threats: what is history?
Uncontrolled events that occur between testing occasions which may influence the DV (other than the IV)
IV threats: what’s maturation?
Changes in the characteristics of ppt between the test occasions, e.g. ppt getting older between conditions (i.e. in a longitudinal study), or ppt memory deteriorating between testing occasions in a memory study
IV threats: what is instrumentation?
Changes in sensitivity/reliability of measuring instruments during the study
IV threats: what is reactivity?
Ppt awareness that they are being observed may influence their behaviour
(demand characteristics – when ppt believe they are expected to act in a certain way; experimenter bias – when experimenters expect to see something and this influences their behaviour)
How to counter reactivity – single/double blind procedures
what is the difference between reliability and validity?
Reliability = consistency
Test this through repeating measurements/study
Validity = truthfulness
Test this by operationalising variables and using controlled experiments
what are the main 4 ways researchers measure reliability?
test-retest reliability, inter-rater reliability, parallel forms reliability, internal consistency
measuring reliability: what are parallel forms reliability measurements?
If we administer different versions of our measure (e.g. different types of IQ test) to the same ppt, would we get the same results?
Different versions can be useful to help eliminate memory effects as the questions are different
measuring reliability: what is internal consistency?
Determines whether all items/questions (e.g questionnaire) are measuring the same thing/construct
This can be assessed through split-half reliability (questionnaire items split into two groups and the halves administered to ppt on different occasions, e.g. even questions vs odd questions – these should produce similar results!)
what are the 4 types of validity?
face validity, content validity, criterion validity, construct validity
what is face validity?
Is the test measuring what it is supposed to measure, at face value?
e.g do the questions on a test reflect the knowledge ppt should have learnt?
what is content validity?
Does it measure the construct fully?
e.g does the test cover all expected knowledge and not just part of it
what is criterion validity?
Does the measure give results which are in agreement with other measures of the same thing?
HOW TO MEASURE THIS:
Concurrent validity – comparison of new test with established test
Predictive validity – does the test predict outcome on another variable
what is construct validity?
Is the construct we are trying to measure valid, and does it actually exist?
The validity of a construct is supported by collected research over time
HOW TO MEASURE THIS:
Convergent validity – correlates with tests of the same and related constructs (e.g satisfaction measures and contentment measures should relate to measures of happiness)
Discriminant validity – the construct shouldn’t correlate with tests of different/unrelated constructs (e.g measurements of sadness shouldn’t correlate with measures of happiness)
what is the difference between something being ‘sufficient’ and something being ‘necessary’? (true causation)
- Sufficiency and necessity – criteria a variable must meet to support claims about causality
- Sufficient: y is adequate to cause x
- Necessary: y must be present to cause x
for true causation to be established the variable must be…
both necessary and sufficient to cause a change
what is cluster sampling?
Researcher samples an entire group/cluster from the population of interest
- Often used for efficiency/practical reasons
- Generalisability issues as the cluster may not accurately represent the entire population
what is snowball sampling?
Recruit a small number of ppt and then use those initial contacts to recruit further ppt
Biased sample (you get a certain kind of person), but it is useful when looking for specific or difficult-to-access populations
what are the two main concerns for external validity?
- POPULATION VALIDITY – is the sample representative?
- ECOLOGICAL VALIDITY – does the behaviour measured reflect naturally occurring behaviour?
what are factors that must be considered when deciding the sample size?
- Design (between or within design; number of conditions – more conditions = more ppt needed)
- Response rate (ppt may drop out, not all may contribute)
- Heterogeneity of population (is a small sample enough, or does it need to be larger in order to be representative?)
what problems are associated with repeated measures designs relative to independent group designs?
there is an increased likelihood of fatigue effects - ppt may become fatigued by the second occasion of participation
there is an increased likelihood of reactivity - ppt are more likely to become aware of the study's aims by taking part in both conditions
give an example of conceptual replication
using previously reported methods, collect a new dataset but recruit a younger sample, then generate an operationalised hypothesis to test empirically
how can a researcher ensure that observers' ratings have stayed consistent over time?
test-retest reliability
factorial designs always:
- contain at least…
- the IVs always have…
contain at least 2 IVs
each IV has at least 2 levels
what is an operational definition?
A variable which has been operationalised by a researcher in order to measure it, e.g. frustration measured by bite marks on a pencil
what is the principle of induction?
Causation - if A is often observed with B it is probable that on the next occasion A is observed, B will be too
what is a key issue with the principle of induction?
we can’t be certain we have considered every single instance of a phenomenon, or the full range of possible conditions, so induction cannot be relied on with certainty
what is the difference between replication and reproduction in psychological testing?
Replication - means obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data
reproduction - consistent results using the same input data, computational steps, methods, code, and conditions of analysis.
what is the difference between direct and conceptual replications in psychological testing?
direct replications - attempt to confirm original findings using same methods
conceptual replications - attempt to confirm original theoretical ideas by repeating across different conditions