Error & Control Flashcards
What are the 2 categories of measurement error?
- Random errors obscure the results
- Constant/systematic errors bias the results - much worse
What are Extraneous Variables?
Undesirable variables that add error to our experiments (measurement of the DV)
How are Extraneous Variables controlled?
Random allocation / counterbalancing - these add error variance, but evenly.
Results in an even addition of error variance across levels of the IV (random rather than systematic error).
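Random allocation can be sketched as shuffling the participant pool and dealing it round-robin into conditions, so extraneous variation is spread evenly across IV levels (the function and condition names here are illustrative, not from the cards):

```python
import random

def randomly_allocate(participants, conditions, seed=None):
    """Shuffle participants, then deal them round-robin into conditions,
    spreading extraneous variation evenly across levels of the IV."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = randomly_allocate(range(20), ["control", "treatment"], seed=1)
print({c: len(ps) for c, ps in groups.items()})  # 10 per condition
```

Because membership is random, any extraneous variable is equally likely to land in either level, turning its influence into random rather than systematic error.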
What are Confounding Variables?
Disproportionately affect one level of the IV more than other levels.
- Add constant/systematic error at the level of the IV.
Confounding variables are a threat to the internal validity of experiments.
Threats to Internal Validity (sources of confounding variables):
- Selection
- History
- Maturation
- Instrumentation
- Reactivity
How is Selection a Threat to Internal Validity?
Bias resulting from the selection or assignment of participants to different
levels of the IV.
Random assignment - solves this problem.
How is History a threat to Internal Validity?
Uncontrolled events that take place between testing occasions.
How is Maturation a threat to Internal Validity?
Intrinsic changes in characteristics of participants between different test occasions, in repeated measures design.
Counterbalancing the order of conditions - solves this problem.
How is Instrumentation a threat to Internal Validity?
Changes in the sensitivity or reliability of measurement instruments during the course of the study
How is Reactivity a threat to Internal Validity?
Ps' awareness that they are being observed may alter their behaviour.
Can threaten internal validity if Ps are more influenced by reactivity at one level of the IV than the other.
Counteracted by Blind Procedures.
Define Subject-related reactivity - Demand Characteristics.
Ps might behave in the way they think the researcher wants them to behave.
Define Experimenter-related reactivity - Experimenter Bias.
Experimenter can affect outcomes due to their own bias.
4 forms of reliability.
- Test-retest reliability
- Inter-rater (test-rater) reliability
- Parallel forms reliability
- Internal consistency
> Split-Half Reliability
Define Test-retest Method of reliability
It assesses the external consistency of a test.
Measures fluctuations from one time to another.
Important for constructs which we expect to be stable (e.g. personality type)/ similar over time points.
Define Inter-rater (test-rater) reliability
It assesses the external consistency of a test.
Measures fluctuations between observers (the degree to which different raters give consistent estimates of the same behaviour)
Important when results depend on each experimenter's objectivity.
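Agreement between two raters on categorical codes is often quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch (the rater data is invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement: probability both raters pick the same code at random
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["yes", "yes", "no", "yes", "no", "no"]
b = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 2))  # → 0.33
```

A kappa of 1 means perfect agreement; 0 means agreement no better than chance.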
Define External Reliability (Parallel forms reliability)
The extent to which a measure varies from one use to another.
Define Internal Reliability
Extent to which a measure is consistent with itself
What is the Split-Half Method of Reliability?
It assesses the internal consistency of a test.
Measures the extent to which all parts of the test contribute equally to what is being measured.
Define Content Validity.
Is the content of the test appropriate to what it claims to measure?
Define Face Validity
Based on subjective judgement: does the test appear, on the face of it, to measure what it is supposed to measure?
Define Construct Validity.
Does the test relate to underlying theoretical concepts? Is the construct we are trying to measure valid?
The validity of a construct is supported by cumulative research evidence collected over time.
Define Convergent validity.
Convergent validity: correlates with tests of the same and related constructs.
Define Discriminant validity.
Discriminant validity: doesn’t correlate with tests of different or unrelated constructs.
Define Internal Validity.
The extent to which the manipulation of our IV caused the change to our DV.
Define Criterion Validity.
Looks at the degree to which the measurement agrees with other measurements that are assessing the same construct.
Define Concurrent Validity.
There is a high correlation between two measures of the same thing administered at the same time.
Define Predictive Validity.
The scale has the ability to predict some other variable (a criterion) measured in the future.
Define Divergent Validity.
Is there a lack of correlation with measures of different and unrelated constructs?
Which 6 forms of validity are used to establish construct validity over time?
- Content
- Face
- Criterion
- Concurrent
- Predictive
- Construct
Which 2 forms of validity are used to establish construct validity in the short term?
Convergent
Divergent
Define Causation
Sufficiency and Necessity are needed.
Sufficient: Y is adequate to cause X.
Necessary: Y must be present to cause X.
Define True Causation.
True causation can only be established when necessity and sufficiency criteria are satisfied
The manipulation of the IV, in the absence of all other factors, will always result in the DV change (sufficient)
Define Multifactorial Causation.
Phenomenon is determined by many interacting factors
What is a Random Sample?
- the gold standard
- each member of the population has an equal chance of being selected
- usually quasi-random
What is a Systematic Sample?
- draw from the population at fixed intervals
- problematic in populations with a periodic structure (the fixed interval may align with the pattern, biasing the sample)
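The systematic draw can be sketched as picking a random start in the first interval and then taking every k-th member (the frame and interval here are illustrative):

```python
import random

def systematic_sample(frame, k, seed=None):
    """Random start within the first interval, then every k-th member.
    If the frame repeats with period k, the sample is biased."""
    start = random.Random(seed).randrange(k)
    return frame[start::k]

frame = list(range(100))  # a sampling frame of 100 members
sample = systematic_sample(frame, k=10, seed=0)
print(len(sample))  # → 10
```

The random start keeps every member's inclusion probability at 1/k, but members k positions apart are always selected together, which is where the periodicity problem comes from.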
What is a Stratified Sample?
Proportional: specified groups appear in numbers proportional to their size in the population
Disproportional: specified groups which are not equally represented in the population, are selected in equal proportions.
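Proportional stratified sampling can be sketched as sampling from each stratum in proportion to its share of the population (the population and strata below are invented for illustration):

```python
import random

def proportional_stratified_sample(population, stratum_of, n, seed=None):
    """Sample each stratum in proportion to its size in the population."""
    rng = random.Random(seed)
    strata = {}
    for member in population:
        strata.setdefault(stratum_of(member), []).append(member)
    sample = []
    for members in strata.values():
        share = round(n * len(members) / len(population))
        sample.extend(rng.sample(members, share))
    return sample

# 70 undergraduates, 30 postgraduates; a sample of 10 keeps the 70/30 split
population = [("UG", i) for i in range(70)] + [("PG", i) for i in range(30)]
sample = proportional_stratified_sample(population, lambda m: m[0], n=10, seed=0)
print(sum(1 for s in sample if s[0] == "UG"),
      sum(1 for s in sample if s[0] == "PG"))  # → 7 3
```

A disproportional design would instead draw equal numbers from each stratum regardless of its population share.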
What is a Cluster sample?
The researcher samples an entire group or cluster from the population of interest.
What is an Opportunity/Convenience Sample?
- people who are easily available
- but can lead to a biased sample
What is a Snowball Sample?
- recruit small number of participants and then use those initial contacts to recruit further participants
- biases the sample, but useful if you want to recruit very specific populations.
What is Population Validity?
Is our sample representative of the population?
What is Ecological Validity?
Does the behaviour measured reflect naturally occurring behaviour?