4: Validity Reliability Flashcards

1
Q

What contributes to soundness of experiments?

A

precision

accuracy

sensitivity

specificity

reliability

validity

2
Q

what is precision?

A

Consistency, reliability, homogeneity of the data

3
Q

what is accuracy?

A

Based on the precision of the measurements. Good accuracy if averaged values ≈ standard. Standards are rare in the biological sciences (behavioural or neurosciences). In psychophysics (Signal Detection Theory), accuracy = specificity + sensitivity
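
A minimal sketch of the Signal Detection Theory relationship above (Python and the function name are assumptions; the cards contain no code), computing sensitivity and specificity from detection counts:

```python
def sensitivity_specificity(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    from signal-detection counts; the counts below are hypothetical."""
    sensitivity = hits / (hits + misses)                                     # detects the signal when present
    specificity = correct_rejections / (correct_rejections + false_alarms)  # rejects noise when signal absent
    return sensitivity, specificity

# Hypothetical counts from a detection task
print(sensitivity_specificity(40, 10, 5, 45))  # (0.8, 0.9)
```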

4
Q

what is sensitivity?

A

Measures should be sensitive enough to detect differences in a characteristic that are important to the investigator

5
Q

what is specificity?

A

Measures should be specific to the characteristics, group, phenomenon, etc. investigated

6
Q

what is reliability?

A

Consistency of the measures. Precision

7
Q

what is validity?

A

Does a variable represent what it is intended to represent?

8
Q

what are nuisance variables?

A

We hinted at this before with the concept of “distortion”

Two types:
Systematic error or bias
Random error (or error variance)

9
Q

what is influenced by Systematic error or bias?

A

accuracy

10
Q

what type of variables are a source of bias?

A

extraneous/confounded variables

11
Q

Types of biases/systematic errors and their solutions

A

Observer/Experimenter » Blinding procedures

Subject/Participant » Blinding / unobtrusive measures

Apparatus » Calibration

12
Q

what is random error?

A

The precision of measurements (and therefore their consistency and reliability) is influenced by random error (unpredictable fluctuations)

13
Q

sources of random error

A

Random error is due to random fluctuations in participants, experimental conditions, methods of measurement, etc.

Main sources of random error:
*Observer / experimenter reliability
*Participant / subject reliability
*Instrument / apparatus reliability

14
Q

contributors to precision

A

Calibration of an apparatus

Consistency of a participant

Environmental and other factors

Archery example: Bow/sight, archer, wind

15
Q

How to assess precision?

A

Measures of variability (descriptive statistics)

measures of concordance

16
Q

what are measures of variability

A

Standard deviation (sd) of repeated measurements.

Coefficient of variation (cv): (sd ÷ mean) × 100
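
A minimal sketch of both measures (Python is an assumption; the readings are hypothetical):

```python
import statistics

def coefficient_of_variation(measurements):
    """CV (%) = (sample standard deviation / mean) * 100 over repeated measurements."""
    return statistics.stdev(measurements) / statistics.mean(measurements) * 100

# Hypothetical repeated readings of the same quantity
readings = [9.8, 10.1, 10.0, 9.9, 10.2]
print(round(coefficient_of_variation(readings), 2))  # ~1.58 (%)
```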

17
Q

what are measures of concordance?

A

Correlation coefficient: consistency of results of paired measurements. The coefficient of correlation is an index of concordance
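
A minimal sketch (Python 3.10+ is an assumption; the paired measurements are hypothetical) of the correlation coefficient as an index of concordance:

```python
from statistics import correlation  # Pearson's r; available from Python 3.10

# Hypothetical paired measurements: the same five samples measured twice
first_run  = [2.1, 3.4, 5.0, 6.2, 7.9]
second_run = [2.0, 3.6, 4.9, 6.5, 7.7]

# Pearson's r as an index of concordance between the paired measurements
print(round(correlation(first_run, second_run), 3))
```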

18
Q

what is reliability?

A

Consistent results over repeated measurements. Reliability refers to the PRECISION of your measures

19
Q

assessment methods

A

Test-retest reliability/consistency: Stability of test scores over time.

Alternative (parallel) forms reliability/consistency: e.g., the recognition/recall example with tests.

Internal consistency: How consistent is the measure across items intended to measure the same concept, e.g., split-half reliability/consistency » use of two lists in a memory test (see the sketch after this list).

Inter-rater reliability: see next slide.

In some cases: Intra-rater reliability
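
A minimal sketch of split-half consistency (Python is an assumption and the item scores are hypothetical): score two halves of the same test and correlate the half-scores.

```python
from statistics import correlation  # Python 3.10+

def split_half(scores_per_item):
    """Split items into odd/even halves, sum each half per participant,
    and correlate the half-scores (split-half consistency)."""
    odd  = [sum(items[0::2]) for items in scores_per_item]
    even = [sum(items[1::2]) for items in scores_per_item]
    return correlation(odd, even)

# Hypothetical 0/1 item scores for four participants on an 8-item test
data = [
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0, 0, 1],
]
print(round(split_half(data), 2))
```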

20
Q

what is Inter-observer/rater consistency or reliability?

A

Consistency of recording and scoring between ALL OBSERVERS.

Assessed with an inter-observer reliability measure such as an index of concordance, kappa coefficient, or Kendall coefficient
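
A minimal sketch of Cohen's kappa for two observers coding the same trials (Python is an assumption; the codings are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters, corrected for
    the agreement expected by chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(counts_a) | set(counts_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / n**2
    return (observed - expected) / (1 - expected)

# Two observers coding the same 10 trials as "groom" or "rest" (hypothetical)
a = ["groom", "groom", "rest", "rest", "groom", "rest", "groom", "rest", "groom", "rest"]
b = ["groom", "groom", "rest", "groom", "groom", "rest", "groom", "rest", "rest", "rest"]
print(round(cohens_kappa(a, b), 2))  # 0.6
```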

21
Q

what is Intra-observer/rater consistency or reliability?

A

Each observer, individually, records, interprets, or identifies SIMILAR behaviours or events the SAME WAY.

Assessed with an intra-observer reliability measure

22
Q

what does intra mean

A

within

23
Q

what does inter mean

A

between

24
Q

what is validity about?

A

Validity is about the threats to valid inference-making: does the procedure you chose measure what it is intended to measure?

25
Q

what are the four main types of threats to validity

A

construct validity

statistical conclusion validity

internal validity

external validity

26
Q

what is construct validity

A

The wrong independent variables are identified

27
Q

what is statistical conclusion validity

A

Random error, wrong selection of statistical tests, low power, violation of statistical assumptions, fishing, etc.

28
Q

types of validity of a measurement

A

face validity

content validity

construct validity

criterion validity

29
Q

what is face validity

A

How well the test appears to measure what it is designed to measure. It is a plausible measure of the variable we want to estimate. Face value. Non-scientific. E.g.: Common sense definition of stress.

30
Q

what is content validity

A

How adequately the measure addresses the representativeness of the measured event or phenomenon as a whole (i.e., represents the whole content). Expert opinion can determine this type of validity. E.g., you lack content validity if you want to measure stress, but you only take behavioural measures and no physiological measures (or vice versa)

31
Q

what is construct validity

A

A measure of how well a test and operational definition assess some underlying (theoretical) construct or variable. Depends heavily on the operational definitions, e.g., “stress”. The measurement procedure and the variable it measures are in agreement. E.g., an assay of glucocorticoids suggesting that high levels of cortisol are associated with highly stressful situations

32
Q

what is criterion validity

A

The ability of a measure to assess (or predict) an outcome or criterion. Performance measures

33
Q

what are the subtypes of criterion-related validity

A

concurrent validity

convergent/divergent validity

discriminant validity

predictive validity

34
Q

what is concurrent validity

A

A measure of how well an assay estimates a criterion/performance in relation to another (concurrent) phenomenon or group of subjects at the same point in time. A new test or assay is validated as it concurs with an older, better established one.

35
Q

what is convergent/ divergent validity

A

Two or more methods of measurement converge upon (or diverge from) one another. A strong relationship between the scores is found. Can be established by correlation

36
Q

what is discriminant validity

A

The methods of measurement diverge from one another, and the divergence is expected. A measure of stress should not be expected to be highly correlated with a measure/construct of empathy

37
Q

what is predictive validity

A

A measure of how well an assay predicts a phenomenon on a time criterion, e.g., pre/post. The measure predicts future states.

38
Q

what is the relationship between validity and reliability

A

A measure can have high reliability but low validity.

A measure cannot be more valid than it is reliable

39
Q

types of validity and what they mean

A

Internal validity

External validity

40
Q

what is internal validity and what does that mean

A

Does the experiment measure what it is supposed to measure?

Associated with the criteria for ultimate (analytic) experiments (i.e., fully “experimental”).

  • No confounded variables
  • Controlled variables… are controlled
  • Appropriate control group(s)
  • Random assignment (randomization); see the sketch after this list
  • Random selection (sampling) ~ preferable, but rarely attained
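
A minimal sketch of random assignment of subjects to conditions (Python is an assumption; the subject IDs and group names are illustrative):

```python
import random

def random_assignment(subjects, groups=("control", "treatment"), seed=None):
    """Shuffle subjects, then deal them into groups in round-robin order
    so group sizes stay as equal as possible."""
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    return {subject: groups[i % len(groups)] for i, subject in enumerate(shuffled)}

# Hypothetical subjects assigned to two conditions
print(random_assignment(["s1", "s2", "s3", "s4", "s5", "s6"], seed=1))
```
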
41
Q

what is external validity and what does that mean

A

Generalization potential, or generalizability of the data:
* Species
* Environments
* Cultures
* Age groups
* Conditions, etc.

Determines the applications and implications of an experiment.

42
Q

what is the criteria for external validity

A

population selection

operational definitions

parameter values

demand characteristics

43
Q

what is population selection

A

Converging evidence (from different populations) and representativeness of the sample

44
Q

what are operational definitions

A

Agreement on definitions. For example, “stress”

45
Q

what are parameter values

A

The values you select for each variable in your experiment should be well defined. Applies to control variables and independent variables

46
Q

what are demand characteristics

A

Cues in a research procedure that influence the behaviour of subjects should be absent or minimized.

They also have the potential to influence internal validity

47
Q

what is ecological validity ( case of external validity)

A

Related to external validity (generalizes/applies well to other people, settings, conditions, etc.).

Are experiments done in the laboratory generalizable to the “real world”?

Not a central concern of neuroscience (in general). Technological limitations and constraints.

48
Q

what are mediator variables

A

Provides a causal link in a sequence between an IV and a DV. Answers the WHY?

49
Q

what are moderator variables

A

Modulates the strength or direction of the relation between an IV and a DV. Answers the WHEN, and for WHOM or WHAT?