# Reliability and Validity Flashcards

1
# Define Alternate forms reliability
Two forms of the same test are developed, with different items selected according to the same rules. The distributions of scores may differ (means and variances may not be equal)
2
# Define Base rate
the proportion of individuals in the population who show the behaviour of interest in a given psychological testing or assessment situation
3
# Define Classical test theory
a body of related psychometric theory that predicts outcomes of psychological testing such as the difficulty of items or the ability of test-takers
4
# Define Concurrent validity
a form of predictive validity in which the index of social behaviour is obtained close in time to the score on the psychological test (or other assessment device)
5
# Define Construct underrepresentation
failure to capture important components of a construct
6
# Define Construct validity
the meaning of a test score made possible by knowledge of the pattern of relationships it enters into with other variables and the theoretical interpretation of those relationships
7
# Define Construct-irrelevant variance
measuring things other than the construct of interest
8
# Define Content validity
the extent to which items on a test represent the universe of behaviour the test was designed to measure
9
# Define Convergent and discriminant validity
the subjection of a multitrait-multimethod matrix to a set of criteria that specify which correlations should be large and which small in terms of a psychological theory of the constructs
10
# Define Criterion-related validity
the extent to which a measure is related to an outcome (e.g. marks in Year 12 are used to predict performance at university)
11
# Define Cronbach's alpha
an estimate of reliability that is based on the average intercorrelation of the items in a test
12
# Define Cutting point
the test score (or point on a scale, in the case of another assessment device) that is used to split those being tested or assessed into two groups predicted to show or not show some behaviour of interest
13
# Define Domain-sampling model
a way of thinking about the composition of a psychological test that sees the test as a representative sample of the larger domain of possible items that could be included in the test
14
# Define Equivalent forms reliability
the estimate of reliability of a test obtained by comparing two forms of a test constructed to measure the same construct
15
# Define Errors of measurement
factors that contribute to inconsistency: characteristics of the test taker, test, or situation that have nothing to do with the attribute being tested but affect scores
16
# Define Face validity
Does the test look like it measures the relevant construct?
17
# Define Factor analysis
a mathematical method of summarising a matrix of values (such as the intercorrelations of test scores) in terms of a smaller number of values (factors) from which the original matrix can be reproduced
18
# Define False negative decision
a decision that incorrectly allocates a test taker or person being assessed to the category of those predicted not to show some behaviour of interest on the basis of their score on a test or other assessment device
19
# Define False positive decision
a decision that incorrectly allocates a test taker or person being assessed to the category of those predicted to show some behaviour of interest on the basis of their score on a test or other assessment device
20
# Define Generalisability theory
a set of ideas and procedures that follow from the proposal that the consistency or precision of the output of a psychological assessment device depends on specifying the desired range of conditions over which this is to hold
21
# Define Incremental validity
the extent to which knowledge of the score on a test (or other assessment device) adds to that obtained from another, pre-existing score or psychological characteristic
22
# Define Inter-rater reliability
the extent to which different raters agree in their assessments of the same sample of ratees
23
# Define Internal consistency
the extent to which a psychological test is homogeneous or heterogeneous
24
# Define Kuder-Richardson 20 (KR20)
a particular case of Cronbach's alpha for dichotomously scored items (i.e. scored as 0 or 1)
25
# Define Method variance
the variability among scores on a psychological test or other assessment device that arises because of the form as distinct from the content of the test
26
# Define Multitrait-multimethod matrix
the patterns of correlations resulting from testing all possible relationships among two or more methods of assessing two or more constructs
27
# Define Parallel forms reliability
Two forms of the same test are developed, with different items selected according to the same rules. The distributions of scores are the same (means and variances equal)
28
# Define Predictive validity
the extent to which a score on a psychological test (or other assessment device) allows a statement about standing on a variable indexing important social behaviour independent of the test
29
# Define Reliability
the consistency with which a test measures what it purports to measure in any given set of circumstances
30
# Define Reliability coefficient
an index - often a Pearson product-moment correlation coefficient - of the ratio of true score variance to observed score variance in a test as used in a given set of circumstances
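In classical test theory terms this ratio is written as:

```latex
r_{XX'} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}
```

where $\sigma^2_T$, $\sigma^2_E$, and $\sigma^2_X$ are the true score, error score, and observed score variances respectively.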
31
# Define Selection ratio
the proportion of those tested or assessed who can be allocated to the category of showing the behaviour of interest in a given psychological testing or assessment situation
32
# Define Social desirability bias
a form of method variance common in the construction of psychological tests of personality that arises when people respond to questions that place them in a favourable or unfavourable light
33
# Define Spearman-Brown formula
applied to estimate what the reliability would be if each half of the test were the same length as the full test; i.e. it allows you to estimate the internal consistency of a test as if it were longer or shorter
34
# Define Split-half reliability
the estimate of reliability obtained by correlating scores on the two halves of a test formed in some systematic way (e.g. odd versus even items)
35
# Define Stability over time
the extent to which test scores remain stable when a test is administered on more than one occasion
36
# Define Standard error of estimate
an index of the amount of error in predicting one variable from another
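For a simple linear prediction of a criterion $Y$ from a predictor $X$, the standard formula is:

```latex
SE_{\text{est}} = \sigma_Y \sqrt{1 - r_{XY}^2}
```

where $\sigma_Y$ is the standard deviation of the criterion and $r_{XY}$ is the validity coefficient.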
37
# Define Standard error of measurement
an expression of the precision of an individual test score as an estimate of the trait it purports to measure
38
# Define Test-Retest Reliability
the estimate of reliability obtained by correlating scores on the same test administered to the same group on two separate occasions
39
# Define True scores
factors that contribute to consistency: the stable attributes under examination
40
# Define Valid negative decision
a decision that correctly allocates a test taker or person being assessed to the category of those predicted not to show some behaviour of interest on the basis of their score on a test or other assessment device
41
# Define Valid positive decision
a decision that correctly allocates a test taker or person being assessed to the category of those predicted to show some behaviour of interest on the basis of their score on a test or other assessment device
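The four decision outcomes defined above can be summarised in a single table of predictions against actual behaviour:

| | Shows the behaviour of interest | Does not show the behaviour |
| --- | --- | --- |
| Predicted to show it | Valid positive decision | False positive decision |
| Predicted not to show it | False negative decision | Valid negative decision |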
42
# Define Validity
the extent to which evidence supports the meaning and use of a psychological test (or other assessment device)
43
What is reliability?
* The degree to which a test or assessment tool provides consistent results
* A test is considered reliable when it produces the same results again and again when measuring the same thing
44
What is validity?
Validity can be broadly understood as the extent to which a test measures the construct it is intended to measure
45
John is feeling unwell and visits his GP. The GP orders some blood tests, the results of which indicate that John is iron deficient. The doctor prescribes iron supplements, which John immediately starts taking as prescribed. After a few weeks he returns to the doctor to repeat the blood tests. The results indicate that his iron levels have reduced! What might be happening?
* The test may have poor validity (i.e. it is measuring some other variable)
* The test may have poor reliability (i.e. when repeated, the test often shows different results)
46
Why are reliability and validity important?
* Diagnosis
* Assessment of ability
* Treatment
* Decisions around recommending treatment
* Monitoring treatment outcomes (e.g. reliability is really important if you are repeating tests to see whether the treatment is working)
* The conclusions you can draw rely on the reliability and validity of the tests/assessments you are using
* Important clinically and in research
47
True or False: A test can be reliable without being valid
True
48
True or False: A test can be valid without being reliable
False. A test cannot be valid without being reliable.
49
According to Classical Test Theory, what are test scores the result of?
* Factors that contribute to consistency: the stable attributes under examination ("_True Scores_")
* Factors that contribute to inconsistency: characteristics of the test taker, test, or situation that have nothing to do with the attribute being tested but affect scores ("_Errors of Measurement_")
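In standard classical test theory notation, these two components are written as:

```latex
X = T + E, \qquad \sigma^2_X = \sigma^2_T + \sigma^2_E
```

where $X$ is the observed score, $T$ the true score, and $E$ the error of measurement; reliability is then the proportion of observed score variance that is true score variance, $\sigma^2_T / \sigma^2_X$.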
50
What are some common sources of error on a test?
* Item selection
* Test administration
* Test scoring
* Systematic measurement error
51
How is item selection a potential source of error?
The sample of items chosen may not be equally reflective of every individual's true score.
52
How is test administration a potential source of error?
* General environmental conditions, e.g. temperature, lighting, noise
* Temporary "states" of the test taker, e.g. fatigue, anxiety, distraction (e.g. completing an IQ test in a loud, noisy room)
* The examiner providing non-standardised instructions
53
Domain Sampling Theory considers the problem of using only a ________ of items to represent a construct
Domain Sampling Theory considers the problem of using only a **_sample_** of items to represent a construct
54
If the same test is administered to the same group twice at two different times, why might the scores not be the same?
* Poor test-retest reliability
* Practice effects
* Maturation
* Treatment effects or changes in setting
55
Which of these would test-retest be appropriate for? * State anxiety * Weight of a baby * Extraversion * Intelligence
Which of these would test-retest be appropriate for? * State anxiety * Weight of a baby * **Extraversion** * **Intelligence**
56
How do you maximise test-retest reliability?
* Test a relatively stable construct
* No intervention in between testing
* Shorter time between testing
57
What is the difference between parallel and alternate forms reliability?
They both involve two forms of the same test, developed with different items selected according to the same rules.
**Parallel forms:** same distribution of scores (means and variances equal)
**Alternate forms:** different distribution of scores (means and variances may not be equal)
58
What is the split-half method?
* The test is split into halves (randomly, by an odd-even system, or top vs bottom)
* The two halves are correlated
* The estimate of reliability based on a split half is smaller due to the smaller number of items
* The Spearman-Brown formula is applied to estimate what the reliability would be if each half were the same length as the full test
* i.e. it allows you to estimate the internal consistency of the test as if it were longer or shorter (see the formula below)
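The standard Spearman-Brown prophecy formula, where $r$ is the reliability of the existing test and $k$ is the factor by which its length is changed (the split-half case is $k = 2$):

```latex
r_{kk} = \frac{k\,r}{1 + (k - 1)\,r}, \qquad r_{\text{full}} = \frac{2\,r_{\text{half}}}{1 + r_{\text{half}}}
```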
59
What is the rationale for the split-half method?
If scores on two half-tests from a single administration are highly correlated, scores on two whole tests from separate administrations should also be highly correlated.
60
What is Cronbach's alpha?
A generalised reliability coefficient for scoring systems that are graded for each item (e.g. rated from agree to disagree)
* The mean of all possible split-half correlations, corrected by the Spearman-Brown formula
* Ranges from 0 (no similarity) to 1 (perfectly identical)
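The usual computational form, where $k$ is the number of items, $\sigma^2_i$ the variance of item $i$, and $\sigma^2_X$ the variance of total scores:

```latex
\alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_i}{\sigma^2_X}\right)
```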
61
What are considered acceptable levels of reliability?
Depends on the purpose to some extent
* .70-.80 is acceptable or good
* Greater than .90 may indicate redundancy in the items
* High reliability is really important in clinical settings when making decisions for a person (e.g. a decision-making capacity assessment)
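As a quick illustration of how such a coefficient is computed in practice, here is a minimal Python sketch of Cronbach's alpha; the function name and the rating data are hypothetical, made up purely for this example:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 respondents x 4 items on a 1-5 agree-disagree scale
ratings = [[4, 5, 4, 4],
           [2, 3, 2, 3],
           [5, 5, 4, 5],
           [3, 3, 3, 2],
           [4, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # compare against the .70-.80 rule of thumb
```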
62
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_: a particular case of Cronbach’s alpha for dichotomously scored items (i.e. scored as 0 or 1)
**_Kuder-Richardson 20 (KR20)_**: a particular case of Cronbach’s alpha for dichotomously scored items (i.e. scored as 0 or 1)
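The standard formula, where $p_j$ is the proportion scoring 1 on item $j$, $q_j = 1 - p_j$, $k$ is the number of items, and $\sigma^2_X$ is the variance of total scores:

```latex
KR_{20} = \frac{k}{k - 1}\left(1 - \frac{\sum_{j=1}^{k} p_j q_j}{\sigma^2_X}\right)
```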
63
The __________ the SEM, the less certain we are that the test score represents the true score.
The **_larger_** the SEM, the less certain we are that the test score represents the true score.
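The standard formula, where $\sigma_X$ is the standard deviation of test scores and $r_{XX'}$ is the reliability coefficient:

```latex
SEM = \sigma_X \sqrt{1 - r_{XX'}}
```

A commonly used approximation places a 95% confidence band of roughly $X \pm 1.96 \times SEM$ around an observed score.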
64
How do you maximise reliability?
* Clear administration and scoring instructions for the test user
* Clear instructions for the test taker
* Unambiguous test items
* Standardised testing environment and procedure
* Reduced time between testing sessions
* Increased assessment length/number of items
* Test try-out and modification
* Discarding items that decrease reliability (item analysis)
* Maximise VALIDITY
65
Draw a diagram that demonstrates the different types of validity
Validity branches into face validity, content validity, criterion-related validity (concurrent and predictive) and construct validity (convergent and discriminant).
66
What are the four main types of validity?
* Face validity
* Content validity
* Criterion-related validity
* Construct validity
67
What are some issues for content validity?
* Construct underrepresentation: failure to capture important components of a construct
  * e.g. a depression scale that assesses cognitive and emotional components of depression, but not behavioural components
* Construct-irrelevant variance: measuring things other than the construct of interest
  * e.g. the wording of our depression scale may make it likely that people will respond in socially desirable ways
68
What are some examples of criterion-related validity?
* e.g. marks in Year 12 are used to predict performance at university
* e.g. a marital satisfaction survey is used to predict divorce
* e.g. scores on an anxiety scale you developed are correlated with clinical observations
69
What is an example of concurrent validity?
A test designed to measure anxiety may be administered in conjunction with a diagnostic interview by an experienced clinician using the DSM-5. The concurrent validity of the test represents the extent to which the test score corresponds with the clinician's observations of the client's anxiety levels.
70
What is an example of predictive evidence?
VCE marks or ATAR scores are used to predict performance at university
71
What are some examples of convergent evidence?
* e.g. the relationship between scores on a measure of psychopathy and low emotional arousal
* e.g. the relationship between low self-esteem and depression
72
What is an example of discriminant evidence?
Scores on an anxiety measure should differ from scores on a depression measure, if each measure is assessing these individual constructs.