Session 6 Flashcards

1
Q

List 4 examples of measurement devices.

A
  1. Test
  2. Questionnaire
  3. Interview schedule/protocol
  4. Personality scale
2
Q

List 2 factors of validity

A
  1. the extent to which a measure/instrument measures what it is designed to measure
  2. the extent to which it accurately performs the function(s) it purports to perform
3
Q

List 3 KEY points about validity

A
  1. validity is relative to the purpose of testing
  2. validity is a matter of degree
  3. no measure/instrument is perfectly valid
4
Q

What is a ‘construct’?

4 features

A
  1. an abstract concept used in a particular theoretical manner to relate different behaviors according to their underlying features or causes
  2. used to describe, organize, summarize and communicate our interpretations of behavior
  3. abstract term used to summarize and describe behaviors that share certain attributes
  4. collection of related behaviors that are associated in a meaningful way
5
Q

Why is validity important in quantitative research?

A

researchers reduce constructs to numerical scores

6
Q

Why is validity important in qualitative research?

A

researchers must describe results in enough detail so that readers can picture the meanings that have been attached to a construct

7
Q

List 3 types of validity.

A
  1. judgmental
  2. empirical
  3. judgmental-empirical
8
Q

List and define 2 types of judgmental validity.

A
  1. Content: expert judgment
  2. Face: participant judgment
9
Q

List and define 4 types of Empirical validity.

A
  1. criterion-predictive: correlation
  2. criterion-concurrent: correlation
  3. Convergent: correlation
  4. Divergent: correlation
10
Q

Judgmental-Empirical Validity is what type?

A

Construct validity

11
Q

Judgmental-Empirical construct validity is established by what 2 things?

A
  1. hypothesize about the relationship
  2. test the hypothesis

12
Q

Judgmental validity is an approach to establishing validity that uses ______________, usually of _____________, and therefore is only as good as the ____________.

A
  1. judgments
  2. experts
  3. judges
13
Q

Content Validity is a type of _______________ validity.

A

judgmental

14
Q

Content validity is ______________.

A

the degree to which measurements actually reflect the variable of interest

15
Q

What two questions does content validity answer?

A
  1. Does the measure tap the appropriate content?
  2. Does the instrument cover all the areas that need to be observed AND does it cover them equally or in proportion to their importance?
16
Q

Three principles for writing tests with high content validity

A
  1. Broad content
  2. focus to reflect importance
  3. appropriate level of language (vocabulary, sentence length) for the audience
17
Q

Face Validity is a type of _____________ validity.

A

judgmental

18
Q

Face Validity is __________.

A

the degree to which an instrument appears to be valid on the face of it

19
Q

The _______________ Test does not have very good face validity.

A

Rorschach

20
Q

What is the question that Face validity answers?

A

On superficial inspection, does the instrument appear to measure what it purports to measure?

21
Q

The Rorschach Test is designed to measure ____________.

A

psychopathology

22
Q

Who are the experts for the Rorschach Test?

A

the person taking the test

23
Q

Making the measurement tool LOOK like it's measuring what it claims to be measuring is important to ___________ Validity.

A

Face

24
Q

When is low Face Validity desirable?

A

when researchers want to disguise the true purpose of the research from the respondents because the participant might answer inaccurately due to socially acceptable expectations

25
What is Empirical Validity?
an approach to establishing validity that relies on, or is based on, observation or planned data collection rather than theory or subjective judgment
26
Empirical validity is usually reported as a ____________ ____________.
Validity Coefficient
27
What is the Validity Coefficient?
a correlation coefficient used to express validity
28
A correlation coefficient can range from ______ to _____ to ______.
-1 to 0 to +1
29
Validity coefficients are typically low because _________ and _________.
1. Performance on many criteria is complex, involving many traits | 2. Criterion measures themselves may not be highly valid
30
A correlation coefficient close to zero means there is _________ correlation.
low
31
The closer a correlation coefficient is to -1 or +1, the _________ valid the measurement.
MORE
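The validity coefficient described on the cards above is simply a Pearson correlation between scores on the measure and scores on some criterion. A minimal sketch, using invented illustration data (none of these scores come from the deck):

```python
# Sketch: a validity coefficient is a Pearson correlation between
# scores on the measure and scores on some observable criterion.
# The two score lists below are invented illustration data.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

test_scores = [10, 12, 14, 16, 18]    # scores on the new measure
criterion   = [20, 24, 27, 33, 36]    # observed criterion behavior

r = pearson_r(test_scores, criterion)
print(round(r, 3))  # near +1, so the measure tracks the criterion closely
```

A coefficient near -1 or +1 indicates a more valid measurement; a coefficient near zero indicates a low correlation, exactly as the cards state.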
32
Validate the measurement against some kind of criteria such as (3) __________, _________, ___________.
1. a rule | 2. a standard | 3. an already-existing test
33
Criterion Validity is a type of ______________ Validity.
Empirical
34
Criterion Validity is ____________.
the extent to which the scores obtained from a procedure correlate with an observable behavior
35
What is a criterion?
1. a rule or standard for making a judgment | 2. The standard by which the test is being judged
36
The two types of Criterion Validity are __________ and ________.
1. Predictive | 2. Concurrent
37
Predictive (criterion validity) is ________
the extent to which a procedure allows for accurate predictions about a participant's future behavior
38
Concurrent (criterion validity) is ____________.
the extent to which a procedure correlates with the present behavior of participants
39
Convergent Validity is _______
the extent to which a new instrument correlates with an already-established instrument, in order to establish the new one as equally valid
40
Divergent Validity is __________
correlation with a valid measurement of a variable that is the opposite of the one being measured
41
What is Judgmental-Empirical Validity?
an approach to establishing validity that relies on subjective judgments and data based on observation *combo: expert and observation
42
Construct Validity is a type of ______________ Validity.
Judgmental-empirical
43
Construct validity is _____.
the extent to which a measurement reflects the hypothetical construct of interest ** not observable
44
What is a construct?
1. an abstract concept used in a particular theoretical manner to relate different behaviors according to their underlying features or causes 2. used to describe, organize, summarize and communicate our interpretations of behavior 3. term used to summarize and describe behaviors that share certain attributes 4. a collection of related behaviors that are associated in a meaningful way
45
A ___________ does not have a physical being outside of its indicators.
construct
46
Researchers infer the existence of a construct by observing the ____________ of related indicators.
collection
47
What is the collection of indicators in a construct?
1. historical facts: family, medical, social 2. symptoms: behaviors, family reports 3. Clinical judgment and observation
48
Two factors in determining construct validity.
1. Judgment about the nature of the relationship: hypothesize about how the construct, in the form of the instrument designed to measure it, should affect or relate to other variables | 2. Empirical evidence: test the hypothesis using empirical methods
49
The method for determining construct validity offers only ____________ evidence regarding the validity of a measure.
indirect
50
Often construct validity is found through ___________ evidence.
indirect
51
Because the evidence for construct validity is indirect, researchers should be very cautious about declaring a measure to be valid on the basis of a ____________ study.
single
52
Construct validity is _________ secure
less
53
In construct validity researchers usually test a number of ___________ about the construct before determining construct validity.
hypotheses
54
A synonym for Reliability is ______.
consistency
55
____________ is more reliable than subjective.
objective
56
Reliability is __________.
the degree to which measurements are consistent
57
Types of Reliability errors are ___________.
1. Random | 2. Chance | 3. Unsystematic *interchangeable terms
58
Two important facts about reliability errors
1. since such errors are in principle random and unbiased, they tend to cancel each other out. 2. the sum of chance errors, when a sufficiently large number of cases is considered, approaches zero
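The two facts above can be sketched with a small simulation; the parameters (sample size, error spread, the +3 bias) are invented for illustration:

```python
# Sketch: random (chance) errors tend to cancel out,
# while a systematic (constant) error does not.
# All parameters here are invented for illustration.
import random

random.seed(42)

# 10,000 chance errors drawn symmetrically around zero
random_errors = [random.gauss(0, 5) for _ in range(10_000)]
mean_random = sum(random_errors) / len(random_errors)
print(abs(mean_random) < 0.5)   # the mean of chance errors approaches zero

# A systematic error shifts every observation in the same direction
biased = [e + 3 for e in random_errors]
mean_biased = sum(biased) / len(biased)
print(round(mean_biased))       # stays near +3; it never cancels out
```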
59
The more concerning type of Reliability Error is ___________.
1. Systematic Error | 2. Constant Error
60
Definition of systematic error
an error produced by some factor that affects ALL observations similarly so that the errors are always in one direction and do not cancel each other out
61
A systematic error is usually a constant error and can be detected and _________________ for during statistical analysis.
corrected
62
What is the relationship between Reliability and Validity
reliability is a precursor of validity
63
A test cannot be valid if it is not first ____________.
reliable
64
Reliability comes ________, before it can be ________.
first | valid
65
______ before ______
R before V
66
High reliability means ________ random error.
little
67
High validity correlates with _______ true score
HIGH
68
Low reliability means ___________ random error
High
69
Can you have low reliability and high validity?
No, because you MUST have high reliability BEFORE validity can be considered
70
Two factors in the classic model for measuring reliability.
1. Measure twice | 2. Check that the scores are consistent with each other, usually via a correlation coefficient, known as a reliability coefficient
71
What is the range of reliability?
-1 to 0 to +1
72
What are the three ways of measuring Reliability?
1. Inter observer or Inter-rater 2. Test-retest 3. Parallel forms
73
Describe an inter observer or inter-rater method.
the extent to which raters agree on the scores they assign to a participant's behavior
74
Describe Test-retest method.
the consistency with which participants obtain the same overall score when tested at different times
75
Describe Parallel forms method.
the consistency with which participants obtain the same overall score when given two forms of the same test, spaced slightly apart in time
76
How high should the reliability coefficient be?
.80 for individuals | .50 for groups of 25 or more
77
Why can the reliability coefficient for groups be lower than for individuals?
1. reliability coefficients indicate the reliability for individuals' scores 2. Group scores are averages * statistical theory indicates that averages are more reliable than the scores that underlie them (individual scores) because when computing an average, the negative errors tend to cancel out the positive errors
78
What is internal consistency/reliability?
use the scores from a single administration of a test to examine the consistency of test scores *examines the consistency within the test itself
79
List two methods for establishing internal consistency/reliability.
1. split-half | 2. Cronbach's Alpha (preferred)
80
What is the Split-half method of establishing internal consistency/reliability?
correlate scores on one half of the test with scores on the other half of the test
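A minimal sketch of the split-half method, assuming a hypothetical 6-item test; the participant responses are invented for illustration:

```python
# Sketch of the split-half method: total each participant's odd items
# and even items separately, then correlate the two half-test scores.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

# One row per participant, one score per item (hypothetical data)
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 3, 2, 2, 3, 3],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
]

odd_half  = [sum(row[0::2]) for row in responses]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in responses]   # items 2, 4, 6
split_half_r = pearson_r(odd_half, even_half)
print(round(split_half_r, 2))   # a high value: the two halves agree
```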
81
What is the Cronbach's alpha method of establishing internal consistency/reliability?
mathematical procedure used to obtain the equivalent of the average of all possible split-half reliability coefficients
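One common closed-form expression for Cronbach's alpha (equivalent in effect to averaging the split-half coefficients, as the card describes) can be sketched as follows; population variances are used and the response data is invented:

```python
# Sketch of Cronbach's alpha via one common formula:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# Population variances are used; the response data is invented.

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(responses):
    """responses: one row per participant, one score per item."""
    k = len(responses[0])                    # number of items
    items = list(zip(*responses))            # one column per item
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

responses = [
    [4, 5, 4, 4],
    [2, 3, 2, 2],
    [5, 5, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]
alpha = cronbach_alpha(responses)
print(round(alpha, 2))   # comfortably above the .80-or-more guideline
```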
82
A larger number of items leads to a _________ result.
better
83
Cronbach's alpha is a formula used frequently in the social sciences because it measures one particular _____________.
attribute
84
High internal consistency/reliability is desirable when a researcher has developed a test designed to measure a __________ unitary variable
single
85
Alphas should be ______ or more.
.80
86
In a test that measures several attributes, you can still segment out each attribute's questions and perform a ____________ on those for each attribute.
Cronbach's
87
List three types of Norm and Criterion Referenced tests.
1. Norm-referenced 2. Standardized 3. Criterion-referenced
88
What is a norm-referenced test?
tests designed to facilitate a comparison of an individual's performance with that of a norm group
89
What is a standardized test?
tests that come with standard directions for administration and interpretation
90
What is a criterion-referenced test?
tests designed to measure the extent to which individual examinees have met performance standards (i.e., a specific criterion)
91
List 3 attributes of Achievement Tests
1. measures knowledge and skills individuals have already acquired 2. Reliability: dependent on objectivity of scoring 3. Validity: dependent on comprehensiveness of coverage of stated knowledge or skill domain
92
What is an achievement test?
a measure of optimal performance
93
What is an Aptitude Test?
a measure of potential performance
94
List 4 attributes of Aptitude Tests
1. predict some specific type of achievement | 2. measure the likelihood that an individual will be able to acquire knowledge and skills in a particular area | 3. Reliability: r = .80 or higher for published tests | 4. Validity: determined by correlating scores with a measure of achievement obtained at a later time (r = .20 - .60 for published tests)
95
List 4 attributes of Intelligence Tests
1. predict achievement in general, not any one specific type | 2. measure the likelihood that an individual will be able to acquire knowledge and skills in general | 3. Reliability: no information provided | 4. Validity: published tests have low to modest validity for predicting achievement in school
96
List 4 criticisms of Intelligence Tests.
1. tapping into culturally bound knowledge and skills rather than innate (inborn) intelligence | 2. slanted toward dominant racial or ethnic groups | 3. measure knowledge and skills that are acquired with instruction/formal schooling | 4. don't measure all important aspects of intelligence
97
What is a Likert-Type Scale?
1. 5-point scale ranging from 1 to 5 | 2. uses verbal anchors for each number | 3. reduces response bias by providing both positive and negative statements
98
Likert scale is an __________ level scale.
interval