Midterm Flashcards

1
Q

An ongoing fluid and dynamic process that continues throughout the course of the helping relationship

A

Assessment

2
Q

Refers to any systematic procedure for collecting information that is used to make inferences or decisions about the characteristics of a person

A

Assessment

3
Q

Is a complex problem-solving process

A

Assessment

4
Q

Encompasses a broad array of data collection methods from multiple sources to yield relevant, accurate, and reliable information about an individual

A

Assessment

5
Q

Considered an ongoing process of gathering information

A

Assessment

6
Q

Often incorrectly used interchangeably with testing

A

Assessment

7
Q

Can proceed effectively without testing

A

Assessment

8
Q

Three ways assessment and testing overlap

A

Collects info
Measures
Evidence based

9
Q

A single assessment instrument should never be the sole determinant of the decision-making process

A

True

10
Q

Four purposes of assessment

A

Screen
Diagnose
Intervene
Monitor

11
Q

Multiple methods of data collection are referred to as a ____ ____ to assessment

A

Multimodal approach

12
Q

Three methods for assessment

A

Interview
test
observe

13
Q

Instruments designed to measure specific attributes of an individual

A

Tests

14
Q

An assessment method that involves witnessing and documenting the behavior in particular environments

A

Observation

15
Q

There is no set number of methods or sources that are required in an

A

Assessment

16
Q

Additional sources and methods of gaining information lead to a more ___ and ___ picture of the individual

A

Complete and accurate

17
Q

Four steps of assessment

A

1) ID the problem,
2) select proper assessment methods,
3) evaluate the assessment information,
4) report results/ make recommendations

18
Q

How many basic competencies created by professional associations are there?

A

23

19
Q

What does the acronym RUST stand for?

A

Responsibilities of users of standardized tests

20
Q

Four qualifications necessary to administer and interpret standardized tests

A

Purpose
characteristics,
setting/ conditions,
roles of test selectors, administrators, scorers, and interpreters

21
Q

How long ago were essay exams given to civil service employees in China?

A

2200 BC

22
Q

Whose philosophies emphasized the importance of assessing an individual's competency and aptitude?

A

Socrates and Plato

23
Q

Who identified items to screen for learning disabilities

A

FitzHerbert

24
Q

Who is the first to suggest formal IQ test?

A

Huarte

25
Q

Who had the first psychological lab?

A

Wundt

26
Q

First IQ test creator

A

Binet

27
Q

Who applied the theory of evolution in an attempt to demonstrate the role heredity plays in intelligence

A

Galton

28
Q

Who created educational measure

A

Thorndike

29
Q

Who wrote The Origins of Intelligence?

A

Piaget

30
Q

Who wrote The Bell Curve?

A

Herrnstein and Murray

31
Q

What year was the No Child Left Behind Act implemented?

A

2001

32
Q

The Individuals with Disabilities Education Improvement Act passed in what year?

A

2004

33
Q

Methods and sources of assessment vary greatly depending on these things

A

Needs of the client
Purpose of the assessment
Setting
Availability

34
Q

May come primarily from collateral sources and records

A

assessment information

35
Q

Assesses pathology

A

Standardized testing

36
Q

These (two things) will seldom provide enough information to make a useful decision

A

Observation and interview

37
Q

What is always the first step with a client no matter what direction you choose to go in

A

Interview

38
Q

Always use one method for assessment information

A

False
(Always use more than one method)

39
Q

What are two types of assessments

A

Formal and informal

40
Q

Three types of assessments are

A

Interviews
tests
observations

41
Q

What is considered the cornerstone of assessment?

A

Initial interview

42
Q

What begins prior to other assessment methods

A

Initial interview

43
Q

What is the primary purpose of the initial interview?

A

Gather background information relevant to the problem

44
Q

List three things that depend on the purpose of the interview

A

Setting,
population,
counselor skills

45
Q

What are the three categories of interviews?

A

Structured
semi structured
unstructured

46
Q

Counselors must be able to

A

Establish rapport
Be warm, respectful, empathetic
Safe place and accepting
Good listening skills
Effective probing and reflecting

47
Q

List five interview guidelines

A

Physical setting
Purpose
Confidentiality
Abide by standards
Avoid why questions
Alert to verbal and nonverbal behavior

48
Q

Tests that have a great impact on one's life path are called

A

High stakes

49
Q

Five attributes that tests are used to measure

A

Cognition
Knowledge
Skills
Abilities
Personality traits

50
Q

Purposes of tests
(6 items)

A

Screening
Classifying
Placing
Diagnose
Intervene
Progress

51
Q

Two different types of tests

A

Content - (purpose)
Format- (structure)

52
Q

Five categories of assessments

A

Intellectual
Achievement
Aptitude
Career
Personality

53
Q

Educational and psychological measurement are based on
____ ____

A

Statistical principles and data

54
Q

Six types of test variables

A

Qualitative
Quantitative
Continuous
Discrete
Observable
Latent

55
Q

Give an example of nominal data

A

Gender, hair color, nationality

56
Q

Give an example of ordinal measurement

A

Rank, grades

57
Q

N.O.I.R.

A

Nominal: gender
Ordinal: rank
Interval: IQ (no true zero)
Ratio: height (true zero)

58
Q

The means of putting disorganized scores in order is called

A

Frequency distribution

59
Q

Two examples of frequency distribution are the

A

Histogram
Frequency polygon (bell curve)

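A frequency distribution like the ones on these cards can be sketched in code; a minimal version using `collections.Counter` (the scores below are hypothetical, illustrative data only):

```python
from collections import Counter

# Hypothetical test scores (illustrative data, not from any real administration)
scores = [85, 90, 85, 100, 90, 85, 110, 100, 90, 85]

# A frequency distribution simply orders the scores and counts occurrences
freq = Counter(scores)
for score in sorted(freq):
    print(score, "x" * freq[score])  # crude text histogram of the distribution
```

Plotting the same counts as bars gives a histogram; connecting the bar tops gives a frequency polygon.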
60
Q

If a frequency distribution is symmetrical, what does it look like?

A

Bell shaped curve

61
Q

If a distribution is asymmetrical, it is called

A

Skewed

62
Q

If a frequency distribution is negatively skewed, it is

A

Left skewed

63
Q

If a frequency distribution is positively skewed, it is skewed to the

A

right

64
Q

What are the measures of central tendency?

A

Mean, median, mode

65
Q

What are the measures for variability? (3)

A

Range, variance, standard deviation

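The central tendency and variability measures on the two cards above can be computed directly with Python's `statistics` module; the score set below is hypothetical:

```python
import statistics

# Illustrative score set (hypothetical data)
scores = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(scores)           # central tendency: mean
median = statistics.median(scores)       # central tendency: median
mode = statistics.mode(scores)           # central tendency: mode

spread = max(scores) - min(scores)       # variability: range
variance = statistics.pvariance(scores)  # variability: population variance
sd = statistics.pstdev(scores)           # variability: standard deviation
```

Note that the standard deviation is simply the square root of the variance.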
66
Q

What percentage of data falls within one standard deviation of the mean?

A

68% (34% on either side)

67
Q

What percentage of data falls within two standard deviations of the mean?

A

95%

68
Q

What percentage of data falls within three standard deviations of the mean?

A

99.7%

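The percentages on the three cards above come from the normal curve and can be verified with Python's `statistics.NormalDist` (a standard normal is assumed here):

```python
from statistics import NormalDist

nd = NormalDist(mu=0, sigma=1)  # standard normal curve

# Area under the curve between -k and +k standard deviations
within_1sd = nd.cdf(1) - nd.cdf(-1)  # about 68%
within_2sd = nd.cdf(2) - nd.cdf(-2)  # about 95%
within_3sd = nd.cdf(3) - nd.cdf(-3)  # about 99.7%
```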
69
Q

What are two types of scores?

A

Normative and criterion

70
Q

Give an example of normative reference scores

A

Standardized test, IQ and achievement

71
Q

What’s an example of criterion referenced scores?

A

Proficiency tests, mastery test

72
Q

Five questions necessary to evaluate a normative group

A

1)cohesion
2)leadership structure
3) communication style
4) history & development
5) social ID

73
Q

Six types of reference scores

(Think of complete bell curve scores)

A

Percentile Rank
Standard scores
Z scores
T scores
Scaled scores
Stanine

74
Q

Percentages and percentile ranks are the same thing.
True or false?

A

False

75
Q

Standard scores have a mean of ____ and a standard deviation of ____

A

M 100
SD 15

76
Q

What is the average range for IQ?

A

90 to 109

77
Q

These scores express performance in standard deviation units and are not very sensitive

A

Z scores

78
Q

A fixed standard score with a mean of 50 and a standard deviation of 10, with a 40 to 60 average range

A

T scores

79
Q

A fixed standard score with a mean of 10 and a standard deviation of 3; 8 to 12 is average.

A

Scaled scores

80
Q

A fixed standard score with a mean of 5 and a standard deviation of 2, with a 1 to 9 range

A

Stanine

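Because each score type above has a fixed mean and standard deviation, converting among them is simple arithmetic; a sketch using the values these cards give (the raw score, mean, and SD below are illustrative):

```python
def z_score(raw, mean, sd):
    """Distance from the mean in standard deviation units."""
    return (raw - mean) / sd

# A raw score one SD above the mean on an IQ-style metric (M=100, SD=15)
z = z_score(115, mean=100, sd=15)  # -> 1.0

# Re-express the same z score on the other fixed scales from these cards
standard = 100 + 15 * z  # standard score: M=100, SD=15
t_score = 50 + 10 * z    # T score: M=50, SD=10
stanine = 5 + 2 * z      # stanine: M=5, SD=2
```

The same z score underlies every conversion, which is why these scales are interchangeable descriptions of one position on the bell curve.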
81
Q

What are two types of test scores that compare performance based on developmental levels?

A

Age equivalent,
grade equivalent

82
Q

Always include either ____ scores or ____ ranks when interpreting test scores

A

Standard, percentile

83
Q

Involves interpretive and descriptive data

A

Qualitative assessment

84
Q

An IQ of 130 and above is classified as

A

Very superior

85
Q

An IQ of 120 to 129 is classified as

A

Superior

86
Q

A Beck Depression Inventory score of 29 to 63 indicates what?

A

Severe depression

87
Q

This measures performance against a set of standards and shows clear proficiency in specific areas.

A

Criterion reference

88
Q

This compares an individual to a group, showing strengths among peers.

A

Norm reference

89
Q

Why do we need to be careful when deciding whether to use criterion-referenced or norm-referenced interpretation?

A

It has a significant impact on validity

90
Q

What type of score may not provide enough units to differentiate amongst scores?

A

Stanine

91
Q

3 types of interviews

A

Structured interview
semi structured interview
unstructured interview

92
Q

Types of tests
(5)

A

Standardized vs. Non-standardized
Individual vs. Group
Maximum vs. Typical-Performance
Objective vs. Subjective
Verbal vs. Non-verbal

93
Q

The degree to which evidence and theory support the interpretation of test scores for proposed uses of the test.

A

Validity

94
Q

A ____ can be a whole test with many parts, a test with one part, or a subtest measuring specific characteristics

A

scale

95
Q

Give an example of a type of scale test

A

Stanford Binet IQ test

96
Q

A distinct exam given in one sitting that can be made up of parts of many different tests

A

Battery

97
Q

A _____ combines measures such as IQ, anxiety, and autism assessments to make a complete evaluation

A

Battery test

98
Q

The National Counselor Examination is an example of

A

Computer-based tests

99
Q

Give an example of a computer adaptive test

A

Graduate Management Admission Test (GMAT)

100
Q

Monitoring and making a record of others or oneself in a particular context is called

A

Observation

101
Q

Seeing what a person actually does in situations is called

A

Observation

102
Q

Methods for identifying the immediate events before and after a behavior are called

A

Antecedents and consequences

103
Q

Gathering information to identify problem behaviors and develop interventions is called what

A

Functional behavior assessment

104
Q

An observation that is graded and uses a rubric is what type of observation

A

Formal

105
Q

An observation that is not graded and is based on past performance is what type of observation

A

Informal

106
Q

The type of observation that uses the senses, like sight and smell

A

Direct observation

107
Q

This type of observation is reliant on reports from others

A

Indirect observation

108
Q

This setting offers a more accurate reflection of real-life circumstances

A

Natural setting

109
Q

This setting is created by the observer

A

Contrived setting

110
Q

The observer doesn’t intrude on the research context

A

Unobtrusive observation

111
Q

The researcher becomes a participant in the culture or context of situation

A

Participant observation

112
Q

List three methods of recording observations

A

Event, duration, time sampling

113
Q

These measure general functioning or specific skills

A

Rating scales

114
Q

This measures multiple domains of functioning

A

Broadband scales

115
Q

This measures one or a few domains, more in-depth

A

Narrow band scales

116
Q

Third-party reporters are called

A

Collateral sources

117
Q

A very important source of data from teachers, family, and employers when the purpose is behavioral

A

Collateral source

118
Q

Required source when conducting a forensic evaluation

A

Collateral source

119
Q

Confidentiality is very important in

A

Collateral source

120
Q

Permission must be obtained and written consent is required for

A

Collateral sources

121
Q

Assessments, scoring reports, adapted tests, SPSS, and SAS are all ____ based

A

Computer

122
Q

Computer-based assessments can be used as standalone clinical evaluations: T or F

A

False - we should never use these as standalone

123
Q

Who is ultimately responsible for the accuracy of interpretation of assessments?

A

The clinician

124
Q

_____ requires interpretation to have meaning for the individual

A

Results

125
Q

All of these:
invasion of privacy,
too much reliance on a single test score,
testing bias,
incriminating results,
IQ tests don't measure the correct construct,
demonstration of competency for a diploma,
multiple-choice tests needing replacement by authentic performance assessment,
too much pressure on stakeholders from high-stakes testing
are examples of:

A

Controversies about assessments

126
Q

Types of information needed, needs of the client, resources, timeframe for assessment, quality of the instrument, and qualifications of the counselor are all criteria for determining what?

A

Selecting appropriate assessment instrument

128
Q

There is a single source that catalogs every possible formal and informal assessment instrument. True or false?

A

False

129
Q

References, publishers' websites, specimen sets, manuals, research literature, and professional organizations are all sources for

A

Locating assessment instrument information

130
Q

What questions should be asked before choosing an instrument?

A

What is the purpose of the instrument?
What is the makeup of the norm group?
Are the results of the instrument reliable?
Is there evidence of validity?
Does the manual provide clear instructions?

131
Q

What practical issues should be considered when choosing an instrument? (5 options)

A

Time, ease, cost, scoring, interpretation

132
Q

Self-report, individually administered, group administration, computer administration, video administered, audio administered, sign language administered, and nonverbal are all modes for

A

Administering assessment instruments

133
Q

What must be done before you administer an instrument

A

Obtain informed consent and maintain a copy in your records at all times

134
Q

The psychometric property pertaining to consistency and dependability in the production of test scores is known as

A

Reliability

135
Q

This refers to the degree to which test scores are dependable

A

Reliability

136
Q

Dependability, consistent and stable is the definition of

A

Reliability

137
Q

What is one of the most important characteristics of assessment results?

A

Reliability

138
Q

If a scale fluctuates and produces different results each time, it is said to be

A

Unreliable

139
Q

This refers to the results obtained with an assessment instrument not the actual instrument

A

Reliability

140
Q

Instruments are rarely totally consistent or error-free
T or f

A

True

141
Q

The greater the amount of measurement error in test scores, the

A

Lower reliability

142
Q

Amount of error in an instrument is called

A

Measurement error

143
Q

Any fluctuation that results from factors related to the measurement that is irrelevant to what is being measured is called

A

Measurement error

144
Q

The concept of true scores is totally

A

Theoretical

145
Q

You’re really not going to have 100% of a ____ ____

A

True score

146
Q

Some degree of error is inherent in all instruments is known as

A

Standard error of measurement

147
Q

A simple measure of how an individual's test score would fluctuate if the test were given to them repeatedly is known as

A

Standard error of measurement

148
Q

An estimation of the accuracy of an individual's observed score, as compared to the true score, is known as

A

Standard error of measurement
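A common formula for the standard error of measurement, SD times the square root of (1 minus reliability), is not stated on these cards but is standard in psychometrics; a sketch with illustrative values:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

# Illustrative values: an IQ-style metric (SD = 15) with reliability .91
error = sem(15, 0.91)  # 15 * sqrt(0.09) = 4.5

# A 68% confidence band is the observed score plus or minus one SEM
observed = 100
band = (observed - error, observed + error)
```

Higher reliability shrinks the SEM, which is why reliable instruments give tighter confidence bands around an observed score.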

149
Q

What are three types of measurement error

A

Time sampling

Interrater differences

Content sampling
(TIC)

150
Q

Repeated testing of the same individual is known as

A

Time sampling

151
Q

The greatest source of error in instrument scores is from

A

Content sampling

152
Q

An error that results from selecting test items that inadequately measure the content that’s intended is known as:

A

Content sampling

153
Q

The subjectivity of the individual scoring the test affects what type of reliability

A

Interrater reliability

154
Q

Personality tests or IQ tests are a form of what type of sampling

A

Content sampling

155
Q

Quality of the test items, test length, test taker variables, and test administration are examples of other

A

Measurement errors

156
Q

What is the oldest most commonly used method of estimating reliability?

A

Test retest

157
Q

This is most useful in measuring traits, abilities, or characteristics that are stable and do not generally change over time

A

Test retest

158
Q

Giving two different versions or forms of the same test at the same time is called

A

Simultaneous administration

159
Q

Giving two different versions of the same test on different days to same group is an example of

A

Delayed administration

160
Q

An example of a delayed simultaneous administration is using the ______ test

A

Woodcock-Johnson, Forms A and B

161
Q

Measuring the extent to which items on the instrument measure the same ability or trait is an example of

A

Internal consistency reliability

162
Q

Having high internal consistency reliability means that the test items are

A

Homogeneous

163
Q

If there is a strong correlation among test items, then there is a

A

High degree of internal consistency

164
Q

Split-half reliability,
the Kuder-Richardson formula, and coefficient alpha
are three means of determining

A

Internal consistency

165
Q

What’s another name for a coefficient alpha?

A

Cronbach's alpha (in SPSS)

166
Q

What is used for items answered yes/no, right/wrong, or zero/one
to determine internal consistency?

A

Kuder Richardson formula
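Coefficient alpha can be computed from the usual formula, k/(k-1) times (1 minus the sum of item variances over total-score variance); a minimal sketch with made-up item data (the scores are illustrative, not from any real scale):

```python
import statistics

def cronbach_alpha(items):
    """Coefficient (Cronbach's) alpha.

    `items` is one list of scores per test item (columns), with the
    same examinees in the same order in every column.
    """
    k = len(items)
    item_variance_sum = sum(statistics.pvariance(col) for col in items)
    totals = [sum(vals) for vals in zip(*items)]  # each examinee's total score
    return (k / (k - 1)) * (1 - item_variance_sum / statistics.pvariance(totals))

# Hypothetical 3-item scale answered by 4 people (illustrative data only)
parallel_items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
alpha = cronbach_alpha(parallel_items)  # perfectly parallel items give alpha = 1.0
```

For dichotomous (0/1) items, this same formula reduces to the Kuder-Richardson KR-20 estimate.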

167
Q

A potential source of error is the lack of agreement among raters for this reliability

A

Interrater reliability

168
Q

This can be done by correlating the scores obtained independently by two or more raters

A

Interrater reliability

169
Q

This does not reflect content sampling or time sampling errors

A

Interrater reliability

170
Q

Sensitive only to the differences among raters

A

Interrater reliability

171
Q

What type of test is designed to be given more than one time?

A

Test retest, or alternate forms

172
Q

This evaluates the extent to which different items on the test measure the same content

A

Internal consistency

173
Q

If items are heterogeneous and the test measures more than one construct, the reliability will be

A

Low

174
Q

Two types of scales that have low reliability

A

Joy and depression

175
Q

For tests with more than one construct, what method is appropriate?

A

Split half method

176
Q

What reliability coefficients are acceptable and unacceptable

A

.70 is acceptable
.59 is unacceptable

177
Q

What does SEM stand for?

A

Standard error of measure

178
Q

The scores obtained by a single individual if tested multiple times =

A

Standard error of measure

179
Q

Spread of scores obtained by a group of test takers on a single test

A

Standard deviation

180
Q

Confidence intervals

A

Bell curve:
68% within 1 SD
95% within 2 SD
99.7% within 3 SD

181
Q

Longer tests improve

A

Reliability

182
Q

A larger number of test items can more accurately measure the ______, thus reducing content sampling errors.

A

Construct

183
Q

Using multiple-choice tests, writing unambiguous questions, making sure questions are not too hard or too easy, clearly stated administration and scoring procedures, and training in administering, scoring, and interpreting the test are examples of

A

Factors that improve reliability

184
Q

Something that is sound, meaningful and accurate

A

Validity

185
Q

Can be viewed as the extent to which test scores provide answers to the targeted questions

A

Validity

186
Q

A test can be reliable but not

A

Valid

187
Q

A test cannot be valid without being

A

Reliable

188
Q

Does the measure yield similar results each time similar people take it?

A

Reliability

189
Q

It measures what it claims to measure:

A

Validity

190
Q

Refers to the appropriateness of the use and interpretation of test results, not the test itself

A

Validity

191
Q

This is a matter of degree; it is not all or none.

A

Validity

192
Q

This is a single unified concept

A

Validity

193
Q

Three subtypes of validity

A

Content
criterion
construct

194
Q

Test manuals are constructed from what types of validity

A

Content, criterion, construct

195
Q

This type of concept looks at test content, response processes, internal structure, relations to other variables, and consequences of testing

A

Unitary concept

196
Q

Most textbooks use which type of terminology

A

Traditional: content,
criterion, construct

197
Q

This is specific to a particular purpose

A

Validity

198
Q

No test is valid for all purposes

A

True

199
Q

What’s another name for construct?

A

Latent variables

200
Q

What are some examples of latent variables?

A

Aggression, morale, happiness, quality of life

201
Q

What are scientifically developed concepts/ ideas used to describe behavior called

A

Constructs

202
Q

What cannot be measured directly or observed directly

A

Constructs

203
Q

What is defined by a group of interrelated variables that can be measured

A

Construct

204
Q

An example of an interrelated construct variable that can be measured

A

Aggression: measured by physical violence, verbal attacks, and poor social skills

205
Q

If we have evidence that the interpretation of the results is valid based on the purpose of the test, then the results are considered to

A

Reflect the construct being measured

206
Q

A measure that provides inconsistent results cannot provide

A

Valid scores

207
Q

What is the most fundamental consideration in developing and evaluating tests?

A

Validity

208
Q

Validity centers on the relationship between the ____ of the test, and the ____based on the test scores

A

Purpose, interpretation

209
Q

The greater the impact, the results have on someone’s life the more ____ is required

A

Evidence

210
Q

Evidence of relationship between the content of the test, and the construct it presumes to measure is known as

A

Test content validity

211
Q

_____ areas reflect the essential knowledge, behaviors, and skills that represent the construct

A

Content

212
Q

______ comes from educational standards, accreditation standards, school curricula, syllabi, and textbooks

A

Achievement

213
Q

Personality and clinical inventories come primarily from

A

Characteristics in the DSM

214
Q

This comes from job descriptions, employment assessments, and the activities, tasks, and duties of the specific job

A

Career

215
Q

Some instruments are designed to measure a general _____, while others are designed to measure ____ components of a construct

A

construct,
several

216
Q

The predictor variable is compared to the criterion it is designed to predict:

A

Criterion based evidence
(aptitude test is the predictor variable)

217
Q

The predictor variable is concurrently related to some criterion, (example depressed mood)

A

Concurrent evidence

218
Q

The degree to which the test score estimates some future level of performance

A

Predictive evidence

219
Q

The chosen criterion must be _____ to the intended purpose of the test

A

Appropriate

(Example IQ test is not predictor of morality)

220
Q

Relevant, reliable, and uncontaminated are what

A

Criterion measures should be

221
Q

Should not be influenced by external factors that are unrelated to the criterion is the definition of

A

Uncontaminated

222
Q

The means by which we evaluate the relationship between test results and a criterion measure

A

Validity coefficients

223
Q

The purpose is to show that the test scores accurately predict the criterion performance

A

Validity coefficient

224
Q

The range is from negative one to + 1
.5 is very high.
.21 is very low.

A

Validity coefficient

225
Q

A means of providing evidence of internal structure of a test

A

Evidence of homogeneity

226
Q

This can be shown by a high internal consistency coefficient

A

Homogeneity

227
Q

______ _____ between scales or subtests on a single instrument provides evidence that these components measure the construct that was intended

A

High correlation

228
Q

____ ____ is obtained by correlating one instrument with other instruments that assess the same construct

A

Convergent evidence

229
Q

Test developers will use other

A

Well-established instruments

230
Q

When revising an instrument, developers will use ___ ____ to compare with the latest version to be sure both are measuring the same construct

A

Previous versions

231
Q

_____ evidence uses consistently low correlations between the test and other tests that measure different constructs

A

Divergent

232
Q

Another means of providing evidence of construct validity is called

A

Group differentiation

233
Q

If two groups have vastly ___ scores in a predicted way, then the test has evidence of ____

A

Different, construct validity

234
Q

Shows the degree to which test scores change with age

A

Age differentiation

235
Q

Source of construct validity

A

Experimental results

236
Q

The expectation that benefits will come from the test scores is known as

A

Intended Evidence-based consequences

237
Q

Actual and potential consequences of the test use, and the social impact are known as

A

Unintended consequences

238
Q

The actions, processes, and emotional traits that the test taker invokes in responding to a test

A

Evidence based response process

239
Q

Does it look legitimate? Is it too hard or too childish? Is it too long or too short? These are examples of

A

Evidence-based response process
(Does the test appear to test what it is intended to test?)

240
Q

Disruptive behavior in the classroom is best

A

Observed

241
Q

The degree to which an instrument measures what it is supposed to measure.

A

Validity

242
Q

The amount of variation of a random variable expected about its mean.

A

Standard deviation