Midterm Flashcards

1
Q

An ongoing, fluid, and dynamic process that continues throughout the course of the helping relationship

A

Assessment

2
Q

Refers to any systematic procedure for collecting information that is used to make inferences or decisions about the characteristics of a person

A

Assessment

3
Q

Is a complex problem-solving process

A

Assessment

4
Q

Encompasses a broad array of data-collection methods from multiple sources to yield relevant, accurate, and reliable information about an individual

A

Assessment

5
Q

Considered an ongoing process of gathering information

A

Assessment

6
Q

Often incorrectly used interchangeably with testing

A

Assessment

7
Q

Can proceed effectively without testing

A

Assessment

8
Q

Three ways assessment and testing overlap

A

Collects info
Measures
Evidence based

9
Q

A single assessment instrument should never be the sole determinant of the decision-making process. True or false?

A

True

10
Q

Four purposes of assessment

A

Screen
Diagnose
Intervene
Monitor

11
Q

Using multiple methods of data collection is referred to as a ____ ____ to assessment

A

Multimodal approach

12
Q

Three methods for assessment

A

Interview
Test
Observe

13
Q

Instruments designed to measure specific attributes of an individual

A

Tests

14
Q

An assessment method that involves witnessing and documenting the behavior in particular environments

A

Observation

15
Q

There is no set number of methods or sources that are required in an

A

Assessment

16
Q

Additional sources and methods of gaining information lead to a more ___ and ___ picture of the individual

A

Complete and accurate

17
Q

Four steps of assessment

A

1) ID the problem,
2) select proper assessment methods,
3) evaluate the assessment information,
4) report results/ make recommendations

18
Q

How many basic competencies created by professional associations are there?

A

23

19
Q

What does the acronym RUST stand for?

A

Responsibilities of Users of Standardized Tests

20
Q

Four qualifications necessary to administer and interpret standardized tests

A

Purpose
Characteristics
Setting and conditions
Roles of test selectors, administrators, scorers, and interpreters

21
Q

When were essay exams first given to civil service employees in China?

A

2200 BC

22
Q

Whose philosophies emphasized the importance of assessing an individual's competency and aptitude?

A

Socrates and Plato

23
Q

Who identified items to screen for learning disabilities

A

FitzHerbert

24
Q

Who was the first to suggest a formal IQ test?

A

Huarte

25
Who had the first psychological lab?
Wundt
26
First IQ test creator
Binet
27
Who applied the theory of evolution in an attempt to demonstrate the role heredity plays in intelligence?
Galton
28
Who created educational measurement?
Thorndike
29
Who wrote The Origins of Intelligence?
Piaget
30
Who wrote The Bell Curve?
Herrnstein and Murray
31
What year was the No Child Left Behind Act implemented?
2001
32
The Individuals with Disabilities Education Improvement Act was passed in what year?
2004
33
Methods and sources of assessment vary greatly depending on these things
Client needs, assessment purpose, setting, and availability
34
May come primarily from collateral sources and records
Assessment information
35
Assesses pathology
Standardized testing
36
These (two things) will seldom provide enough information to make a useful decision
Observation and interview
37
What is always the first step with a client no matter what direction you choose to go in
Interview
38
Always use one method for assessment information
False; always use more than one method
39
What are two types of assessments
Formal and informal
40
Three types of assessments are
Interviews, tests, observations
41
What is considered the cornerstone of assessment?
Initial interview
42
What begins prior to other assessment methods
Initial interview
43
What is the primary purpose of the initial interview?
Gather background information relevant to the problem
44
List three things that depend on the purpose of the interview
Setting, population, counselor skills
45
What are the three categories of interviews?
Structured, semi-structured, unstructured
46
Counselors must be able to
Establish rapport; be warm, respectful, and empathetic; provide a safe and accepting place; use good listening skills; probe and reflect effectively
47
List six interview guidelines
Attend to the physical setting; clarify the purpose; maintain confidentiality; abide by professional standards; avoid "why" questions; be alert to verbal and nonverbal behavior
48
Tests that have a great impact on one's life path are called
High stakes
49
Five attributes that tests are used to measure
Cognition, knowledge, skills, abilities, personality traits
50
Purposes of tests (6 items)
Screening, classifying, placing, diagnosing, intervening, monitoring progress
51
Two different types of tests
Content (purpose) and format (structure)
52
Five categories of assessments
Intellectual, achievement, aptitude, career, personality
53
Educational and psychological measurement are based on what?
Statistical principles and data
54
Six types of test variables
Qualitative, quantitative, continuous, discrete, observable, latent
55
Give an example of nominal data
Gender, hair color, nationality
56
Give an example of ordinal measurement
Rank, grades
57
N.O.I.R.
Nominal (gender), Ordinal (rank), Interval (IQ; no true zero), Ratio (height)
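A minimal Python sketch (illustrative variables only, mirroring the card above) of the N.O.I.R. measurement levels:

    # N.O.I.R. measurement levels, with the card's examples.
    levels = {
        "gender": "nominal",   # categories only; the mode is the only meaningful average
        "rank":   "ordinal",   # ordered, but the gaps between ranks are unequal
        "IQ":     "interval",  # equal units but no true zero, so ratios are meaningless
        "height": "ratio",     # true zero: 180 cm really is twice 90 cm
    }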
58
The means of putting disorganized scores in order is called
Frequency distribution
59
Two examples of frequency distribution are the
Histogram and frequency polygon (bell curve)
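A minimal Python sketch (made-up scores) of building a frequency distribution and a crude text histogram:

    from collections import Counter

    scores = [85, 90, 85, 100, 90, 85, 95, 100, 90, 85]
    freq = Counter(scores)               # score -> count
    for value in sorted(freq):           # ordered, as a frequency distribution
        print(value, "#" * freq[value])  # bar lengths sketch a histogram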
60
If the frequency distribution is symmetrical, what does it look like?
Bell shaped curve
61
If a distribution is asymmetrical, it is called
Skewed
62
If the frequency distribution is negatively skewed, it is
Skewed left (tail on the left)
63
If a frequency distribution is positively skewed, it is
Skewed right (tail on the right)
64
What are the measures of central tendency?
Mean, median, mode
65
What are the measures for variability? (3)
Range, variance, standard deviation
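A minimal Python sketch (made-up scores) computing all six statistics from the two cards above with the standard library:

    import statistics

    scores = [2, 4, 4, 4, 5, 5, 7, 9]
    statistics.mean(scores)        # central tendency: mean = 5.0
    statistics.median(scores)      # central tendency: median = 4.5
    statistics.mode(scores)        # central tendency: mode = 4
    max(scores) - min(scores)      # variability: range = 7
    statistics.pvariance(scores)   # variability: population variance = 4.0
    statistics.pstdev(scores)      # variability: population SD = 2.0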
66
What percentage of data falls within one standard deviation of the mean?
68%, 34% on either side
67
What percentage of data falls within two standard deviations
95%
68
What percent of data falls within three standard deviations?
99.7%
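The three cards above are the empirical rule; the exact normal-curve values (about 68.3%, 95.4%, and 99.7%) can be checked with a short Python sketch:

    from statistics import NormalDist

    nd = NormalDist()                         # standard normal curve
    for k in (1, 2, 3):
        within = nd.cdf(k) - nd.cdf(-k)       # proportion within k SDs of the mean
        print(k, round(within * 100, 1))      # -> 68.3, 95.4, 99.7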
69
What are two types of scores?
Normative and criterion
70
Give an example of normative reference scores
Standardized tests, such as IQ and achievement tests
71
What’s an example of criterion referenced scores?
Proficiency tests, mastery tests
72
Five questions necessary to evaluate a normative group
1) Cohesion, 2) leadership structure, 3) communication style, 4) history and development, 5) social identity
73
Six types of reference scores (Think of complete bell curve scores)
Percentile ranks, standard scores, z scores, T scores, scaled scores, stanines
74
Percentage and percentile ranks are the same thing. True or false?
False
75
Standard scores have a mean of ____ and a standard deviation of ____
Mean 100, SD 15
76
What is the average range for IQ?
90 to 109
77
These scores express standard scores in standard deviation units and are not very sensitive
Z scores
78
A fixed standard score with a mean of 50 and a standard deviation of 10; 40 to 60 is average
T scores
79
A fixed standard score with a mean of 10 and a standard deviation of 3; 8 to 12 is average
Scaled scores
80
A fixed standard score with a mean of 5 and a standard deviation of 2, with a 1-9 range
Stanine
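A minimal Python sketch (hypothetical raw score) tying the score types above together: every fixed standard score is the z score rescaled with that scale's mean and SD:

    from statistics import NormalDist

    raw, mean, sd = 115, 100, 15       # hypothetical IQ-style score
    z = (raw - mean) / sd              # z score = 1.0
    t = 50 + 10 * z                    # T score: mean 50, SD 10 -> 60
    scaled = 10 + 3 * z                # scaled score: mean 10, SD 3 -> 13
    stanine = min(9, max(1, round(5 + 2 * z)))  # stanine: mean 5, SD 2 -> 7
    percentile = NormalDist().cdf(z) * 100      # ~84th percentile rank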
81
What are two types of test scores that compare performance based on developmental levels?
Age equivalent, grade equivalent
82
Always include either ____ scores or ____ ranks when interpreting test scores
Standard, percentile
83
Involves interpretive and descriptive data
Qualitative assessment
84
An IQ of 130 and above is classified as
Very superior
85
An IQ of 120 to 129 is classified as
Superior
86
Beck score of 29 to 63 indicates what
Severe depression
87
This measures performance against a set of standards and shows clear proficiency in specific areas.
Criterion referenced
88
This compares the individual to a group, showing strengths relative to peers.
Norm referenced
89
Why do we need to be careful when determining to use criterion reference or norm referenced interpretation?
Has significant impact on validity
90
What type of score may not provide enough units to differentiate amongst scores?
Stanine
91
3 types of interviews
Structured, semi-structured, and unstructured interviews
92
Types of tests (5)
Standardized vs. non-standardized; individual vs. group; maximum vs. typical performance; objective vs. subjective; verbal vs. non-verbal
93
The degree to which evidence and theory support the interpretation of test scores for proposed uses of the test.
Validity
94
A ____ can be a whole test with many parts, a test with one part, or a subtest measuring specific characteristics
scale
95
Give an example of a type of scale test
Stanford-Binet IQ test
96
A distinct exam given in one sitting that can be made up of several different tests
Battery
97
A ____ combines measures such as IQ, anxiety, and autism assessments into one complete test
Battery test
98
The National Counselor Examination is an example of
Computer-based tests
99
Give an example of a computer adaptive test
Graduate Management Admission Test (GMAT)
100
Monitoring and making a record of others or oneself in a particular context is called
Observation
101
Seeing what a person actually does in situations is called
Observation
102
The events immediately before and after a behavior are called
Antecedents and consequences
103
Gathering information to identify problem behaviors and develop interventions is called what
Functional behavior assessment
104
An observation that is graded and uses a rubric is what type of observation
Formal
105
An observation that's not graded and is based on past performance is what type of observation
Informal
106
The type of observation that uses the senses, like sight and smell
Direct observation
107
This type of observation is reliant on reports from others
Indirect observation
108
This setting offers a more accurate reflection of real-life circumstances
Natural setting
109
This setting is created by the observer
Contrived setting
110
The observer doesn’t intrude on the research context
Unobtrusive observation
111
The researcher becomes a participant in the culture or context of the situation
Participant observation
112
List three methods of recording observations
Event, duration, time sampling
113
These measure general functioning or specific skills
Rating scales
114
This measures multiple domains of functioning
Broadband scales
115
This measures one or a few domains, more in-depth
Narrow band scales
116
Third-party reporters are called
Collateral sources
117
Very important data from teachers, family, and employers when the purpose is behavioral
Collateral source
118
Required source when conducting a forensic evaluation
Collateral source
119
Confidentiality is very important in
Collateral source
120
Permission must be obtained and written consent is required for
Collateral sources
121
Assessments, scoring reports, adapted tests, SPSS, and SAS are all ____ based
Computer
122
Computer-based assessments can be used as standalone clinical evaluations. True or false?
False - we should never use these as standalone
123
Who is ultimately responsible for the accuracy of interpretation of assessments?
The clinician
124
____ require interpretation to have meaning for the individual
Results
125
Invasion of privacy; over-reliance on a single test score; test bias; incriminating results; IQ tests not measuring the correct construct; demonstrating competency for a diploma; multiple-choice tests needing replacement by authentic performance assessment; and too much pressure on stakeholders from high-stakes testing are all examples of:
Controversies about assessments
126
Types of information needed, needs of the client, resources, timeframe for assessment, quality of the instrument, and qualifications of the counselor are all criteria for determining what
Selecting appropriate assessment instrument
128
There is a single source that catalogs every possible formal and informal assessment instrument. True or false?
False
129
References, publishers' websites, specimen sets, manuals, research literature, and professional organizations are all sources for
Locating assessment instrument information
130
What questions should be asked before choosing an instrument?
What is the purpose of the instrument? What is the makeup of the norm group? Are the results of the instrument reliable? Is there evidence of validity? Does the manual provide clear instructions?
131
What practical issues should be considered when choosing an instrument? (5 options)
Time, ease, cost, scoring, interpretation
132
Self report, individually administered, group administration, computer administration, video administered, audio administered, sign language administered, nonverbal are all modes for
Administering assessment instruments
133
What must be done before you administer an instrument
Obtain informed consent and maintain a copy in your records at all times
134
The psychometric property pertaining to consistency and dependability in the production of test scores is known as
Reliability
135
This refers to the degree to which test scores are dependable
Reliability
136
Dependable, consistent, and stable is the definition of
Reliability
137
What is one of the most important characteristics of assessment results?
Reliability
138
If a scale fluctuates and produces different results each time, it is said to be
Unreliable
139
This refers to the results obtained with an assessment instrument, not the actual instrument
Reliability
140
Instruments are rarely totally consistent or error-free. True or false?
True
141
A greater amount of measurement error in test scores means
Lower reliability
142
Amount of error in an instrument is called
Measurement error
143
Any fluctuation that results from factors related to the measurement process that are irrelevant to what is being measured is called
Measurement error
144
The concept of true scores is totally
Theoretical
145
You’re really not going to have 100% of a ____ ____
True score
146
The recognition that some degree of error is inherent in all instruments is known as
Standard error of measurement
147
A simple measure of an individual's test-score fluctuation if the test were given to them repeatedly is known as
Standard error of measurement
148
An estimate of the accuracy of an individual's observed score as compared to the true score is known as
Standard error of measurement
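A minimal worked example (made-up SD and reliability) of the standard SEM formula, SEM = SD x sqrt(1 - reliability):

    import math

    sd, reliability = 15, 0.91               # hypothetical values
    sem = sd * math.sqrt(1 - reliability)    # 15 * sqrt(0.09) = 4.5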
149
What are three types of measurement error
Time sampling, Interrater differences, Content sampling (TIC)
150
Repeated testing of the same individual is known as
Time sampling
151
The greatest source of error in instrument scores is from
Content sampling
152
An error that results from selecting test items that inadequately measure the content that’s intended is known as:
Content sampling
153
The subjectivity of the individual scoring the test is called
Interrater reliability
154
Personality tests or IQ tests are a form of what type of sampling
Content sampling
155
Quality of the test items, test length, test taker variables, and test administration are examples of other
Measurement errors
156
What is the oldest, most commonly used method of estimating reliability?
Test retest
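A minimal Python sketch (made-up scores) of the test-retest estimate: correlate the same examinees' scores from two administrations (statistics.correlation requires Python 3.10+):

    from statistics import correlation

    first  = [88, 92, 75, 68, 95, 80]   # first administration
    second = [90, 89, 78, 70, 93, 82]   # same people, retested later
    r = correlation(first, second)      # near +1 -> stable, reliable scores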
157
This is most useful in measuring traits, abilities, or characteristics that are stable and generally do not change over time
Test retest
158
Giving two different versions or forms of the same test at the same time is called
Simultaneous administration
159
Giving two different versions of the same test on different days to the same group is an example of
Delayed administration
160
An example of a delayed simultaneous administration is using the ______ test
Woodcock-Johnson (Forms A and B)
161
Measuring the extent to which items on the instrument measure the same ability or trait is an example of
Internal consistency reliability
162
Having a high internal consistency reliability means that the tests are
Homogeneous
163
If there is a strong correlation among test items, then there is a
High degree of internal consistency
164
Split-half reliability, the Kuder-Richardson formula, and coefficient alpha are three means for determining
Internal consistency
165
What’s another name for a coefficient alpha?
Cronbach's alpha (computed in SPSS)
166
What is used for items answered yes or no, right or wrong, or zero or one when estimating internal consistency?
Kuder Richardson formula
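A minimal Python sketch (made-up dichotomous items) of coefficient alpha; with 0/1 items like these, the Kuder-Richardson formula (KR-20) gives the same value:

    from statistics import pvariance

    def cronbach_alpha(items):
        # items: one list of scores per item, all from the same examinees
        k = len(items)
        item_var = sum(pvariance(i) for i in items)
        totals = [sum(person) for person in zip(*items)]   # per-examinee totals
        return (k / (k - 1)) * (1 - item_var / pvariance(totals))

    alpha = cronbach_alpha([[1, 1, 0, 1, 0],
                            [1, 0, 0, 1, 0],
                            [1, 1, 0, 1, 1]])   # -> ~0.79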
167
A potential source of error is the lack of agreement among raters for this reliability
Interrater reliability
168
This can be done by correlating the scores obtained independently by two or more raters
Interrater reliability
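A minimal sketch (made-up ratings) of one simple interrater check: the proportion of items where two raters agree exactly:

    rater_a = [3, 4, 2, 5, 4, 3, 1, 4, 5, 2]   # rater A's scores
    rater_b = [3, 5, 2, 4, 4, 3, 2, 4, 5, 2]   # rater B, same behaviors
    agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    # 0.7 here; disagreement reflects rater subjectivity, not content or time error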
169
This does not reflect content sampling or time sampling errors
Interrater reliability
170
Sensitive only to the differences among raters
Interrater reliability
171
What is a test designed to be given more than one time
Test retest, or alternate forms
172
This evaluates the extent to which different items on the test measure the same content
Internal consistency
173
If items are heterogeneous and the test measures more than one construct, the reliability will be
Low
174
Two types of scales that have low reliability
Joy and depression
175
For tests with more than one construct, what method is appropriate?
Split half method
176
What reliability coefficients are acceptable and unacceptable
.70 and above is acceptable; .59 and below is unacceptable
177
What does SEM stand for?
Standard error of measurement
178
The spread of scores by a single individual if tested multiple times equals
Standard error of measure
179
Spread of scores obtained by a group of test takers on a single test
Standard deviation
180
Confidence intervals
Bell curve: 68% within 1 SD, 95% within 2 SD, 99.7% within 3 SD
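A minimal worked example (hypothetical observed score, using the SEM from earlier) of score confidence bands:

    observed, sem = 110, 4.5
    ci_68 = (observed - sem, observed + sem)          # ~68%: 105.5 to 114.5
    ci_95 = (observed - 2 * sem, observed + 2 * sem)  # ~95%: 101.0 to 119.0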
181
Longer tests improve
Reliability
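The standard Spearman-Brown prophecy formula quantifies this; a minimal sketch with a made-up reliability:

    def spearman_brown(r, n):
        # predicted reliability when test length changes by factor n
        return (n * r) / (1 + (n - 1) * r)

    spearman_brown(0.70, 2)   # doubling the items: 0.70 -> ~0.82

The same formula with n = 2 is the correction applied in split-half reliability, mentioned above.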
182
A larger number of test items can more accurately measure the ____, thus reducing content sampling errors.
Construct
183
Using multiple-choice items, writing unambiguous questions, making sure questions are not too hard or too easy, clearly stating administration and scoring procedures, and training those grading or interpreting the test are examples of
Factors that improve reliability
184
Something that is sound, meaningful and accurate
Validity
185
Can be viewed as the extent to which test scores provide answers to the targeted questions
Validity
186
Can be reliable, but not
Valid
187
Cannot be valid and not
Reliable
188
Does the measure produce similar results each time similar people take it?
Reliability
189
It measures what it claims to measure:
Validity
190
Refers to the appropriateness of the use and interpretation of test results, not the test itself
Validity
191
This is a matter of degree; it's not all or none.
Validity
192
This is a single unified concept
Validity
193
Three subtypes of validity
Content, criterion, construct
194
Test manuals are constructed from what types of validity
Content, criterion, construct
195
This type of concept looks at test content, response processes, internal structure, relations to other variables, and consequences of testing
Unitary concept
196
Most textbooks use which type of terminology
Traditional: content, criterion, construct
197
This is specific to a particular purpose
Validity
198
No test is valid for all purposes. True or false?
True
199
What's another name for constructs?
Latent variables
200
What are some examples of latent variables?
Aggression, morale, happiness, quality of life
201
What are scientifically developed concepts/ideas used to describe behavior called
Constructs
202
What cannot be measured directly or observed directly
Constructs
203
What is defined by a group of interrelated variables that can be measured
Construct
204
An example of an interrelated construct variable that can be measured
Aggression: measured by physical violence, verbal attacks, and poor social skills
205
If we have evidence that the interpretation of the results is valid based on the purpose of the test, then the results are considered to
Reflect the construct being measured
206
A measure that provides inconsistent results cannot provide
Valid scores
207
What is the most fundamental consideration in developing and evaluating tests?
Validity
208
Validity centers on the relationship between the ____ of the test and the ____ based on the test scores
Purpose, interpretation
209
The greater the impact the results have on someone's life, the more ____ is required
Evidence
210
Evidence of the relationship between the content of the test and the construct it presumes to measure is known as
Test content validity
211
_____ areas reflect essential knowledge, behaviors, skills that represent the construct
Content
212
______ comes from educational standards, accreditation standards, school curricula, syllabi, and textbooks
Achievement
213
Personality and clinical inventories come primarily from
Characteristics in the DSM
214
This comes from job descriptions, employment assessments, and the activities, tasks, and duties of the specific job
Career
215
Some instruments are designed to measure a general _____, while others are designed to measure ____ components of a construct
construct, several
216
The predictor variable is compared to the criterion it is designed to predict:
Criterion-based evidence (an aptitude test is a predictor variable)
217
The predictor variable is concurrently related to some criterion (e.g., a depressed mood)
Concurrent evidence
218
The degree to which the test score estimates some future level of performance
Predictive evidence
219
The chosen criterion must be _____ to the intended purpose of the test
Appropriate (e.g., an IQ test is not a predictor of morality)
220
Relevant, reliable, and uncontaminated are what
Criterion measures should be
221
Should not be influenced by external factors that are unrelated to the criterion is the definition of
Uncontaminated
222
The means by which we evaluate the relationship between test results and a criterion measure
Validity coefficients
223
The purpose is to show that the test scores accurately predict the criterion performance
Validity coefficient
224
The range is from -1 to +1; .50 is very high, and .21 is very low.
Validity coefficient
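A minimal Python sketch (made-up data) of a predictive validity coefficient: correlate the predictor with the criterion it is supposed to predict (statistics.correlation requires Python 3.10+):

    from statistics import correlation

    aptitude    = [52, 61, 45, 70, 58, 66]        # predictor variable
    performance = [3.1, 3.6, 2.8, 4.0, 3.3, 3.9]  # criterion measure
    validity_coefficient = correlation(aptitude, performance)  # -1 to +1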
225
A means of providing evidence of internal structure of a test
Evidence of homogeneity
226
This can be shown by a high internal consistency coefficient
Homogeneity
227
____ ____ between scales or subtests on a single instrument provides evidence that these components measure the construct that was intended
High correlation
228
____ ____ is obtained by correlating one instrument with other instruments that assess the same construct
Convergent evidence
229
Test developers will use other
Well-established instruments
230
When revising an instrument, developers will use ___ ____ to compare with the latest version to be sure both are measuring the same construct
Previous versions
231
_____ uses consistently low correlations between the test and other tests that measure different constructs
Divergent evidence
232
Another means of providing evidence of construct validity is called
Group differentiation
233
If two groups have vastly ___ scores in a predicted way, then the test has evidence of ____
Different, construct validity
234
Shows the degree to which test scores change with age
Age differentiation
235
Source of construct validity
Experimental results
236
The expectation that benefits will come from the test scores is known as
Intended consequences (evidence based on consequences of testing)
237
Actual and potential consequences of test use, and its social impact, are known as
Unintended consequences
238
The actions, processes, and emotional traits that the test taker invokes in responding to a test
Evidence based on response processes
239
Does it look legitimate? Is it too hard or too childish? Is it too long or too short? These are examples of
Evidence based on response processes (does the test appear to test what it's intended to test?)
240
Disruptive behavior in the classroom is best
Observed
241
The degree to which an instrument measures what it is supposed to measure.
Validity
242
The amount of variation of a random variable expected about its mean.
Standard deviation