Study Guide Exam 2 (Assessment and Diagnosis) Flashcards

1
Q

Norm samples: what they need to be

A

Representative of the population taking the test
Consistent with that population
Current (must match current generation)
Large enough sample size

2
Q

Flynn effect

A

IQ scores rise across successive generations

To stay accurate, intelligence test norms must be updated (renormed) periodically

3
Q

Types of norm samples

A

Nationally representative sample (reflects society as a whole)
Local sample
Clinical sample (compare to people with given diagnosis)
Criminal sample (drawn from offender populations)
Employee sample (used in hiring decisions)

4
Q

Ungrouped frequency distributions

A

For each individual score (or criterion), the number of people/items meeting it is listed

5
Q

Grouped frequency distributions

A

Scores are grouped into intervals (ex: 90-100), and the number of people whose scores lie in each interval is listed
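A minimal Python sketch of both distribution types, using made-up scores (stdlib only):

```python
from collections import Counter

# Hypothetical exam scores (illustrative data, not from the deck)
scores = [72, 85, 91, 67, 88, 95, 78, 83, 90, 74, 100, 69]

# Ungrouped frequency distribution: count each individual score
ungrouped = Counter(scores)

# Grouped frequency distribution: bucket scores into intervals of 10
grouped = Counter((s // 10) * 10 for s in scores)
for low in sorted(grouped):
    print(f"{low}-{low + 9}: {grouped[low]}")
```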

6
Q

Frequency graphs

A

Histograms

7
Q

Mean

A

Arithmetic average

8
Q

Median

A

Point that divides distribution in half

9
Q

Mode

A

Most frequent score

10
Q

Which measure of central tendency to pick

A

Normal distribution: mean
Skewed distribution: median
Nominal data: mode
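A quick Python sketch of why this choice matters, using invented right-skewed data (a few large values pull the mean above the median):

```python
import statistics

# Hypothetical right-skewed values; the outlier 95 drags the mean upward
incomes = [20, 22, 25, 25, 27, 30, 95]

mean = statistics.mean(incomes)      # inflated by the outlier
median = statistics.median(incomes)  # middle value: 25
mode = statistics.mode(incomes)      # most frequent value: 25

# In a right-skewed distribution the mean exceeds the median,
# so the median is the better summary here
print(mean > median)  # True
```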

11
Q

Positions of mean and median in positively and negatively skewed distributions

A
Positively skewed (right skewed): mean is higher than median
Negatively skewed (left skewed): median is higher than mean
12
Q

Standard deviations

A

Average distance of scores from the mean; indicates how much scores vary around the mean

13
Q

Raw scores

A

Number of questions answered correctly on a test

Only used to calculate other scores

14
Q

Percentile ranks

A

Percentage of people scoring below
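One common definition (counting only scores strictly below; some texts add half the ties), sketched in Python with invented data:

```python
def percentile_rank(score, scores):
    """Percentage of scores strictly below the given score."""
    below = sum(1 for s in scores if s < score)
    return 100 * below / len(scores)

# Hypothetical norm sample
sample = [55, 60, 65, 70, 75, 80, 85, 90, 95, 100]
print(percentile_rank(85, sample))  # 60.0 — 6 of 10 scores fall below 85
```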

15
Q

z scores

A

M=0

SD=1

16
Q

t scores

A

M=50

SD=10

17
Q

IQ scores

A

M=100

SD=15
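The three standard-score metrics above (z, t, IQ) share one underlying conversion; a small sketch using a hypothetical norm-sample mean and SD:

```python
def standard_scores(raw, mean, sd):
    """Convert a raw score to z (M=0, SD=1), T (M=50, SD=10), and IQ (M=100, SD=15)."""
    z = (raw - mean) / sd
    return {"z": z, "t": 50 + 10 * z, "iq": 100 + 15 * z}

# Hypothetical norm sample: raw-score mean 40, SD 8
print(standard_scores(52, mean=40, sd=8))
# z = 1.5, T = 65.0, IQ = 122.5
```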

18
Q

Content sampling error

A

Difference between sample of items on test and total domain of items

19
Q

Time sampling error

A

Random fluctuations in performance over time

Can be due to examinee (fatigue, illness, anxiety, maturation) or due to environment (distractions, temperature)

20
Q

Interrater differences

A

When scoring is subjective, different scorers may score answers differently

21
Q

Test-retest reliability

A

Administer the same test on 2 occasions
Correlate the scores from both administrations
Sensitive to time sampling error

22
Q

Things to consider surrounding test-retest reliability

A

Length of interval between testing
Activities during interval (distraction or not)
Carry-over effects from one test to next

23
Q

Alternate-form reliability

A

Develop two parallel forms of test
Administer both forms (simultaneously or delayed)
Correlate the scores of the different forms
Sensitive to content sampling error (simultaneous and delayed) and time sampling error (delayed only)

24
Q

Things to consider surrounding alternate-form reliability

A

Few tests have alternate forms

Reduction of carry-over effects

25
Q

Split-half reliability

A

Administer the test
Divide it into 2 equivalent halves
Correlate the scores for the half tests
Sensitive to content sampling error

26
Q

Things to consider surrounding split-half reliability

A

Only 1 administration (no time sampling error)
How to split test up
Short tests have worse reliability

27
Q

Kuder-Richardson and coefficient (Cronbach’s) alpha

A

Administer test
Compare each item to all other items
Use KR-20 for dichotomous answers and Cronbach’s alpha for any type of variable
Sensitive to content sampling error and item heterogeneity
Measures internal consistency
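Coefficient alpha can be computed straight from the item-score matrix with the standard formula k/(k−1) × (1 − Σ item variances / total variance); a sketch with invented ratings:

```python
from statistics import pvariance

# Hypothetical item scores for 5 examinees on a 4-item scale
items = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]

k = len(items[0])
item_vars = [pvariance([row[i] for row in items]) for i in range(k)]
total_var = pvariance([sum(row) for row in items])

# Cronbach's alpha: high when items covary (internal consistency)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```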

28
Q

Inter-rater reliability

A

Administer test
2 individuals score test
Calculate agreement between scores
Sensitive to differences between raters
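A sketch of agreement between two hypothetical raters: simple percent agreement, plus Cohen's kappa as one common chance-corrected index (kappa is an addition here; the card only mentions agreement):

```python
from collections import Counter

# Hypothetical categorical scores from two raters
rater1 = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"]
rater2 = ["pass", "fail", "fail", "pass", "fail", "pass", "fail", "fail"]

n = len(rater1)
observed = sum(a == b for a, b in zip(rater1, rater2)) / n  # percent agreement

# Expected chance agreement from each rater's marginal proportions
p1, p2 = Counter(rater1), Counter(rater2)
chance = sum((p1[c] / n) * (p2[c] / n) for c in p1)

# Cohen's kappa: agreement corrected for chance
kappa = (observed - chance) / (1 - chance)
```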

29
Q

High-stake decision tests: reliability coefficient used

A

Greater than 0.9 or 0.95

30
Q

General clinical use: reliability coefficient used

A

Greater than 0.8

31
Q

Class tests and screening tests: reliability coefficient used

A

Greater than 0.7

32
Q

Content validity

A

Degree to which the items on the test are representative of the behavior the test was designed to sample

33
Q

How content validity is determined

A

Expert judges systematically review the test content

Evaluate item relevance and content coverage

34
Q

Criterion-related validity

A

Degree to which the test is effective in estimating performance on an outcome measure

35
Q

Predictive validity

A

Form of criterion-related validity
Time interval between test and criterion
Example: ACT and college performance

36
Q

Concurrent validity

A

Form of criterion-related validity
Test and criterion are measured at same time
Example: language test and GPA

37
Q

Construct validity

A

Degree to which test measures what it is designed to measure

38
Q

Convergent validity

A

Form of construct validity

Determined by correlating test scores with measures of the same or similar construct (correlations should be high)

39
Q

Discriminant validity

A

Form of construct validity

Determined by correlating test scores with measures of dissimilar constructs (correlations should be low)

40
Q

Incremental validity

A

Determines if the test provides a gain over another test

41
Q

Face validity

A

Determines if the test appears to measure what it is designed to measure
Not a true form of validity
Problem with tests high in these: can fake them

42
Q

Type of material that should be used on a matching test

A

Homogeneous material (all items should relate to a common theme)

43
Q

Multiple choice tests: what kinds of stems should not be included?

A

Negatively-stated ones

Unclear ones

44
Q

Multiple choice tests: how many alternatives should be given?

A

3-5

45
Q

Multiple choice tests: what makes a bad alternative?

A

Long
Grammatically inconsistent with the stem
Implausible

46
Q

Multiple choice tests: how should placement of correct answer be determined?

A

Random (otherwise, examinees can detect pattern)

47
Q

Multiple choice tests, true/false tests, and typical response tests: what kind of wording should be avoided?

A

“Never” or “always” for all 3
“Usually” for true/false
“All of the above” or “none of the above” for multiple choice

48
Q

True/false tests: how many ideas per item?

A

1

49
Q

True/false tests: what should be the ratio of true to false answers?

A

1:1

50
Q

Matching tests: ratio of responses to stems?

A

More responses than stems (otherwise one wrong match forces a second error)

51
Q

Matching tests: how long should responses and lists be?

A

Brief

52
Q

Essay tests and short answer tests: what needs to be created?

A

Scoring rubric

53
Q

Essay tests: what kinds of material should be covered?

A

Objectives that can’t be easily measured with selected-response items

54
Q

Essay tests: how should grading be done?

A

Blindly

55
Q

Short answer tests: how long should answers be?

A

Questions should be able to be answered in only a few words

56
Q

Short answer tests: how many correct responses?

A

1

57
Q

Short answer tests: for quantitative items, what should be specified?

A

Desired level of precision

58
Q

Short answer tests: how many blanks should be included? How long should they be?

A

Only 1 blank per item
Long enough to write out the answer
Otherwise, the blank itself becomes a dead giveaway

59
Q

Short answer tests: where should blanks be included?

A

At the end of the sentence

60
Q

Typical response tests: what should be covered?

A

Focus items on experiences (thoughts, feelings, behaviors)

Limit items to a single experience

61
Q

Typical response tests: what kinds of questions should be avoided?

A

Items that will be answered universally the same

Leading questions

62
Q

Typical response tests: how should response scales be constructed?

A

If a neutral option is desired, use an odd-numbered scale
High numbers shouldn't always represent the same thing (reverse-key some items)
Scale points should be labeled (Likert-type scale, e.g., ratings from 0-7)

63
Q

Spearman

A

Identified a general intelligence factor ("g")

Underlies performance on all cognitive tasks

64
Q

Cattell-Horn-Carroll

A

10 types of intelligence theory

65
Q

3 abilities incorporated by most definitions of intelligence

A

Problem solving
Abstract reasoning
Ability to acquire knowledge

66
Q

Original determination of IQ (used by Binet)

A

(Mental age / chronological age) × 100
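Binet's ratio formula as code, with hypothetical ages for illustration:

```python
def ratio_iq(mental_age, chronological_age):
    """Binet's original ratio IQ: mental age / chronological age x 100."""
    return mental_age / chronological_age * 100

print(ratio_iq(10, 8))  # 125.0 — a child performing above age level
print(ratio_iq(6, 8))   # 75.0  — a child performing below age level
```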

67
Q

How IQ is currently determined

A

Raw score compared to age/grade appropriate norm sample

M=100, SD=15

68
Q

Why professionals have a love/hate relationship with intelligence tests

A

Good: reliable and valid (psychometrically sound, predict academic success, fairly stable over time)
Bad: limited (make complex construct into 1 number), misunderstood and overused

69
Q

Group administered tests: who administers and who scores?

A

Standardized: anyone can administer (teachers, etc.), but professionals interpret

70
Q

Group administered tests: content focuses on which skills most?

A

Verbal skills

71
Q

Examples of group-administered aptitude tests

A

Otis-Lennon School Ability Test

American College Test (ACT)

72
Q

Individually administered tests: how standardized?

A

Very standardized
No feedback given during testing regarding performance or test
Additional queries only when specified (only can say “Tell me more about that.”)
Answers are recorded verbatim

73
Q

Individually administered tests: starting point

A

Starting point determined by age/grade

Reversals sometimes needed (if the person misses the first question, the examiner must drop back to an easier starting level)

74
Q

Individually administered tests: ending point

A

Testing ends when person answers 5 questions wrong in a row

75
Q

Individually administered tests: skills tested

A

Verbal and performance

76
Q

3 individually administered IQ tests for adults

A

Wechsler Adult Intelligence Scale (WAIS; most commonly used)
Stanford-Binet
Woodcock-Johnson Tests of Cognitive Abilities

77
Q

Child version of Wechsler Adult Intelligence Scale

A

Wechsler Intelligence Scale for Children (WISC)

78
Q

WAIS: subtests and index scores

A

15 subtests combine to make 4 index scores: Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), Processing Speed Index (PSI)
4 index scores combined to make Full Scale IQ score

79
Q

WAIS: norm set

A

Older teenagers to elderly

80
Q

WISC: basics

A

2-3 hours to administer and score
Administered by professionals
Normed for children elementary-aged to older adolescence

81
Q

Stanford-Binet: norm set

A

Young children to elderly

82
Q

Stanford-Binet: IQ scores

A

3 composite IQ scores: verbal IQ, nonverbal IQ, full scale IQ

83
Q

Score range difference between WAIS/WISC and Stanford-Binet

A

Stanford-Binet: possible to score higher than 160 (not possible for WAIS or WISC)

84
Q

Woodcock-Johnson: norm set

A

Young children to elderly

85
Q

What Woodcock-Johnson is based on

A

Cattell-Horn-Carroll theory of 10 types of intelligence

86
Q

Woodcock-Johnson full scale IQ

A

Based on comprehensive assessment of Cattell-Horn-Carroll abilities

87
Q

Full scale IQ

A

Overall composite IQ (the number that gets reported)

88
Q

What kind of a construct is IQ?

A

Unitary construct

89
Q

2 disorders that include intelligence in the criteria

A
Intellectual disability (IQ less than 70, impairments across multiple domains- occupational, educational, social function, activities of daily living)
Learning disorders (discrepancy between intelligence and achievement; math, reading, written expression)
Neither is based on intelligence alone
90
Q

Response to intervention

A

Method of preventing struggling students from being placed in special ed
Students are provided regular instruction: progress is monitored
If they don’t progress, they get additional instruction: progress is monitored
Those who still don’t respond receive special education or special education evaluation

91
Q

Achievement definition

A

Knowledge in a skill or content domain in which one has received instruction

92
Q

Aptitude vs. achievement

A

Aptitude measures cognitive abilities/knowledge accumulated across life experience
Achievement measures learning due to instruction

93
Q

Group administered achievement tests

A

Can be administered by anyone, but interpreted by professionals
Standardized
Items increase in difficulty as exam progresses
Time limits often included
Often focus on verbal skills

94
Q

Examples of group administered achievement tests

A

Stanford Achievement Tests
Iowa Tests of Basic Skills (Iowa Basics)
California Achievement Tests

95
Q

What individually administered achievement tests are used for

A

Used to determine presence of learning disorders

96
Q

Standardization of individually administered achievement tests

A

No feedback given during testing regarding performance or test
Additional queries used only when specified
Answers are recorded verbatim

97
Q

Examples of individually administered achievement tests

A

Wechsler Individual Achievement Test
Woodcock-Johnson Tests of Achievement
Wide Range Achievement Test

98
Q

Wechsler Individual Achievement Test: norm set and areas tested

A

Normed for young children to elderly

Scores: reading, math, written language (handwriting), oral language

99
Q

Woodcock-Johnson Tests of Achievement: norm set and areas tested

A

Normed for young children to elderly

Scores: reading, oral language, math, writing

100
Q

Wide Range Achievement Test: norm set and areas tested

A

Normed for young children to elderly

Scores: word reading, reading comprehension, spelling, math

101
Q

How Wide Range Achievement Test differs from other 2

A

WRAT is used as a screening test: it takes only 20-30 minutes to administer (others take 1.5-3 hours)

102
Q

Other examples of achievement tests

A

School tests (teacher-constructed tests)
Psych GRE
MCAT
Licensing exams (EPPP- psychologists)

103
Q

Personality

A

Characteristic way of behaving/thinking across situations

104
Q

Uses for personality assessments

A
Diagnosis
Treatment planning
Self-understanding
Identifying children with emotional/behavioral problems
Hiring decisions
Legal questions
105
Q

Woodworth

A

Developed first personality test (Personal Data Sheet)

106
Q

Trait vs. state

A

Trait: stable internal characteristic, test-retest reliability can be greater than 0.8
State: transient, lower test-retest reliability

107
Q

Response set

A

Unconscious tendency to respond in a characteristically negative or positive manner, regardless of item content

Test taker bias that affects formal personality assessment

108
Q

Dissimulation

A

Faking the test
Increases with face validity
Test taker bias that affects formal personality assessment

109
Q

Validity scales

A

Used to detect individuals not responding in an accurate manner on personality assessments

110
Q

Content rational approach

A

Similar to process of determining content validity: expert looks at test and decides if it represents what it should be testing

111
Q

Empirical criterion keying

A

Large pool of items is administered to 2 groups: clinical group with specific diagnosis and control group
Items that discriminate between groups are retained (may or may not be directly associated with psychopathology- not necessarily face valid)
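A toy sketch of the item-retention step with invented endorsement rates (the item names and the 0.25 cutoff are arbitrary illustrations, not from any real inventory):

```python
# Hypothetical endorsement rates (proportion answering "true") for each item
# in a clinical group vs. a control group
clinical = {"item1": 0.80, "item2": 0.55, "item3": 0.30, "item4": 0.90}
control = {"item1": 0.75, "item2": 0.20, "item3": 0.35, "item4": 0.40}

# Retain items whose endorsement rates differ substantially between groups,
# regardless of whether the item content looks related to the diagnosis
THRESHOLD = 0.25
retained = [item for item in clinical
            if abs(clinical[item] - control[item]) >= THRESHOLD]
print(retained)  # ['item2', 'item4']
```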

112
Q

Minnesota Multiphasic Personality Inventory (MMPI)

A

Most used personality measure
Developed using empirical criterion keying
Contains validity scales (detect random responding, lying, etc.)
Adequate reliability
10 clinical scales

113
Q

Hypochondriasis

A

Clinical scale on MMPI

Somatic complaints

114
Q

Depression

A

Clinical scale on MMPI

Pessimism, hopelessness, discouragement

115
Q

Hysteria

A

Clinical scale on MMPI

Development of physical symptoms in response to stress

116
Q

Psychopathic deviate

A

Clinical scale on MMPI

Difficulty incorporating societal standards and values

117
Q

Masculinity/femininity

A

Clinical scale on MMPI

Tendency to reject stereotypical gender norms

118
Q

Paranoia

A

Clinical scale on MMPI

Paranoid delusions

119
Q

Psychasthenia

A

Clinical scale on MMPI

Anxiety, agitation, discomfort

120
Q

Schizophrenia

A

Clinical scale on MMPI

Psychotic symptoms, confusion, disorientation

121
Q

Hypomania

A

Clinical scale on MMPI

High energy levels, narcissism, possibly mania

122
Q

Social introversion

A

Clinical scale on MMPI

Prefers being alone to being with others

123
Q

Factor analysis

A

Statistical approach to personality assessment development

Evaluates the presence/structure of latent constructs

124
Q

NEO Personality Inventory

A

Developed using factor analysis
5-factor model (Neuroticism, Extraversion, Openness, Agreeableness, Conscientiousness)
Pretty good reliability and validity

125
Q

Theoretical approach

A

Match test to theory

126
Q

Myers-Briggs Type Indicator

A

Developed using theoretical approach
Based on Jung’s theories
4 scales: introversion (I)/extraversion (E), sensing (S)/intuition (N), thinking (T)/feeling (F), judging (J)/perceiving (P)
Personality represented by one of 16 four-letter combinations

127
Q

Millon Clinical Multiaxial Inventory (MCMI)

A

Developed using theoretical approach
Based on Millon’s theories surrounding personality disorders
2 sets of scales: clinical personality scales and clinical syndrome scales
Good reliability and validity, but high correlations between scales (problem)

128
Q

Objective personality assessments given to children

A

Child Behavior Checklist
Barkley Scales (ADHD)
Each test has a version for the parent, a version for the child, and a version for the teacher to fill out

129
Q

Broad-band vs. symptom measures

A

Broad-band: lots of info on a variety of topics, allow for a comprehensive view (example: MMPI)
Symptom measure: identify specific symptoms (example: Beck Depression Inventory)

130
Q

Ink blot test

A

Examinee is presented with an ambiguous inkblot and asked to identify what they see
Limited validity

131
Q

Rorschach ink blot test scoring/interpreting

A

Exner developed most comprehensive system for scoring (including norm set)
Limited validity, though

132
Q

Apperception tests

A

Given an ambiguous picture, examinee must make up story
Themes presented in stories tell something about examinee
Have issues with validity

133
Q

Projective drawings: advantage

A

Require little verbal abilities/ child friendly

134
Q

House-tree-person test

A

House-tree-person test (house: home life and family relationships, tree: deep feelings about self, person: less deep view of self)

135
Q

Pros and cons of projective tests

A

Pros: popular in clinical settings, supply rich information, low face validity (hard to fake)
Cons: questionable psychometrics (poor reliability and validity), so should be used with caution

136
Q

Anatomical dolls

A

Controversial assessment technique
Used to assess sexual assault in children (watch what child is paying attention to, how child plays with doll, etc.)
Lots of false positives

137
Q

Hypnosis assisted assessment and drug assisted assessment

A

Controversial assessment techniques
Drug-assisted: "truth serum" (sodium amytal) helps people relax and share difficult information
Hypnosis-assisted: helps people relax and remember

138
Q

Neuropsychology

A

Study of brain-behavior relationships

139
Q

Neurology vs. neuropsychology

A

Neurologist focuses on anatomy and physiology of brain

Neuropsychologist focuses on functional product (behavior and cognition) of CNS dysfunction

140
Q

Uses of neuropsychology

A

Identify damaged areas of brain
Identify impairments caused by damage
Assessing brain function

141
Q

Common referral questions

A
Traumatic brain injury
Cerebrovascular accidents (example: stroke)
Tumors
Dementia and delirium
Neurological conditions
142
Q

A thorough neuropsychological assessment includes…

A
Higher order information processing
Anterior and posterior cortical regions
Presence of specific deficits
Intact functional systems
Affect, personality, behavior
143
Q

Fixed battery

A

Comprehensive, standard set of tests administered to everyone
Take a long time to administer (about 10 hours)

144
Q

Most commonly used fixed battery

A

Halstead-Reitan Neuropsychological Test Battery for Adults (HRNB)

145
Q

Flexible battery

A

Flexible combination of tests to address specific referral question

146
Q

Brief screeners

A

Quickly administered tests that provide general information on functioning
Used to determine whether more testing is needed
Example: mini mental status exam

147
Q

Memory assessments

A

Memory complaints occur in both functional (psychiatric) and organic disorders (forgetting recent events)
Memory test patterns can be used to discriminate between psychiatric disorders and brain injury (true forgetting of recent events is common in brain injury but not in psychiatric disorders)

148
Q

Most commonly used memory test

A

Wechsler Memory Scale

149
Q

Continuous performance tests

A
Used to assess attention (ADHD diagnosis, etc.)
Boring tasks (e.g., press a key whenever an X appears on screen); measure how well the person sustains attention
150
Q

Executive function tests

A

Stroop task: measure ability to ignore reading word (name color of ink only)
Wisconsin card sort: measure adaptability to new rules
Delay discounting: measure ability to delay gratification in order to gain a greater outcome later on

151
Q

Motor function tests

A

Grip strength
Finger tapping test
Purdue pegboard (fine motor skills: put pegs on peg board, put washers on pegs)

152
Q

Sensory functioning tests

A
Clock drawing test
Facial recognition test
Left-right orientation
Smell identification
Finger orientation
153
Q

Language functioning tests

A

Measure ability to develop language skills and ability to use language

154
Q

Example of language functioning test

A

Expressive Vocabulary Test

Boston Diagnostic Aphasia Examination

155
Q

Normative approach to interpretation

A

Compare current performance against normative standard

Inferences made within context of premorbid ability

156
Q

Ideographic approach to interpretation

A

Compare within the individual: compare current scores to previous scores or estimates of premorbid functioning

157
Q

How to estimate premorbid functioning

A

Prior testing
Reviewing records
Clinical interview (“What were you like beforehand?”)
Interviewing others
Demographic estimation (assuming the person was average for their demographic group)
Hold tests (tests resistant to brain damage, such as vocabulary; scores are used to estimate premorbid IQ)

158
Q

Pattern analysis approach to interpretation

A

Patterns across tasks differentiate functional/dysfunctional systems

159
Q

Pathognomonic signs

A

Signs that are highly indicative of dysfunction

160
Q

ABCs of behavioral assessment

A

A: antecedent (what was happening before behavior took place)
B: behavior (what did the person do)
C: consequent (what happened after the behavior took place)

161
Q

Direct observation

A

Method of behavioral assessment

Observe behavior in its context (real world)

162
Q

Analogue assessment

A

Method of behavioral assessment

Simulate real world events in a therapy setting through role play

163
Q

Indirect observation

A

Client monitors observations through self-monitoring (recording behavior) or self-report (remembering what happened after the fact)

164
Q

Behavioral interview

A

Clinical interview focusing on ABCs

Relies on self-report

165
Q

Sources of information for behavioral assessment

A
Client
Therapist
Parents
Teachers
Spouses 
Friends
166
Q

Pros and cons for behavioral assessment

A

Pros: direct information, contextual
Cons: labor intensive, reactivity, not everything is observable

167
Q

Reactivity

A

Problem with direct observation: behavior changes when being observed
Decreases as observation time increases

168
Q

Settings for behavioral assessment

A

School
Home
Therapy setting
Real world is preferable to therapy setting

169
Q

Formal inventories

A

Used to enable comparison across people (standardization)
Informants rate behavior on a number of dimensions
Parents, teachers, spouse, child, etc.

170
Q

Formal inventories: broad-based vs. single domain

A

Broad based: cover a number of behaviors/disorders (example: Achenbach)
Single domain: assess behavior for 1 disorder (example: Childhood Autism Rating Scale, Barkley Scales- ADHD)

171
Q

Psychophysiology

A

Used to record internal behavior/physiological responses

172
Q

EEG

A

Used in psychophysiology

Measures brain waves by measuring electrical activity across scalp

173
Q

GSR (Galvanic skin response)

A

Used in psychophysiology

Measures sweat

174
Q

Settings for forensic psychology

A
Prison (most common)
Police departments
Law firms
Government agencies
Private practice (consultants)
175
Q

Role of psychologists in court

A

Provide testimony as an expert witness

176
Q

Expert witness

A

Person who possesses knowledge and expertise necessary to assist judge/jury
Objective presentation is goal

177
Q

Differences between clinical and forensic assessment: purpose

A

Clinical: purpose is diagnosis and treatment
Forensic: purpose is gaining information for court

178
Q

Differences between clinical and forensic assessment: participation

A

Clinical: participation is voluntary
Forensic: participation is involuntary

179
Q

Differences between clinical and forensic assessment: confidentiality

A

Clinical: confidentiality
Forensic: no confidentiality

180
Q

Differences between clinical and forensic assessment: setting

A

Clinical: office
Forensic: jail

181
Q

Differences between clinical and forensic assessment: testing attitude

A

Clinical: positive, genuine
Forensic: hostile, coached (by lawyer; malingering is a big concern)

182
Q

Not guilty by reason of insanity (NGRI)

A

At the time of the offense the defendant, by reason of mental illness or mental defect, did not know his/her conduct was wrong
Used in less than 1% of felony cases; successful in about 25%
Results in mandatory hospitalization (prison-based state hospital; stay until person is no longer a danger)

183
Q

NGRI defense: what assessment involves

A

Review of case records
Review of mental health history
Clinical interview
Psychological testing

184
Q

Competency to be sentenced

A

The convicted person must understand the reason for the punishment
If they cannot understand the reason, the punishment cannot be imposed
Rarely contested; most commonly contested in capital cases

185
Q

Mitigation in sentencing

A

Determining whether circumstances exist that lessen moral culpability
Examples: crime of passion, brain injury causing impulsivity
Evaluate probability of future violence

186
Q

Juvenile tried as adult

A

Determining whether to transfer juvenile to adult court

Decision is based on cognitive, emotional, and moral maturity

187
Q

Capital sentencing and intellectual disability

A

Execution of people with intellectual disabilities is outlawed
Testing assesses cognitive capacity

188
Q

Personal injury litigation

A

Attempt to seek recovery of actual damages (out of pocket costs) and/or punitive damages (grief/emotional distress)
Psychologist must determine presence of CNS damage, assess emotional injury, quantify the degree of injury, and verify the injury actually took place

189
Q

Divorce and child custody

A

Must determine best interests of children

Assess parent factors and child factors

190
Q

Civil competency

A

Determining whether person is able to manage his/her affairs, make medical decisions, and waive rights
Neuropsych testing used

191
Q

Other civil matters relating to children

A

Child abuse/neglect investigations
Removing children from the home
Adoption considerations

192
Q

Admissibility

A

Expert standing doesn’t guarantee testimony will be accepted

193
Q

Daubert standard

A

Expert’s reasoning/methods must be reliable, logical, and scientific
Credible link between reasoning and conclusion
Credibility must be established as well

194
Q

Third-party observers

A

Attorneys or other experts may ask to be present during assessment
Issues: standardization procedures, professional standards, test security

195
Q

Demographic factors that serve as a potential basis for bias

A

Intelligence scores are often higher for Whites than for Blacks, Hispanics, or Native Americans
Intelligence scores are often higher for Asian Americans than for Whites

196
Q

Explanations for differences in psychological assessments

A

Genetic factors
Environmental factors (SES, education, culture)
Gene-environment interaction
Test bias

197
Q

Bias

A

Systematic influence that distorts measurement or prediction by test scores (systematic difference in test scores)

198
Q

Fairness

A

Moral, philosophical, legal issue

Is it okay that differences across groups exist on assessments?

199
Q

Offensiveness

A

Content that is viewed as offensive or demeaning

200
Q

Inappropriate content

A

Source of potential bias

Minority children may not have been exposed to the content appearing on the test or required to take it

201
Q

Inappropriate standardization samples

A

Source of potential bias

Minorities are underrepresented in standardization samples

202
Q

Examiner and language bias

A

Source of potential bias
Most psychologists are White and speak standard English
May intimidate ethnic minorities
Difficulties communicating accurately with minority children

203
Q

Inequitable social consequences

A

Objection to testing
Consequences of test results differ for minorities
Examples: being perceived as unable to learn, assignment to dead-end jobs, reinforcement of previous discrimination, labeling effects

204
Q

Measurement of different constructs

A

Source of potential bias

Tests measure different constructs when used with minorities

205
Q

Differential predictive validity

A

Source of potential bias

Predictions are valid for one group but not for another

206
Q

Qualitatively distinct aptitude and personality

A

Source of potential bias
Minority/majority groups possess qualitatively different aptitude and personality structure
Test development should begin with different definitions for different groups

207
Q

Cultural loading

A

Degree of cultural specificity present in the test

Test can be culturally loaded without being culturally biased

208
Q

Culture-free tests

A

Several attempts have been made to create these, but they have ultimately been unsuccessful

209
Q

Ways to reduce bias on tests

A

Use minority review panels to look for cultural loading (problem: high disagreement)
Factor analysis and item statistics: use statistics to determine whether questions function differently across groups
Assess across groups (does it work for everyone?)
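The statistical bullet above can be made concrete. One standard way to ask whether an item "works differently" across groups is the Mantel-Haenszel differential item functioning (DIF) statistic: examinees are stratified by total score, and within each stratum the odds of answering the studied item correctly are compared between a reference and a focal group. The sketch below is minimal and uses synthetic data; the function name and dataset are illustrative, not from the source.

```python
# Minimal sketch of the Mantel-Haenszel common odds ratio for DIF.
# Synthetic/hypothetical data; values near 1.0 suggest little item bias.
from collections import defaultdict

def mh_odds_ratio(item_correct, group, total_score):
    """Mantel-Haenszel common odds ratio across score strata.

    item_correct: list of 0/1 responses to the studied item
    group: list of 'ref' or 'focal' labels per examinee
    total_score: matching criterion per examinee (e.g., total test score)
    """
    # 2x2 counts per stratum: A/B = ref correct/incorrect, C/D = focal
    strata = defaultdict(lambda: {"A": 0, "B": 0, "C": 0, "D": 0, "N": 0})
    for y, g, s in zip(item_correct, group, total_score):
        cell = strata[s]
        cell["N"] += 1
        if g == "ref":
            cell["A" if y else "B"] += 1
        else:
            cell["C" if y else "D"] += 1
    num = sum(c["A"] * c["D"] / c["N"] for c in strata.values() if c["N"])
    den = sum(c["B"] * c["C"] / c["N"] for c in strata.values() if c["N"])
    return num / den if den else float("inf")

# Tiny synthetic example: within each score stratum both groups succeed at
# the same rate, so the odds ratio should be 1.0 (no evidence of DIF).
item = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
grp = ["ref", "ref", "ref", "focal", "focal", "focal"] * 2
tot = [10, 10, 10, 10, 10, 10, 20, 20, 20, 20, 20, 20]
print(round(mh_odds_ratio(item, grp, tot), 2))  # -> 1.0
```

Stratifying by total score is the key design choice: it compares examinees of similar overall ability, so a ratio far from 1.0 points to the item itself, not to group differences in ability.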

210
Q

Evidence for cultural bias

A

Little evidence exists (well-developed, standardized tests show little bias)

211
Q

General ethics to consider

A

Stick to referral question
Match test to your purpose
Consider reliability and validity
Understand norm sample

212
Q

Using testing in context

A

Use multiple measures to converge on a diagnosis

Attend to behavior observations

213
Q

Client considerations

A

Informed consent
Involve client in decisions
Maintain confidentiality
Be sensitive in presenting results

214
Q

Other considerations

A

Maintain test security
Don’t practice outside your area of expertise
Maintain cultural sensitivity