A - done Flashcards

0
Q

4 typical evaluative procedures?

A

Clinical interview, informal, personality, ability

1
Q

Number of assessments needed for Dx & Txt?

A

Multiple is best.

2
Q

Assessment vs test?

A

A test is a subset of assessment; assessment can also include interviews, observations, etc.

3
Q

Interpretation vs evaluation?

A

Interpretation assigns meaning. Evaluation assigns value/worth, eg progress or effectiveness.

4
Q

Types of assessments? (5 pairs)

A

Individual/group, standardized/nonstandardized, power/speed, maximal/typical performance, objective/subjective.

5
Q

Standardized/nonstandardized tests?

A

Standardized - consistent administration, established validity & reliability, comparison to norms. Nonstandardized - flexible, variable use, use of judgment by administrator, eg Rorschach, TAT.

6
Q

Purposes of assessment? (6)

A

Dx & Txt planning, placement services, admission (ed), selection (job), monitoring progress, evaluation of overall outcomes.

7
Q

Ethics of Appraisal: competence & assessment?

A

Use only instruments one is trained in and competent to use.

8
Q

Ethics of Appraisal: assessment & informed consent

A

Explain in advance the nature & purpose of the assessment & intended use.

9
Q

Ethics of Appraisal: release of results?

A

Release only to professionals qualified to interpret the results, and only with consent.

10
Q

Ethics of Appraisal: conditions of administration?

A

Conditions that facilitate optimal results.

11
Q

Ethics of Appraisal: instrument selection? (5)

A

Current, valid, reliable, multiculturally appropriate, w consideration for psychometric limitations.

12
Q

Ethics of Appraisal: scoring and interpretation?

A

Document any concerns about the tests, the administration, and how they will be used in counseling.

13
Q

Ethics of Appraisal: assessment construction?

A

Use scientific methodology & knowledge; inform users of benefits & limits; encourage use of multiple sources of info.

14
Q

Civil Rights Act of 1964 plus amendments?

A

Assessments for employability must relate strictly to the job description, and must not discriminate.

15
Q

FERPA?

A

Family Educational Rights & Privacy Act, 1974. Provides confidentiality for test results, but access for both student & parent.

16
Q

IDEA?

A

Individuals w Disabilities Education Improvement Act, 2004. Right to testing at expense of school system. Right to IEP w accommodations.

17
Q

Vocational and Technical Education Act? (7)

A
Vocational assistance for those: w disabilities, the economically disadvantaged, entering nontraditional occupations, w limited English, the incarcerated, adults needing voc training, & single parents.
18
Q

ADA?

A

Americans w Disabilities Act, 1990. Employment testing must measure ability to do job tasks without confounding results with a disability. Ensures accommodations.

19
Q

HIPAA?

A

Health Insurance Portability & Accountability Act, 1996. Obtain consent to release. CTs have access to their records.

20
Q

NCLB?

A

No Child Left Behind Act, 2001. Improve accountability. Requires states to assess basic skills.

21
Q

Larry P. v. Riles

A

Document use of nondiscriminatory & valid assessments.

22
Q

Diana v. California State Board of Educ.

A

Counselors must provide testing information in the CT’s 1st language, as well as English.

23
Q

Regents of the University of California v. Bakke.

A

Barred use of quotas for admissions.

24
Q

Soroka v. Dayton Hudson Corp.

A

Psychological screening tests for hiring are an invasion of privacy. Controversial.

25
Q

Sources of info on assessments?

A

Mental Measurements Yearbook (Buros). Tests in Print. Tests. Test Critiques.

26
Q

MMY?

A

Mental Measurements Yearbook. Details for commercially available assessments, including reliability, validity.

27
Q

TIP.

A

Tests in Print. Lists all published, commercially available tests in psych & educ. No critiques or psychometric data.

28
Q

Test Critiques?

A

Comprehensive reviews, ~8 pp each, written for both professionals & laypersons.

29
Q

Definition of Validity?
Property of?
To increase credibility?

A
  • How accurately does an instrument measure what it purports to measure?
  • A property of the scores of an instrument. Will vary according to intended purpose and intended test takers.
  • More types of validity means greater credibility.
30
Q

8 Types of validity?

A

Content, criterion (concurrent & predictive), construct, experimental design, convergent, & discriminant.
Face validity is not a true type of validity.

31
Q

Content validity?

A

Content is appropriate to purpose, w all major content areas covered w an appropriate number of items for an area’s importance.

32
Q

Criterion validity?

A

Effectiveness relative to a specific criterion. Can be concurrent or predictive validity.

33
Q

Concurrent validity?

A

Comparison to a criterion collected at the same time.

Ex: depression scores & data collected on hospitalizations for SI in the last 6 months.

34
Q

Predictive validity?

A

Predicts performance on a criterion collected in the future.
Ex: can a depression score predict hospitalizations for SI in the future?

35
Q

Construct validity?

A

Extent to which an instrument measures a theoretical construct,
esp. an abstract one.

36
Q

Experimental design validity?

A

Implementation of a design to show an instrument measures a specific construct.

37
Q

Statistical technique used to check construct validity?

A

Factor analysis - examines statistical relationships among subscales & between the subscales & the construct.

38
Q

Convergent validity?

A

A relationship can be shown w other constructs where, theoretically, a relationship should exist.

39
Q

Discriminant validity?

A

No relationship is found w constructs where no relationship should be found.

40
Q

Validity coefficient?

A

A correlation between a test score and the criterion measure.

41
Q

A test of predictive validity?

A

Regression equation to predict an individual’s future score.
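
A minimal sketch of the idea, with made-up regression coefficients: a validation study supplies an intercept and slope, and the equation turns a test score into a predicted criterion score (here, a hypothetical GPA prediction).

```python
# Hypothetical regression coefficients: Y' = a + b * X, where X is the test score.
a, b = 0.5, 0.005          # intercept & slope (made up for illustration)
test_score = 500
predicted_gpa = a + b * test_score
print(round(predicted_gpa, 2))  # 3.0 -- the predicted future criterion score
```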

42
Q

Standard error of estimate?

A

The expected margin of error in a predicted criterion score. Predictive validity is never 100%.
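
A minimal sketch with hypothetical numbers: the standard error of estimate shrinks as the validity coefficient grows, per SEE = SD_y × sqrt(1 - r²).

```python
import math

# SEE = SD_y * sqrt(1 - r_xy^2); SD_y is the criterion's SD,
# r_xy the validity coefficient (both values hypothetical).
sd_y, r_xy = 10.0, 0.60
see = sd_y * math.sqrt(1 - r_xy ** 2)
print(round(see, 2))  # 8.0 -- predicted criterion scores carry about +/- 8 points of error (1 SEE)
```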

43
Q

Define decision accuracy.

A

The accuracy of instruments in supporting decisions in counseling.

44
Q

Decision accuracy-
Definition?
6 types?

A

Assesses the accuracy of an instrument in supporting counseling decisions.
Sensitivity, specificity, false positive, false negative, efficiency, incremental validity.

45
Q

Decision accuracy - sensitivity?

A

Instrument’s ability to identify the presence of a phenomenon.

46
Q

Decision accuracy - specificity?

A

Instrument’s ability to identify the absence of a phenomenon.

47
Q

Decision accuracy - false positive?

A

Instrument wrongly identifies the presence of a phenomenon.

48
Q

Decision accuracy - false negative?

A

Instrument inaccurately identifies the absence of a phenomenon.

49
Q

Decision accuracy - efficiency?

A

Ratio of correct counseling decisions indicated by the instrument over total decisions.

50
Q

Decision accuracy - incremental validity?

A

Concerned w the extent an instrument enhances the accuracy of prediction of a criterion, eg job performance or GPA.
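
The six decision-accuracy terms above fall out of a simple 2x2 table of instrument decisions vs actual status. A minimal sketch with hypothetical counts:

```python
# Hypothetical screening results: instrument decision vs actual status.
tp, fp, tn, fn = 40, 10, 45, 5   # true/false positives & negatives

sensitivity = tp / (tp + fn)     # identifies presence when actually present
specificity = tn / (tn + fp)     # identifies absence when actually absent
efficiency = (tp + tn) / (tp + fp + tn + fn)  # correct decisions / total decisions
print(f"sens={sensitivity:.2f} spec={specificity:.2f} eff={efficiency:.2f}")
# sens=0.89 spec=0.82 eff=0.85
```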

51
Q

Reliability?

A

Consistency of scores obtained by the same person over different administrations of the same test. Reliability is concerned w the error found in instruments.

52
Q

Reliability - test-retest?

A

AKA temporal stability. Consistency of scores across time.

53
Q

Alternative form reliability?

A

AKA parallel form or equivalent form reliability. Consistency of scores across alternative, equivalent tests.

54
Q

Reliability - Internal consistency?

A

Consistency of responses from 1 item to another during a single administration of the test.

55
Q

Split half reliability?

A

Correlates 1/2 the test against the other half.

56
Q

Spearman-Brown Prophecy formula?

A

Used to compensate for short length in split half estimates of reliability in tests.
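
A minimal sketch of the formula, r_full = n·r / (1 + (n-1)·r), where n = 2 gives the classic correction of a split-half correlation to full-length reliability (the half-test correlation of .70 is made up):

```python
def spearman_brown(r: float, n: float = 2.0) -> float:
    """Project reliability when a test is lengthened n-fold: n*r / (1 + (n-1)*r)."""
    return n * r / (1 + (n - 1) * r)

print(round(spearman_brown(0.70), 2))  # 0.82 -- corrected split-half reliability
```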

57
Q

A test for inter-item reliability?

A

Correlate all possible split half combinations in a test.

58
Q

Kuder Richardson Formula 20?

A

Estimate of reliability in inter-item consistency when items are dichotomous eg true/false.

59
Q

Cronbach’s coefficient alpha?

A

Estimate of reliability in inter-item consistency when items have multipoint responses, eg Likert scales.
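
Both estimates share one structure, alpha = (k/(k-1)) · (1 - Σ item variances / variance of total scores); KR-20 is the special case where items are dichotomous and each item variance is p·q. A minimal sketch on toy Likert data:

```python
from statistics import pvariance

items = [          # rows = test takers, columns = k items (toy data)
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
]
k = len(items[0])
item_vars = [pvariance([row[i] for row in items]) for i in range(k)]
total_var = pvariance([sum(row) for row in items])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # 0.96 on this toy data
```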

60
Q

Inter-scorer reliability?

A

AKA inter-rater reliability. Degree of consistency between scorers doing observation/assessment/interviews.

61
Q

How is reliability reported?

A

As a correlation coefficient; the closer to 1.00, the more reliable. Nationally normed achievement & aptitude tests (eg GRE) are typically .90 & above. Personality tests may be below .90.

62
Q

Standard error of measurement?

A

The standard deviation of repeated scores from the same test w the same person. The larger the SEM, the lower the reliability. SEM is often reported w confidence intervals. Ex: w 95% confidence, the true score falls within ±2 SEM of the obtained score; that is the 95% confidence level.
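
A minimal sketch with hypothetical values: SEM = SD · sqrt(1 - reliability), then a 95% confidence band of ±2 SEM around an obtained score.

```python
import math

sd, reliability = 15.0, 0.91       # hypothetical test SD & reliability coefficient
sem = sd * math.sqrt(1 - reliability)
obtained = 100
low, high = obtained - 2 * sem, obtained + 2 * sem
print(round(sem, 2), round(low, 1), round(high, 1))  # 4.5 91.0 109.0 -- 95% confidence interval
```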

63
Q

5 factors influencing reliability?

A

Test length (longer is better).
Range restriction (a restricted range of possible scores lowers reliability).
Homogeneity of items (more homogeneous is better).
Heterogeneity of test takers (more heterogeneous on the tested trait is better).
Speed tests (yield spuriously high reliability because nearly all attempted items are answered correctly).

64
Q

Relationship between reliability and validity?

A

Test scores can be reliable but not valid; valid scores must be reliable. Reliability is necessary but not sufficient for validity.

65
Q

Item analysis?

A

Assess test items - eliminate too easy/difficult/confusing items.

66
Q

Item difficulty?

A

P value = proportion of test takers who answer an item correctly. P = .5 is considered good item difficulty.
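
A minimal sketch with made-up responses:

```python
responses = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]  # 1 = correct, 0 = incorrect (hypothetical)
p = sum(responses) / len(responses)
print(p)  # 0.6 -- close to the .5 target for good item difficulty
```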

67
Q

Item discrimination?

A

How well an item discriminates between high & low scorers.
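
One common index, shown as a minimal sketch with hypothetical proportions: D = p(upper group) - p(lower group).

```python
p_upper, p_lower = 0.85, 0.40  # proportion correct among high vs low total scorers (made up)
d = p_upper - p_lower
print(round(d, 2))  # 0.45 -- a positive D means the item separates high from low scorers
```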

68
Q

Test theory?

A
  • Psychometric theory.
  • Holds that constructs must be measurable in quality & quantity to be considered empirical.
  • Strives to enhance validity & reliability.
69
Q

Classical test theory? (3)

A
  • Most influential.
  • Individual’s observed score = true score + error present during test administration.
  • Aim - increase reliability.
70
Q

Item Response Theory?

A

Aka Modern Test Theory. Applies mathematical models to item data, eg to detect bias (eg different responses from males/females) or to equate scores from 2 different tests.

71
Q

Construct-based validity model? (3)

A
  • A test theory.
  • Validity is a holistic concept.
  • Internal & external validity.
72
Q

Scales of measurement?

A

Nominal, ordinal, interval, ratio.

73
Q

Nominal scale?

A

Named classifications. Numbers as labels.

74
Q

Ordinal scale?

A

Rank order.
Eg Likert scales, or first/second/third place.
Intervals aren’t necessarily equal.

75
Q

Interval scale?

A

Equal intervals. No true zero point.

Ex: educational & psychological test scores are usually interval. Ex: temperature (Fahrenheit/Celsius).

76
Q

Ratio scales?

A

Equal intervals, plus a true zero point.

Ex: height, weight, physical measurements in natural sciences.

77
Q

Scale designs used? (4)

A

Likert. Semantic differential. Thurstone. Guttman.

78
Q

Likert scale?

A

Assessing attitudes or opinions.
Eg Strongly agree to strongly disagree.
Eg Very satisfied to very dissatisfied.

79
Q

Semantic differential scale? (2)

A

Aka self-anchored scale.

Place a mark between two opposite (bipolar) adjectives.

80
Q

Thurstone scale? (2)

A

Express beliefs by marking agree/disagree to various statements that are related but successive.
Employs a paired-comparison method.

81
Q

Guttman scale? (3)

A

Measures intensity of a variable.
Items are presented in a successive order from less to more extreme.
Check items you agree with.

82
Q

Raw scores vs derived scores?

A

Raw is the original score.

Derived scores are converted and compared to a norm group.

83
Q

Normal distribution? (3)

A
  • Bell curve.
  • Most scores fall near the mean, few fall at extremes.
  • Permits comparisons to be made between CTs and across tests for 1 CT through derived scores.
84
Q

Norms

A

Typical performance against which other scores are compared.

85
Q

Norm-referenced assessment?

A

Individual’s score is compared to the average score of the test-taking norm group.

86
Q

Criterion-referenced assessment?

A

Comparing an individual’s score w a predetermined criterion.
Eg licensing exams.

87
Q

Ipsative assessment?

A

Comparing a test taker’s score w his/her previous scores on the test.
Eg computer games.

88
Q

Percentile, or percentile rank? (3)

A
  • Percentage of scores falling at or below an individual score.
  • Not equal units of measure.
  • Percentiles tend to exaggerate differences near the mean and minimize differences at the tails.
89
Q

Standardized score?

A

Compares individual scores to a norm group through conversion of the raw score to a score that specifies the number of standard deviations a score is from the mean.

90
Q

Z score?

A

Mean = 0
1 SD = 1.00
2 SD = 2.00
-1 SD = -1.00

91
Q

T score?

A
Mean of 50, SD of 10. 
Used on personality, interest, and aptitude tests.
1 SD = 60
2 SD = 70
-1 SD = 40
92
Q

Deviation IQ?

A
Aka standard score.
Mean of 100, SD of 15.
1 SD = 115
2 SD = 130
-1 SD = 85
93
Q

Stanine scores?

A

Achievement tests.
Mean of 5, with the mean falling halfway through the 5th interval.
SD of 2. Always a whole number. Approx:
1 SD = 7
2 SD = 9
-1 SD = 3

94
Q

Normal curve equivalents?

A

A standardized score, range 1-99, dividing the normal curve into equal-interval units.
Mean of 50, SD of 21.06.
Unlike percentiles, NCE units are equal; NCE & percentile rank match only at 1, 50, & 99.
1 SD = 71.06
2 SD = 92.12
-1 SD = 28.94
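
The standard scores above are all linear transformations of z. A minimal sketch with a hypothetical raw score, mean, and SD:

```python
raw, mean, sd = 65, 50, 10
z = (raw - mean) / sd                        # z = 1.5
t = 50 + 10 * z                              # T score      = 65.0
iq = 100 + 15 * z                            # deviation IQ = 122.5
nce = 50 + 21.06 * z                         # NCE          = 81.59
stanine = max(1, min(9, round(2 * z + 5)))   # stanine      = 8 (whole number, clamped to 1-9)
print(z, t, iq, nce, stanine)
```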

95
Q

Developmental scores?

A

Age equivalent & grade equivalent scores.

96
Q

Age equivalent scores?

A

Compares an individual’s score w the average score of those of the same age.
Reported in chronological yrs and months.

97
Q

Grade equivalent scores?

A

Compares the individual’s score w the average score of those at the SAME grade level.
Reported in grade level & months in grade.
Does not indicate a need for change in grade. Does not assess skills. And does not mean the individual is performing at that grade level.

98
Q

Ability assessment?

4 Types?

A
Instruments that measure the cognitive domain.
Includes: achievement, aptitude, intelligence, & high stakes testing.
99
Q

Achievement tests -
Purpose?
5 types?

A
  • Subset of ability tests.
  • Assess what one has learned.
  • Can include: standardized norm-referenced tests; teacher-made criterion-referenced tests; tests to assess progress or at-risk status; tests for program evaluation.
100
Q

Standardized achievement tests include? (3)

Acceptable reliability coefficient?

A

Survey batteries, diagnostic tests, readiness tests.

Reliability >= .80

101
Q

Survey batteries?
Subset of?
Define?
Examples? (4)

A

Subset of standardized achievement tests.
Collection of tests - multiple content areas.
Examples: Iowa Test of Basic Skills, Metropolitan Achievement Test, TerraNova, Stanford Achievement Test.

102
Q

Diagnostic tests?
Subset of?
Purpose?
Examples?

A

Subset of standardized achievement tests.
Identify learning disabilities or strengths & difficulties in an academic area.
Eg Wide Range Achievement Test, KeyMath Diagnostic Test, Woodcock-Johnson, Peabody Individual Achievement Test, Test of Adult Basic Education.

103
Q
Readiness tests - 
Subset of?
Define?
Purpose?
Criticisms?
A

Subset of standardized achievement tests.
Definition: A group of criterion-referenced standardized achievement tests that indicate minimum skills needed to move to next grade.
Used in high stakes testing.
Criticized for their cultural and language biases.

104
Q

Aptitude tests -
Assess?
2 types?

A
  • Subset of ability tests.
  • Assess what a person is capable of learning, predict future performance.
  • Types: Cognitive ability tests
    Vocational aptitude tests
105
Q

Cognitive ability tests?

A

Subset of aptitude tests.
Predict ability to perform in school, up through grad school.
Eg ACT, SAT, GRE, LSAT, MCAT, Cognitive Abilities Test (CogAT), Otis-Lennon School Ability Test, Miller Analogies Test.

106
Q
Vocational aptitude testing -
Subset of?
Definition?
Useful to whom?
Examples?
A

Subset of aptitude tests.
Predictive tests of occupational success.
For career guidance for job seekers
For employers screening for competent, well-suited employees
Includes: multiple aptitude tests (eg Armed Services Vocational Aptitude Battery, Differential Aptitude Test) & special aptitude tests.

107
Q
Multiple Aptitude Tests -
Subset of?
Assess?
Predict?
Example?
A

Type of vocational aptitude tests.
Assess several distinct aspects of ability at once.
Predict success in several occupations.
Ex: Armed Services Vocational Aptitude Battery (ASVAB) - most widely used; 10 subtests.

108
Q

Special aptitude tests?

A

Type of vocational aptitude tests.

Assess one homogeneous area of aptitude, eg clerical, mechanical, musical, artistic.

109
Q

Intelligence tests -
Scores?
Purposes?

A

Subset of ability tests.
Single summary score (IQ) and index scores derived from factor analysis.
Identify & classify intellectual developmental disabilities.
Detect giftedness & learning disabilities.

110
Q

First intelligence test developed by?

A

Binet & Simon

111
Q

IQ -
Formula?
Developed by?

A

IQ = MA/CA × 100 (mental age ÷ chronological age, × 100).

Developed by Stern.
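
A worked example with hypothetical ages:

```python
mental_age, chronological_age = 12, 10   # hypothetical values
iq = (mental_age / chronological_age) * 100
print(iq)  # 120.0
```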

112
Q

Spearman on intelligence?

A

2-factor theory:
G - general factor
S - specific factors - skills acquired in training

113
Q

Cattell’s fluid and crystallized intelligence?

A
Fluid = innate ability - reasoning, memory, speed of processing
Crystallized = gained through learning
114
Q

Howard Gardner’s multiple intelligences?

A

8 primary intelligences

Linguistic, logical-math, musical, spatial, bodily kinesthetic, intrapersonal, interpersonal, naturalistic

115
Q

Cattell-Horn-Carroll (CHC) on intelligence?

A

Most empirically validated.
Hierarchical w 3 strata: general, broad cognitive abilities, & narrow cognitive abilities.

116
Q

Common intelligence tests?

A

Wechsler (WAIS, WISC, WPPSI), Stanford-Binet, Kaufman.

117
Q
High stakes testing - 
Subset of?
What is used?
Purpose?
Criticism?
A

Subset of ability tests.
What is used: criterion-referenced; a single defined assessment.
Purpose: a clear pass/fail line w direct consequences for major educ. decisions.
Criticisms: reliance on a single score; diversity not addressed.

118
Q

Clinical assessment - purpose?

Types?

A

Purposes: CT’s self-awareness. Conceptualization and Txt of CT.
Types: Personality assessments
Informal assessments (eg observation, clinical interview)
Other assessments (eg MSE, performance, suicide)

119
Q

Personality tests.
What do they assess?
2 types?

A

Type of clinical assessment.
Facets of character that remain stable - temperament, patterns of behavior.
Objective & projective.

120
Q
Objective personality tests?
Define?
Assess?
Purpose?
Examples?
A

Subset of clinical assessment - personality.
Standardized self report instruments.
Assess personality types, traits, states, self-concept.
Identify psychopathology, assist Txt planning.
Ex: MMPI, Millon (MCMI), Myers-Briggs (MBTI), California Psychological Inventory, 16PF, NEO, Coopersmith Self-Esteem Inventories (for children).

121
Q
Projective personality tests -
Define?
Used by?
Purpose?
Examples?
A

Subset of clinical assessment - personality.
Interpreting CT’s response to ambiguous stimuli.
Psychoanalytic.
Identify psychopathology and for Txt planning.
Ex: Rorschach, TAT, House-Tree-Person, sentence completion tests.

122
Q

Informal assessments -

Subset of? Type? Purpose? Includes?

A

Subset of clinical assessment. Subjective.
Purpose: to identify strengths & needs of CTs.
Includes: Observation, interviewing, rating scales, classification systems.

123
Q

Observation?

2 types?

A

Type of clinical assessment - informal.
Direct - behavior, antecedents, consequences, usually in a naturalistic setting.
Indirect - through self-report or informants, via behavioral interviewing, checklists, rating scales.

124
Q

Clinical interviewing?

A

Type of clinical assessment - informal.
Most common assessment in counseling.
Structured, semi-structured, unstructured.

125
Q

Structured clinical interview?

A

Type of clinical assessment - informal.
Pre-established questions in a set order.
Detailed, exhaustive.
Provide consistency, but no flexibility.

126
Q

Semi-structured clinical interview?

A

Type of clinical assessment - informal.
Pre-established questions and areas.
Can customize, flexible.
More prone to bias, error, less reliable.

127
Q

Unstructured clinical interview?

A

Type of clinical assessment - informal.
Tend to follow CT’s lead w open-ended Qs and reflective skills.
Most flexible.
Least reliable.

128
Q

Rating scales for informal clinical assessment?

A

Evaluate the quantity of an attribute.
Eg a scale from 1 - 5, from hardly at all to extremely
Can be broad-band or narrow-focus.

129
Q

Classification systems for informal clinical assessment -
Define?
3 commonly used systems?

A

To assess presence/absence of an attribute.
1 Behavior & feeling word checklists.
2 Sociometric instruments - assess social dynamics.
3 Situational tests, eg role play to see how CT may do in real life.

130
Q

Mental status exam?

12 parts?

A

Clinical assessment - other.
Snapshot of mental Sx & psychological state.
Appearance, attitude, mood & affect, psychomotor, thought process & thought content & perceptions, judgment & insight, intellectual functioning & memory.

131
Q

A performance assessment?

A

Clinical assessment - other.
Nonverbal assessments. For CTs w foreign language or disabilities.
Ex: Draw-A-Man Test, Cattell Culture Fair Intelligence Test, Test of Nonverbal Intelligence (TONI), Porteus, Bayley, Gesell Developmental Scale.

132
Q

Suicide assessment?

A

Clinical assessment - other.
Gather info to assess lethality & risk factors
through clinical interview or standardized assessments.

133
Q

Suicide assessment acronyms?

A

PIMP

SAD PERSONS

134
Q

PIMP suicide assessment?

A

Plan
Intent
Means
Prior attempt

135
Q

SAD PERSONS suicide assessment?

A

Sex, age, depression, previous attempt, ethanol abuse, rational thought loss, social supports lacking, organized plan, no spouse, sickness.

136
Q

Levels of suicide lethality?

A

Low - not suicidal at time of assessment
Low moderate - somewhat suicidal, but no risk factors
Moderate - suicidal w several risk factors
Moderate high - determined to die, may SA within 72 hrs without intervention
High - SA in process, needs medical intervention

137
Q

Suicide risk factors?

A

Demographics - male, single, widowed, White, higher age
Psychosocial - lack of supports, unemployed/drop in SES
Psych Dx - mood or anxiety disorders, schizophrenia, substance use disorders, borderline, antisocial, narcissistic PDs.
Suicidal emotions - hopelessness, helplessness, worthlessness, loneliness, depression.
Hx - family Hx of suicide, abuse, MI. CT has Hx of SAs.
Individual factors - inability to problem solve, AOD use, low tolerance for psychological pain.
Suicidality & Sx - past and present SI, plans, behavior, intent; no reason for living; HI.

138
Q

Standardized assessments for suicidal lethality?

A

Specific suicide assessments - eg Beck scale for SI.
Reasons for living inventories.
Standardized personality tests - MMPI, Millon
Projective personality tests - TAT, Rorschach, Rotter Incomplete Sentences Blank

139
Q

Definition of bias in assessment? (3)

A

Bias in language or culture.
Deprives a person of demonstrating their true ability.
Can result in lower or higher scores.

140
Q

Types of bias in assessment? (5)

A

Examiner - examiner’s beliefs/behavior influence the test.
Interpretive - interpretation provides unfair adv/disadvantage
Response - test taker uses a response set, eg always ‘yes’
Situational - testing conditions affect different cultures differently
Ecological - global systems - eg use of only Western tests

141
Q

How to reduce bias in assessment? (7)

A

Use assessments appropriate for multicultural pops.
Provide appropriate norms.
Use the best language for the pop.
Consider how culture/group affects administration and interp.
Understand CT’s worldview & level of acculturation.
Be knowledgeable of the CT’s culture.
Avoid cultural stereotypes.

142
Q

Test translation vs test adaptation?

A

Translation isn’t enough; need to adapt for culture, for familiarity of concepts, objects, values.
Adaptation includes empirically validating the cultural equivalence of the test.

143
Q

Computer based testing -
Definition?
Pros?
Cons?

A

AKA computer based assessment.
Administering, analyzing, interpreting via computer.
Advantages: time & cost reduced, scoring accuracy, quick feedback, standardization, privacy.
Disadvantages: expense, less human contact, may not have standards or normative data, some assessments aren’t possible.

144
Q

Computer adaptive testing?

A

Adjusts the test’s structure and items to the test taker’s abilities.
Eg GRE.