Flashcards

1
Q

Comorbidity

A
  1. Two or more medical conditions present simultaneously in a patient
  2. Ex. Depression and anxiety

2
Q

Importance of Assessment

A
  1. Aid in discovery of information about how individual:
    A. Perceives self
    B. Perceives others
    C. How perception helps/hinders ability to achieve goals
  2. Identify dysfunctional thinking:
    A. Develop specific tx plan to learn new ways of thinking
  3. Acquire information about one’s social history
    A. Self perception is learned
    B. Behaviors/emotions/ thoughts have been reinforced
    C. Identify how this learning has enhanced/limited individual ability to cope effectively w/ life
3
Q

Clinical Assessment

A
  1. Psychological assessment via the Clinical Method:
    A. Semi-structured interview: presenting problem, current mental status, developmental history
    B. Relies on clinician experience and intuition
  2. Accuracy and precision affected by numerous factors
    A. Clinical judgment
    B. Hypothesis confirmation bias > gather info that only supports suspected diagnosis
    C. Significant symptom overlap in DSM
    - 46% of Americans meet criteria for a DSM disorder at some point
    - 28% of that group manifest comorbidity
    - accurate diagnosis using clinical method alone > major challenge
  3. Psychological problems generally difficult for people to describe accurately
4
Q

Standardized Psychological Testing

A
  1. Proven superior to clinical method > reliability and validity
  2. Diagnosis (some) require testing
    A. Learning disorders
    B. Mental retardation
    C. Brain damage after injury
    D. Dementia
  3. Eliminates response bias
    A. Situational defensiveness
    B. Symptom exaggeration/ malingering
    C. Inconsistency of response to items
  4. Can measure degree or severity of disorder more precisely than clinical method
    A. Mild vs. severe autism spectrum disorder
    B. Levels of depressive disorder
  5. Enables clinician to gather large amount of client info
    A. Personality traits that might otherwise be overlooked
    B. Elimination of illegal/unethical issues that occur from unintended bias
    C. Allows individuals to be compared to large groups of other peers, so inferences about strengths/weaknesses can be made
    D. Data gathered can be compared to data from other sources (family, interview) helping to formulate tx
5
Q

Why do practitioners decide against psychological testing?

A
  1. Lack of training
  2. Much testing precluded by insurance companies and court systems
  3. The way in which students are trained > not adequately addressing efficiency
    A. Ponderous/ redundant/ ambiguous/ obscure reports that take too long to read/understand
  4. Why important for masters students to understand psychological testing?
    A. When a specific diagnosis may lead to medication tx > clinician in better position to refer to psychiatrist
6
Q

Clinician use of psychological tests

A
  1. No single assessment technique provides clinician w/ all info about pt.
    A. Every modality has strength/ weakness
    B. Limited by time, a single interview, or the psychometric tool used
  2. Task of psychometric testing:
    A. Gather as much useful info as possible
    B. Consider:
    -nature of info provided by each method
    -peculiarities associated w/ specific ways different scales define a construct
    -reliability and validity of different scales
    -motivational and environmental circumstance present during assessment
    -compare data with pt hx, and pt observation
  3. Integrate all data into clear, concise description of how person perceives self and others, and how perception inhibits/helps one’s life goals
7
Q

Examples of types of assessment and domains of test selections

A
  1. Educational decisions (learning disability)
  2. Forensic (mentally disordered offender, competency to stand trial)
  3. Personal injury lawsuits
  4. Workman’s comp
  5. Veterans benefits
  6. Child custody
    Domains
  7. Cognitive functioning
  8. Emotional functioning (psychopathology)
  9. Personality
  10. Adaptive level
  11. Alcohol abuse
  12. Diagnosis
  13. Prognosis risk
8
Q

Correlation coefficient (r)

A
  1. Ranges from -1.0 to +1.0
    A. The closer r is to +1 or -1, the more closely the two variables are related
    B. If r is close to 0, there is little or no linear relationship between the variables
    C. If r is positive, as one variable goes up, the other variable goes up
    D. If r is negative, as one variable goes up, the other goes down
  2. Squaring r
    A. The square of r equals the proportion of the variation in one variable that is related to the variation in the other
    B. r = 0.5 means 25% of the variation is shared
  3. Correlation is not causation!
  4. Works better for linear relationships (as one variable changes, the other changes at a steady rate) than for curvilinear relationships (not following a straight line, e.g., age and health care costs)
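A minimal Python sketch of computing r and its square, using hypothetical study-hours and test-score data (the variable names and values are illustrative, not from the card):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: study hours vs. test scores
hours = [1, 2, 3, 4, 5]
scores = [52, 60, 61, 74, 79]
r = pearson_r(hours, scores)   # positive: more hours, higher scores
shared = r ** 2                # squared r: proportion of shared variation
```

Note the square: an r of about 0.97 here means roughly 95% of the variation is shared, while an r of 0.5 would mean only 25%.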
9
Q

Univariate descriptive statistics

A
  1. Describe a set of test data
    A. How many people scored at a certain level
    B. What the average score for the group was
    C. Percentile or rank equivalent
  2. Must know to describe distribution of scores:
    A. Central tendency (mode, median, mean)
    B. Variability dispersion (range, standard deviation, variance)
    C. Shape (skew, kurtosis)
  3. To standardize: (scores relative to other scores, group norms)
    A. Z scores, stanines, percentiles
10
Q

Central tendency

A
  1. Mode: most frequently occurring score in distribution
    A. If more than one most frequently occurring score > multimodal
    B. If no score is repeated, no mode
  2. Mean: the arithmetic average of test scores
    A. Add all, divide by #of scores
  3. Median: middle value in list of scores
    A. List scores in numerical order from smallest to largest; the middle value is the median (with an even number of scores, average the two middle values)
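All three measures of central tendency can be checked with Python's standard statistics module (the scores below are hypothetical):

```python
import statistics

scores = [3, 7, 7, 9, 12]   # hypothetical test scores

mode = statistics.mode(scores)      # most frequently occurring score
mean = statistics.mean(scores)      # sum of all scores / number of scores
median = statistics.median(scores)  # middle value after sorting smallest to largest
```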
11
Q

Variability dispersion

A
  1. Range: highest score minus lowest score
    A. When there are extreme scores, best to drop them before obtaining the range
  2. Variance: like the mean, is also an average, but it is the average of the squared deviation of each score from the mean.
    A. To return to original score units, take the square root of the variance (which yields the standard deviation)
    B. Difference between data pt. and mean, then squared
    C. Data point (11) mean (32) difference (-21) squared (441)
  3. Standard deviation: measure that summarizes the amount by which every value within a dataset varies from the mean
    A. The sum of the sample variance (1544.64, already squared) divided by sample size (1544.64/25) = 61.79, then square root of 61.79 = 7.86 is the Standard deviation (scores tend to deviate 7.86 points from the mean)
    B. How tightly values are bunched around the mean
    C. Normal distribution: most data clustered around the mean, few values extremely high or low
    D. 68% of values fall less than one standard deviation from mean
    E. 95% less than two
    F. 99.7% less than three
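The card's worked numbers can be reproduced in a short Python sketch, using the sum of squared deviations and sample size given above:

```python
import math

# One squared deviation, per the card: data point 11, mean 32
deviation = 11 - 32        # -21
squared = deviation ** 2   # 441

# Variance and standard deviation from the card's sample totals
sum_sq_dev = 1544.64       # sum of squared deviations (already squared)
n = 25                     # sample size
variance = sum_sq_dev / n  # about 61.79
sd = math.sqrt(variance)   # about 7.86: scores tend to deviate ~7.86 points from the mean
```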
12
Q

Standardization (Z scores)

A
  1. Z=(X-M)/SD
  2. X= raw score
  3. M= mean score
  4. SD= standard deviation
  5. Raw score above the mean = positive z score
  6. Raw score below the mean = negative z score
  7. Positive z scores fall to right of mean, in upper half of bell shaped curve
  8. Negative z scores fall to left of mean, in lower half of bell shaped curve
  9. One standard deviation above the mean = z score 1 (SD and Z equiv.)
  10. Z scores across different distributions are comparable because all z scores share a common scale (mean 0, SD 1)
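The Z formula above as a one-line Python function (the mean of 100 and SD of 15 are hypothetical values, not from the card):

```python
def z_score(x, m, sd):
    """Z = (X - M) / SD."""
    return (x - m) / sd

# Hypothetical distribution: mean 100, SD 15
z_above = z_score(115, 100, 15)  # one SD above the mean -> z = 1
z_below = z_score(85, 100, 15)   # one SD below the mean -> z = -1
```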
13
Q

Raw scores

A
  1. A score you observe - an original and untransformed score before any operation is performed on it
  2. Form the basis for other scores (percentiles, standard scores)
14
Q

Norm-referenced scores

A
  1. Used to evaluate one’s relative performance
  2. Set of scores that represents a collection of individual performances, and is developed by administering a test to a large group of test takers
  3. This complete set of scores is the measure by which the individual scores of other test takers are compared
  4. Allow us to compare outcomes with others in the same test taker group
  5. Ex. Emotional and Behavior Disorder Scale: to know how excessive behavior might be for any one child within that same age range, the child’s score is compared to this set of norms to see how it compares
    A. 308 students ages 5-18 used to develop the norms
    B. Convert norm to percentile (universal understanding/characteristic)
15
Q

Percentiles

A
  1. Examine a score relative to the rest of the scores in the set
  2. An exact point within an entire distribution of scores
  3. Rank: a point in a distribution of scores below which a given percentage of scores fall
    A. 45th percentile is the score below which 45% of the other scores fall
    B. Percentile of 82 corresponds to a raw score of 18/20, 82% of all scores in distribution fall below Stu’s score of 18
  4. Percentiles easy to compute, understand, apply across any test situation
  5. Tells little about qualitative performance
  6. Percentile ranks not equally spaced
    A. The raw-score difference between ranks 40 and 50 can be much different than between ranks 10 and 20
  7. 50th percentile = median
  8. The first decile = the first 10 percentile ranks
  9. Equal differences between percentile ranks do not correspond to equal differences between raw scores
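Percentile rank as defined above (percent of scores in the distribution falling below a given score) can be sketched in Python; the class scores below are hypothetical:

```python
def percentile_rank(score, scores):
    """Percent of scores in the distribution that fall below the given score."""
    below = sum(1 for s in scores if s < score)
    return 100 * below / len(scores)

# Hypothetical class scores; 7 of the 10 fall below a score of 18
class_scores = [10, 11, 12, 14, 15, 16, 17, 18, 19, 20]
rank = percentile_rank(18, class_scores)  # -> 70.0
```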
16
Q

Reliability

A
  1. A measure’s ability, given the same situation, to provide the same result time after time
    A. When measuring weight on a scale three consecutive times, you might get three slightly different readings
    B. Does not mean scale is unreliable > take average of three readings
    C. Cannot administer assessment measures more than once to a client > we use reliability coefficient to estimate true and error variance
17
Q

Estimating Reliability

A
  1. Before reliability can be calculated, must decide what type of measurement error to focus on
    A. Changes in test scores due to time > correlation coefficient between test given at time 1 and test given at a later time (test-retest reliability)
    B. Error to focus on > correlation coefficient to compute to estimate reliability
18
Q

Inter-rater reliability

A
  1. Used to determine whether raters are consistent in their observations
19
Q

Reliability is cool because…

A
  1. Measures different components that make up any test score
    A. Observed score: what your actual score was
    B. True score: 100% reflection of what you really know
    C. Error score: the differences between the observed and true score
20
Q

Sources of error

A
  1. Smaller the error, the greater the Reliability
  2. Observed score = true score + error score
  3. Error score: two types
    A. Trait error: originate within individual taking test (lack of study)
    B. Method error: originate in the testing situation (poor instructions)
  4. Reliability = true score variance / (true score variance + error score variance)
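The error-reliability relationship can be illustrated with hypothetical variance figures (the numbers are invented for illustration): as the error term shrinks, reliability approaches 1.

```python
def reliability(true_var, error_var):
    """Reliability = true score variance / (true + error variance)."""
    return true_var / (true_var + error_var)

# Hypothetical variances: shrinking the error raises reliability
r_noisy = reliability(true_var=80, error_var=20)  # -> 0.80
r_clean = reliability(true_var=80, error_var=5)   # -> about 0.94
```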
21
Q

Types of reliability: Test/retest

A
  1. Used to determine whether a test is reliable over time
    A. Test: preferences for different types of vocational programs
    - test in July and September to same people
    - the correlation between the two sets of scores is the test-retest reliability estimate
    B. Always used when measuring differences or changes over time
    C. Weakness: practice effects: when first testing influences second test
    -people remember questions, concepts, ideas
    D. Weakness: interaction between amount of time between tests and nature of sample being tested
    -Ex. Assessing growth and development in young children: individual differences at young ages are profound, waiting 3 or 6 months to retest motor skills might result in inaccurate correlation
22
Q

Types of reliability: Parallel Forms

A
  1. Used when you want to examine the equivalence or similarity between two different forms of the same test
    A. Study on memory: (2 day period for test 1 and 2)
    - test 1: 10 words that you memorize and recite back after 20 sec
    - test 2: 10 different, but similar words that you memorize and recite back
    - higher the correlation, greater the equivalency
23
Q

Types of reliability: internal consistency

A
  1. Used when you want to know whether items on a test are consistent with one another, that they represent only one dimension, construct, area of interest
    A. Attitude toward health care test: set of 20 questions (1 agree - 5 disagree)
    -people who score high on certain items (I like my HMO) also score low on reverse-worded items (I don't like anything other than private health insurance)
    - this correlation is consistent across all people taking test
  2. Used when there are right and wrong answers
24
Q

Types of reliability: Cronbach’s Alpha

A
  1. Used when looking at reliability of a test that doesn’t have right or wrong answers (personality, attitude test)!
    A. Correlates the score for each item with the total score for each individual, then compares that to the variability present across all individual item scores
    B. An individual test taker with a higher total score should have higher scores on each item
    C. An individual test taker with a lower total score should have lower scores on each item
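One common way to compute Cronbach's alpha is the item-variance form, sketched below with hypothetical ratings on a 3-item attitude scale (4 respondents); because the items rise and fall together, alpha comes out high:

```python
import statistics

def cronbach_alpha(items):
    """items: one list per test item, each holding every respondent's score on that item."""
    k = len(items)
    sum_item_vars = sum(statistics.pvariance(item) for item in items)
    totals = [sum(person) for person in zip(*items)]  # each respondent's total score
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical 3-item scale, 4 respondents, 1-5 ratings
items = [
    [1, 2, 4, 5],
    [2, 2, 4, 4],
    [1, 3, 5, 5],
]
alpha = cronbach_alpha(items)  # high: items are internally consistent
```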
25
Q

Types of reliability: Split-half

A
  1. Splitting test into two halves and computing split-half reliability
  2. Scores on one half compared with second half to test if there is a strong relationship
  3. If strong relationship exists = good internal consistency
  4. Weakness: how to split test?
    A. First 10, second 10 (items inadvertently group by subject matter or difficulty)
    B. All odd vs all even (shorter tests less reliable)
    - Use Spearman - Brown formula: to correct
    -Ex. Compute reliability coefficient of .73, then corrected split-half coefficient would be .84
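The Spearman-Brown correction is a one-line formula; the sketch below reproduces the card's example (.73 corrected to roughly .84):

```python
def spearman_brown(half_r):
    """Estimate full-test reliability from a split-half correlation: 2r / (1 + r)."""
    return 2 * half_r / (1 + half_r)

corrected = spearman_brown(0.73)  # the card's example -> about 0.84
```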
26
Q

Types of reliability: Inter-rater

A
  1. Used to determine how much raters agree on their judgements of some outcome
    A. Exploration of Rorschach Inkblot Test
    -provide experts w/responses of one or more patients to ink blots, and then score responses:
    -Inter-rater Reliability = # of agreements/ # of possible agreements
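The agreement formula above, sketched in Python with hypothetical codings of ten responses by two raters (the code letters are invented for illustration):

```python
def inter_rater_reliability(rater_a, rater_b):
    """# of agreements divided by # of possible agreements."""
    agreements = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return agreements / len(rater_a)

# Hypothetical codings of ten responses; the raters match on 8 of 10
rater_a = ["W", "D", "D", "W", "F", "W", "D", "F", "F", "W"]
rater_b = ["W", "D", "F", "W", "F", "W", "D", "F", "D", "W"]
irr = inter_rater_reliability(rater_a, rater_b)  # -> 0.8
```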
27
Q

Standard Error of Measurement

A
  1. Measure of how much a test score varies for an individual from time to time
  2. No one ever scores the same on a test when taken more than once
  3. The SEM is the standard deviation or the amount of spread that each observed score differs from the true score
    A. Stu: scores 18, 17, 19
    B. His true score is 19 (what he really knows)
    C. SEM calculated based on differences between true score (19) and observed scores 18, 17, 19
  4. To estimate w/o taking the test three times, use formula:
    A. SEM = (SD)(square root of 1-r)
    -SD = standard deviation of sample
    -r = reliability estimate of measure
    - Ex. Score is 100, SD of sample is 10, reliability estimate is .71
    - (10)(square root of 1-.71) = 5.38
    - 5.38 is how much variability we can expect around any one individual’s score on repeated testing
  5. How accurate is a score on a measure
  6. The smaller the SEM, the more reliable
  7. Goal is to minimize SEM
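The SEM formula in Python, reproducing the card's example (SD of 10, reliability estimate of .71):

```python
import math

def sem(sd, r):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - r)

error = sem(10, 0.71)  # ~5.385 points of spread around any one score on retesting
```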
28
Q

Stratification/ Standardization

A
  1. How has a study been stratified? How has it been represented by variables of:
    A. Age
    B. Gender
    C. Educational level
    D. Ethnicity
    E. Geographic location
29
Q

Validity

A
  1. Property of assessment tool indicating that the tool measures what it is supposed to measure
  2. Scores only have meaning if valid
  3. Degree of validity along a continuum from weak to strong
  4. Cannot use objective measures to quantify degree of validity
30
Q

Content validity

A
  1. Used most often for achievement tests
  2. Does the collection of items on the test fairly represent all possible questions that could be asked?
    A. Biopsychology test: amount of time spent on each topic (neurochemistry, vision, memory)
    -# of items on test should reflect the amount of time spent teaching each topic
31
Q

Criterion Validity

A
  1. Assesses whether a test reflects a set of abilities in a current or future setting as measured by some other test
  2. If criterion taking place in the here and now > concurrent validity
    A. Used for achievement tests and certification or licensing
  3. If criterion taking place in the future > predictive validity
    A. Used in entrance exams (GRE)
  4. Criterion validity to be present > establish concurrent or predictive validity
32
Q

Concurrent validity

A
  1. Here and now criterion
    A. Achievement tests, certification, licensing
33
Q

Predictive Validity

A
  1. Predictive validity of GRE score to predict success in college:
    A. Subjects given exam > scores compared to their academic performance 1 yr later
    B. Static-99 test: assess likelihood of an inmate who is in prison for a sexual offense to commit another sexual offense
  2. Do predictive measures predict future behavior of any individual?
    A. Surgical operation has a 98% success rate > probably safe (quality of procedure), BUT
    B. Statistic cannot be applied to any individual patient's risk > each pt. comes to the operating room w/ different variables not accounted for
    C. The statistic is relevant, but not predictive
34
Q

Construct validity

A
  1. Most interesting, ambitious, difficult to establish > constructs not easily defined
  2. Construct: group of interrelated variables
    A. Aggression: inappropriate physical contact, violence, lack of successful social interaction
  3. To establish: FIGHT test of aggression (self-report, series of items that represents theoretical view of that construct)
    A. FIGHT scale includes both items that are related to identifying aggressive behaviors and others that are not
    B. Examine whether positive scores on FIGHT correlate w/presence of aggressive behaviors you predicted, and whether negative scores do not correlate for the items that were non aggressive behaviors
  4. Assess convergent validity
    A. Give FIGHT measure to subjects along with a well-established measure of aggression (Buss-Durkee Hostility Inventory)
    B. If correlations between two scales are high = evidence of convergent validity
  5. Discriminant validity: compares scores on an instrument with measures designed to assess completely different constructs
    A. Selby No Worries Inventory (SNWI) compared to Hamilton Depression Inventory
    B. If correlations are low > evidence for discriminant validity for SNWI
35
Q

Ecological validity

A
  1. Refers to relationship between test scores and real world behavior
  2. Tests and instruments generally poor ecological validity
  3. No significant correlation between IQ scores (good reliability and validity) and later success in life
  4. Environment always reduces ecological validity: test taking not the real world
  5. Must compare test findings w/ social history, info about current level of functioning
36
Q

Interpreting validity coefficients

A
  1. Validity coefficients must be squared to be interpreted
    A. VRAG: assess violent potential
    B. To assess predictive validity, felons take VRAG prior to release
    C. How many felons arrested for a violent offense after parole?
    D. Typical predictive validity coefficient for research is around .50
    E. Must square figure > .25
    F. 25% of violent recidivism explained by VRAG
    G. 75% due to other factors
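The squaring step is simple arithmetic, shown here with the card's typical coefficient of .50:

```python
validity_r = 0.50              # typical predictive validity coefficient
explained = validity_r ** 2    # 0.25: 25% of violent recidivism explained
unexplained = 1 - explained    # 0.75: 75% due to other factors
```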