Chapter 3 Issues in Personality Assessment Flashcards

1
Q

Assessment

A

The measuring of personality

2
Q

Observer Ratings

A
  • Someone other than the person being assessed provides the information
  • Interviews
  • Watching his or her actions
  • Observing his or her belongings
3
Q

Interviews

A
  • Can be especially insightful if the person is asked to talk about something other than themselves; boundaries come down
  • In-the-moment observation vs. summative judgments
4
Q

Snoopology

A
  • Gosling studied people’s offices, bedrooms, and other personal domains
  • People “portray and betray” their personalities by the objects and mementos they surround themselves with. (Gosling, 2008)
  • Identity claims: symbolic statements about who we are
  • Indicators of how we want to be regarded
  • Can be directed to other people who enter our space, or they can be directed to ourselves, reminders to ourselves of who we are
  • Feeling regulators: help us manage our emotions
  • Behavioral residue: physical traces left in our surroundings by everyday actions
  • The more residue there is, the less organized you probably are.
  • Give an indication of what sorts of things take place repeatedly in your life space
5
Q

Self Reports

A
  • People themselves indicate what they think they’re like or how they feel or act.
  • Introspection
  • Ask people to respond to a specific set of items
  • Many formats
  • True-false
  • Multipoint rating scale
  • Some focus on a single quality of personality
6
Q

Inventory

A
  • Measure that assesses several dimensions of personality
  • Go through each step of development for each scale of the inventory, rather than just one.
  • Multiple scales
7
Q

Implicit Assessments

A

*Attempt to find out what a person is like from the person (like self-reports) but not by asking him or her directly.
*Discover people’s unconscious attitudes or perceptions, which often include parts of their personalities they may be ignorant of or try to hide
-The person is given a task of some sort that involves making judgments about stimuli.
-The pattern of responses (ex: reaction times) can inform the assessor about what the person is like
*Ex: Implicit Association Test (IAT) (a toy reaction-time summary follows below)
-Taps semantic properties in memory that are believed to be hard to detect by introspection
-Categorize a long series of stimuli as quickly as you can
*Motive approach to personality
*The person being assessed produces a sample of “behavior”
-Action
-Internal behavior (e.g., heart rate)
-Answering questions
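
A rough illustration of how a reaction-time pattern might be summarized in Python. This is a hypothetical, simplified difference score, not the published IAT scoring algorithm; the function name and the data are made up.

```python
from statistics import mean, stdev

def simple_iat_style_score(compatible_rts, incompatible_rts):
    """Toy summary of reaction times (in ms) from two sorting conditions.

    A slower average in the 'incompatible' condition is read as a stronger
    implicit association with the 'compatible' pairing. This is a simplified
    difference score, not the official IAT scoring procedure.
    """
    diff = mean(incompatible_rts) - mean(compatible_rts)
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return diff / pooled_sd  # standardized so scores are roughly comparable across people

# Hypothetical reaction times (ms) for one respondent
print(simple_iat_style_score([612, 650, 590, 633], [742, 801, 765, 720]))
```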

8
Q

Subjective Measures

A

*Interpretation is part of the measure

9
Q

Objective Measures

A

*The measure is of a concrete physical reality that requires no interpretation

10
Q

Reliability

A
  • Once you’ve made an observation about someone, how confident can you be that if you looked again a second or third time you’d see about the same thing?
  • Consistency/repeatability
11
Q

Error

A
  • Randomness in measurement
  • Can be reduced, but not eliminated
  • Repeat the measure: make the observation more than once
  • Measure the same quality from a slightly different angle or use a slightly different “measuring device”
12
Q

Internal Consistency/Internal Reliability

A
  • Each observer or item carries its own error; to cancel error out, we use many different items. Internal consistency refers to the extent to which they all agree with one another.
  • Within a set of observations of a single aspect of personality
  • Saying the items are highly reliable means that people’s responses to the items are highly correlated (a small computational sketch follows below).
  • More items in self-reports
  • A different telescope
  • A different math problem
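
One common internal-consistency index is Cronbach's alpha. Below is a minimal sketch with hypothetical rating data; the choice of alpha here is illustrative, not something this card prescribes.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Internal consistency for a respondents-by-items matrix of scores.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
    """
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings: 4 respondents x 3 items meant to tap the same trait
print(round(cronbach_alpha([[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 2]]), 2))
```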
13
Q

Split Half Reliability

A
  • Separate the items into two subsets (odd vs. even-numbered items), add up people’s scores for each subset and correlate the two subtotals with each other
  • If the two halves of the item set measure the same quality, people who score high on one half should also score high on the other half, and people who score low on one half should also score low on the other half.
  • A way to measure internal consistency (a computational sketch follows below)
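
A minimal sketch of the odd/even split just described, using hypothetical true/false responses (the mention of the Spearman-Brown correction in the comment is an added aside, not part of this card).

```python
import numpy as np

def split_half_reliability(item_scores):
    """Correlate odd-item and even-item subtotals across respondents.

    A high correlation suggests the two halves measure the same quality.
    (The Spearman-Brown correction, not applied here, is often used to
    estimate full-length reliability from this half-length correlation.)
    """
    scores = np.asarray(item_scores, dtype=float)
    odd_totals = scores[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
    even_totals = scores[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...
    return np.corrcoef(odd_totals, even_totals)[0, 1]

# Hypothetical true/false responses (1 = true) for 5 respondents x 6 items
responses = [[1, 1, 1, 0, 1, 1],
             [0, 0, 1, 0, 0, 0],
             [1, 1, 1, 1, 1, 0],
             [0, 1, 0, 0, 1, 0],
             [1, 0, 1, 1, 1, 1]]
print(round(split_half_reliability(responses), 2))
```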
14
Q

Inter-rater Reliability

A
  • Two or more observers see about the same thing when they look at the same event
  • Observers are trained in how to observe what they’re observing
  • Agreement among different raters/observers (a simple agreement check is sketched below)
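
A minimal sketch of one simple agreement check between two raters, using hypothetical categorical codes. Percent agreement is an illustrative choice; correlations (for numeric ratings) or chance-corrected indices such as Cohen's kappa are common alternatives.

```python
def percent_agreement(rater_a, rater_b):
    """Simple inter-rater agreement: proportion of observations coded the same."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical codes from two trained observers watching the same ten events
print(percent_agreement(list("AABCABBACA"), list("AABCBBBACA")))  # -> 0.9
```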
15
Q

Item response theory (IRT)

A
  • Attempt to increase the efficiency of assessment while reducing the number of items
  • Determining the most useful items, and the most useful response choices, for the concept being measured
  • Creation of response curves: show how frequently each response is used, and whether each choice is measuring something different from other choices (a simple response-curve sketch follows below)
  • Determines the “difficulty” of an item
  • Computerized adaptive testing (CAT)
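
As a hedged illustration of the response-curve idea, here is a sketch of a two-parameter logistic item response function with made-up parameter values. The 2PL form is a standard IRT model, but nothing here is specific to any particular test.

```python
import math

def two_parameter_logistic(theta, difficulty, discrimination=1.0):
    """Probability of endorsing an item under a simple 2PL IRT model.

    theta: the respondent's standing on the underlying trait
    difficulty: the trait level at which the endorsement probability is 0.5
    discrimination: how sharply the item separates low from high scorers
    """
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# Trace one item's response curve across a range of trait levels
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(two_parameter_logistic(theta, difficulty=0.5), 2))
```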
16
Q

Computerized adaptive testing (CAT)

A

*Ensures that less difficult items are not given after an item of medium difficulty has been endorsed
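
A toy sketch of the adaptive idea, assuming a tiny hypothetical item pool: each answer determines whether the next item is harder or easier, so easier items need not be given after harder ones have been endorsed. This is only an illustration, not an actual CAT algorithm.

```python
def adaptive_test(items, respondent_answers):
    """Administer items adaptively: start in the middle of the difficulty range,
    move to harder items after an endorsement and easier items after a rejection.

    `respondent_answers` maps item text to True/False and stands in for a live respondent.
    """
    items = sorted(items, key=lambda item: item["difficulty"])
    low, high = 0, len(items) - 1
    administered = []
    while low <= high:
        idx = (low + high) // 2              # item of middling remaining difficulty
        item = items[idx]
        endorsed = respondent_answers[item["text"]]
        administered.append((item["text"], endorsed))
        if endorsed:
            low = idx + 1                    # skip ahead to harder items
        else:
            high = idx - 1                   # fall back to easier items
    return administered

# Hypothetical item pool and responses (fewer items end up being administered)
pool = [{"text": "I enjoy small gatherings", "difficulty": -1.0},
        {"text": "I like meeting strangers", "difficulty": 0.0},
        {"text": "I seek out large parties", "difficulty": 1.0}]
answers = {"I enjoy small gatherings": True,
           "I like meeting strangers": True,
           "I seek out large parties": False}
print(adaptive_test(pool, answers))
```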

17
Q

Stability Across Time

A

Extent to which measurements are stable over time

*Test Re-Test Reliability

18
Q

Test Re-Test Reliability

A

*Giving the test to the same people at two different times

Reliability concerns repeatability across time
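
A minimal sketch with hypothetical scores: test-retest reliability is simply the correlation between the two administrations.

```python
import numpy as np

# Hypothetical extraversion scores for the same five people tested twice, weeks apart
time_1 = [24, 31, 18, 27, 22]
time_2 = [26, 30, 17, 29, 21]
print(round(np.corrcoef(time_1, time_2)[0, 1], 2))  # test-retest reliability
```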

19
Q

Validity

A

*Whether what you’re measuring is what you think you’re measuring (or what you’re trying to measure).

20
Q

Construct Validity

A
  • How well does the operational definition (the event) match the conceptual definition (the abstract quality you have in mind to measure)?
  • If they’re close, validity is high; if they aren’t close, validity is low
  • Other subtypes of validity all provide support for construct validity
21
Q

Conceptual and Operational Definition

A
  • Ex: Love
  • Conceptual definition: a strong affection for another person
  • Operational definition: ask the person you’re assessing to indicate on a rating scale how much she loves someone, or measure how willing she is to give up events she enjoys in order to be with him.
22
Q

Criterion Validity (Predictor Validity)

A
  • Compares the measure against other manifestations (a behavioral index or external criterion) of the quality it’s supposed to measure
  • Does your measurement produce results that are consistent with an external criterion, such as a trained observer?
  • Does it accurately predict what it purports to measure?
  • Tests how well the measure predicts something else it’s supposed to predict
  • Often relevant when developing a new scale: check whether it’s consistent with other, already established measurements of the same construct
  • Best way to establish construct validity
  • Too often, researchers choose criterion measures that are poor reflections of the construct.
23
Q

Convergent Validity

A
  • Showing that the measure relates to characteristics that are similar to, but not the same as, what it’s supposed to measure.
  • The evidence “converges” on the construct you’re interested in, even though no single finding by itself clearly reflects the construct.
  • Ex: Measure dominance
  • Should relate at least a little to measures of qualities such as leadership (positively) or shyness (inversely)
24
Q

Discriminant Validity

A
  • Showing that the measure does not assess qualities it’s not intended to measure, especially qualities that don’t fit your conceptual definition of the construct.
  • Defense against the third-variable problem
25
Q

Face Validity

A
  • The assessment device appears, on its “face,” to be measuring the construct it was intended to measure. It looks right.
  • Easier to respond to than measures with less face validity
  • Distinctions between qualities of personality that differ in subtle ways
  • Detriment: when a quality is threatening or embarrassing to admit, face validity may need to be reduced
  • Convenient to have
26
Q

Response Sets

A
  • Psychological orientation, a readiness to answer in a particular way
  • Biases in people’s responses during assessment
  • Create distortions in what’s assessed
27
Q

Acquiescence

A
  • Emerges most clearly when the assessment device is a self-report instrument that, in one fashion or another, asks the person questions that require a “yes” or “no” response (or “agree” vs. “disagree”)
  • Tendency to say “yes”
  • Everyone does this, but to varying degrees
  • Can be countered by having some items reverse-worded
  • Write half the items so that “yes” means being at one end of the personality dimension. Write the other half of the items so that “no” means being at that end of the personality dimension. In the process of scoring the test, any bias that comes from the simple tendency to say “yes” is then canceled out (see the scoring sketch below).
  • Negatively worded items often are harder to understand or more complicated to answer than positively worded items, so they may be answered less accurately
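
A minimal sketch of the reverse-keyed scoring just described, with a hypothetical six-item yes/no scale (the item indices and scale length are made up).

```python
def score_scale(responses, reverse_keyed):
    """Score a yes/no scale in which half the items are reverse-worded.

    responses: list of True (yes) / False (no) answers, in item order
    reverse_keyed: indices of items where 'no' indicates the high end of the trait
    A pure yes-sayer gains points on the regular items but not on the reverse-keyed
    ones, so the acquiescence bias roughly cancels out.
    """
    score = 0
    for i, said_yes in enumerate(responses):
        if i in reverse_keyed:
            score += 0 if said_yes else 1   # "no" counts toward the trait
        else:
            score += 1 if said_yes else 0   # "yes" counts toward the trait
    return score

# A respondent who says "yes" to everything earns only half the maximum score
print(score_scale([True] * 6, reverse_keyed={1, 3, 5}))  # -> 3 out of 6
```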
28
Q

Social Desirability

A
  • Reflects the fact that people tend to portray themselves in a good light (in socially desirable ways) whenever possible.
  • Phrasing undesirable responses in ways that make them more acceptable
  • Let people admit the undesirable quality indirectly
  • Include items that assess the person’s degree of concern about social desirability and use this information as a correction factor in evaluating the person’s responses to other items (one possible correction is sketched below).
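
One way such scores might be used as a correction is to statistically remove their influence. The sketch below residualizes trait scores on social-desirability scores with a simple regression; the residualizing approach and the data are illustrative assumptions, not the only option.

```python
import numpy as np

def desirability_corrected(trait_scores, desirability_scores):
    """Remove the linear influence of social-desirability scores from trait scores
    by keeping the residuals of a simple regression (one common correction strategy).
    """
    trait = np.asarray(trait_scores, dtype=float)
    sd = np.asarray(desirability_scores, dtype=float)
    slope, intercept = np.polyfit(sd, trait, deg=1)
    return trait - (slope * sd + intercept)   # what's left after the bias is removed

# Hypothetical scores for five respondents
print(desirability_corrected([30, 25, 28, 22, 35], [12, 8, 10, 7, 15]).round(2))
```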
29
Q

Rational/Theoretical approach

A
  • Based on theoretical considerations from the very start
  • First develop a theoretical basis for believing that a particular aspect of personality is important to measure
  • Then create a test in which this dimension is reflected validly and reliably in people’s answers
  • Often leads to assessment devices that have a high degree of face validity
  • Majority of personality measurement devices that exist today were developed using this path
  • What to measure -> how to measure (true of most of the measures discussed in the chapters)
30
Q

Empirical/data-based approach

A
  • Relies on data, rather than on theory, to decide what items go into the assessment device
  • The person developing the measure uses the data to decide what qualities of personality even exist.
  • Important contributor to trait psychology
  • Reflects a very pragmatic orientation to the process of assessment
  • Guided less by a desire to understand personality than by a practical aim: to sort people into categories
31
Q

Criterion keying

A

*The criterion is the groups into which people are to be sorted. To develop the test, you start with a huge number of possible items and find out which ones are answered differently by one criterion group than by other people (a toy item-selection sketch follows below).
*Reflects the fact that the items retained are those that distinguish between the criterion group and other people.
*If an item set can be found for each group, then the test (all item sets together) can be used to tell who belongs to which group.
*Minnesota Multiphasic Personality Inventory (MMPI): very long true-false inventory that was developed to assess abnormality
-A large number of self-descriptive statements were given to a group of normal persons and to groups of psychiatric patients
-The criterion groups already existed
*With the MMPI-2, it is increasingly recognized that different diagnostic categories are not as distinct as they were formerly thought to be.
-Recognition of this pattern has led to a broad (and intense) reconsideration of the nature of psychiatric diagnosis.
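
A toy sketch of the item-selection step in criterion keying, assuming hypothetical true/false answers and an arbitrary endorsement-rate threshold.

```python
import numpy as np

def criterion_key_items(criterion_group, comparison_group, min_difference=0.3):
    """Retain the items that the criterion group answers differently from other people.

    Both inputs are respondents x items arrays of 1 (true) / 0 (false). An item is
    kept when the groups' endorsement rates differ by at least `min_difference`;
    the threshold is an illustrative choice, not a standard value.
    """
    crit = np.asarray(criterion_group, dtype=float).mean(axis=0)
    comp = np.asarray(comparison_group, dtype=float).mean(axis=0)
    return [i for i, gap in enumerate(np.abs(crit - comp)) if gap >= min_difference]

# Hypothetical true/false answers to four candidate items
patients = [[1, 0, 1, 1],
            [1, 0, 1, 0],
            [1, 1, 1, 1]]
controls = [[0, 0, 1, 1],
            [1, 0, 1, 0],
            [0, 1, 1, 1]]
print(criterion_key_items(patients, controls))  # indices of items that separate the groups
```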