Psychometrics Flashcards

1
Q

What determines choice of format in item writing? (2 marks)

A

Objectives and purposes of the test (e.g. do we want to measure the extent/amount of interaction, or the quality of interaction)

2
Q

Difference between objective and purpose of a study?

A

Purpose - the broad goal of the research
Objective - how we will practically achieve that goal

3
Q

List 4 of the 9 item writing guidelines

A
  • clearly define what you want to measure
  • generate an item pool (the best items are selected after analysis)
  • avoid long items
  • keep the reading difficulty appropriate
  • use clear and concise wording (avoid double-barrelled items and double negatives)
  • use both positively & negatively worded items
  • use culturally neutral items
  • (for MCQs) - make all distractors plausible & vary the position of the correct answer
  • (for true/false Qs) - use equal numbers of both and make both statements the same length
4
Q

List the 5 categories of item formats

A
  1. Dichotomous
  2. Polytomous
  3. The Likert format
  4. The Category format
  5. Checklists and Q-sorts
5
Q

Advantages of the dichotomous format (3 marks)

A
  • easy to administer
  • quick to score
  • requires absolute judgement
6
Q

Disadvantages of the dichotomous format (3 marks)

A
  • less reliable (50% chance of guessing the correct answer)
  • encourages memorization instead of understanding
  • often the truth is not black and white (true/false is an oversimplification)
7
Q

Minimum number of options for a polytomous format?

A

3 (but 4 is commonly used, and considered more reliable)

8
Q

3 guidelines for writing distractors in the polytomous format

A
  • distractors must be clearly written
  • distractors must be plausible as correct answers
  • avoid “cute” distractors
9
Q

Advantages of polytomous questions (4 marks)

A
  • easy to administer
  • easy to score
  • requires absolute judgement
  • more reliable than dichotomous (less chance of guessing correctly)
10
Q

Formula for correcting for guessing

A

Corrected score = R - W/(n - 1), where R = number of right answers, W = number of wrong answers, and n = number of options per item
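A minimal sketch of this formula in Python (the function and variable names are illustrative, not from the lectures):

```python
def corrected_score(right: int, wrong: int, n_options: int) -> float:
    """Correct a raw score for guessing: R - W/(n - 1).

    right: number of items answered correctly (R)
    wrong: number of items answered incorrectly (W; omitted items don't count)
    n_options: number of response options per item (n)
    """
    return right - wrong / (n_options - 1)

# e.g. 30 right and 10 wrong on a 4-option MCQ test:
# corrected_score(30, 10, 4) -> 30 - 10/3 = 26.67
```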

11
Q

Fields in which Likert scales are predominantly used (2 marks)

A

Attitude and Personality questionnaires

12
Q

How can one avoid the neutral response bias in Likert scales?

A

have an even number of options

13
Q

How does one score negatively worded items from a Likert scale

A

Reverse score
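A minimal sketch of reverse scoring in Python; the 1-5 scale range is an assumption for the example:

```python
def reverse_score(score: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Reverse-score a Likert response: on a 1-5 scale, 5 -> 1, 4 -> 2, etc."""
    return scale_min + scale_max - score

# "Strongly agree" (5) on a negatively worded item is scored as 1:
# reverse_score(5) -> 1
```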

14
Q

Suggested best no. of options in a category format question?

A

7

15
Q

Disadvantages of the category format (2 marks)

A
  • tendency to spread answers across all categories
  • susceptible to the grouping of the things being rated (an item may be rated lower if the other items in its group are really good - i.e. ratings are relative, not objective)
16
Q

When best to use category format questions? (2 marks)

A
  • when people are highly involved in a subject (more motivated to make a finer discrimination)
  • when you want to measure the amount of something (eg levels of road rage)
17
Q

Two tips when using the category format

A
  • make sure your endpoints are clearly defined
  • use a visual analogue scale (ideal with kids, e.g. a smiley face on one side of the scale and a frowny face on the other to describe how they’re feeling)
18
Q

Where are checklist format questions commonly found?

A

Personality measures (e.g. a list of adjectives - tick those that describe you)

19
Q

Describe the process of Q-sort format questions

A

Place statements into piles; the piles indicate the degree to which you think a statement describes a person/yourself

20
Q

In terms of Item analysis, describe item difficulty and give another name for it

A

The proportion of people who get a particular item correct (higher value = easier item)
AKA facility index
p = number of correct answers / number of participants
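A quick illustration in Python, assuming responses to one item are coded 1 = correct and 0 = incorrect (the data are made up):

```python
responses = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # one item, 10 participants

# p = number of correct answers / number of participants
p = sum(responses) / len(responses)
print(p)  # 0.7 -> a fairly easy item for this sample
```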

21
Q

Ideal range for optimum difficulty level

A

0.3 - 0.7

22
Q

How to calculate ODL (optimum difficulty level) for an item

A

Halfway between 100% and the chance of guessing the answer correctly: ODL = (1 + chance)/2
E.g. for an item with 4 options, ODL = (1 + 0.25)/2 = 0.625
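The same calculation as a small Python sketch (the function name is illustrative):

```python
def optimum_difficulty(n_options: int) -> float:
    """ODL = halfway between 1.0 and the chance of guessing correctly."""
    chance = 1 / n_options
    return (1 + chance) / 2

# optimum_difficulty(4) -> (1 + 0.25) / 2 = 0.625
# optimum_difficulty(2) -> (1 + 0.5) / 2 = 0.75 (true/false items)
```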

23
Q

How should difficulty levels range across items in a questionnaire

A

You want most items around the ODL and a few at the extremes. The distribution of p-values (difficulty levels) should be approximately normal

24
Q

Why does one need a range of item difficulty levels?

A

To discriminate between ability of test-takers

25
List 3 exceptions to having optimum difficulty levels
- need for difficult items (e.g. selection process)
- need for easier items (e.g. special education)
- need to consider other factors (e.g. boost confidence/morale at the start of a test)
26
p (an item difficulty level) tells us nothing about...
...the intrinsic characteristics of an item. Its value is relative to a given sample
27
Item discriminability is good when...
people who did well on the test overall get the item correct (and vice versa)
28
Describe the extreme groups method when calculating item discriminability
Look at the proportion of people in the upper quartile who got the item correct minus the proportion of people in the lower quartile who got the item correct {in other words, the difference in item difficulty between the top and bottom 25%}
Di = U/Nu - L/Nl, where U and L = the numbers in the upper and lower quartiles who got the item correct, and Nu and Nl = the sizes of those groups
*Di should be a positive number if the item has good discriminability
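A sketch of the extreme groups method in Python, assuming a 0/1-scored response matrix with rows = people and columns = items (the function name and data layout are assumptions):

```python
import numpy as np

def extreme_groups_d(scores: np.ndarray, item: int) -> float:
    """Di = U/Nu - L/Nl, using the top and bottom 25% on total score."""
    totals = scores.sum(axis=1)
    lower_cut, upper_cut = np.percentile(totals, [25, 75])
    upper = scores[totals >= upper_cut, item]  # item scores of the top 25%
    lower = scores[totals <= lower_cut, item]  # item scores of the bottom 25%
    return upper.mean() - lower.mean()  # difference in proportion correct
```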
29
A red flag in item discriminability?
A negative number
30
Describe the point biserial method when calculating item discriminability
Calculate an item-total correlation (if a test-taker fails the item but does well on the overall test, the item-total correlation will be negative)
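A sketch of the item-total correlation in Python. One assumption beyond the card: the item is removed from the total before correlating (a "corrected" item-total correlation), so the item doesn't correlate with itself:

```python
import numpy as np

def item_total_correlation(scores: np.ndarray, item: int) -> float:
    """Correlate one item (column) with the total of the remaining items."""
    rest_total = scores.sum(axis=1) - scores[:, item]
    return np.corrcoef(scores[:, item], rest_total)[0, 1]
```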
31
Can item-total correlations be used for Likert-type scales and other formats, such as category and polytomous formats?
yes
32
Results from item-total correlations can be used to decide....
which items to remove from the questionnaire
33
Item characteristic curves (ICCs) are visual depictions of...
the relationship between performance on an item and performance on the overall test
34
Give the x- and y-axes of an ICC
x-axis = total score on test
y-axis = proportion {of test-takers who got the item} correct
35
3 steps to drawing ICCs
1. Define categories of test performance (e.g. specific total scores/percentages)
2. Determine what proportion of people within each category got the item correct
3. Plot your ICC
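A minimal matplotlib sketch of those three steps, using made-up 0/1 response data and quartile bands as the performance categories (both choices are assumptions for the example):

```python
import numpy as np
import matplotlib.pyplot as plt

scores = np.random.binomial(1, 0.6, size=(200, 20))  # fake 0/1 responses
item = 0
totals = scores.sum(axis=1)

# Step 1: define categories of test performance (here, quartile bands)
bands = np.percentile(totals, [0, 25, 50, 75, 100])
# Step 2: proportion within each band who got the item correct
props = [scores[(totals >= lo) & (totals <= hi), item].mean()
         for lo, hi in zip(bands[:-1], bands[1:])]
# Step 3: plot the ICC
plt.plot(range(1, 5), props, marker="o")
plt.xlabel("Total score band (low to high)")
plt.ylabel("Proportion correct on item")
plt.title("Item characteristic curve (sketch)")
plt.show()
```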
36
Briefly explain Item Response Theory (IRT)
Test difficulty is tailored to the individual - wrong answer = decrease difficulty, right answer = increase difficulty. Test performance is defined by the level of difficulty of items answered correctly
37
Name the program through which Item Response Theory is often administered
The Adaptive Computer-based test (ACT)
38
Advantages of Item Response Theory (3 marks)
- increased morale
- quicker tests
- decreased chance of cheating
39
In terms of measurement precision, name the three types of tests
1. Peaked conventional
2. Rectangular conventional
3. Adaptive
40
Describe Peaked Conventional tests (3 points)
- tests individuals at average ability
- doesn't assess high or low ability levels well
- high precision for average ability levels, low precision at either end
41
Describe Rectangular Conventional tests (2 points)
- equal number of items assessing all ability levels
- relatively low precision across the board
42
Describe Adaptive tests (2 points)
- the test focuses on the range that challenges each individual test-taker
- precision is high at every ability level
43
Describe criterion-referenced tests
The test is developed based on learning outcomes - compares performance with some objectively defined criterion (What should the test-taker be able to do?)
44
How does one evaluate items in criterion-referenced tests? And how should the score/frequency graph look
2 groups - one given the learning unit and one not given the learning unit. The score/frequency graph should look like a V (the untrained group clusters at low scores, the trained group at high scores)
45
List 3 limitations of criterion-referenced tests
1. Tells you that you got something wrong, but not why
2. Emphasis on ranking students rather than identifying gaps in knowledge
3. Teaching to the test - not to education
46
What is referred to as the "test blueprint"
The test specifications
47
List 4 of the 7 things that test specifications should describe
1. Test (response) format
2. Item format
3. Total number of test items (test length)
4. Content areas of the construct(s) tested
5. Whether items or prompts will contain visual stimuli
6. How test scores will be interpreted
7. Time limits
48
In terms of response format, list 3 ways in which participants can demonstrate their skills
1. Selected response (e.g. Likert scale/MCQ/dichotomous)
2. Constructed response (e.g. essay/fill-in-the-blank)
3. Performance response (e.g. block design task)
49
In terms of response format, give an example of objective vs subjective formats
Obj - MCQ or Likert
Subj - essays, projective tests
50
List 5 types of item response format
1. Open-ended (e.g. an open-ended essay question - no limitations on the test-taker)
2. Forced-choice items (e.g. MCQs, true/false questions)
3. Ipsative forced choice (leads the test-taker in a certain direction, but is still somewhat open, e.g. "I find work from home....")
4. Sentence completion
5. Performance-based items
51
List the two determinants of test length
1. Amount of administration time available
2. Purpose of the measure (e.g. screening vs comprehensive)
52
When test length increases, compliance ..... because people get ..... and .....
decreases; fatigued and bored
53
How many more items should be in the initial version of the test than the final one?
50%
54
Having good ..... ensures that all domains of a construct are tested
Content areas
55
..... refers to the ways in which knowledge or symptoms are demonstrated (and these are therefore tested for)
manifestations
56
Reliability is the desired ..... or ..... of test scores and tells us about the amount of ..... in a measurement tool
consistency or reproducibility; error
57
Why is test-retest not always a good measure of reliability?
Participants learn skills from the first administration of the test
58
Normally we perform roughly around our true score, and so our scores are.....distributed
normally
59
......is something we can use to increase reliability
internal consistency
60
Name the 4 classical test theory assumptions (NB to know these)
1. Each person has a true score we could obtain if there was no measurement error
2. There is measurement error - but this error is random
3. The true score of an individual doesn’t change with repeated applications of the same test, even though their observed score does
4. The distribution of random errors, and thus of observed test scores, will be the same for all people
61
List the 2 assumptions of Classical test theory: the domain sampling model
1. If we construct a test on something, we can’t ask all possible questions - so we only use a few test items (a sample)
2. Using fewer items can lead to the introduction of error
62
Reliability = X / variance of observed score on test. X = ?
X = variance of true score *This is a logical estimation - not a calculation we can actually do
63
An individual's true score is unknowable - but we can calculate the range in which it should fall by taking into account the reliability of the measurement tool, otherwise known as the....
....Standard Error of Measurement (SEM). SEM = SD × √(1 - r), where r = reliability of the test and SD = standard deviation of the test scores
64
Formula for creating confidence intervals with the SEM
The z-score for a 95% confidence interval = 1.96
Therefore:
Lower bound = x - 1.96 × SEM
Upper bound = x + 1.96 × SEM
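A small Python sketch combining the SEM and confidence-interval cards (the IQ-style numbers in the usage comment are made up):

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

def confidence_interval(x: float, sd: float, reliability: float,
                        z: float = 1.96) -> tuple[float, float]:
    """95% CI around an observed score x: x +/- 1.96 * SEM."""
    e = sem(sd, reliability)
    return x - z * e, x + z * e

# e.g. observed score 110 on a test with SD = 15 and r = 0.91:
# sem(15, 0.91) -> 4.5, so the 95% CI is roughly (101.2, 118.8)
```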
65
List the 4 types of reliability (and the two sub-types of type 4)
1. Test-retest reliability
2. Parallel forms reliability
3. Inter-rater reliability
4. Internal consistency
- split-half
- coefficient/Cronbach's alpha
66
Give the name of the correlation between the 2 scores in test-retest reliability, and the source of error in test-retest reliability
- the coefficient of stability
- source of error = time sampling
67
Issues with test-retest rel (3 mark)
1. Carry-over effects (attitude or performance at T2 influenced by performance at T1)
2. Practice effects
3. Time between testing (too little time = participants remember responses; too much time = maturation)
68
In Parallel forms reliability, name the correlation between the 2 scores and give the source of error
Name = coefficient of equivalence
Source of error = item sampling
69
In terms of Parallel forms reliability, list four ways to create a parallel test to give the participant
1. response alternatives can be reworded
2. order of questions changed
3. wording of questions changed
4. different items altogether
70
Explain inter-rater reliability (1 mark). Give the names of the correlations between raters' scores (2 marks) and give acceptable ranges of correlation scores
- IRR = how consistently multiple raters agree (more raters = more reliability)
- correlation between 2 raters = Cohen's Kappa; between more than 2 raters = Fleiss' Kappa
- > .75 = excellent agreement
- .40-.75 = satisfactory
- < .40 = poor
71
Describe internal consistency and give the source of error
IC = the extent to which different items within a test measure the same thing
Source of error = item sampling (the heterogeneity of items within the test)
72
Give one advantage and one disadvantage of split-half reliability
ADV = only need 1 test
DISADV = deciding how to divide the test into equivalent halves (the correlation will change each time depending on which items go to each half)
73
What problem is created by splitting a test in half for split-half reliability?
halving the length of the test also decreases the reliability (domain sampling model says fewer items = lower reliability)
74
Name the correction used to adjust for the number of items in each half of the test when calculating split-half reliability
Spearman-Brown correction
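The deck doesn't give the formula, but the standard Spearman-Brown prophecy formula is r_new = k·r / (1 + (k - 1)·r), where k is the factor by which the test is lengthened (k = 2 when projecting from a half-test to the full test). A sketch:

```python
def spearman_brown(r: float, k: float = 2.0) -> float:
    """Predicted reliability of a test k times as long as the one yielding r."""
    return k * r / (1 + (k - 1) * r)

# A split-half correlation of 0.70 projects to a full-test reliability of:
# spearman_brown(0.70) -> 1.4 / 1.7 = approx. 0.82
```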
75
What does Cronbach's/ Coefficient Alpha measure
the error associated with each test item as well as error associated with how well the test items fit together
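The computation isn't shown in the deck, but the usual formula is alpha = k/(k - 1) × (1 - sum of item variances / variance of total scores). A sketch, assuming a rows-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```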
76
What level of reliability is satisfactory for Cronbach's alpha?
≥ 0.70 for exploratory research
≥ 0.80 for basic research
≥ 0.90 for applied scenarios
77
When does Cronbach's alpha become unhelpful?
When there are too many items - as this artificially inflates your CA scores
78
List 3 factors that influence reliability
1. Number of items in a test
2. Variability of the sample
3. Extraneous variables (testing situation, ambiguous items, unstandardized procedures, demand effects, etc.)
79
List 5 ways to improve reliability
1. Increase/decrease the number of items
2. Item analysis
3. Inter-rater training
4. Pilot-testing
5. Clear conceptualisation
80
List three things that can affect your Cronbach's Alpha score
1. Number of test items
2. Bad test items (too broad, ambiguous, easy, etc.)
3. Multi-dimensionality
81
Explain the difference between internal consistency and homogeneity
IC = how inter-related the items are
HG = unidimensionality, the extent to which the test is made up of only one thing
82
Name 3 of the 5 ways that Cronbach's alpha is often described
1. The mean of all split-half reliabilities (not accurate)
2. A measure of first-factor saturation
3. The lower bound of the reliability of a test
4. Equal to reliability under conditions of essential tau-equivalence
5. A more general version of the KR coefficient of equivalence
83
Describe the difference between Cronbach's Alpha and Standardized Item Alpha
CA -> deals with variance (how much scores vary) and covariance (the amount by which items vary together, i.e. co-vary)
SIA -> deals with inter-item correlation (the correlation of each item with every other item); SIA is derived from CA
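A sketch of standardized item alpha using the conventional mean inter-item correlation formula, alpha_std = k·r̄ / (1 + (k - 1)·r̄) (the formula is standard, not quoted in the deck):

```python
import numpy as np

def standardized_alpha(scores: np.ndarray) -> float:
    """Standardized item alpha from the mean inter-item correlation r_bar."""
    k = scores.shape[1]
    corr = np.corrcoef(scores, rowvar=False)      # k x k correlation matrix
    r_bar = corr[np.triu_indices(k, k=1)].mean()  # mean off-diagonal r
    return k * r_bar / (1 + (k - 1) * r_bar)
```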
84
Give an example to illustrate the difference between variance and co-variance
Think about your group of friends - all of you probably ‘fit together’ pretty well. You co-vary a lot: you have a lot of shared variance and little unshared variance. As a group, you are internally consistent.
Now think about the PSY3007S class as a whole. There is a fair amount of shared variance between people in the class, but the class probably has a lot more varied people in it than your group of friends. The PSY3007S class therefore has more variance and less covariance than your group of friends. As a class, you are less internally consistent than your group of friends.
85
Why is Cronbach's Alpha a better measure of reliability than split-half?
SH reliability relies on inter-item covariance, but doesn't take variance into account. CA takes variance into account, which accounts for error of measurement. CA will therefore be smaller than split-half reliability, and is a better estimate
86
If a test measures only one factor = .....
If a test measures more than one factor = .....
unidimensional
multi-dimensional
87
True or false: the higher the level of Cronbach's Alpha, the more likely it is that the test is made up of one factor
FALSE. Multi-dimensional tests can have high CA values too (e.g. the WAIS-III measures 2 factors - Verbal IQ and Performance IQ - yet it has very good reliability and CA scores). People assume the statement is true because they confuse the terms INTERNAL CONSISTENCY and HOMOGENEITY
88
Why do people assume high CA values indicate unidimensionality?
CA measures how well items fit together (the covariance of items). It makes sense that some people assume that the more covariance items have, the more they should fit together to make up one thing (i.e. the more they should measure one factor only). BUT internal consistency and unidimensionality are not the same thing!
89
The question behind Validity is....
is the test measuring what it claims to measure?
90
Why is validity important? (2 marks)
1. Gives meaning to a test score
2. Indication of the usefulness of a test
91
If a test is not valid, then reliability is.... If a test is not reliable then it is...
moot
also not valid
92
Name the four broad types of validity, and two sub-types of two of the broad types
1. Face validity
2. Content validity
3. Criterion validity - concurrent & predictive
4. Construct validity - convergent & divergent
93
Briefly describe face validity and how it is determined
On the surface (its face) the measure seems to measure what it claims to measure. Determined through a review of the items, not through a statistical analysis
94
Content validity is the....
...degree to which a test measures an intended content area
95
How is content validity established?
It is established through judgement by expert judges and statistical analysis such as factor analysis
96
Name and briefly describe 2 potential errors in content validity
1. Construct under-representation: the test does not capture important components of the construct
2. Construct-irrelevant variance: test scores are influenced by things other than the construct the test is supposed to measure (e.g. a test score influenced by reading ability or performance anxiety)
97
Which needs to be established first; reliability or validity?
Reliability
98
Criterion validity is...
how well a test score estimates or predicts a criterion behaviour or outcome, now or in future
99
Name and briefly describe 2 types of criterion validity
Concurrent criterion validity: the extent to which test scores can correctly identify the current state of individuals
Predictive validity: how well performance on one test predicts future performance on some other measure
100
In construct validity we look at...
the relationship between the construct we want to measure and other constructs (to what other constructs is it similar or different?)
101
A construct is ...
A hypothetical attribute - something we think exists, but is not directly measurable or observable (e.g. anxiety)
102
Name and briefly describe the 2 sub-types of construct validity.
1. Convergent validity: scores on a test have high correlations with tests that measure similar constructs
2. Divergent/discriminant validity: scores on a test have low correlations with tests that measure different constructs
103
Name and briefly describe 2 factors affecting validity
1. Reliability (any form of measurement error can reduce validity; you can have reliability without validity, but your test would then be useless)
2. Social diversity (tests may not be equally valid for different social/cultural groups - e.g. a test of superstition in one culture might be a test of religiosity in another)
104
How does one establish construct validity properly? Briefly describe this method
Multitrait-Multimethod (MTMM) matrix: A correlation matrix which shows correlations between tests measuring different traits/factors, measured according to different methods
105
List the 4 rules of MTMM
Rule 1: The values in the validity diagonal should be more than 0, and large enough to encourage further exploration of validity (evidence of convergent validity)
Rule 2: A value in the validity diagonal should be higher than the values lying in its column and row in the heterotrait-heteromethod triangles (HTHM triangle values are divergent validity values, validity diagonal values are convergent validity values - convergent validity must be greater than divergent validity)
Rule 3: A value in the validity diagonal should be higher than the values lying in its column and row in the heterotrait-monomethod triangles (HTMM triangle values are also divergent validity values)
Rule 4: There should be more or less the same pattern of correlations in all the different triangles
106
To establish validity of a new scale, one could.....
correlate it with an already established scale via an MTMM matrix
107
In the MTMM, the reliability diagonals are: A) the intercepts of different traits within the same method B) the intercepts of the same traits across different measures C) the intercepts of different traits across different methods D) the intercepts of the same traits within the same method
D
108
Reliability diagonals are also called...
monotrait-monomethod values
109
In the MTMM, the reliability diagonals are: A) the intercepts of different traits measured by the same method B) the intercepts of the same traits measured by different measures C) the intercepts of different traits measured by different methods D) The intercepts of the same traits measured by the same measure
D
110
Validity diagonals are also called..
monotrait-heteromethod values
111
In the MTMM matrix, the heteromethod block is made up by the....
Validity diagonal and the triangles (HTHM values)
112
In the MTMM matrix, the monomethod blocks are made up by the....
Reliability diagonals and the triangles (HTMM values)
113
MTMM matrix rule 4 interpretation
This allows us to see if the pattern of convergent and divergent validity is about the same
114
* run through lecture 8/9 slides and see analysis of MTMM matrix
do it
115
Name 3 approaches to intelligence testing and what they are concerned with respectively
1. The psychometric approach (the structure of a test, its correlates and underlying dimensions)
2. The information processing approach (how we learn and solve problems)
3. The cognitive approach (how we adapt to real-world demands)
116
What are the four common definitions of intelligence
Ability to adapt to new situations
Ability to learn new things
Ability to solve problems
Ability for abstraction
117
Today, intellectual ability is conceptualized as ....
multiple intelligences
118
Name two NB intelligence concepts from Binet
1. Age differentiation - older children have greater ability than younger children, and mental age and chronological age can be differentiated
2. General mental ability - the total product of different and distinct elements of intelligence
119
Name two of Wechsler's contributions to the field of intelligence testing
Intelligence has certain specific functions
Intelligence is related to separate abilities
120
Name the 4 critiques of Binet's work by Wechsler
1. The Binet scale was not appropriate for adults
2. Non-intellective factors were not emphasized (e.g. social skills and motivation)
3. Binet did not take into account the decline in performance expected with aging
4. Mental age norms do not apply to adults
121
Briefly explain the difference between fluid intelligence and crystallized intelligence
Fluid intelligence (gf): abilities that allow us to think, reason, problem-solve, and acquire new knowledge
Crystallized intelligence (gc): the knowledge and understanding already acquired
122
What is the purpose of intelligence testing in children (1 mark) and adults (4 marks)?
Children - school placement
Adults - neuropsychological assessment, forensic assessment, disability grants, work placement
123
The ..... measure is the gold standard of intelligence testing and provides a measure of ....
Wechsler intelligence tests
Full Scale IQ (FSIQ)
124
List the 2 sub-scales of the FSIQ and the 2 indices within each. Additionally, give a test used to assess each of these 4 categories
Verbal IQ (VIQ):
- Verbal comprehension index (VCI); test = vocabulary
- Working memory index (WMI); test = arithmetic
Performance IQ (PIQ):
- Perceptual organization index (POI); test = picture completion
- Processing speed index (PSI); test = digit-symbol coding
*Breakdown of this on lecture 10, slides 10/11
125
What is one of the most stable measures of intelligence, and the last to be affected by brain deterioration
Vocabulary
126
Which intelligence test assesses concentration, motivation and memory, and is the most sensitive to educationally deprived/intellectually disabled individuals?
Arithmetic
127
Which intelligence test measures ability to comprehend instructions, follow directions and provide a response
Information
128
Which intelligence test measures judgement in everyday situations? List 3 types of questions used in it.
Comprehension
1. Situational action
2. Logical explanation
3. Proverb definition
129
For FSIQ scoring, do raw scores carry meaning?
No. Different subtests have different ranges, and the same raw score is not comparable across people of different ages. Raw scores are converted to scale scores with set means and SDs
130
Briefly describe picture completion intelligence tests
A picture in which an important detail is missing; the missing details become smaller and harder to spot
131
Which intelligence test tests new learning, visuo-motor dexterity, degree of persistence, and speed of processing information?
Digit-symbol coding
132
In which intelligence test must participants find some sort of relationship between figures?
Matrix reasoning
This measures:
- fluid intelligence
- information processing
- abstract reasoning
133
Describe the implications of possible relationships between PIQ and VIQ
VIQ = PIQ: if both are low, this can provide evidence of intellectual disability
PIQ > VIQ: cultural, language, and/or educational factors; possible language deficits (e.g. dyslexia)
VIQ > PIQ: common for Caucasians and African-Americans
134
List the 4 personality types according to Hippocrates
1. Sanguine
2. Phlegmatic
3. Choleric
4. Melancholic
135
List the 3 tenets of personality theory today
1. Stable characteristics (personality traits, basic behavioural/emotional tendencies)
2. Personal projects and concerns - what a person is doing and wants to achieve
3. Life story/narrative - the construction of an integrated identity
136
What are traits? (3 marks)
1. Basic tendencies/predispositions to act in a certain way
2. Consistencies in behaviour
3. Influence behaviour across a variety of situations
137
Traits are measured via....
structured personality measures
138
What are the BIG 5 in terms of traits
Openness
Conscientiousness
Extraversion
Agreeableness
Neuroticism
139
List 4 structured personality tests
1. The Big 5 Test
2. The 16 personality factor test
3. Myers-Briggs type indicator
4. Minnesota multiphasic personality inventory (MMPI)
140
The Big 5 traits provide a framework for understanding ....
...personality disorders
141
What is the most widely used objective personality test?
The Minnesota multiphasic personality inventory (MMPI)
142
List 3 of the 5 ways in which the MMPI is used
1. Help develop treatment plans
2. Help with diagnosis
3. Help answer legal questions
4. Screen job candidates
5. Part of therapeutic assessment
143
..... Personality tests assess tenet two of modern personality theory (personal projects and concerns). These tests measure....
Unstructured
...motives that underlie behaviour
144
3 examples of unstructured personality tests are....
1. Thematic Apperception Test (TAT)
2. The Rorschach test
3. Draw-a-Person test
145
Describe the process of the Thematic Apperception Test (TAT). Then list and briefly describe the three major motives put forward by the TAT.
One must make up a dramatic story about ambiguous black-and-white pictures, describing the feelings and thoughts of the characters.
1. Achievement motive - the need to do better
2. Power motive - the need to make an impact on people
3. Intimacy motive - the need to feel close to people
146
List 2 precautions when using personality tests cross-culturally
1. Constructs must have the same meaning across cultures
2. Bias analysis must be done
147
List 2 solutions to culturally biased tests
1. Caution in interpretation
2. Cross-cultural adaptation of the test
148
In the MTMM, the validity diagonals are: A) the intercepts of different traits measured by the same method B) the intercepts of the same traits measured by different measures C) the intercepts of different traits measured by different methods D) The intercepts of the same traits measured by the same measure
B