Psychometrics Flashcards

1
Q

What determines choice of format in item writing? (2 marks)

A

Objectives and purposes of the test (e.g. do we want to measure the extent/amount of interaction, or the quality of interaction)

2
Q

Difference between objective and purpose of a study?

A

Purpose - the broad goal of the research
Objective - how we will practically achieve that goal

3
Q

List 4 of the 9 item writing guidelines

A
  • clearly define what you want to measure
  • generate an item pool (the best items are selected after analysis)
  • avoid long items
  • keep the reading difficulty appropriate
  • use clear and concise wording (avoid double-barrelled items and double negatives)
  • use both positively and negatively worded items
  • use culturally neutral items
  • (for MCQs) make all distractors plausible and vary the position of the correct answer
  • (for true/false Qs) use equal numbers of both and make true and false statements the same length
4
Q

List the 5 categories of item formats

A
  1. Dichotomous
  2. Polytomous
  3. The Likert format
  4. The Category format
  5. Checklists and Q-sorts
5
Q

Advantage of the dichotomous format (3 marks)

A
  • easy to administer
  • quick to score
  • requires absolute judgement
6
Q

Disadvantages of the dichotomous format (3 marks)

A
  • less reliable (50% chance of guessing the correct answer)
  • encourages memorization instead of understanding
  • often the truth is not black and white (true/false is an oversimplification)
7
Q

Minimum number of options for a polytomous format?

A

3 (but 4 is commonly used, and considered more reliable)

8
Q

3 guidelines for writing distractors in the polytomous format

A
  • distractors must be clearly written
  • distractors must be plausible as correct answers
  • avoid “cute” distractors
9
Q

Advantages of polytomous questions (4 marks)

A
  • easy to administer
  • easy to score
  • requires absolute judgement
  • more reliable than dichotomous (less chance of guessing correctly)
10
Q

Formula for the correction for guessing

A

Corrected score = R - (W/(n-1)), where R = number of right answers, W = number of wrong answers, and n = number of options per item
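
A minimal Python sketch of this correction, assuming R = number right, W = number wrong, and n = number of options per item (omitted items excluded):

```python
def corrected_score(right, wrong, n_options):
    """Correction for guessing: R - W/(n - 1)."""
    return right - wrong / (n_options - 1)

# Hypothetical test-taker: 30 right, 10 wrong on 4-option MCQs.
print(corrected_score(30, 10, 4))  # 30 - 10/3 ≈ 26.67
```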

11
Q

Fields in which Likert scales are predominantly used (2 marks)

A

Attitude and Personality questionnaires

12
Q

How can one avoid the neutral response bias in Likert scales?

A

have an even number of options

13
Q

How does one score negatively worded items from a Likert scale

A

Reverse score

14
Q

Suggested best no. of options in a category format question?

A

7

15
Q

Disadvantages of the category format (2 marks)

A
  • tendency to spread answers across all categories
  • susceptible to the grouping of the things being rated (an item may be rated lower if the other items in its group are very good - i.e. not objective)
16
Q

When best to use category format questions? (2 marks)

A
  • when people are highly involved in a subject (more motivated to make a finer discrimination)
  • when you want to measure the amount of something (eg levels of road rage)
17
Q

Two tips when using the category format

A
  • make sure your endpoints are clearly defined
  • use a visual analogue scale (ideal with kids; e.g. a smiley face on one side of the scale and a frowny face on the other to describe how they’re feeling)
18
Q

Where are checklist-format questions commonly found?

A

Personality measures (e.g. a list of adjectives; tick those that describe you)

19
Q

Describe the process of Q-sort format questions

A

Place statements into piles, piles indicate the degree to which you think a statement describes a person/yourself

20
Q

In terms of Item analysis, describe item difficulty and give another name for it

A

The proportion of people who get a particular item correct (higher value = easier item)
AKA facility index
p = number of correct answers / number of participants
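
As an illustration, a small Python sketch computing p for each item from a hypothetical 0/1 response matrix:

```python
import numpy as np

# Rows = test-takers, columns = items; 1 = correct, 0 = incorrect (hypothetical data).
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
])

p = responses.mean(axis=0)  # proportion correct per item
print(p)  # [0.75 0.75 0.25 0.75] - higher value = easier item
```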

21
Q

Ideal range for optimum difficulty level

A

0.3 - 0.7

22
Q

How to calculate ODL (optimum difficulty level) for an item

A

Halfway between 100% and the chance of guessing the answer correctly: ODL = (1 + chance)/2
E.g. for an item with 4 options, ODL = (1 + 0.25)/2 = 0.625
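
A quick sketch of the same calculation for a few option counts (note that n = 2 corresponds to the true/false format):

```python
def optimum_difficulty(n_options):
    """ODL = halfway between 1.0 and the chance of guessing correctly."""
    chance = 1 / n_options
    return (1 + chance) / 2

for n in (2, 3, 4, 5):
    print(n, round(optimum_difficulty(n), 3))  # 0.75, 0.667, 0.625, 0.6
```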

23
Q

How should difficulty levels range across items in a questionnaire

A

You want most items around the ODL and a few at the extremes. The distribution of p-values (difficulty levels) should be approximately normal

24
Q

Why does one need a range of item difficulty levels?

A

To discriminate between test-takers of different ability levels

25
Q

List 3 exceptions to having optimum difficulty levels

A
  • need for difficult items (e.g selection process)
  • need easier items (e.g special education)
  • need to consider other factors (e.g boost confidence/morale at start of test)
26
Q

p (an item difficulty level) tells us nothing about…

A

…the intrinsic characteristics of an item. Its value is relative to a given sample

27
Q

Item discriminability is good when…

A

people who did well on the test overall get the item correct (and vice versa)

28
Q

Describe the extreme groups method when calculating item discriminability

A

Calculated as the proportion of people in the upper quartile who got the item correct minus the proportion of people in the lower quartile who got the item correct
{in other words, the difference in item difficulty when comparing the top and bottom 25%}
Di = U/Nu - L/Nl (U/L = number correct in the upper/lower quartile; Nu/Nl = number of people in each quartile)

*Should be a positive number if the item has good discriminability
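
A minimal sketch of the extreme-groups method, using hypothetical scores and NumPy quartile cut-offs:

```python
import numpy as np

def discrimination_index(item_correct, total_scores):
    """D = U/Nu - L/Nl: proportion correct in the top quartile
    minus proportion correct in the bottom quartile."""
    item_correct = np.asarray(item_correct)
    total_scores = np.asarray(total_scores)
    upper = total_scores >= np.percentile(total_scores, 75)
    lower = total_scores <= np.percentile(total_scores, 25)
    return item_correct[upper].mean() - item_correct[lower].mean()

# Hypothetical data: high scorers tend to get this item right.
item = [1, 1, 1, 0, 0, 1, 0, 0]
totals = [95, 88, 90, 42, 35, 80, 50, 30]
print(discrimination_index(item, totals))  # positive = good discriminability
```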

29
Q

A red flag in item discriminability?

A

A negative number

30
Q

Describe the point biserial method when calculating item discriminability

A

Calculate an item-total correlation
(if a test-taker fails the item but does well on the overall test, the item-total correlation will be negative)
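
A sketch of the item-total correlation with NumPy, using the corrected version (the item is dropped from the total so it is not correlated with itself) on hypothetical data:

```python
import numpy as np

responses = np.array([  # hypothetical 0/1 response matrix
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])
totals = responses.sum(axis=1)

for i in range(responses.shape[1]):
    rest = totals - responses[:, i]  # total without the item itself
    r = np.corrcoef(responses[:, i], rest)[0, 1]
    print(f"item {i}: item-total r = {r:.2f}")
```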

31
Q

Can item-total correlations be used for Likert-type scales and other formats, such as category and polytomous formats?

A

yes

32
Q

Results from item-total correlations can be used to decide….

A

which items to remove from the questionnaire

33
Q

Item characteristic curves (ICCs) are visual depictions of…

A

the relationship between performance on an item and performance on the overall test

34
Q

Give the x- and y-axes of an ICC

A

x-axis = total score on test
y-axis = proportion {of test takers who got the item} correct

35
Q

3 steps to drawing ICCs

A
  1. Define categories of test performance (eg specific total scores/percentages)
  2. Determine what proportion of people w/in each category got the item correct
  3. Plot your ICC
36
Q

Briefly explain Item Response Theory (IRT)

A

Test difficulty is tailored to the individual - wrong answer = decrease difficulty, right answer = increase difficulty. Test performance is defined by the level of difficulty of items answered correctly

37
Q

Name the program through which Item Response Theory is often administered

A

Computerized adaptive testing (CAT)

38
Q

Advantages of Item Response Theory (3 marks)

A
  • increased morale
  • quicker tests
  • decreased chance of cheating
39
Q

In terms of measurement precision, name the three types of tests

A
  1. Peaked conventional
  2. Rectangular conventional
  3. Adaptive
40
Q

Describe Peaked Conventional tests (3 points)

A
  • Test individuals at average ability.
  • Doesn’t assess high or low levels well
  • high precision for average ability levels, low precision at either end
41
Q

Describe Rectangular Conventional tests (2 points)

A
  • equal number of items assessing all ability levels
  • relatively low precision across the board
42
Q

Describe Adaptive tests (2 points)

A
  • test focuses on the range that challenges each individual test-taker
  • precision is high at every ability level
43
Q

Describe criterion-referenced tests

A

The test is developed based on learning outcomes - compares performance with some objectively defined criterion (What should the test-taker be able to do?)

44
Q

How does one evaluate items in criterion-referenced tests? And how should the score/frequency graph look

A

2 Groups - one given the learning unit and one not given the learning unit. Graph should look like a V

45
Q

List 3 limitations of criterion-referenced tests

A
  1. tells you that you got something wrong, but not why
  2. Emphasis on ranking students rather than identifying gaps in knowledge
  3. Teaching to the test - not to education
46
Q

What is referred to as the “test blueprint”

A

The test specifications

47
Q

List 4 of the 7 things that test specifications should describe

A

  1. Test (response) format
  2. Item format
  3. Total number of test items (test length)
  4. Content areas of the construct(s) tested
  5. Whether items or prompts will contain visual stimuli
  6. How test scores will be interpreted
  7. Time limits

48
Q

In terms of response format, list 3 ways in which participants can demonstrate their skills

A
  1. Selected response (eg Likert scale/MCQ/dichotomous)
  2. Constructed response (eg essay/fill-in-the-blank)
  3. Performance response (eg block design task)
49
Q

In terms of response format, give an example of objective vs subjective formats

A

Obj - MCQ or Likert
Subj - Essays, projective tests

50
Q

List 5 types of item response format

A
  1. Open-ended - e.g. an open-ended essay question (no limitations on the test-taker)
  2. Forced-choice items - MCQs, true/false questions
  3. Ipsative forced choice (leads the test-taker in a certain direction, but still somewhat open; e.g. “I find work from home….”)
  4. Sentence completion
  5. Performance-based items
51
Q

List the two determinants of test length

A
  1. Amount of administration time available
  2. Purpose of the measure (eg screening vs comprehensive)
52
Q

When test length increases compliance ….. because people get ….. and …..

A

decreases; fatigued and bored

53
Q

How many more items should be in the initial version of the test than the final one?

A

50%

54
Q

Having good ….. ensures that all domains of a construct are tested

A

Content areas

55
Q

….. refers to the ways in which knowledge or symptoms are demonstrated (and these are therefore tested for)

A

manifestations

56
Q

Reliability is the desired ….. or ….. of test scores and tells us about the amount of ….. in a measurement tool

A

consistency or reproducibility; error

57
Q

Why is test-retest not always a good measure of reliability?

A

Participants learn skills from the first administration of the test

58
Q

Normally we perform roughly around our true score, and so our scores are…..distributed

A

normally

59
Q

……is something we can use to increase reliability

A

internal consistency

60
Q

Name the 4 classical test theory assumptions (NB to know these)

A
  1. Each person has a true score we could obtain if there was no measurement error
  2. There is measurement error - but this error is random
  3. The true score of an individual doesn’t change with repeated applications of the same test, even though their observed score does
  4. The distribution of random errors and thus observed test scores will be the same for all people
61
Q

List the 2 assumptions of Classical test theory: the domain sampling model

A
  1. If we construct a test on something, we can’t ask all possible questions - So we only use a few test items (sample)
  2. Using fewer items can lead to the introduction of error
62
Q

Reliability = X / variance of observed score on test

X = ?

A

X = variance of true score
* this is a logical estimate - not a calculation we can actually do
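
Although the ratio cannot be computed directly, it can be illustrated by simulation, where we get to invent the (normally unknowable) true scores. A sketch assuming normally distributed random error:

```python
import numpy as np

rng = np.random.default_rng(0)
true_scores = rng.normal(50, 10, size=10_000)  # unknowable in practice
error = rng.normal(0, 5, size=10_000)          # random measurement error
observed = true_scores + error

print(true_scores.var() / observed.var())  # ≈ 100/(100 + 25) = 0.8
```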

63
Q

An individual’s true score is unknowable - but we can calculate the range in which it should fall by taking into account the reliability of the measurement tool, otherwise known as the….

A

….Standard Error of Measurement (SEM)
SEM = SD√(1 - r) (r = reliability of the test)

64
Q

Formula for creating confidence intervals with the SEM

A

The z-score for a 95% confidence interval = 1.96
Therefore:
Lower bound = x - 1.96(SEM)
Upper bound = x + 1.96(SEM)
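
A small sketch combining the SEM and confidence-interval formulas, using hypothetical IQ-style numbers (SD = 15, r = .90, observed score = 110):

```python
import math

def sem(sd, reliability):
    """Standard Error of Measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

def confidence_interval(observed, sd, reliability, z=1.96):
    """95% CI around an observed score: x +/- z * SEM."""
    margin = z * sem(sd, reliability)
    return observed - margin, observed + margin

print(confidence_interval(110, 15, 0.90))  # ≈ (100.7, 119.3)
```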

65
Q

List the 4 types of reliability (and the two sub-types of type 4)

A
  1. Test-retest rel
  2. Parallel forms rel
  3. Inter-rater rel
  4. Internal consistency
    • split-half
    • coefficient/Cronbach’s alpha
66
Q

Give the name of the correlation between the 2 scores in test-retest reliability and the source of error in test-retest reliability

A
  • the coefficient of stability
  • source of error = time sampling
67
Q

Issues with test-retest reliability (3 marks)

A
  1. Carry-over effects (attitude or performance at T2 influenced by performance at T1)
  2. Practice effects
  3. Time between testing (too little time = remember responses, too much time = maturation)
68
Q

In Parallel forms reliability, name the correlation between the 2 scores and give the source of error

A

Name = coefficient of equivalence
Source of error = item sampling

69
Q

In terms of Parallel forms reliability, list four ways to create a parallel test to give
the participant

A
  1. response alternatives can be reworded
  2. order of questions changed
  3. change wording of question
  4. different items altogether
70
Q

Explain Inter-rater reliability (1 mark). Give the names of the correlations between raters’ scores (2 marks) and give acceptable ranges of correlation scores

A
  • IRR = how consistently multiple raters agree (more raters = more reliability)
  • correlation between 2 raters = Cohen’s Kappa, between more than 2 raters = Fleiss’ Kappa
  • > .75 = excellent agreement
  • .50 - .75 = satisfactory
  • < .40 = poor
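
For two raters, Cohen’s Kappa can be computed with scikit-learn (Fleiss’ Kappa is available in statsmodels); a sketch on hypothetical ratings:

```python
from sklearn.metrics import cohen_kappa_score

# Two raters classifying the same 8 cases (hypothetical labels).
rater_a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
rater_b = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no"]

print(round(cohen_kappa_score(rater_a, rater_b), 2))  # 0.75 - right at the "excellent" cut-off
```
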
71
Q

Describe internal consistency and give the source of error

A

IC = the extent to which different items within a test measure the same thing
Source of error = item sampling (homogeneity of the items)

72
Q

Give one advantage and one disadvantage of split-half reliability

A

ADV = only need 1 test
DISADV = how do we divide the test into equivalent halves? (the correlation will change each time depending on which items go to each half)

73
Q

What problem is created by splitting a test in half for split-half reliability?

A

halving the length of the test also decreases the reliability (domain sampling model says fewer items = lower reliability)

74
Q

Name the correction used to adjust for the number of items in each half of the test when calculating split-half reliability

A

Spearman-Brown correction
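
The standard prophecy formula for doubling test length is r_full = 2·r_half / (1 + r_half); a one-function sketch:

```python
def spearman_brown(r_half, factor=2):
    """Estimated reliability of a test `factor` times as long
    as the one that produced r_half."""
    return factor * r_half / (1 + (factor - 1) * r_half)

print(round(spearman_brown(0.70), 2))  # split-half r of .70 -> ≈ .82 for the full test
```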

75
Q

What does Cronbach’s/ Coefficient Alpha measure

A

the error associated with each test item as well as error associated with how well the test items fit together
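
A minimal NumPy sketch of the usual alpha formula, α = (k/(k-1))·(1 - Σ item variances / total variance), on hypothetical Likert data:

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = (k/(k-1)) * (1 - sum(item variances) / variance(total))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses (rows = people, columns = items).
data = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 2], [3, 3, 3]]
print(round(cronbach_alpha(data), 2))  # ≈ 0.96 for these made-up items
```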

76
Q

What level of reliability is satisfactory for Cronbach’s alpha?

A

≥ 0.70 = exploratory research
≥ 0.80 = basic research
≥ 0.90 = applied scenarios

77
Q

When does Cronbach’s alpha become unhelpful?

A

When there are too many items - as this artificially inflates your CA scores

78
Q

List 3 factors that influence Reliability

A
  1. Number of items in a test
  2. Variability of the sample
  3. Extraneous variables (testing situation, ambiguous items, unstandardized procedures, demand effects etc)
79
Q

List 5 ways to improve reliability

A
  1. Increase/decrease the number of items
  2. Item analysis
  3. Inter-rater training
  4. Pilot-testing
  5. Clear conceptualisation
80
Q

List three things that can affect your Cronbach’s Alpha score

A
  1. Number of test items
  2. Bad test items (too broad, ambiguous, easy etc)
  3. Multi-dimensionality
81
Q

Explain the difference between internal consistency and homogeneity

A

IC = how inter-related the items are
HG = unidimensionality, the extent to which the test is made up of only one thing

82
Q

Name 3 of the 5 ways that Cronbach’s alpha is often described

A
  1. The mean of all split-half reliabilities (not accurate)
  2. A measure of first-factor saturation
  3. The lower bound of the reliability of a test
  4. It is equal to reliability in conditions of essential tau-equivalence
  5. A more general version of the KR coefficient of equivalence
83
Q

Describe the difference between Cronbach’s Alpha and Standardized Item Alpha

A

CA -> deals with variance (how much scores vary) and covariance (the amount by which items vary together, i.e. co-vary)
SIA -> deals with the inter-item correlation (the correlation of each item with every other item); SIA is derived from CA
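
A sketch of standardized item alpha from the mean inter-item correlation, α_std = k·r̄ / (1 + (k-1)·r̄), reusing the same kind of hypothetical data:

```python
import numpy as np

def standardized_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    corr = np.corrcoef(items, rowvar=False)
    r_bar = corr[np.triu_indices(k, k=1)].mean()  # mean inter-item correlation
    return k * r_bar / (1 + (k - 1) * r_bar)

data = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 2], [3, 3, 3]]
print(round(standardized_alpha(data), 2))
```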

84
Q

Give an example to illustrate the difference between variance and co-variance

A

Think about your group of friends – all of you probably ‘fit together’ pretty well

You co-vary a lot: You have a lot of shared variance and little unshared variance
As a group, you are internally consistent

Now think about the PSY3007S class as a whole

There is a fair amount of shared variance between people in the class, but the class probably has a lot more varied people in it than your group of friends
The PSY3007S class therefore has more variance and less covariance than your group of friends
As a class, you are less internally consistent than your group of friends

85
Q

Why is Cronbach’s Alpha a better measure of reliability than split-half?

A

SH reliability relies on inter-item covariance, but doesn’t take variance into account. CA takes variance into account, which accounts for error of measurement. CA will therefore be smaller than split-half rel, and is a better estimate

86
Q

If a test measures only one factor =

If a test measures more than one factor =

A

unidimensional

multi-dimensional

87
Q

True or false: the higher the level of Cronbach’s Alpha, the more likely it is that the test is made up of one factor

A

FALSE

Multi-dimensional tests can have high CA values too. (EG the WAIS-III measures 2 factors - Verbal IQ and Performance IQ, yet it has very good reliability and CA scores)

People assume the above to be true because they confuse the terms INTERNAL CONSISTENCY and HOMOGENEITY

88
Q

Why do people assume high CA values indicate unidimensionality?

A

CA measures how well items fit together (the covariance of items).
It makes sense that some people assume that the more covariance items have, the more they should fit together to make up one thing (i.e. the more they should measure one factor only).

BUT, internal consistency and unidimensionality are not the same thing!

89
Q

The question behind Validity is….

A

is the test measuring what it claims to measure?

90
Q

Why is validity important? (2 marks)

A
  1. Gives meaning to a test score
  2. Indication of the usefulness of a test
91
Q

If a test is not valid, then reliability is….

If a test is not reliable then it is…

A

moot

also not valid

92
Q

Name the four broad types of validity, and two sub-types of two of the broad types

A
  1. Face validity
  2. Content validity
  3. Criterion validity - Concurrent & Predictive
  4. Construct validity - convergent & divergent
93
Q

Briefly describe face validity and how it is determined

A

On the surface (its face) the measure seems to measure what it claims to measure.

Determined through a review of the items, not through a statistical analysis

94
Q

Content validity is the….

A

…degree to which a test measures an intended content area

95
Q

How is content validity established?

A

It is established through judgement by expert judges and statistical analysis such as factor analysis

96
Q

Name and briefly describe 2 potential errors in content validity

A
  1. Construct under-representation: A test does not capture important components of the construct
  2. Construct-irrelevant variance:
    When test scores are influenced by things other than the construct the test is supposed to measure (e.g. test scores influenced by reading ability or performance anxiety)
97
Q

Which needs to be established first; reliability or validity?

A

Reliability

98
Q

Criterion validity is…

A

how well a test score estimates or predicts a criterion behaviour or outcome, now or in future

99
Q

Name and briefly describe the 2 types of criterion validity

A

Concurrent criterion validity: The extent to which test scores can correctly identify the current state of individuals
Predictive validity: How well does performance on one test predict future performance on some other measure?

100
Q

In construct validity we look at…

A

the relationship between the construct we want to measure and other constructs (to what other constructs is it similar or different?)

101
Q

A construct is …

A

A hypothetical attribute

Something we think exists, but is not directly measurable or observable (e.g., anxiety)

102
Q

Name and briefly describe the 2 sub-types of construct validity.

A
  1. Convergent validity:
    Scores on a test have high correlations with other tests that measure similar constructs
  2. Divergent/discriminant validity
    Scores on a test have low correlations with other tests that measure different constructs
103
Q

Name and briefly describe 2 factors affecting validity

A
  1. Reliability (any form of measurement error can reduce validity;
    you can have reliability without validity, but the test would then be useless)
  2. Social diversity (tests may not be equally valid for different social/cultural groups;
    e.g., a test of superstition in one culture might be a test of religiosity in another)
104
Q

How does one establish construct validity properly? Briefly describe this method

A

Multitrait-Multimethod (MTMM) matrix: A correlation matrix which shows correlations between tests measuring different traits/factors, measured according to different methods

105
Q

List the 4 rules of MTMM

A

Rule 1: The values in the validity diagonal should be more than 0, and large enough to encourage further exploration of validity (evidence of convergent validity)

Rule 2: A value in the validity diagonal should be higher than the values lying in its column and row, in the heterotrait-heteromethod triangles (HTHM triangles are divergent validity values, validity diagonal values are convergent validity values - convergent validity must be greater than divergent validity)

Rule 3: A value in the validity diagonal should be higher than the values lying in its column and row, in the heterotrait-monomethod triangles (HTMM triangles also = divergent val values)

Rule 4: There should be more or less the same pattern of correlations in all the different triangles

106
Q

To establish validity of a new scale, one could…..

A

correlate it with an already established scale via a MTMM matrix

107
Q

In the MTMM, the reliability diagonals are:
A) the intersections of different traits within the same method
B) the intersections of the same traits across different measures
C) the intersections of different traits across different methods
D) the intersections of the same traits within the same method

A

D

108
Q

Reliability diagonals are also called…

A

monotrait-monomethod values

109
Q

In the MTMM, the reliability diagonals are:
A) the intersections of different traits measured by the same method
B) the intersections of the same traits measured by different measures
C) the intersections of different traits measured by different methods
D) the intersections of the same traits measured by the same measure

A

D

110
Q

Validity diagonals are also called..

A

monotrait-heteromethod values

111
Q

In the MTMM matrix, the heteromethod block is made up of the….

A

Validity diagonal and the triangles (HTHM values)

112
Q

In the MTMM matrix, the monomethod blocks are made up of the….

A

Reliability diagonals and the triangles (HTMM values)

113
Q

MTMM matrix rule 4 interpretation

A

This allows us to see if the pattern of convergent and divergent validity is about the same

114
Q
  • run through lecture 8/9 slides and see analysis of MTMM matrix
A

do it

115
Q

Name 3 approaches to intelligence testing and what they are concerned with respectively

A
  1. The psychometric approach (structure of a test, its correlates and underlying dimensions)
  2. The information processing approach (how we learn and solve problems)
  3. The cognitive approach (how we adapt to real-world demands)
116
Q

What are the four common definitions of intelligence

A

Ability to adapt to new situations

Ability to learn new things

Ability to solve problems

Ability for abstraction

117
Q

Today, intellectual ability is conceptualized as ….

A

multiple intelligences

118
Q

Name two NB intelligence concepts from Binet

A
  1. Age differentiation - older children show greater ability than younger children, and mental age can be differentiated from chronological age
  2. General mental ability - which is the total product of different and distinct elements of intelligence
119
Q

Name two of Wechsler’s contributions to the field of intelligence testing

A

Intelligence has certain specific functions

Intelligence is related to separate abilities

120
Q

Name the 4 critiques of Binet’s work by Wechsler

A
  1. Binet scale was not appropriate for adults
  2. Non-intellective factors were not emphasized (e.g social skills and motivation)
  3. Binet did not take into account the decline of performance that should be expected with aging
  4. Mental age norms do not apply to adults
121
Q

Briefly explain the difference between fluid intelligence and crystallized intelligence

A

Fluid intelligence (gf)
Abilities that allow us to think, reason, problem-solve, and acquire new knowledge

Crystallized intelligence (gc)
The knowledge and understanding already acquired

122
Q

What is the purpose of intelligence testing in children ( 1 mark) and adults (4 marks)?

A

Children - school placement

Adults - Neuropsychological assessment
Forensic assessment
Disability grants
Work placement

123
Q

The ….. measure is the gold standard of intelligence testing and provides a measure of ….

A

Wechsler intelligence tests

Full scale IQ (FSIQ)

124
Q

List the 2 subscales of FSIQ and the 2 sections of each of these subscales. Additionally, give a test used to assess each of these 4 categories

A

Verbal IQ (VIQ):
- Verbal comprehension index (VCI); test = vocabulary
- Working memory index (WMI); test = arithmetic

Performance IQ (PIQ):
- Perceptual organization index (POI); test= picture completion
- Processing speed index (PSI); test = digit-symbol coding

  • breakdown of this on lecture 10 slide 10/11
125
Q

What is one of the most stable measures of intelligence, and the last to be affected by brain deterioration

A

Vocabulary

126
Q

Which intelligence test assesses concentration, motivation and memory, and is the most sensitive to educationally deprived/intellectually disabled individuals?

A

Arithmetic

127
Q

Which intelligence test measures ability to comprehend instructions, follow directions and provide a response

A

Information

128
Q

Which intelligence test measures judgement in everyday situations? Also list 3 types of questions used in it.

A

Comprehension

  1. Situational action
  2. Logical explanation
  3. Proverb definition
129
Q

For FSIQ scoring, do raw scores carry meaning?

A

No. Different subtests have different ranges, and the same raw score is not comparable across people of different ages. Raw scores are converted to scale scores with set means and SDs

130
Q

Briefly describe picture completion intelligence tests

A

A picture in which an important detail is missing

Missing details become smaller and harder to spot

131
Q

Which intelligence test tests new learning, visuo-motor dexterity, degree of persistence, and speed of information processing?

A

Digit-symbol coding

132
Q

In which intelligence test must participants find some sort of relationship between figures?

A

Matrix reasoning
This measures:
Fluid intelligence
Information processing
Abstract reasoning

133
Q

Describe the implications of possible relationships between PIQ and VIQ

A

VIQ = PIQ
If both are low, can provide evidence for intellectual disability

PIQ > VIQ
Cultural, language, and/or educational factors
Possible language deficits (e.g., dyslexia)

VIQ > PIQ
Common for Caucasians and African-Americans

134
Q

List the 4 personality types according to Hippocrates

A
  1. Sanguine
  2. Phlegmatic
  3. Choleric
  4. Melancholic
135
Q

List the 3 tenets of personality theory today

A
  1. Stable characteristics (personality traits, basic behavioural/emotional tendencies)
  2. Personal projects and concerns - what a person is doing and wants to achieve
  3. Life story/narrative - construction of integrated identity
136
Q

What are traits? (3 marks)

A
  1. Basic tendencies/predispositions to act in a certain way
  2. Consistencies in behaviour
  3. Influence behaviour across a variety of situations
137
Q

Traits are measured via….

A

structured personality measures

138
Q

What are the BIG 5 in terms of traits

A

Openness
Conscientiousness
Extraversion
Agreeableness
Neuroticism

139
Q

List 4 structured personality tests

A
  1. The Big 5 Test
  2. The 16 personality factor test
  3. Myers-Briggs type indicator
  4. Minnesota multiphasic personality inventory (MMPI)
140
Q

The Big 5 traits provide a framework for understanding ….

A

…personality disorders

141
Q

What is the most widely used objective personality test?

A

The Minnesota multiphasic personality inventory (MMPI)

142
Q

List 3 of the 5 ways in which the MMPI is used

A
  1. Help develop treatment plans
  2. Help with diagnosis
  3. Help answer legal questions
  4. Screen job candidates
  5. Part of therapeutic assessment
143
Q

….. Personality tests assess tenet two of modern personality theory (personal projects and concerns). These tests measure….

A

Unstructured
…motives that underlie behaviour

144
Q

3 examples of unstructured personality tests are….

A
  1. Thematic Apperception Test (TAT)
  2. The Rorschach test
  3. Draw a person test
145
Q

Describe the process of the Thematic Apperception Test (TAT).

Then list and briefly describe the three major motives put forward by the TAT.

A

One must make up a dramatic story about ambiguous black and white pictures, describing the feelings and thoughts of characters

  1. Achievement motive
    Need to do better
  2. Power motive
    Need to make an impact on people
  3. Intimacy motive
    The need to feel close to people
146
Q

List 2 precautions when using personality test cross-culturally

A
  1. Constructs must have the same meaning across cultures
  2. Bias analysis must be done
147
Q

List 2 solutions to culturally biased tests

A
  1. Caution in interpretation
  2. Cross-cultural adaptation of the test
148
Q

In the MTMM, the validity diagonals are:
A) the intersections of different traits measured by the same method
B) the intersections of the same traits measured by different measures
C) the intersections of different traits measured by different methods
D) the intersections of the same traits measured by the same measure

A

B