Issues of Measurement and Testing Flashcards

1
Q

What is dimensionality?

A

Whether an instrument (test) measures a single construct (dimension) or many

2
Q

What is reliability?

A

Whether a test measures a construct consistently

3
Q

What are psychometric properties expressed as?

A

An index, coefficient or other numerical quantity

4
Q

What is standardisation?

A
  • The process of establishing norms for a test
  • The use of uniform procedures, same conditions, scored by same criteria allowing results to be compared
  • The transformation of data into a distribution of standardised scores, often having a mean of 0 and SD of 1 (see the sketch below)

APA Dictionary of Psychology (2023)
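
A minimal sketch of the third sense (transforming raw scores to mean 0 and SD 1), assuming plain Python and hypothetical raw scores:

    # Standardise raw scores to z-scores with mean 0 and SD 1.
    # Hypothetical data; in practice the norm group's mean and SD would be used.
    from statistics import mean, pstdev

    raw = [12, 15, 9, 20, 14]                    # hypothetical raw test scores
    m, sd = mean(raw), pstdev(raw)               # mean and (population) SD of the scores
    standardised = [(x - m) / sd for x in raw]   # each value is now in SD units from the mean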

5
Q

What are the 2 common standardisation methods?

A

Norm-referencing and criterion-referencing

6
Q

What is norm-referencing?

A

“Compares the score of an individual with those of other candidates who took the test under similar conditions (norm group)” (Rust, 2007)
- The norm group should be representative of the whole population
- Allows for meaningful comparisons

7
Q

What is criterion-referencing?

A
  • Scores compared with some objectively assessed reference point or standard
  • Not commonly used in personality testing, but potentially relevant for psychopathology diagnosis
8
Q

What is a percentile rank?

A

The value below which a given percentage of scores fall
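
A hedged illustration (hypothetical norm-group scores, plain Python): the percentile rank can be computed as the percentage of norm-group scores that fall below the score in question.

    # Percentile rank: % of norm-group scores falling below the given score.
    norm_group = [55, 60, 62, 65, 70, 72, 75, 80, 85, 90]   # hypothetical norm data
    score = 75
    percentile_rank = 100 * sum(s < score for s in norm_group) / len(norm_group)
    # -> 60.0, i.e. the score of 75 is higher than 60% of the norm group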

9
Q

What are z-scores?

A

The value of a z-score tells you how many standard deviations a score is from the mean
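
For example (illustrative figures only): with a test mean of 100 and an SD of 15, a raw score of 130 gives z = (130 - 100) / 15 = 2, i.e. two standard deviations above the mean.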

10
Q

What is validity?

A

The degree to which empirical evidence and theoretical rationales support the interpretations based on test scores or other measures (West & Finch, 2007)

11
Q

What is reliability?

A

A measure of the reproducibility or dependability of measurements, i.e. the degree to which they are free from error

12
Q

What is test-retest reliability?

A

Stability over time/repeatability
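
A minimal sketch, assuming the common approach of correlating scores from two administrations of the same test (hypothetical data; statistics.correlation needs Python 3.10+):

    # Test-retest reliability estimated as the Pearson correlation between
    # scores from two administrations of the same test.
    from statistics import correlation   # available from Python 3.10

    time1 = [24, 30, 18, 27, 22, 29]   # hypothetical scores at first testing
    time2 = [25, 28, 17, 27, 21, 30]   # same participants retested later
    r_tt = correlation(time1, time2)   # the test-retest coefficient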

13
Q

What is internal consistency?

A

Whether all items are measuring the same thing

14
Q

What is inter-rater reliability?

A

The degree to which different raters’ scores/codes/ratings are correlated

15
Q

What factors influence test retest reliability?

A
  • Characteristics of test takers e.g. illness, tiredness
  • Characteristics within tests - poor instructions, complexity
  • Differences in conditions - time of day, distractions
  • Time gaps - must be a minimum of 3 months
  • Difficulty level - floor/ceiling effects
  • Sample size and sampling
16
Q

What’s the most commonly used index of internal consistency?

A

Cronbach’s alpha - reliability should never be below .7
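
A minimal sketch of the usual formula, alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores), using hypothetical item responses in plain Python:

    # Cronbach's alpha from a respondents-by-items matrix of hypothetical scores.
    from statistics import pvariance

    responses = [          # rows = respondents, columns = items
        [3, 4, 3, 5],
        [2, 2, 3, 3],
        [4, 5, 4, 5],
        [1, 2, 2, 2],
    ]
    k = len(responses[0])                                     # number of items
    item_vars = [pvariance(col) for col in zip(*responses)]   # variance of each item
    total_var = pvariance([sum(row) for row in responses])    # variance of total scores
    alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)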

17
Q

How can bias be avoided?

A

Use several raters and ensure assessments are consistent across them

18
Q

What are the 2 common ways to measure inter-rater reliability?

A

Percent agreement and Cohen’s Kappa

19
Q

What is percent agreement?

A

The proportion of items the judges agree on, ranging from 0 to 1 (often reported as a percentage)
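
A minimal sketch with two hypothetical raters in plain Python:

    # Proportion of items on which two raters give the same rating.
    rater_a = ["yes", "no", "yes", "yes", "no"]
    rater_b = ["yes", "no", "no", "yes", "no"]
    agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    # -> 0.8: the raters agree on 4 of the 5 items (chance agreement is ignored)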

20
Q

What is Cohen’s Kappa?

A

Calculates the proportion of items raters agree on, while accounting for the fact that raters may agree on some items purely by chance
- Ranges from -1 to 1, with 1 indicating perfect agreement, 0 indicating agreement no better than chance, and negative values indicating systematic disagreement between raters
- 0.7-0.8 is generally considered acceptable
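
A minimal sketch of the standard formula, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected by chance (hypothetical ratings, plain Python):

    # Cohen's kappa for two raters over the same items.
    rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]
    rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes"]
    n = len(rater_a)

    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n    # observed agreement
    categories = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)  # chance agreement from the
              for c in categories)                             # raters' marginal proportions
    kappa = (p_o - p_e) / (1 - p_e)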

21
Q

What is construct validity?

A

The degree to which an assessment tool adequately measures a hypothesised psychological construct

22
Q

What are the different types of construct validity?

A
  • Face
  • Content
  • Convergent
  • Discriminant
  • Predictive
23
Q

What is face validity?

A

The extent to which a test appears, at face value, to measure what it claims to measure

24
Q

What is content validity?

A

Concerned with a test’s ability to include or represent all of the contents of a particular construct

25
Q

What is convergent validity?

A

The extent to which scores from a new test correlate with other measures of the same phenomenon

26
Q

What is discriminant validity?

A

Refers to the extent to which a test score does not correlate with the scores of theoretically unrelated measures

27
Q

What is predictive validity?

A

Evidence that a test score or other measurement correlates with another variable that can only be assessed at some point after the test has been administered

28
Q

What features of psychometric tools might influence the degree to which they are valid and reliable?

A
  • Methods of data collection and bias
  • Purposes of tests
  • Cross-cultural validity
29
Q

What are the strengths of psychometric tests?

A
  • Objective and scientific way of describing people and their behaviour
  • Usually quick and easy
  • Allows for statistical analysis
30
Q

What are the weaknesses of psychometric tests?

A
  • Difficult to make valid and reliable
  • Culture bias
31
Q

What are some sources of inaccuracy and bias in self-report measures?

A

Extreme responding, dissent bias (agreeing/disagreeing with the questionnaire irrespective of content), social desirability bias (SDB), recall bias and hostility bias

32
Q

How did Crowne (1964) theorise SDB?

A

Socially desirable responses reflect a repressive defense against vulnerable self-esteem

33
Q

What did Zemore find about SDB and when?

A

Higher SDB scores have been linked with outcomes such as better attendance at drug/alcohol treatment programmes

34
Q

What is the Lees-Haley Fake Bad Scale (FBS)/ MMPI Symptom Validity Scale?

A
  • 43 items from the Minnesota Multiphasic Personality Inventory selected by Lees-Haley (1991) to detect malingering in personal injury claimants
35
Q

What is impression management?

A

Ways people attempt to control how they are perceived by others (Goffman, 1959)

36
Q

How do people mitigate against impression management?

A
  • Lie scales to flag respondents who may be lying
  • Forced choice items
  • Inconsistency scales
  • Multiple assessment methods (other than self-report)