Exam Flashcards

1
Q

Psychological test

A

⭐️An objective procedure for sampling and quantifying human behaviour to make an inference about a particular psychological construct using standardised stimuli, and methods of administration and scoring.

⭐️✖️Psychological tests can ASSIST in making a diagnosis, but should not be considered the only tool in making this determination, because a test alone does not take into account other factors that should be considered (e.g. the person’s behaviour outside the testing situation)

2
Q

Why do we need psychological tests?

A
  • Human judgement is SUBJECTIVE and FALLIBLE. Some factors that can influence the outcomes of human judgement include stereotyping, personal bias, positive and negative halo effect, errors of central tendency.
  • They are better than personal judgement in informing decision making in many situations because of the nature and defining characteristics of these tests
3
Q

Psychological TESTING

A

⭐️It is the PROCESS of administering a psychological test and obtaining and interpreting the test scores

4
Q

Psychological ASSESSMENT

A

A broad process of ANSWERING REFERRAL QUESTIONS, which includes but is not limited to psychological testing. It can also include observation, interview and checking records.

⭐️Acknowledges that tests are only one type of tool used by professional assessors (along with other tools, such as the interview or case history data), and that the value of a test, or of any other tool of assessment, is intimately linked to the knowledge, skill, and experience of the assessor.

5
Q

Construct

A

⭐️A hypothetical entity with theoretical links to other hypothesised variables, that is postulated to bring about the consistent set of observable behaviours, thoughts or feelings that is the target of a psychological test.

6
Q

Reliability

A

⭐️The confidence we can have that the score is an accurate reflection of what the test purports to measure.

We want to be reasonably certain that the measuring tool or test that we are using is consistent and is yielding the same numerical measurement every time it measures the same thing under the same conditions.

⭐️It is the proportion of the total variance attributed to true variance. The greater the proportion of the total variance attributed to true variance, the more reliable the test.
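The variance ratio above can be sketched numerically; the figures below are invented purely for illustration:

```python
# Classical test theory: total variance = true variance + error variance,
# and reliability is the share of the total attributable to true variance.
def reliability(true_variance: float, error_variance: float) -> float:
    total_variance = true_variance + error_variance
    return true_variance / total_variance

# Assumed values: 80 units of true variance and 20 of error variance.
print(reliability(80.0, 20.0))  # 0.8 -> 80% of score variance reflects the trait
```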

7
Q

Types of reliability:

Internal consistency

A

⭐️Splits a test into subtests, each of which is then correlated with all the other subtests, and the average correlation is calculated (can also look at the internal consistency of individual subtests by examining the items within them)

8
Q

Types of reliability:

Split half reliability

A

⭐️Splits a test into two equivalent halves and correlates the scores on the two halves.

9
Q

Types of reliability:

Test-retest reliability

A

⭐️Examines whether the score we have obtained on a measure remains stable over time (e.g. Personality trait). Time between testing varies across measures and can be a source of error variance.

When the interval between testing is greater than 6 months, the estimate of test-retest reliability is often referred to as the “coefficient of stability”.

✖️Experience, practice, memory, fatigue and motivation may intervene and confound an obtained measure of reliability.

10
Q

Types of reliability:

Inter-rater reliability

A

⭐️Examines the extent to which the score obtained by one informant (e.g. parent) correlates with the score obtained by another informant (e.g. teacher)

11
Q

Factors that can affect the RELIABILITY of test results

A

❕How recently the test was developed
❕The type of test it is (tests of cognitive abilities and (self-reported) personality are generally more reliable than other tests)
❕The standard error of measurement
❕The length of the test (long forms are generally more reliable than their short form or screening equivalent)
❕The interval between testing and retesting
❕Who the test was developed for (i.e. Cultural considerations)
❕Personal (e.g. Fatigue, motivation, anxiety) and environmental (e.g. Time of day, lighting, external noise) factors

12
Q

Validity

A

⭐️The extent to which the test measures what it purports to measure (based on what we currently know).

❕A test can be reliable without being valid, BUT it cannot be valid without being reliable.

13
Q

Types of RELIABILITY

A
  1. Internal consistency
  2. Split half reliability
  3. Test-retest reliability
  4. Inter-rater reliability
14
Q

Types of VALIDITY

A
  1. Construct validity
  2. Face validity
  3. Predictive validity
  4. Content validity
  5. Convergent validity
  6. Discriminant (or divergent) validity
15
Q

Types of validity:

Content validity

A

⭐️The extent that the content of the test items represents all facets of the construct being measured (e.g. A cumulative final exam in introductory statistics would be considered content-valid if the proportion and type of introductory statistics problems on the test approximates the proportion and type of introductory statistics problems presented in the course)

16
Q

Types of validity:

Face validity

A

⭐️The extent that items appear to be valid for the area that is being measured (informal and may be subjective).

⭐️Refers more to what a test APPEARS to measure than what the test ACTUALLY measures

17
Q

Types of validity:

Predictive validity

A

⭐️The extent that scores on a test allow us to predict scores on some criterion measure (e.g. Is the Conners 3 an adequate screening tool for ADHD?)

18
Q

Types of validity:

Construct validity

A

⭐️The extent that the test truly reflects the construct that it purports to measure.

19
Q

Types of validity:
Construct validity:
Convergent validity

A

⭐️Tests the extent to which the content of the items in one test correlates with (has a similar relationship to) the content of items in another measure of the same (or a similar) construct.

20
Q

Types of validity:
Construct validity:
Discriminant (divergent) validity

A

⭐️The extent that the content of the items on one test is different from (does not overlap with) the content of the items in a contrasting measure.

21
Q

Factors that can affect the VALIDITY of test results

A

❕External events unrelated to the construct being measured (e.g. The death of a family member prior to taking an exam)
❕Factors not considered by test developers but which are found to be relevant to the construct
❕Scores on one construct correlating highly with scores on an unrelated construct (e.g. Measures of creativity correlating more highly with IQ tests than with other measures of creativity)

22
Q

Scores

A

Z scores
M = 0 SD = 1

T scores
M = 50 SD = 10

Scaled scores (e.g. Wechsler subtest scores)
M = 10 SD = 3
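All three metrics are linear transformations of the same z score, so converting between them is simple arithmetic; the raw-score values below are assumed for illustration:

```python
# Convert a raw score to z, then to the T and scaled-score metrics.
def to_z(raw: float, mean: float, sd: float) -> float:
    return (raw - mean) / sd

def z_to_t(z: float) -> float:
    return 50 + 10 * z  # T scores: M = 50, SD = 10

def z_to_scaled(z: float) -> float:
    return 10 + 3 * z   # M = 10, SD = 3

# Assumed example: raw score of 115 on a scale with M = 100, SD = 15.
z = to_z(115, 100, 15)
print(z, z_to_t(z), z_to_scaled(z))  # 1.0 60.0 13.0
```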

23
Q

Standardisation

A

⭐️The process of administering a test to a REPRESENTATIVE SAMPLE of test takers for the purpose of ESTABLISHING NORMS.

24
Q

Norms

A

⭐️Tables of the distribution of scores on a test for specified groups in a population that allow interpretation of any individual’s score on the test by comparison to the scores for a relevant group.

❕Ideally, norm samples should be representative of the reference group. They should take into account demographic characteristics that relate to the construct of interest (e.g. Age, gender, education level, SES, ethnicity)

25
Q

Types of NORMS

A

👶🏻👦🏻👧🏻👨🏻Age
🚌 Grade
👫 Gender

26
Q

Possible implications if test administration is not followed exactly as outlined in a test manual

A

✖️Result obtained may not be representative of the individual’s abilities, behaviour or skills. This could further influence, for example, their eligibility for funding within the school system, or for Centrelink benefits.
✖️Individual may receive a diagnosis that does not represent their functioning or behaviour in that area (false positive), or, alternatively, may not receive a diagnosis when one is warranted (false negative)

27
Q

ERROR in psychological testing

A

✖️You can never control every extraneous variable when administering a test. You can only attempt to control as many as possible.
✖️Error can also occur during test development (e.g. A test may not include items to measure specific facets or factors of a broader construct of interest)
✖️Use of psychological tests in contexts for which they were not developed can call into question the validity of the conclusions being drawn
✖️If a test is translated into another language without using rigorous processes, the items may not measure what they were designed to measure in the language in which the test was originally developed

28
Q

Important factors in determining whether a psychological test is suitable for a client

A

❕Age of the client 👶🏻👦🏻👨🏻 and the age range that the test was developed for
❕In an English speaking country, the English language proficiency of the client being assessed (particularly relevant for IQ testing)
❕How long it has been since the client was assessed (if the test has been administered before)
❕Whether the client (or their parent) understands what the assessment is for
❕The psychometrics (i.e. The science of measuring mental capacities and processes) of the assessment tool you have selected
❕Whether you have the skills/necessary experience to administer and interpret the tests needed to assess this client

29
Q

Error and Error Variance

A

⭐️✖️Factors other than what a test attempts to measure that influence performance on the test.

⭐️The component of a test score attributable to sources other than the trait/ability measured.

  • Assessee
  • Assessor
  • Measuring instruments
30
Q

What is a “good test”?

A

✔️Includes clear instructions for administration, scoring, and interpretation
✔️Does not require a lot of time and money to administer, score and interpret it
✔️Psychometric soundness: reliability and validity (⭐️often it becomes a question of which combination of instruments, in addition to multiple sources of data will enable us to meet our objective)
✔️Contains adequate norms. Norm-referenced testing and assessment aims to yield information on a testtaker’s standing or rank relative to some comparison group of testtakers.

31
Q

Norm-referenced testing and assessment

A

⭐️Norms: the test performance data of a particular group of testtakers that are designed for use as a reference when evaluating or interpreting individual test scores.

⭐️Normative sample: the group of people whose performance on a particular test is analysed for reference in evaluating the performance of individual testtakers.

⭐️Norming: the process of deriving norms.

✔️These data are used as a reference source for evaluating and placing into context test scores obtained by individual testtakers.

32
Q

Things to ask yourself when picking a psychological test or measuring technique

A

❓Why use this particular instrument or method? (What is the objective of using a test and how well does the test under consideration meet that objective?)
❓Are there any published guidelines for the use of this test?
❓Is this instrument reliable?
❓Is this instrument valid?
❓Is this instrument cost-effective?
❓What inferences may be reasonably made from this test score, and how generalisable are the findings?
Factors that may affect the generalisability of findings include:
✖️The way in which test items are worded and the extent to which they are comprehensible by members of different groups
✖️How a test was administered (e.g. Did test administration deviate from the given instructions and conditions?)

33
Q

Standardisation

A

⭐️The process of administering a test to a representative sample of testtakers for the purpose of establishing norms.

A test is said to be standardised when it has clearly specified procedures for administration and scoring, typically including normative data.

34
Q

Sampling

A

⭐️Sample: a portion of the universe of people deemed to be representative of the whole population.

⭐️Sampling: the process of selecting the portion of the universe deemed to be representative of the whole population.

✔️Stratified sampling would help prevent sampling bias and ultimately aid in the interpretation of the findings

35
Q

Standardised tests

A

⭐️Standard: that which others are compared to or evaluated against.

⭐️Standardising: making or transforming something into something that can serve as a basis of comparison or judgement.
-Test developers standardise tests by developing replicable procedures for administering, scoring and interpreting the test so that there will be little deviation from examiner to examiner in the way that a standardised test is administered

36
Q

Types of NORMS

A
  1. Percentiles: an expression of the percentage of people whose score on a test or measure falls below a particular raw score; it is a ranking that conveys information about the relative position of a score within a distribution of scores
    ✖️Real differences between raw scores may be minimised near the ends of the distribution and exaggerated in the middle of the distribution.
  2. Age norms: indicate the average performance of different samples of testtakers who were at various ages at the time the test was administered
  3. Grade norms: indicate the average test performance of testtakers in a given school grade.
    ✖️Do not provide information as to the content or type of items that a student could or could not answer correctly
  4. Developmental norms: norms developed on the basis of any trait, ability, skill, or other characteristic that is presumed to develop, deteriorate, or otherwise be affected by chronological age, school grade, or stages of life.
  5. National norms
  6. National anchor norms
  7. Subgroup norms
  8. Local norms: provide normative information with respect to the local population’s performance on some test
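The percentile definition in item 1 can be sketched directly; the score distribution below is a made-up example:

```python
# Percentile rank: percentage of scores in the distribution falling below
# a particular raw score.
def percentile_rank(score: float, scores: list[float]) -> float:
    below = sum(1 for s in scores if s < score)
    return 100 * below / len(scores)

# Assumed toy distribution of ten raw scores.
sample = [55, 60, 62, 65, 70, 72, 75, 80, 85, 90]
print(percentile_rank(72, sample))  # 50.0 -> five of the ten scores fall below 72
```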
37
Q

Criterion-referenced testing and assessment

A

⭐️A method of evaluation and a way of deriving meaning from test scores by evaluating an individual’s score with reference to a set standard (e.g. To be eligible for a high school diploma, students must demonstrate at least a sixth-grade reading level)

The focus is on the testtaker’s performance: what they can or cannot do

✖️Important information about an individual’s performance relative to other testtakers is lost
✖️Has little or no meaningful application at the upper end of the knowledge/skill continuum; this is better identified by tests that employ norm-referenced interpretations

38
Q

Reliability coefficient

A

⭐️An index of reliability, a proportion that indicates the ratio between the true score variance on a test and the total variance.

39
Q

Types of ERROR

A
  1. Measurement error: all of the factors associated with the process of measuring some variable, other than the variable being measured.
  2. Random error: error in measuring a targeted variable caused by unpredictable fluctuations and inconsistencies of other variables in the measurement process.
  3. Systematic error: error in measuring a variable that is typically constant or proportionate to what is presumed to be the true value of the variable being measured.
40
Q

Sources of Error Variance

A

⭐️During test construction:
-Item/content sampling: variation among items within a test as well as variation among items between tests.
⭐️During test administration:
-Factors that influence the testtaker’s attention or motivation (e.g. Room temperature, level of lighting, amount of ventilation and noise, events of the day, instruments used to enter responses, pressing emotional problems, physical discomfort, lack of sleep, the effects of drugs or medication, illness, therapy, formal learning experiences, casual life experiences).
-Examiner-related variables (e.g. Nodding, eye movements, their level of professionalism)
⭐️Test scoring and interpretation
-Scorers (e.g. In behavioural measures of social skills, examiners must score or rate the patient with respect to the variable “social relatedness”)
-Technical glitches in computer programs may contaminate the data

Other sources:
✖️Sampling error (the extent to which the population of voters in the study actually was representative of voters in the election)
✖️Methodological error (e.g. Interviewers may not have been trained properly, wording in the questionnaire may have been ambiguous)

41
Q

Parallel-forms and Alternate-forms reliability estimates

A

⭐️Coefficient of equivalence: used to evaluate the degree of the relationship between various forms of a test

Parallel forms: exist when, for each form of the test, the means and the variances of observed test scores are equal; scores obtained on parallel tests correlate equally with other measures
⭐️Parallel forms reliability: an estimate of the extent to which item sampling and other errors have affected test scores on versions of the same test, when for each form of the test, the means and variances of observed test scores are equal.

Alternate forms: different versions of a test that have been constructed so as to be parallel.
⭐️Alternate forms reliability: an estimate of the extent to which these different forms of the same test have been affected by item sampling error (testtakers doing better or worse on a specific form of the test not as a function of their true ability, but simply because of the particular items that were selected for inclusion in the test), or other error.

42
Q

Reliability:

Measures of internal consistency estimate of reliability

A
  1. Split-half reliability estimates
    ⭐️Obtained by correlating two pairs of scores obtained from equivalent halves of a single test administered once.
    ❕You should not simply split a test in the middle as this would likely raise or lower the reliability coefficient.
    ❕Different amounts of fatigue for the first as opposed to the second part of the test, different amounts of test anxiety, and differences in item difficulty are all factors to consider.
  2. ⭐️Inter-item consistency: the degree of correlation among all the items on a scale.
    ✔️Useful in assessing HOMOGENEITY of the test (I.e. Those that contain items that measure a single trait; the more homogenous the more inter-item consistency) and HETEROGENEITY of the test (i.e. The degree to which a test is composed of items that measure more than one trait)
    ⭐️✔️Test homogeneity is desired because it allows relatively straightforward test-score interpretation.
    ✖️Insufficient tool for measuring multifaceted psychological variables (e.g. Intelligence, personality)
  3. Cronbach’s coefficient alpha: the mean of all possible split-half correlations, corrected by the Spearman-Brown formula.
    Ranges from 0 (absolutely no similarity) to 1 (perfectly identical); a value above .90 is too high and indicates redundancy in the items
  4. Average proportional distance (APD): a measure that focuses on the degree of difference that exists between item scores.
    ✔️Compared to Cronbach’s alpha, it is not connected to the number of items on a measure.

❕All indices of reliability provide an index that is a characteristic of a particular group of test scores, not of the test itself.
❕Measures of reliability are estimates, and estimates are subject to error. The precise amount of error inherent in a reliability estimate will vary with various factors (e.g. the sample of testtakers from which the data were drawn).
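Two of the formulas named above can be sketched numerically; the half-test correlation and variance figures below are assumed for illustration:

```python
# Spearman-Brown correction: estimates full-length reliability from a
# split-half correlation (r_sb = 2r / (1 + r)).
def spearman_brown(r_half: float) -> float:
    return 2 * r_half / (1 + r_half)

# Cronbach's alpha from item variances and total-score variance:
# alpha = k/(k-1) * (1 - sum(item variances) / total variance).
def cronbach_alpha(item_variances: list[float], total_variance: float) -> float:
    k = len(item_variances)
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

# Assumed values: halves correlate at .70; a 4-item scale whose item
# variances are each 1.0, with total-score variance of 10.0.
print(round(spearman_brown(0.70), 3))             # 0.824
print(round(cronbach_alpha([1.0] * 4, 10.0), 3))  # 0.8
```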

43
Q

Reliability:

Measures of inter-scorer reliability

A

⭐️If the reliability coefficient is high, the prospective test user knows that test scores can be derived in a systematic, consistent way by various scorers with sufficient training.

Coefficient of inter-scorer reliability: determines the degree of consistency among scores in the scoring of a test.

44
Q

The standard error of measurement

A

⭐️Provides a measure of the precision of an observed test score. It provides an estimate of the amount of error inherent in an observed score or measurement.

In general the relationship between the SEM and the reliability of a test is inverse; the higher the reliability of a test, the lower the SEM.

It is used to estimate or infer the extent to which an observed score deviates from a true score.

A confidence interval represents a range or band of test scores that is likely to contain the true score.
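A minimal sketch of the SEM and confidence-interval arithmetic; the SD and reliability values below are assumed:

```python
import math

# SEM = SD * sqrt(1 - reliability); higher reliability -> lower SEM.
def sem(sd: float, reliability: float) -> float:
    return sd * math.sqrt(1 - reliability)

# 95% confidence band around an observed score: +/- 1.96 SEM.
def ci_95(observed: float, sem_value: float) -> tuple[float, float]:
    return observed - 1.96 * sem_value, observed + 1.96 * sem_value

# Assumed example: a scale with SD = 15 and reliability of .91.
e = sem(15, 0.91)        # 15 * sqrt(0.09) = 4.5
lo, hi = ci_95(100, e)
print(round(e, 2), round(lo, 2), round(hi, 2))  # 4.5 91.18 108.82
```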

45
Q

The standard error of the difference between scores

A

⭐️A statistical measure that can aid a test user in determining how large a difference should be before it is considered statistically significant.

✔️Enables comparisons to be made between scores (e.g. How did this individual’s performance on test 1 compare with his or her performance on test 2?)
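Under standard assumptions (independent errors, both tests on the same score scale), this statistic combines the two tests’ SEMs; the SEM values below are invented:

```python
import math

# Standard error of the difference between two scores:
# sqrt(SEM1^2 + SEM2^2).
def se_difference(sem_1: float, sem_2: float) -> float:
    return math.sqrt(sem_1 ** 2 + sem_2 ** 2)

# Assumed SEMs of 3.0 and 4.0 for the two tests.
se_diff = se_difference(3.0, 4.0)
print(se_diff)  # 5.0 -> a difference of about 1.96 * 5 = 9.8 points or more
                # would be significant at the .05 level
```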

46
Q

Types of validity:
Criterion-related validity:
The validity coefficient

A

⭐️Provides a measure of the relationship between test scores and scores on the criterion measure (e.g. The correlation coefficient computed from a score on a psychodiagnostic test and the criterion score assigned by psychodiagnosticians).

❕The type of correlation coefficient used depends on the type of data, sample size, shape of the distribution, etc.

✖️Attrition in the number of subjects may adversely affect the validity coefficient

⭐️It should be high enough to result in the identification and differentiation of testtakers with respect to target attribute(s).
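Since the validity coefficient is simply a correlation, it can be computed directly; the Pearson r sketch below uses made-up data:

```python
# Pearson correlation between test scores and criterion scores.
def pearson_r(x: list[float], y: list[float]) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Assumed toy data: five testtakers' screening scores vs. clinician ratings.
test_scores = [10, 12, 14, 16, 18]
criterion = [2, 3, 3, 5, 6]
print(round(pearson_r(test_scores, criterion), 2))  # 0.96
```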

47
Q

Evidence of CONSTRUCT VALIDITY

A
  1. The test is homogeneous, measuring a single construct
    -correlations between subtest scores and total test score are generally reported as evidence of homogeneity
  2. Test scores increase or decrease as a function of age, the passage of time, or an experimental manipulation as theoretically predicted
  3. Test scores obtained after some event or the mere passage of time differ from pretest scores as theoretically predicted
    ❕Should include a control group to rule out alternative explanations of the findings
  4. Test scores obtained by people from distinct groups vary as predicted by the theory
  5. Test scores correlate with scores on other tests in accordance with what would be predicted from a theory that covers the manifestation of the construct in question
48
Q

Types of TEST BIAS

A
  1. Rating error
  2. Leniency error
  3. Severity error
  4. Central tendency error
  5. Halo effect
49
Q

Clinical interview

A

⭐️Like a psychological test in that it is a method for gathering data about an individual

🏆Goal: To have the interviewee explore his or her situation.

✔️Allows for results of assessment to be put in context (e.g. Client presents as very sad BUT you find out someone close to them recently passed away)

50
Q

Principles of effective interviewing

A

⭐️Designed to facilitate the flow of communication

✔️The ‘right’ attitude (warmth, genuineness, acceptance, understanding, openness, honesty, fairness. Be interested, be involved).
🚫Avoid judgemental or evaluative statements, probing statements, hostile responses, false reassurances, interrogating people (‘why’ questions are not usually a good idea as they may be accusatory)

✔️Use open-ended questions

✔️Keep the interaction flowing: active listening, transitional phrases

✔️Make ‘understanding’ statements: let people know you understand what they are saying (repeat or paraphrase), summarise what you have been told, seek clarification if you are not sure, communicate empathy (how you think the interviewee feels)
✔️✔️Extremely helpful in enabling the interviewee to uncover and explore underlying feelings.

51
Q

Structured interviewing

A

⭐️A printed set of questions is used. The questions are asked in a specific order or sequence.
✔️Standardised set of rules for probing so that all interviewees are handled in the same manner; allows for norms to be developed and applied
✔️Used to determine if an individual meets the criteria (DSM) for a particular mental disorder

52
Q

Unstructured interviewing

A

⭐️No specific questions or guidelines. Questions are decided on the basis of how the interviewee responds together with a broad map of the areas to be covered.

✔️Allows the individual to be heard

53
Q

Interviewing:

Case History

A

🏆Goal: to understand the interviewee’s background so that you can interpret individual test scores.

Possible areas:
🎓Education
⚽️Hobbies/interests 
🏆Accomplishments
👫Relationship experiences
👯Social networks
👮Job history 
👨‍👩‍👧Family history
54
Q

Mental Status Examination (MSE)

A

⭐️A systematic appraisal of the appearance, behaviour, mental functioning, and overall demeanour of the person.

It provides a “snapshot” of the person’s functioning at a given point in time.

❕Important in determining a person’s capacity to function, and whether psychiatric follow-up is required.
❕Judgements need to be considered at the developmental level of the person and age-appropriateness of the behaviours (e.g. Is a person dressed like a child?)

55
Q

Mental Status Examination (MSE):

Components

A
✅Appearance
✅Behaviour
✅Mood (longer lasting experience of feelings - "season") and affect (feelings you may have in the moment - "weather")
✅Speech
✅Cognition
✅Thoughts - content and process 
✅Perception
✅Insight
✅Judgement
56
Q

Referral question

A

⭐️A request for psychological testing or assessment usually raised by a client or other professionals who work with the client; it can be general or specific

Formulation of a clear and specific referral question facilitates:
✅Derivation of hypotheses about a case
✅Selection of appropriate psychological assessment instruments
✅Interpretation of results
✅Provision of recommendations

57
Q

DSM-5:

20 Diagnostic Categories

A
  1. Neurodevelopmental Disorders (ASD)
  2. Schizophrenia and Psychotic Disorders (Schizophrenia)
  3. Bipolar and Related Disorders (Bipolar I Disorder)
  4. Depressive Disorders (Disruptive Mood Dysregulation Disorder)
  5. Anxiety Disorders (GAD)
  6. Obsessive-Compulsive and Related Disorders (BDD)
  7. Somatic Symptom and Related Disorders (Somatic Symptom Disorder)
  8. Feeding and Eating Disorders (Anorexia Nervosa)
  9. Elimination Disorders (Encopresis)
  10. Sleep-Wake Disorders (Narcolepsy)
  11. Sexual Dysfunctions (Erectile Disorder)
  12. Gender Dysphoria
  13. Disruptive, Impulse-Control, and Conduct Disorders (Oppositional Defiant Disorder)
  14. Substance-Related and Addictive Disorders (Substance Use Disorders)
  15. Neurocognitive Disorders (Major and Mild Neurocognitive Disorders)
  16. Trauma and Stressor Related Disorders (Adjustment Disorders)
  17. Dissociative Disorders (Dissociative Identity Disorder)
  18. Personality Disorders (BPD)
  19. Paraphilic Disorders (Transvestic Disorder)
  20. Other Mental Disorders (Other Specified Mental Disorder due to Another Medical Condition)
58
Q

Achievement tests

A

⭐️Designed to measure the degree of learning that has taken place as a result of exposure to a relatively defined learning experience (i.e. Past learning) (e.g. How to prepare dough for use in making pizza)

  • tend to draw on narrower and more formal learning experiences
  • tend to focus on the learning that has occurred as a result of relatively structured input
59
Q

Achievement tests:

Uses

A

✅To gauge student progress toward instructional objectives
✅To compare an individual’s accomplishment to peers
✅To determine what instructional activities and strategies might best propel the students toward educational objectives
✅Help school personnel make decisions about a student’s placement in a particular class, acceptance into a program, or advancement to a higher grade level
✅Help in gauging the quality of instruction in a particular class, school, school district, or state
✅To screen for difficulties; may precede administration of more specific tests designed to identify areas that may require remediation

60
Q

Achievement tests:

Types

A
  1. Teacher developed tests
  2. Standardised tests
    - Group administered (NAPLAN)
    - Individually administered (WIAT-III, WJ-IV ACH)
    ✅Allow us to see how they compare to their peers
    ✅❕Grade norms are more appropriate than age norms for achievement tests because you know that generally speaking the children have had the same exposure to educational materials
61
Q

Achievement tests:
Wechsler Individual Achievement Test - Third edition (WIAT-III):
Subtests

A

Reading 📚

  1. Word Reading (participant reads a list of words from a card that gradually increase in difficulty).
  2. Reading comprehension (participant reads passages either to themselves or aloud and then answers literal and inferential questions based on the text. They have the text to refer back to when the questions are asked).
  3. Pseudoword Decoding (participant reads a list of nonsense words that gradually increase in difficulty; they must draw on their knowledge of phonological processing and word rules).
  4. Oral Reading Fluency (participant reads aloud sentences and short passages and then answers questions based on the text. They have the text to refer back to when the questions are asked; reading speed, expressive language, and comprehension are all assessed).

Writing ✒️

  1. Alphabet Writing Fluency (child writes as many alphabet letters as they can in 30 seconds; measures automaticity in written letter formation and sequencing).
  2. Spelling (participant writes out the spelling of words that gradually increase in difficulty; word is read aloud and also put in a sentence for context).
  3. Sentence Composition (participant is required to join 2 or 3 sentences using a conjunction. Sometimes the conjunction is given to the student. Response is scored based on semantics, grammar and mechanics).
  4. Essay Composition (participant writes a paragraph or letter to the editor. Paragraph/letter to the editor has a time limit. Response is scored based on organisation, grammar and mechanics).

Mathematics ➕➖➗✖️

  1. Math Problem Solving (participant solves problems presented visually. They may use a pencil and paper to aid problem solving if needed).
  2. Numeric operations (participant solves numeric mathematical problems with a pencil and paper, which gradually increase in difficulty).
  3. Math fluency- addition, subtraction, multiplication (measure speed and accuracy of participant’s math calculations; participant solves as many sums as they can in a 60 second time limit)

Oral Language 🐵

  1. Listening comprehension (participant is given a word or sentence and asked to point to which picture corresponds with that word or sentence).
  2. Oral expression (participant engages in word naming, repeats sentences, and/or says the word that best corresponds to a picture).
  3. Oral word fluency (participant lists as many words as they can in 60 seconds).
62
Q

Aptitude tests

A

⭐️Tend to focus more on informal learning or life experiences; typically used to assess future learning potential

63
Q

Aptitude tests:

Potential uses

A

To measure readiness to:

  • enter a particular preschool program
  • enter elementary school
  • successfully complete a challenging course of study in secondary school
  • successfully complete college-level work
  • successfully complete graduate-level work, including a course of study at a professional or trade school

⭐️The operative assumption is that an individual who was able to master certain basic skills should be able to master more advanced skills

-Tend to draw on a broader range of information and abilities, and may be used to predict a wider variety of variables

64
Q

Clinical neuropsychology

A

⭐️Conducting neuropsychological assessment on individuals with or suspected to have a brain injury or impairment.

Brain INJURY: organic brain injuries (e.g. Head trauma, hypoxia)

Brain IMPAIRMENTS: deficits in brain functioning that have a genetic component (e.g. A neurodevelopmental disorder) or are neurodegenerative (e.g. Alzheimer’s)

  • Providing psycho-education, counselling or psychotherapy for individuals with brain injury or impairment and/or their families
  • Planning, conducting and evaluating neuropsychological rehabilitation for individuals with brain injury or impairment based on the result of neuropsychological assessment
65
Q

Neuropsychological assessment

A

⭐️Application of neuropsychological tests and other data-collection techniques to answer referral questions or solve problems for individuals with known or suspected brain injury or impairment.

✅Tests can be used to assess where damage to the brain may be

⭐️Assessment links test performance to areas of the brain that may be damaged.

66
Q

Cognitive testing:

Uses

A
✅Understand learning difficulties
✅Predict occupational success
✅Provide vocational guidance 
✅Determine access to Government funding
✅Assess brain injury (pre and post)
✅Determine diagnoses such as intellectual disability, specific learning disorder, language disorder
✅Identify giftedness
✅Gauge general strengths and weaknesses

❕Needs to be used in conjunction with other sources of information (e.g. Behavioural observations, reports from stakeholders, etc.)

67
Q

Cognitive testing

A

⭐️Provides test scores that indicate level of functioning in underlying cognitive construct/s being measured (e.g. Processing speed- the ability to perform simple cognitive tasks quickly and fluently)

68
Q

Construct Irrelevant Variance

A

⭐️When an assessment is too broad, containing excess reliable variance associated with other distinct constructs (e.g. PRI was found to be a measure of Gf and Gv).

-Can occur at the broad ability and subtests level

69
Q

Cross-Battery Assessment (XBA)

A

⭐️An assessment method comprised of pillars, guidelines, and procedures that assist practitioners in measuring a wider and more in-depth range of cognitive abilities and processes than that represented by a single ability battery (cognitive or achievement) in a manner that is psychometrically respectable and based on contemporary theory.

70
Q

Methods of Personality test construction

A
  1. Content (logical/rational) Method
  2. Theory Approach
  3. Factor Analysis (Data Reduction) Approach
  4. Criterion Keying Approach
  5. Rational-empirical approach
71
Q

Methods of Personality test construction:

Content (logical/rational) Method

A

⭐️Items created that are logically related to the construct being assessed

✅Usually easy to generate items that have good face validity
✖️But high face validity makes it easy to distort responses (e.g. BDI-II)

✅Clinical experience can be helpful in item creation

72
Q

Methods of Personality test construction:

Theory Approach

A

⭐️Items are tied closely to a particular theory such that items are designed to measure traits or types presumed to exist on the basis of the theory.

✅Operationalises theory and allows for further research and development of theory
✖️Utility of the test is limited by the validity of the theory, and it can be hard to determine whether the test adequately reflects the theory (e.g. A test based on psychoanalytic theory would have items to assess id, ego and superego functioning).

73
Q
Methods of Personality test construction:
Factor Analysis (Data Reduction) Approach
A

⭐️FA used to reduce a large number of observed phenomena (i.e. Test items) into a minimum number of meaningful variables (or factors)

-Results tend to guide theory development, rather than vice versa

✖️Empirically identifies basic personality dimensions by figuring out ‘what goes with what’ BUT results depend on the initial pool of items and different FA methods can produce different results.

74
Q

Methods of Personality test construction:

Criterion Keying Approach

A

⭐️Empirical (and atheoretical) approach that selects items on the basis of their ability to differentiate between defined groups (e.g. Schizophrenic and non-schizophrenic).

✖️Groups are separated by a clearly defined cut-score, BUT the distributions of the groups tend to overlap.

75
Q

Methods of Personality test construction:

Rational-empirical Approach

A

⭐️A combination of all of the methods of personality test construction. It is a way of constructing psychological tests that relies on both reasoning from what is known about the psychological construct to be measured in the test, and collecting and evaluating data about how the test and the items that comprise it actually behave when administered to a sample of respondents.

Development of a test by means of this approach is as follows:

  1. Create a large, preliminary pool of test items from which the test items for the final form of the test will be selected.
  2. Administer the preliminary pool of items to at least two groups of people:
    a) A criterion group composed of people known to possess the trait being measured.
    b) A randomly selected group of people who may or may not possess the trait being measured.
  3. Conduct an item analysis to select items indicative of membership in the criterion group. Items in the preliminary pool that discriminate between membership in the two groups in a statistically significant fashion will be retained and incorporated in the final form of the test.
  4. Obtain data on test performance from a standardisation sample of testtakers who are representative of the population from which future testtakers will come. The test performance for Group b members on items incorporated in the final form of the test may be used for this purpose if deemed appropriate. The performance of Group b members on the test would then become the standard against which future testtakers will be evaluated. After the mean performance of Group b members on the individual items of the test has been identified, future testtakers will be evaluated in terms of the extent to which their scores deviate in either direction from the Group b mean.
76
Q

Personality tests

A

⭐️Designed to measure TYPICAL behaviour

  • no right or wrong answers
  • self-report method of assessment
77
Q

Response set

A

⭐️A person’s tendency, either conscious or unconscious, to respond to items in a certain way, independent of the person’s true feelings about the item.

✖️❕Responses to test items are independent of the construct the test is trying to measure
✖️❕Results in construct irrelevant variance which threatens validity

Common types:

  • Acquiescence
  • Socially Desirable Responding (e.g. “Faking good”, “faking bad”; self-deceptive enhancement, impression management- Paulhus’ Two-Factor Model)
78
Q

Personality

A

⭐️An individual’s unique constellation of psychological traits that is relatively stable over time.

79
Q

Personality trait

A

⭐️Any distinguishable, relatively enduring way in which one individual varies from another.

80
Q

Personality Assessment Inventory (PAI)

A

✅In contrast to the MMPI-2, the clinical scales were constructed to measure particular constructs that are represented by the names of the individual scales, with content validity and discriminant validity playing important roles in the development of the PAI.

✅No overlapping scales (improves discriminant validity)
✅4 response options allowing gradation of responses
✅Extra information from scales such as:
-Suicide ideation
-Aggression
-Attitudes towards treatment

❕It is not a diagnostic measure. Its results should be used to inform diagnostic information derived from other sources (e.g. structured interviews)