PSYC4022 Testing and Assessment Week One Gathering Information Flashcards

1
Q

Trait

A

Any distinguishable, relatively enduring way in which one individual varies from another

2
Q

States

A

Any distinguishable way in which one individual varies from another, but less enduring than traits.

3
Q

Sensation Seeking

A

the need for varied, novel, and complex sensations and experiences and the willingness to take physical and social risks for the sake of such experiences.

4
Q

Cumulative Scoring

A

Inherent in cumulative scoring is the assumption that the more the testtaker responds in a particular direction as keyed by the test manual as correct or consistent with a particular trait, the higher that testtaker is presumed to be on the targeted ability or trait.

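The cumulative scoring rule can be sketched in a few lines of Python (the key and response set below are made up for illustration):

```python
# Hypothetical 5-item test keyed so True = the trait-consistent response.
key = [True, False, True, True, False]
responses = [True, False, False, True, False]

# Cumulative score: one point for each response matching the key,
# so a higher total indicates more of the targeted trait or ability.
score = sum(r == k for r, k in zip(responses, key))
print(score)  # 4
```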
5
Q

Error

A

Long-standing assumption that factors other than what a test attempts to measure will influence performance on a test.

6
Q

Error Variance

A

The component of a test score attributable to sources other than the trait or ability measured.

7
Q

Classical Test Theory (CTT) or true score theory

A

The assumption is made that each testtaker has a true score on a test that would be obtained but for the action of measurement error.

8
Q

Psychometric Soundness

A

Reliability and Validity.

9
Q

Reliability

A

The consistency of the measuring tool.

10
Q

Validity

A

A test is considered valid for a particular purpose if it does, in fact, measure what it purports to measure.

11
Q

Norms

A

Also referred to as normative data, norms provide a standard with which the results of measurement can be compared.

12
Q

Norm

A

behaviour that is usual, average, normal, standard, expected or typical.

13
Q

Normative Sample

A

The group of people whose performance on a particular test is analyzed for reference in evaluating the performance of individual testtakers.

14
Q

to norm or norming

A

The process of deriving norms.

15
Q

Standardisation or Test Standardisation

A

The process of administering a test to a representative sample of testtakers for the purpose of establishing norms.

16
Q

Sampling

A

The process of selecting the portion of the universe deemed to be representative of the whole population.

17
Q

Sample

A

A portion of the universe of people deemed to be representative of the whole population.

18
Q

Stratified Sampling

A

The process of including every relevant subgroup in your sample so that it is representative of the population, e.g. all religions, races, etc. included in the Manhattan area.

19
Q

Stratified Random Sampling

A

A stratified sample in which everyone in the population has the same chance of being included.

20
Q

Standard Error of measurement

A

A statistic used to estimate the extent to which an observed score deviates from a true score.

21
Q

Standard Error of Estimate

A

In regression, an estimate of the degree of error involved in predicting the value of one variable from another

22
Q

Standard Error of the mean

A

A measure of Sampling Error

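A minimal sketch, assuming the usual formula SE of the mean = SD / √n (values illustrative):

```python
import math

sd = 15.0  # sample standard deviation (illustrative)
n = 25     # sample size (illustrative)

# Standard error of the mean: how much sample means are expected
# to vary from the population mean due to sampling error.
se_mean = sd / math.sqrt(n)
print(se_mean)  # 3.0
```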
23
Q

Standard Error of Difference

A

A statistic used to estimate how large a difference between two scores should be before the difference is considered statistically significant.

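One common formula combines the SEMs of the two scores, SE_diff = √(SEM₁² + SEM₂²); a small sketch with illustrative values:

```python
import math

sem_1 = 3.0  # SEM of the first score (illustrative)
sem_2 = 4.0  # SEM of the second score (illustrative)

# Standard error of the difference between the two scores.
se_diff = math.sqrt(sem_1 ** 2 + sem_2 ** 2)
print(se_diff)  # 5.0
```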
24
Q

Purposive Sampling

A

Arbitrarily selected sample based on representativeness of the population.

25
Q

Incidental Sampling/ Convenience Sampling

A

A sample based on the greatest level of convenience.

26
Q

Percentile

A

The percentage of people who fall below a particular raw score.

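The definition translates directly into code; a sketch using a made-up set of normative scores:

```python
# Made-up normative scores.
scores = [48, 52, 55, 60, 60, 63, 67, 70, 74, 81]
raw = 63

# Percentile rank: percentage of people who fall below the raw score.
percentile = 100 * sum(s < raw for s in scores) / len(scores)
print(percentile)  # 50.0
```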
27
Q

Percentage Correct

A

The distribution of raw scores; the number of items answered correctly, multiplied by 100 and divided by the total number of items.

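As a quick sketch of the arithmetic (item counts are made up):

```python
items_correct = 42  # illustrative
items_total = 60    # illustrative

# Percentage correct: items answered correctly x 100 / total items.
percentage_correct = items_correct * 100 / items_total
print(percentage_correct)  # 70.0
```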
28
Q

Age Norms/ Age-Equivalent Scores

A

Indicate the average performance of different samples of testtakers who were at various ages at the time the test was administered

29
Q

Grade Norms

A

Are developed by administering the test to representative samples of children over a range of consecutive grade levels.

30
Q

Developmental Norms

A

Norms, such as age or grade norms, based on any characteristic presumed to develop, deteriorate, or otherwise be affected by chronological age, school grade or stage of life.

31
Q

National Norms

A

are derived from a normative sample that was nationally representative of the population at the time the norming study was conducted.

32
Q

National Anchor Norms

A

Norms that allow the scores on one test to be anchored against the scores on another test.

33
Q

Equipercentile Method

A

the equivalency of scores on different tests is calculated with reference to corresponding percentile scores.

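A toy sketch of the idea, using two tiny made-up score distributions: a score on Test A is matched to the Test B score holding the same percentile position.

```python
# Made-up ordered score distributions for two tests.
test_a = sorted([10, 12, 15, 18, 20])
test_b = sorted([55, 60, 68, 74, 80])

# A score of 15 is 3rd of 5 ordered Test A scores; its equipercentile
# equivalent on Test B is the score at the same ordered position.
rank = test_a.index(15)
equivalent_b = test_b[rank]
print(equivalent_b)  # 68
```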
34
Q

Subgroup Norms

A

Norms for a defined segment (subgroup) of the normative sample, e.g. by age, sex or region.

35
Q

Local Norms

A

Provide normative information with respect to the local population’s performance on some test.

36
Q

Fixed Reference Group Scoring System

A

The distribution of scores from one group of testtakers (the fixed reference group) is used as the basis for the calculation of test scores on future administrations of the test.

37
Q

Norm Referenced

A

When you interpret a score by comparing it with the scores of other testtakers on the same test.

38
Q

Criterion Referenced Evaluation

A

When you interpret a score by comparing it with some other criterion or standard, rather than with other testtakers' scores.

39
Q

Psychological Testing

A

involves measuring psychological variables by means of devices or procedures designed to obtain a sample of behaviour.

40
Q

Psychological Measurement

A

is the integration of psychological data gathered via psychological tests, interviews, case studies and behavioural observation, for the purpose of making a psychological evaluation.

41
Q

Testing

A

The administration of a psychological test; an integral part of assessment.

42
Q

Assessment

A

Involves much more than testing.

43
Q

Distal

A

Furthest From

44
Q

Proximal

A

Closest To

45
Q

Reliability Coefficient

A

The proportion that indicates the ratio of the true score variance on a test to the total variance.

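In symbols, r = true variance / total variance; a sketch with illustrative variance components:

```python
true_variance = 80.0   # illustrative
error_variance = 20.0  # illustrative

# Reliability coefficient: proportion of total score variance
# attributable to true differences.
total_variance = true_variance + error_variance
reliability = true_variance / total_variance
print(reliability)  # 0.8
```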
46
Q

Variance

A

A useful statistic for describing test score variability (the SD squared).

47
Q

True Variance

A

Variance from true differences; the reliable component of test score variance.

48
Q

Error Variance

A

Variance from irrelevant, random sources.

49
Q

Random Error

A

is a source of error in measuring a targeted variable caused by unpredictable fluctuations and inconsistencies of other variables in the measurement process. Sometimes referred to as “noise”

50
Q

Systematic Error

A

Error that is typically constant or proportionate to what is presumed to be the true value being measured.

51
Q

Test-Retest Reliability

A

Test-Retest Reliability is an estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the same test.

52
Q

Coefficient of Stability

A

When the interval between testings in a test-retest study is greater than six months, the estimate of test-retest reliability is often referred to as the coefficient of stability.

53
Q

Coefficient of Equivalence

A

The degree of the relationship between various forms of a test can be evaluated by means of an alternate-forms or parallel-forms coefficient of reliability, often termed the coefficient of equivalence.

54
Q

Split-Half Reliability

A

An estimate of split-half reliability is obtained by correlating two pairs of scores obtained from equivalent halves of a single test administered once.

55
Q

Spearman Brown Formula

A

Allows a test developer or user to estimate internal consistency reliability from a correlation of two halves of a test.

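For two halves, the correction is r_full = 2r_half / (1 + r_half); a sketch with an illustrative half-test correlation:

```python
r_half = 0.6  # correlation between the two half-tests (illustrative)

# Spearman-Brown: estimated reliability of the full-length test.
r_full = (2 * r_half) / (1 + r_half)
print(round(r_full, 3))  # 0.75
```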
56
Q

SEM Formula

A

SEM = SD × √(1 − r), where r is the reliability coefficient.

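Applying the formula SEM = SD × √(1 − r) with illustrative values:

```python
import math

sd = 10.0  # test standard deviation (illustrative)
r = 0.91   # reliability coefficient (illustrative)

# Standard error of measurement: SEM = SD * sqrt(1 - r).
sem = sd * math.sqrt(1 - r)
print(round(sem, 2))  # 3.0
```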
57
Q

Z-T Score Conversion

A

New score = z(SD) + M; multiply the z score by the new scale's SD and add the new scale's mean (for T scores: T = 10z + 50).

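A sketch of the conversion onto the T scale (mean 50, SD 10):

```python
z = 1.5  # illustrative z score

# Convert z to a T score: multiply by the new SD, add the new mean.
t_score = z * 10 + 50
print(t_score)  # 65.0
```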
58
Q

SD Formula

A

SD = √( Σ(X − M)² / n )

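The population SD computed step by step on a made-up score set:

```python
import math

scores = [4, 8, 6, 2]          # illustrative raw scores
m = sum(scores) / len(scores)  # mean = 5.0

# SD: square root of the mean squared deviation from the mean.
sd = math.sqrt(sum((x - m) ** 2 for x in scores) / len(scores))
print(round(sd, 3))  # 2.236
```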
59
Q

What is the Objective of Testing?

A

To obtain some gauge, usually numerical in nature, with regard to an ability or attribute.

60
Q

What is the Objective of Assessment?

A

Typically, to answer a referral question, solve a problem, or arrive at a decision through the use of tools of evaluation.

61
Q

What is the process of Testing?

A

Testing may be individual or group in nature. After test administration, the tester will typically add up the number of correct answers or the number of certain types of responses… with little if any regard for the how or the mechanics of such content.

62
Q

What is the Process of Assessment?

A

Assessment is typically individualised. In contrast to testing, assessment more typically focuses on how an individual processes rather than simply the result of that processing.

63
Q

What is the Role of Evaluator in Testing?

A

The tester is not key to the process; practically speaking, one tester may be substituted for another tester without appreciably affecting the evaluation.

64
Q

What is the Role of the Evaluator in Assessment?

A

The assessor is key to the process of selecting tests and/or other tools of evaluation as well as in drawing conclusions from the entire evaluation.

65
Q

What is the Skill of the Evaluator in Testing?

A

Testing typically requires technician-like skills in terms of administering and scoring a test as well as interpreting a test result.

66
Q

What is the Skill of the Evaluator in Assessment?

A

Assessment typically requires an educated selection of tools of evaluation, skill in evaluation, and thoughtful organisation and integration of data.

67
Q

What is the Outcome of Testing?

A

Typically, testing yields a test score or series of test scores.

68
Q

What is the Outcome of Assessment?

A

Typically, assessment entails a logical problem-solving approach that brings to bear many sources of data designed to shed light on a referral question.

69
Q

What are the 5 Micro-Skills of Interviewing?

A

Interview Micro-Skills

  1. Squarely face the client
  2. Open posture
  3. Lean toward the client
  4. Eye contact
  5. Relax
70
Q

What the things to avoid in an Interview?

A

Interview Micro-Skills - Things to Avoid

  1. Non-Listening
  2. Partial Listening
  3. Tape-recorder Listening
  4. Rehearsing
  5. Interruptions
  6. Question threat
71
Q

Name 8 Tools of Assessment

A

Tools of Assessment

  1. Tests
  2. Portfolio Assessment
  3. Performance-based assessment
  4. the case history
  5. behavioural observation
  6. role-play tests
  7. computerised assessment
  8. assessment using simulations or video
72
Q

Name 16 sources of information for an Assessment

A
  1. Referral
  2. Consent/ Limitations
  3. Procedures/ Documents
  4. Mental Status Examination
  5. Psychosocial History
  6. Mental Health History
  7. History of Present Problem
  8. Past Intervention/ Responses
  9. Response Style/ Psychometric Testing
  10. Psychological Formulation
  11. Diagnosis
  12. Client Goals/ Proposals
  13. Risk
  14. Recommendations/ Intervention Plans
  15. Report & Technical Addendum
  16. Informing Interview
73
Q

Give a brief History of the Clinical Interview

A
  1. Snyder (1945) - non-directive approach encouraged self-exploration
  2. Strupp (1958) - importance of interviewer experience
  3. Rogers (1961) - therapeutic alliance and client-centred approaches
  4. 1960s fracturing of approaches
  5. 1980s greater granularity in disorders paved way for very specific diagnostic criteria
  6. 1980s hybrid of structured and non-structured
  7. 1990s managed health care's impact on practice
  8. 1990s computer-assisted interviewing
  9. 1994 - single session therapy
  10. 1990s repressed memories
  11. 2000s cultural awareness
74
Q

Name 4 Biases for Assessment

A
  1. Halo
  2. Confirmatory
  3. Physical Attractiveness (Gilmore et al, 1986)
  4. Interviewee distortions
75
Q

Mehrabian (1972) broke down information received into verbal and non-verbal information. What % of information is gathered through facial expressions, tone and content of what is being said?

A
  1. 55% facial expression.
  2. 38% tone
  3. 7% content of what is being said
76
Q

What are the phases of clinical assessment? (According to Maloney & Ward (1976)

A
  1. Phase 1 – Initial Data Collection.
  2. Phase 2 – Development of Inferences
  3. Phase 3 – Reject, Modify or Accept Inferences
  4. Phase 4 – Develop and Integrate Hypothesis
  5. Phase 5 – Dynamic Model of the Person
  6. Phase 6 – Situational Variables
  7. Phase 7 – Prediction of Behaviour
77
Q

What are 3 sources of error variance?

A

Sources of Error Variance
  1. Assessees are sources of error variance.
  2. Assessors are also sources of error variance.
  3. Measuring instruments are sources of error variance.

78
Q

There are 11 cues from which you can take information for an assessment. What are they?

A
  1. Personal Information Cues
  2. Medical Cues
  3. Immediacy Cues
  4. Speech Cues
  5. Language Cues
  6. Physical Cues
  7. Cognitive Cues
  8. Risk Assessment Cues
  9. Collateral Information Cues
  10. Overt Behavioural Cues
  11. Personal History Cues
79
Q

Give some examples of a Personal Information Cue

A

Gender, Occupation, race, religious affiliations, socioeconomic status, appearance, lifestyle factors.

80
Q

Give some examples of a Medical Cue

A

Medication prescribed, compliance, blood serology, previous diagnosis, current diagnosis, family history of diagnosis.

81
Q

Give some examples of an Immediacy Cue

A

Engagement, affect, communication style, facial expressions, emotional expression, personality traits/ temperament, transferences.

82
Q

Give an example of a Speech Cue

A

Tone, flow, perseverative, slurred, volume, pitch, pace.

83
Q

Give an example of a Language Cue

A

Descriptors, words used, developmentally appropriate, use of humour

84
Q

Give an example of a Physical Cue

A

Breathing, eye contact, voice, body movements.

85
Q

Give an example of a Cognitive Cue

A

Attention, memory, intelligence, intellectual disability, judgement, decision making, perceptions.

86
Q

Give an example of a Risk Assessment Cue

A

Risk of Harm to self, others, intent, means and plan.

87
Q

Give an example of a Collateral Information Cue

A

Congruency between verbal and non-verbal, consistency between collateral, psychometrics and narrative. Referral Source and Question.

88
Q

Give an example of an Overt Behavioural Cue

A

Behaviour in Waiting Room, occupation of space in therapeutic environment, feedback from client.

89
Q

Give an example of a Personal History Cue

A

Psychosocial History, relationship status, conflicts, support networks.

90
Q

What are 10 things you want from an interview?

A
  1. Standardisation (see Groth-Marnat, 2009)
  2. Intake Interview
  3. Planning
  4. Rapport-Building
  5. Open-ended questions (TED)
  6. Active Listening/ Attending behaviours (beware negative attending)
  7. Open mind-set - avoid biases like?
  8. Accurate (and discrete) note taking.
  9. How would you approach note taking with a client?
    “I’m going to jot down a few notes to make sure I’m remembering everything correctly. Is that alright with you?” (Shea, 1998)
  10. Empathy -> Active listening, paraphrasing.
91
Q

What are the 5 P’s of a Clinical Interview?

A
  1. Presenting
  2. Precipitating
  3. Perpetuating
  4. Pre-morbid
  5. Protective
92
Q

Describe a Presenting Issue

A

Exactly what thoughts, behaviours and feelings are associated with the client's concerns?

93
Q

Describe a Precipitating Issue

A

(Distal) "If these experiences started six months ago, tell me about the month or so before that." (Proximal) "These experiences come on in waves; talk me through the last time it happened, starting 10 minutes before you noticed the feelings."

94
Q

Describe a Perpetuating Issue

A

What makes things worse for you? What things aid these feelings to continue happening (for example, panic attacks).

95
Q

Describe a Pre-morbid Issue

A

Previous physical and mental health status, as well as risk factors (e.g. homelessness, history of abuse). You wouldn't just plough in with "have you ever been abused?"; that would need to come a long way down the track, once true trust and rapport have developed.

96
Q

Describe a Protective Issue

A

What helps this person continue to function (e.g. they are working, have a close family, have a good education).