Assessments Flashcards

1
Q

Assessment Competencies (KSAO Model)

A
  1. Knowledge of psychometrics and theoretical information
  2. Skills: proficiency in different methods of assessment, including test administration, scoring and interpretation, interviewing, observation, and communication of assessment findings.
  3. Abilities: rapport building, critical and integrative thinking, and psychological mindedness.
  4. Other Characteristics: attitudes and values such as respect for the person of the client and appreciation of diversity, along with precision, accuracy, attention to detail, and good communication skills.
2
Q
  • Assessment is broader than testing; testing is the smaller part of the assessment process and its results.
  • The process starts with a question: the client has a difficulty. Start with the referral question or the presenting problem.
  • Then decide how to collect information to understand and help the client: interviews, observations, and perhaps some formal testing as well.
A

o You would consider formal testing if it is going to increase access to programs/services. You may refer the parents for an assessment by a psychologist; do not hesitate to do this (e.g., autism and supports such as IEPs, or perhaps a history of mental illness).
o The patient might be reporting symptoms that call for a more in-depth assessment.
o Formal testing gives us data/numbers that provide objectivity. Counter-transference and our own opinions and thoughts can get in the way; the numbers give you a bit of distance.

3
Q

It is important to get numbers so we can compare them with the rest of the population. The norm sample allows us to compare our client with others like them: same age, culture, gender, and socio-economic status.

People are interchangeable from a humanistic point of view, and people share commonalities; we share categories such as gender, and these categories allow us to compare our individual clients.

A

Find out how your client is doing, e.g., in terms of depression: "You are more depressed than 95% of the entire population, and this is very serious," or "I am not dumb/stupid like I thought I was." Telling clients where they stand in the big picture might help them.

We trust numbers for good reason: they mean something. There is always value behind a number; numbers are infused with values. A score is only the beginning of a long process of interpreting and finding meaning.

In what context, and for what purpose, are you using the test?

4
Q

After the test, the scores must be given meaning.

That meaning leads to decisions that need to be made.

Actions will come out of the decisions, e.g., identifying a learning disability will lead to making an IEP for a student in school. There are consequences at multiple levels.

There are benefits and harms.

A

Suffering brings meaning

Science helps us bring meaning.

5
Q

Overview of Psychometrics

A

Reliability: The first thing a test needs is reliability. Your test needs to yield the same results over time; you need to be able to count on the data to make decisions. Your client may not always tell you the truth, and people do not always circle the most accurate number on a test.

Classical test theory captures this with X = T + E. X is the observed score, the number you actually see on the test (also called the raw score). T is the true score. E is the error. You can never get or observe the true score; no one has ever seen or touched it. It is assumed to be there, but there is always some amount of error, so the true score is always unknown. The point of measuring is to get as close to the truth as possible: observed scores gravitate around, and are attracted to, the true score. If you were willing to collect a huge number of results (a million or more) and take their average, that average would be the true score, but no one would ever do this, so we simply assume a true score exists.
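
As a rough illustration of the X = T + E idea, here is a minimal Python sketch; the true score and error spread are made-up values, not from any real test. It simulates many observed scores scattering around an unobservable true score and shows their average drifting toward it.

```python
# Minimal simulation of classical test theory: X = T + E.
# The observed score X scatters around an (unobservable) true score T,
# and the mean of many observations converges toward T.
# true_score and error_sd are made-up illustration values.
import numpy as np

rng = np.random.default_rng(0)

true_score = 25.0   # T: hypothetical, never directly observable
error_sd = 4.0      # spread of random measurement error E

# Repeated administrations: X = T + E, with E ~ N(0, error_sd)
observed = true_score + rng.normal(0.0, error_sd, size=1_000_000)

print(observed[:5])     # individual observed scores vary around T
print(observed.mean())  # the average is very close to the true score
```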

6
Q

In classical test theory (CTT), the error is assumed to be the same across participants.

Participants are treated as interchangeable units.

In reality, though, error is individual: it differs from client to client.

A

These interchangeable units combine to create a score.

This is a criticism of the theory: how can everyone have the same error? But this is the assumption CTT makes.

7
Q

Classical Test Theory:

A

Classical test theory is a body of related psychometric theory that predicts outcomes of psychological testing such as the difficulty of items or the ability of test-takers. Generally speaking, the aim of classical test theory is to understand and improve the reliability of psychological tests.
Classical test theory may be regarded as roughly synonymous with true score theory. The term “classical” refers not only to the chronology of these models but also contrasts with the more recent psychometric theories, generally referred to collectively as item response theory, which sometimes bear the appellation “modern” as in “modern latent trait theory”.

8
Q

Item Response Theory

In psychometrics, item response theory (IRT), also known as latent trait theory, strong true score theory, or modern mental test theory, is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables.

A

IRT: individual scores reflect the probability of a correct answer (or an endorsed item) as a function of the latent ability.
People with high intelligence will tend to get the answer right; people with low intelligence will not.
IRT connects the trait (e.g., intelligence) with the outcome.

Each individual has attributes/characteristics called latent variables, and test scores reflect how much of that latent ability one needs to have in order to respond correctly to, or endorse, test items.
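
To make the "probability of a correct answer depends on the latent trait" idea concrete, here is a minimal sketch of a two-parameter logistic (2PL) item response function; the discrimination and difficulty values are illustrative assumptions, not parameters from any particular test.

```python
# Minimal sketch of a 2PL item response function: the probability of a
# correct answer / endorsed item as a function of the latent trait (theta).
# Item parameters a (discrimination) and b (difficulty) are illustrative.
import math

def p_correct(theta, a=1.0, b=0.0):
    """Probability of answering the item correctly given latent trait theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Higher latent ability -> higher probability of a correct response
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(p_correct(theta), 3))
```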

9
Q

Reliability

A

a. Test-Retest
b. Alternate Forms
c. Alpha (split-half); see the sketch after this list
d. And also
i. Inter-rater reliability
ii. Generalizability
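
As a rough illustration of the alpha estimate named above, here is a minimal sketch of coefficient alpha computed on a small made-up item-response matrix; the data and the number of items are purely illustrative.

```python
# Minimal sketch of coefficient (Cronbach's) alpha for a small, made-up
# item-response matrix (rows = test takers, columns = items).
import numpy as np

scores = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
], dtype=float)

k = scores.shape[1]                              # number of items
item_variances = scores.var(axis=0, ddof=1)      # variance of each item
total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(round(alpha, 3))   # roughly 0.9 for this toy data
```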

10
Q

Validity

A

a. Content
b. Criterion
i. Concurrent
ii. Predictive
c. Construct

11
Q

Standardization

A
  1. Standardization:
    a. Uniform Procedures
    b. Criteria and Norms
12
Q

The score on the test is able to predict your GPA; the GPA is the criterion.

A

The test predicts your performance outside of the test.
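
As a rough sketch of criterion-related (predictive) validity, the example below correlates made-up test scores with made-up GPAs; the validity coefficient is simply the correlation between the test and the criterion.

```python
# Minimal sketch of criterion-related (predictive) validity: correlate
# test scores with a later criterion (GPA). All numbers are made up.
import numpy as np

test_scores = np.array([45, 52, 60, 61, 70, 75, 80, 88])
gpa         = np.array([2.1, 2.4, 2.9, 2.7, 3.1, 3.3, 3.6, 3.8])

validity_coefficient = np.corrcoef(test_scores, gpa)[0, 1]
print(round(validity_coefficient, 2))  # high r -> the test predicts the criterion
```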

13
Q

Convergent / Discriminate Evidence

A

Convergent evidence means that your measure will correlate with other measures of the same kind, e.g., anger is related to violent behaviours, so an anger measure may correlate with the number of offences committed; you have to figure that out. Discriminant evidence means that your measure will not, or should not, correlate with something else, e.g., violence does not correlate with happiness. Convergent means that we expect the measure to correlate with certain other things.

14
Q

Convergent / Discriminate Evidence

A

measures of constructs that theoretically should be related to each other are, in fact, observed to be related to each other (that is, you should be able to show a correspondence or convergence between similar constructs)

and

measures of constructs that theoretically should not be related to each other are, in fact, observed to not be related to each other (that is, you should be able to discriminate between dissimilar constructs)
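
Here is a minimal sketch of how convergent and discriminant evidence might look in practice, using made-up scores: two anger measures should correlate highly with each other, while anger and happiness should show little correlation.

```python
# Minimal sketch of convergent vs. discriminant evidence with made-up data:
# two anger measures should correlate (convergent), while anger and an
# unrelated construct such as happiness should not (discriminant).
import numpy as np

anger_scale_a = np.array([10, 14, 18, 22, 25, 30])
anger_scale_b = np.array([12, 13, 19, 21, 27, 29])
happiness     = np.array([20, 25, 15, 22, 18, 24])

print(np.corrcoef(anger_scale_a, anger_scale_b)[0, 1])  # expected to be high
print(np.corrcoef(anger_scale_a, happiness)[0, 1])      # expected to be near zero
```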

15
Q

Conceptual Model for Interpreting Assessment Data

A

Phase 1 – Initial Collection Phase
Phase 2 – Development of Inferences
Phase 3 – Reject, Modify, or Accept Inferences
Phase 4 – Develop and Integrate Hypotheses
Phase 5 – Dynamic Model of the Person
Phase 6 – Situational Variables
Phase 7 – Prediction of Behaviour

16
Q

Nomothetic Measurement

A

constructs being measured are assumed to be attributes describing people in general, so measurement is context free. The framework of measurement is universal.

17
Q

Idiographic Measurement

A

You pay attention to the uniqueness of the person. We all have intelligence, but how it is expressed differs from person to person.

18
Q

Advantages of Standardized Testing

A

Establishes the position of a person along commonly shared traits or characteristics (e.g., cognitive functioning, personality traits)

Standardization allows you to get a more accurate evaluation of certain functions/processes
Informs treatment

Contributes to more comprehensive understanding and diagnosis

Important in decision making in certain contexts (e.g., forensic risk assessment)

Credibility for accessing funding and support (e.g., autism assessment, disability assessment)

19
Q

Misuses of Tests

A
  1. Lack of Relevance to the Presenting Problem
  2. Over-Reliance on Testing
  3. Over-Reliance on Computer-Assisted Testing
  4. Inappropriate Interpretation of Test Results
  5. Failure to follow the standardized procedures
  6. Using test results for a purpose other than the intended purpose of the test
  7. Use of test information by an unqualified and/or incompetent test-user
  8. Use of test information without considering all relevant information about the person being tested
20
Q

A Level Tests

A

Can be utilized by a non-professional with the use of a manual
Various types of educational achievement or proficiency tests fall into this category. Also, self-assessment procedures for career exploration or personal growth can fit at this level.

21
Q

B level tests

A

Requires technical knowledge of test construction and use based on university-level training in tests & measurement. In practice, this often is taken to mean no more than an undergraduate course in testing, although that level of preparation is debated.
e.g., vocational interest inventories, group intelligence and special aptitude tests, and some personality inventories for “normal” populations belong to this level.

22
Q

C level tests

A

Requires an advanced degree in psychology or related mental health field, advanced training & supervised experience in test use and related procedures, and professional competence in the domain of testing (population, topic, etc.). Licensure as a psychologist usually covers all these requirements. Graduate students may purchase and use Level C tests if they are being supervised by someone with the appropriate qualifications.

Generally this includes individually administered intelligence tests, clinical tests, and complex personality tests such as:
Stanford-Binet Intelligence Scale
the Wechsler Scales
Minnesota Multiphasic Personality Inventory (MMPI-2)

23
Q

Test Score = True Score + Error

A

X = T + e

X= observed score
T= true score
e= error
24
Q

Classical Test Theory

A
Differences between individuals are:
1. Real
2. Important
3. Quantifiable
4. Reflect a common dimension

Individual differences are like traits and will be consistent over time and in a wide variety of settings.
25
Q

Normative scoring

A

A test taker’s performance is compared to the performance of a specific group

26
Q

Important things to consider:

A
Age
Ethnicity
Geographic region 
SES 
Grade level 
Gender
27
Q

Size

A

Bigger is Better

28
Q

Relevance

A

Is the norm group the right group?

e.g., how well you do on your GRE computation against the national sample vs. against other psychology hopefuls

29
Q

Derived Scores: Standard Scores

A

Raw scores transformed to have a designated mean and standard deviation.
You can tell how far an examinee’s score lies from the mean of the distribution.
z score:
Mean of 0, SD of 1
Most scores lie between -3 and +3
A z score of -2.5 means that the raw score fell 2.5 SDs below the mean of the group
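
A minimal sketch of the z-score transformation, assuming a hypothetical norm-group mean and standard deviation:

```python
# Minimal sketch of converting a raw score into a z score relative to a
# norm group. The norm-group mean and SD below are made-up values.
def z_score(raw, norm_mean, norm_sd):
    """How many standard deviations the raw score lies from the norm mean."""
    return (raw - norm_mean) / norm_sd

print(z_score(35, norm_mean=50, norm_sd=6))  # -2.5: 2.5 SDs below the mean
```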

30
Q

Z Scores

A

68-95-99.7 Rule
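
A quick way to check the rule, assuming scores are normally distributed, is to evaluate the standard normal CDF (this sketch assumes scipy is available):

```python
# Check of the 68-95-99.7 rule using the standard normal CDF.
from scipy.stats import norm

for k in (1, 2, 3):
    print(k, round(norm.cdf(k) - norm.cdf(-k), 4))
# ~0.6827, ~0.9545, ~0.9973
```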

31
Q

T Score

A

Eliminates the negative and positive sign concerns of the z score
Mean of 50
SD of 10

32
Q

Deviation IQ

A

Mean of 100

SD of 15 or 16 depending on the test

33
Q

Scaled Scores

A

Mean of 10

SD of 3

34
Q

Composite Scores (Intelligence Tests, WAIS IV)

A
Mean= 100
SD= 15
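
Since each of the derived scores above is just a linear transformation of z with a designated mean and SD, a minimal sketch of the conversions (using the means and SDs listed on these cards) looks like this:

```python
# Minimal sketch: a derived score is a linear transformation of z
# with a designated mean and SD.
def derived_score(z, mean, sd):
    return mean + z * sd

z = 1.0                           # one SD above the norm-group mean
print(derived_score(z, 50, 10))   # T score                  -> 60
print(derived_score(z, 100, 15))  # deviation IQ / composite -> 115
print(derived_score(z, 10, 3))    # subtest scaled score     -> 13
```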
35
Q

Percentile Ranks

A

Tells us where in a distribution an individual is, i.e., what percentage of the population has scores below that score. The 2nd percentile means 98 percent of the population scored above you, while the 98th percentile means only 2 percent of the population scored above you.
Percentile ranks cannot be added, subtracted, multiplied, or divided.
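
A minimal sketch of reading percentile ranks off a normal distribution from z scores (assuming normality, and assuming scipy is available); it also shows how the 2nd and 98th percentiles mirror each other:

```python
# Percentile rank = percentage of the norm group scoring below a given
# score (expressed here as a z score), under a normal distribution.
from scipy.stats import norm

for z in (-2.0, 0.0, 2.0):
    print(z, round(norm.cdf(z) * 100, 1))  # ~2.3, 50.0, ~97.7
```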

36
Q

Standardization

A

Standardization at the test administration level

Standardization in the sense of standardizing the raw, observed scores

37
Q

Behavioural Observations and Standardization

A

Step one:

Select relevant target behaviours

Behaviours measured need to be defined with:

Objectivity

Complete and clear definitions

No vagueness: only what is behaviourally observable
e.g., "feeling blue" (what does that mean or look like?)

38
Q

Narrative Recording

A

Observers simply note behaviours of interest

Often utilized to help create the more quantitative questionnaires

39
Q

Interval Recording

A

AKA Time sampling, interval sampling or interval time sampling

Good for monitoring behaviours with moderate frequencies that are also overt

Often organized by time (e.g., observe for 20 seconds, record for 10, repeat)

Can calculate inter-rater agreement
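
A minimal sketch of interval-by-interval agreement between two raters, using made-up 0/1 records of whether the target behaviour occurred in each interval:

```python
# Minimal sketch of interval-by-interval inter-rater agreement for
# interval recording: two observers mark, for each interval, whether the
# target behaviour occurred (1) or not (0). The data are made up.
rater_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

agreements = sum(a == b for a, b in zip(rater_1, rater_2))
percent_agreement = 100 * agreements / len(rater_1)
print(percent_agreement)  # 90.0
```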

40
Q

Event Recording

A

Here you wait for the behaviour to occur and then record relevant details of behaviours

Good for getting frequencies of behaviours

Here you would record how long the behaviour lasted and how intense it was

41
Q

Ratings Recordings/Rating Scales

A

Instead of directly recording behaviours as they occur, you rate the behaviours after an observation period.

e.g., on a scale of 1-7, with 1 being infrequent and 7 being almost constant: how often does Ms. Michele have a solo on a Glee episode?