Assessment Principles Flashcards

1
Q

define discriminative measurements

A

attempts to differentiate between two or more groups of people

2
Q

define predictive measurements

A

attempts to classify people into predefined measurement categories for the purpose of estimating an outcome

3
Q

define evaluative measurement

A

pertains to measurement of change in an individual or group over time

4
Q

define descriptive measurement

A

pertains to efforts to obtain a ‘clinical picture’ or baseline of a person’s skills

5
Q

what are the 4 types of assessment

A

non-standardised
standardised
criterion-referenced
norm-referenced

6
Q

what does measurement enable therapists to do

A
  • quantify attributes of individuals
  • make comparisons
  • document changes in performance
7
Q

define evaluation

A

The process of determining the worth of something in relation to established benchmarks, using assessment information.

8
Q

define re-evaluation

A

The process of critical analysis of a client’s response to intervention

9
Q

define screening

A

A quick review of the client’s situation to determine if an occupational therapy evaluation is warranted

10
Q

define testing

A

a systematic procedure for observing a person’s behaviour & describing it with the aid of a numerical scale or a category-system

11
Q

define evidence based practice

A

The integration of the best available research evidence, clinical experience and patient values

12
Q

define non-standardised assessments

A

Do not follow a standard approach or protocol

May contain data collected from interviews, questionnaires and observation of performance

13
Q

define standardised assessments

A
  • Are developed using prescribed procedures
  • Are administered and scored in a consistent manner under the same conditions and test directions

14
Q

define descriptive assessments

A

to describe individuals within groups and to characterise differences

15
Q

define evaluative assessments

A

use criteria or items to measure an individual’s trait over time

16
Q

define predictive assessments

A

use criteria to classify individuals in order to predict a trait or outcome against those criteria

17
Q

define criterion referenced assessment

A

client performance is assessed against a set of predetermined standards

18
Q

define norm referenced assessment

A

client performance is assessed relative to that of others in a normative group

19
Q

pros of criterion referenced assessments

A
  • sets minimum performance expectations
  • demonstrates what clients can and cannot do

20
Q

cons of criterion referenced assessments

A
  • hard to know where to set boundary conditions
  • lack of comparison data

21
Q

define norm referenced assessments

A

Based upon the assumption of a standard normal (Gaussian) distribution with n > 30.
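
Norm-referenced interpretation typically converts a raw score to a standard (z) score against the normative mean and SD, then to a percentile under the assumed Gaussian distribution. A minimal sketch, using hypothetical norms (mean 50, SD 10 are illustrative values, not from any published test):

```python
import math

def z_score(raw, norm_mean, norm_sd):
    """How many standard deviations a raw score lies from the normative mean."""
    return (raw - norm_mean) / norm_sd

def percentile_rank(z):
    """Percentile rank under a standard normal distribution (CDF x 100)."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical norms: mean 50, SD 10; a client scores 65
z = z_score(65, 50, 10)
print(z, round(percentile_rank(z), 1))  # 1.5 SDs above the mean, ~93.3rd percentile
```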

22
Q

pros of norm referenced assessments

A
  • ensures a spread of scores
  • shows client performance relative to the group

23
Q

cons of norm referenced assessments

A
  • in a strong group, some clients are still guaranteed a failing grade
  • above-average performance is not necessarily good

24
Q

define reliability

A

The reproducibility of test results on more than one occasion by the same researcher using the same measure.

Reliability coefficients range from 0 to 1.

25
Q

define random error

A

errors that cannot be predicted

26
Q

define systematic error

A

errors that have predictable fluctuations

27
Q

list the types of reliability

A
  • Intra-rater reliability
  • Inter-rater reliability
  • Test-retest reliability / temporal stability
  • Alternate form reliability
  • Split-half reliability
  • Internal consistency
28
Q

intra-rater reliability

A

The stability of data collected by one rater on two or more occasions

29
Q

inter-rater reliability

A

Detecting variability between two raters who measure the same client

30
Q

test-retest reliability

A

The reliability/stability of measurements when given to the same people over time

31
Q

alternate form reliability

A

the degree of correlation between two different, but equivalent forms from the same test completed by the same group of people

32
Q

split-half reliability

A

the degree of correlation between one half the items of a test and the other half of the items of a test (e.g., odd numbered items correlated with the even numbered items)

33
Q

internal consistency

A

the degree of agreement between the items in a test that measure a construct

34
Q

Cronbach’s coefficient alpha

A

used to assess internal consistency; estimates the reliability of scales, or the commonality of one item in a test with the other items; ranges from 0.10 to 0.99
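
A minimal sketch of the computation (the item scores below are hypothetical): alpha rises as items covary more strongly relative to the variance of the total score.

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(scores):
    """Cronbach's coefficient alpha for scores[respondent][item]."""
    k = len(scores[0])                                  # number of items
    items = list(zip(*scores))                          # one column per item
    item_var_sum = sum(variance(item) for item in items)
    total_var = variance([sum(row) for row in scores])  # variance of totals
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical data: 5 respondents answering 4 test items
data = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]
print(round(cronbach_alpha(data), 2))  # 0.95 with this sample data
```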

35
Q

kappa (k)

A

used in assessments yielding multiple nominal placements since it corrects for chance
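
A minimal sketch of Cohen's kappa for two raters' nominal ratings (the rating data are hypothetical); the chance-correction is what distinguishes kappa from simple percent agreement:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' nominal ratings."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: both raters independently pick the same category
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical nominal ratings ("wet"/"dry") from two raters on 10 clients
a = ["wet", "wet", "dry", "wet", "dry", "dry", "wet", "dry", "wet", "wet"]
b = ["wet", "wet", "dry", "dry", "dry", "dry", "wet", "wet", "wet", "wet"]
print(round(cohens_kappa(a, b), 2))  # 0.58: moderate agreement beyond chance
```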

36
Q

weighted k

A

used to determine the reliability of a test when rating on an ordinal scale

37
Q

validity

A

the extent to which a test measures what it purports to measure

38
Q

construct validity

A

Establishes whether assessment measures a construct and its theoretical components

39
Q

what are the 3 parts of construct validity

A
  1. describe the constructs that account for test performance
  2. compose hypotheses that explain the relationships between constructs
  3. test the hypotheses
40
Q

list the 4 subtypes of construct validity

A
  • convergent
  • divergent
  • discriminant
  • factor analysis
41
Q

convergent validity

A

Level of agreement between 2 tests that are being used to measure the same construct

42
Q

divergent validity

A

Distinguishing the construct from confounding factors

43
Q

discriminant validity

A

The level of disagreement between two tests that measure different traits

44
Q

factor analysis validity

A

statistical procedure used to determine whether test items group together to measure a discrete construct or variable

45
Q

content validity

A

The extent to which a measurement reflects a specific domain

46
Q

criterion validity

A

Implies the outcome can be used as a substitute for a ‘gold standard’ criterion test

47
Q

what are the 2 subtypes of criterion validity

A
  1. concurrent/congruent validity (the degree to which results agree with those of an established measure administered at the same time)
  2. predictive validity (the extent to which a measure can forecast a future outcome)
48
Q

face validity

A

A test appears to measure what its author intended it to measure

49
Q

ecological validity

A

The outcome of an assessment holds up in real-world circumstances

50
Q

what are the 2 types of experimental validity

A
  • internal
  • external

51
Q

sensitivity

A

Ability of a test to detect genuine changes in a client’s clinical condition or ability

52
Q

specificity

A

A test’s ability to obtain a negative result when the condition is really absent (a true negative)
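
Diagnostic specificity is usually reported alongside sensitivity in its true-positive sense (the proportion of genuine cases a test flags); a minimal sketch using hypothetical screening results:

```python
def sensitivity_specificity(results):
    """results: (test_positive, condition_present) boolean pairs."""
    tp = sum(t and c for t, c in results)          # true positives
    fn = sum(not t and c for t, c in results)      # false negatives
    tn = sum(not t and not c for t, c in results)  # true negatives
    fp = sum(t and not c for t, c in results)      # false positives
    return tp / (tp + fn), tn / (tn + fp)          # sensitivity, specificity

# Hypothetical screening outcomes: 10 clients with the condition, 90 without
results = ([(True, True)] * 8 + [(False, True)] * 2
           + [(False, False)] * 85 + [(True, False)] * 5)
sens, spec = sensitivity_specificity(results)
print(sens, round(spec, 2))  # 0.8 and 0.94
```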

53
Q

responsiveness

A

providing evidence of the ability of a measure to assess and quantify clinically important change

54
Q

nominal measurement

A

items have a fixed set of unordered response categories; for example male/female, yes/no, wet/dry, happy/sad

55
Q

ordinal measurement

A

data has some order, with one score being better/worse than another.

56
Q

interval scales

A

the differences between any two adjacent scores are identical (such as temperature in degrees Celsius); parametric statistics can be used correctly. Weight and distance also have equal intervals but, having a true zero, are strictly ratio-scale measures.