Foundations of Assessment/The Assessment Process Flashcards

1
Q

the systematic process of gathering information about an individual to make decisions regarding their treatment plan

A

Assessment

2
Q

process of collecting, analyzing, and reporting

A

Systematic

3
Q

once all info is gathered you must ANALYZE + PRIORITIZE

A

Main point of the activity

4
Q

Why is assessment important to client?

A
  • treatment placement decisions
  • gain client info (baseline, progress, discharge)
  • communication w/ client + caregivers
5
Q

Why is assessment important to program?

A
  • administrative requirements (reimbursement, regulations)
6
Q

When is assessment used?

A

during interventions (assessment + documentation = intervention)

7
Q

what are the 5 basic principles of assessment

A

-systematic process

-logical connection

-yielding dependable and consistent results

-placement

-provide baseline info.

8
Q

Current problems w/ assessment

A
  • research into functional meaning of assessment scores
  • modification of valid instruments
  • research using RT instruments w/ reliability coefficients BELOW 0.80
  • reducing the number of agencies using homemade assessments and/or leisure-only surveys
9
Q

CONSISTENCY
- refers to the scores or data and NOT the instrument
- high reliability does not assure validity!

A

reliability

10
Q

a method for determining the reliability of a test by comparing a test taker’s scores on the same test taken on separate occasions

A

test-retest reliability

11
Q

Split-half reliability is a method used to assess the internal consistency of a test or assessment. It involves dividing the test into two equal halves (usually odd and even items) and then comparing the results from each half to see if they produce similar scores.
- ensure that an assessment tool consistently measures what it is intended to, regardless of how the questions are split. This helps in confirming that the tool is reliable and that the results are not dependent on the specific selection of items.

A

split-half reliability

12
Q

types of reliability

A
  • stability
  • equivalence
  • internal consistency
  • objectivity
13
Q

a measure of how consistent a test is over time
- not well suited to knowledge or paper-and-pencil tests (retest scores can be inflated by memory/practice)
- better indicator for physical fitness (HR and BP) or motor performance
- test administrations should be spaced appropriately

A

stability (reliability)

14
Q

The extent to which measurement on two or more forms of a test is consistent
- used for knowledge tests to determine reliability indices for standardized tests (e.g., ACT or SAT)
- parallel or alternate forms method (e.g., English vs Spanish version)

A

equivalence (reliability)

15
Q

a way to determine if multiple items on a test that are intended to measure the same thing produce similar scores.
- A quiz that measures students’ ability to solve quadratic equations should have internal consistency. If a student answers one item correctly, they should also be able to answer similar items correctly

A

internal consistency (reliability)
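Cronbach's alpha is a common internal-consistency coefficient; a minimal sketch with hypothetical ratings, using alpha = k/(k−1) · (1 − Σ item variances / total-score variance):

```python
# Hypothetical data: rows = respondents, columns = four items rated 1-5.
from statistics import pvariance

scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]

k = len(scores[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*scores)]  # variance of each item
total_var = pvariance([sum(row) for row in scores])   # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

Items that rise and fall together across respondents push alpha toward 1.0; unrelated items pull it toward 0.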

16
Q

consistency of scores across more than one tester
- AKA interrater reliability
- best used w/ behavioral observations or ratings
- < 50% agreement = "poor"
- behavioral observation should reach 80% agreement or greater

A

objectivity (reliability)
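Percent agreement is the simplest interrater statistic; a sketch with hypothetical observation codes for two raters watching the same intervals:

```python
# Hypothetical data: two raters coding the same ten observation intervals
# as on-task ("on") or off-task ("off").
rater_a = ["on", "off", "on", "on", "off", "on", "on", "off", "on", "on"]
rater_b = ["on", "off", "on", "off", "off", "on", "on", "off", "on", "on"]

# Count intervals where both raters assigned the same code.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * agreements / len(rater_a)
# 9 of 10 intervals match -> 90%, above the 80% target for behavioral data
```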

17
Q

ACCURACY

A

validity

18
Q

types of internal validity

A

logical
- face
- content
statistical
- criterion (concurrent, predictive)
- construct (divergent, convergent)
- responsiveness

19
Q

the extent to which a test appears, on superficial judgment, to measure what it is supposed to measure (i.e., respondents can tell what the items are assessing)
- LOGICAL

A

face validity

20
Q

the extent to which a test samples the behavior that is of interest
- LOGICAL
- best used for questionnaires or written instruments when comparison to another standard is not possible
- stronger than face validity

A

content validity

21
Q

comparison of scores to an acceptable standard or criterion
- STATISTICAL
- “gold standard” (most accurate)

A

concurrent validity
