Validity And Efficacy Flashcards

1
Q

demonstrated when there is clinical improvement from the treatment in the real-world context

A

Treatment effectiveness

2
Q

provide a focus and a reason for undertaking treatment, which in turn guide treatment planning and evaluation

A

Ultimate outcomes

3
Q

self-reported improvements that matter to the client in the context of their own lives

A

Personal significance

4
Q

the degree to which actual implementation of the treatment in the real world is consistent with the prototype treatment administered in the controlled conditions of the treatment efficacy study

A

Treatment fidelity

5
Q

When treatment efficacy is established, the improvement in client performance can be shown to be what 3 things

A
  1. Derived from the treatment rather than extraneous factors
  2. Real and reproducible
  3. Clinically important
6
Q

Treatment efficacy research is aimed at demonstrating the benefits of treatment through well-controlled studies with:

A
  1. Internal validity
  2. Statistical significance
  3. Practical significance
7
Q

generally defined as the benefit of an intervention as compared to a control or standard program.

◦ It provides information about the behavior of clinical variables under controlled, randomized conditions
◦ This allows researchers to examine theory and draw generalizations to large populations

A

Efficacy

8
Q

Five Phase Model of treatment outcome research

A
  1. Phase I treatment outcome research – studies are designed to establish whether a therapeutic effect exists in the clinical environment, to estimate its potential magnitude, and to help identify potentially useful treatment protocols
  2. Phase II treatment outcome research – studies are conducted to determine the appropriateness of the intervention. It helps define for whom the treatment is suitable and for whom it is not.
  3. Phase III treatment outcome research – studies that use more rigorous experimental designs and greater control
  4. Phase IV treatment outcome research – studies that explore whether an efficacious intervention is also effective in the clinic (sometimes called translational research)
  5. Phase V treatment outcome research – studies that continue to explore effectiveness but with a greater focus on efficiency. These studies identify the types of modifications or applications that are necessary or beneficial for delivering service in a cost-effective manner
9
Q

studies are designed to establish whether a therapeutic effect exists in the clinical environment, to estimate its potential magnitude, and to help identify potentially
useful treatment protocols

A
Phase I treatment outcome research
10
Q

studies are conducted to determine the appropriateness of the intervention. It helps define for whom the treatment is suitable and for whom it is not.

A

Phase II treatment outcome research

11
Q

treatment outcome research studies that use more rigorous experimental designs and greater control

A

Phase III

12
Q

treatment outcome research that explores whether an efficacious intervention is also effective in the clinic (sometimes called translational research)

A

Phase IV

13
Q

treatment outcome research that continues to explore effectiveness but with a greater focus on efficiency. These studies identify the types of modifications or applications that are necessary or beneficial for delivering service in a cost-effective manner

A

Phase V

14
Q

when the researcher reports a relationship between the intervention and the outcome (or progress) when no relationship (or progress) really exists.

A

Type 1 error

15
Q

when the researcher reports that no relationship (or
improvement/progress) exists between the intervention and the outcome, when there really was a relationship or improvement

A

Type 2 error
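
To make the two error types concrete, here is a minimal Python sketch using simulated data; the group sizes, means, and alpha level are arbitrary choices for illustration and are not taken from these cards.

```python
# Minimal sketch (simulated data): Type 1 vs Type 2 errors in a two-group comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05

# Type 1 error scenario: no true treatment effect, yet the test can still
# come out "significant" by chance (a false positive).
control = rng.normal(loc=50, scale=10, size=30)
treated_no_effect = rng.normal(loc=50, scale=10, size=30)
_, p = stats.ttest_ind(treated_no_effect, control)
print("Type 1 error committed:", p < alpha)

# Type 2 error scenario: a real but small effect that a small, underpowered
# study may fail to detect (a missed relationship).
treated_small_effect = rng.normal(loc=53, scale=10, size=10)
small_control = rng.normal(loc=50, scale=10, size=10)
_, p = stats.ttest_ind(treated_small_effect, small_control)
print("Type 2 error committed:", p >= alpha)
```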

16
Q

Observation

A

Quantifying measurements

17
Q

an abstract idea, theme, or subject matter that a researcher wants to measure. Because it is initially abstract, it must be defined.

A

Construct

18
Q

The scales of measurement are:

A
  1. Nominal Scales
  2. Ordinal Scales
  3. Interval Scales
  4. Ratio Scales
19
Q

used to categorize characteristics of subjects

A

Nominal scale

20
Q

Used to classify ranked categories

A

Ordinal scales

21
Q

have equal distances between units of measurement

A

Interval scales

22
Q

have equal distances between units of measurement and an absolute zero point

A

Ratio scales
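
A small, hypothetical Python sketch pairing each scale of measurement with a typical clinical example; the example variables are assumptions for illustration only, not drawn from these cards.

```python
# Hypothetical examples of the four scales of measurement.
scales = {
    "nominal":  "diagnostic category (e.g., aphasia type)",    # categories, no inherent order
    "ordinal":  "severity rating (mild < moderate < severe)",  # ordered, unequal gaps
    "interval": "standard score on a norm-referenced test",    # equal units, no true zero
    "ratio":    "speech rate in words per minute",             # equal units, true zero
}
for scale, example in scales.items():
    print(f"{scale:8s} -> {example}")
```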

23
Q

There is almost always some error in measurement. This is the general term for the degree to which an observed score departs from the true score.

A

Measurement error

24
Q

Occurs when the instrument you are using consistently overestimates or underestimates the true score in one direction

A

Systematic Error

25
Q

These errors occur by chance and can affect a subject’s score in an unpredictable manner.

A

Random error
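
A minimal Python sketch contrasting the two error types around a known true score; the true score, bias, and noise level below are made-up values for illustration.

```python
# Minimal sketch (simulated data): systematic vs random measurement error.
import numpy as np

rng = np.random.default_rng(1)
true_score = 100.0

# Systematic error: the instrument consistently overestimates by a fixed bias.
bias = 5.0
systematic_obs = true_score + bias + np.zeros(5)

# Random error: chance fluctuations that vary unpredictably around the true score.
random_obs = true_score + rng.normal(loc=0.0, scale=3.0, size=5)

print("Systematic:", systematic_obs)            # every observation shifted the same direction
print("Random    :", np.round(random_obs, 1))   # scattered above and below the true score
```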

26
Q

Factors that can contribute to random error include, but are not limited to:

A

Fatigue of the subject
Environmental influences
Inattention of the subject or rater

27
Q

the degree of consistency with which an
instrument or rater measures a variable

A

Reliability

28
Q

The ratio of the true score variance to the total variance
observed on an assessment

A

Reliability coefficient
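
A minimal Python sketch of the ratio described above, using simulated true scores and error; the variances (15 and 5) are arbitrary choices for illustration.

```python
# Minimal sketch (simulated data): reliability = true-score variance / total observed variance.
import numpy as np

rng = np.random.default_rng(2)
n = 1000

true_scores = rng.normal(loc=100, scale=15, size=n)   # true-score variance ~ 15**2
error = rng.normal(loc=0, scale=5, size=n)            # random measurement error ~ 5**2
observed = true_scores + error                        # observed score = true score + error

reliability = true_scores.var() / observed.var()
print(round(reliability, 2))  # roughly 15**2 / (15**2 + 5**2) = 0.90
```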

29
Q

The reliability of an assessment is empirically evaluated through the following methods:

A
  1. Test-retest reliability
  2. Split-half reliability
  3. Alternate forms (equivalence) reliability
  4. Internal consistency
30
Q

A metric indicating whether an assessment provides consistent results when it is administered on two different occasions

A

Test-retest reliability

31
Q

This is a technique used to assess the reliability of questionnaires

A

Split half reliability
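
A minimal Python sketch of one common way to compute split-half reliability (simulated item responses, odd/even split, Spearman-Brown correction for full test length); the number of items and respondents are arbitrary assumptions.

```python
# Minimal sketch (simulated data): split-half reliability with Spearman-Brown correction.
import numpy as np

rng = np.random.default_rng(3)
ability = rng.normal(size=200)                                     # one latent trait
items = ability[:, None] + rng.normal(scale=1.0, size=(200, 10))   # 10 related items

half_a = items[:, 0::2].sum(axis=1)              # odd-numbered items
half_b = items[:, 1::2].sum(axis=1)              # even-numbered items
r_half = np.corrcoef(half_a, half_b)[0, 1]       # correlation between the two halves

split_half = (2 * r_half) / (1 + r_half)         # Spearman-Brown correction
print(round(r_half, 2), round(split_half, 2))
```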

32
Q

When there are multiple versions of the same test, it is important to determine if each
version of the test will provide consistent results.

A

Parallel forms reliability

33
Q

This is the extent to which the items that make up an assessment covary or correlate with each other. This may be referred to as the homogeneity of the assessment

A

Internal Consistency
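
One widely used index of internal consistency is Cronbach's alpha; here is a minimal Python sketch computing it from a simulated item matrix (the item count and sample size are arbitrary assumptions).

```python
# Minimal sketch (simulated data): Cronbach's alpha as an index of internal consistency.
import numpy as np

rng = np.random.default_rng(4)
ability = rng.normal(size=300)
items = ability[:, None] + rng.normal(scale=1.0, size=(300, 8))   # 8 correlated items

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)           # variance of each item
total_var = items.sum(axis=1).var(ddof=1)       # variance of the total score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 2))
```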

34
Q

would occur if a first treatment condition affected participant
performance on a second treatment condition

A

Carryover effect

35
Q

a research participant’s performance in a study was influenced by their awareness of being in a research study

A

Hawthorne effect

36
Q

a potential change in the data that occurs over the course of an experiment, from beginning to end. These changes can arise due to factors such as participant fatigue or familiarity with assessment and/or intervention materials.

A

Order effect

37
Q

When you have two or more raters who are assigning scores based on subject observation, there may be variations in the scores.

A

Inter-rater reliability
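
One common way to quantify inter-rater reliability for categorical scores is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The ratings in this minimal Python sketch are hypothetical, for illustration only.

```python
# Minimal sketch (made-up ratings): percent agreement and Cohen's kappa for two raters.
import numpy as np

rater_1 = np.array([1, 2, 2, 3, 1, 2, 3, 3, 1, 2])
rater_2 = np.array([1, 2, 3, 3, 1, 2, 3, 2, 1, 2])

observed_agreement = np.mean(rater_1 == rater_2)

categories = np.union1d(rater_1, rater_2)
p1 = np.array([np.mean(rater_1 == c) for c in categories])
p2 = np.array([np.mean(rater_2 == c) for c in categories])
chance_agreement = np.sum(p1 * p2)

kappa = (observed_agreement - chance_agreement) / (1 - chance_agreement)
print(round(observed_agreement, 2), round(kappa, 2))
```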

38
Q

refers to test stimuli, methods, or procedures reflecting the assumptions that all populations have the same life experiences and have learned similar concepts and vocabulary.

A

Content bias

39
Q

disparity between the language or dialect used by the examiner, the child, and/or the language or dialect expected in the child’s response.

A

Linguistic bias

40
Q

means that the instrument being used measures what it is
supposed to measure

A

Validity

41
Q

The assumption of validity of a measuring instrument based on its appearance as a reasonable measure of a given variable

A

Face validity

42
Q

refers to how well the test items measure the characteristics or behaviors of interest

A

Content validity

43
Q

refers to how well the measure correlates with an outside criterion

A

Criterion validity

44
Q

Criterion validity includes two types of evidence:

A
  1. Concurrent validity
  2. Predictive validity
45
Q

refers to how well the measure reflects a theoretical construct of the characteristic of interest

A

Construct validity

46
Q

2 Measures of validity

A
  1. Sensitivity – one who has the condition will be classified as having the condition
  2. Specificity – one who does not have the condition will be classified as not having the condition
47
Q

refers to how well a test detects a condition that is actually present

A

Test sensitivity

48
Q

refers to how well a test detects that a condition is not present when it is actually not present

A

Test specificity
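
To make the two indices concrete, here is a minimal Python sketch computing sensitivity and specificity from a hypothetical 2x2 table of test results against true condition status; the counts are made up for illustration.

```python
# Minimal sketch (hypothetical counts): sensitivity and specificity from a 2x2 table.
true_positives = 45   # condition present, test positive
false_negatives = 5   # condition present, test negative (missed cases)
true_negatives = 90   # condition absent, test negative
false_positives = 10  # condition absent, test positive (false alarms)

sensitivity = true_positives / (true_positives + false_negatives)   # 45 / 50 = 0.90
specificity = true_negatives / (true_negatives + false_positives)   # 90 / 100 = 0.90
print(sensitivity, specificity)
```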

49
Q

There is some interrelationship between reliability and validity. What is it?

A

If a measurement is valid, meaning it measures what it is supposed to measure, we can conclude that the measurement is relatively free from error, which enhances its reliability

50
Q

Can the rater be a source of error?

A

Yes :(