Validity And Efficacy Flashcards
demonstrated when there is clinical improvement from the treatment in the real-world context
Treatment effectiveness
provide a focus and a reason for undertaking treatment, which in turn guide treatment planning and evaluation
Ultimate outcomes
self-reported improvements that matter to the client in the context of their own lives
Personal significance
the degree to which actual implementation of the treatment in the real-world is consistent with the prototype treatment administered in the controlled conditions of the treatment efficacy study
Treatment fidelity
When treatment efficacy is established, the improvement in client performance can be shown to be what 3 things
- Derived from the treatment rather than the extraneous factors
- Real and reproducible
- Clinically important
Treatment efficacy research is aimed at demonstrating the benefits of treatment through well-controlled studies with:
- Internal validity
- Statistical significance
- Practical significance
generally defined as the benefit of an intervention as compared to a control or standard program.
◦ It provides information about the behavior of clinical variables under controlled,
randomized conditions
◦ This allows researchers to examine theory and draw generalizations to large
populations
Efficacy
Five Phase Model of treatment outcome research
- Phase I treatment outcome research – studies are designed to establish whether a therapeutic effect exists in the clinical environment, to estimate its potential magnitude, and to help identify potentially useful treatment protocols
- Phase II treatment outcome research – studies are conducted to determine the appropriateness of the intervention. It helps define for whom the treatment is suitable and for whom it is not.
- Phase III treatment outcome research – studies that use more rigorous experimental designs and greater control
- Phase IV treatment outcome research – explores whether an efficacious intervention is effective in the clinic (sometimes called translational research)
- Phase V treatment outcome research – continues to explore effectiveness but with a greater emphasis on efficiency. These studies identify the types of modifications or applications that are necessary or beneficial for delivering service in a cost-effective manner
studies are designed to establish whether a therapeutic effect exists in the clinical environment, to estimate its potential magnitude, and to help identify potentially
useful treatment protocols
- Phase I treatment outcome research
studies are conducted to determine the appropriateness of the intervention. It helps define for whom the treatment is suitable and for whom it is not.
Phase II treatment outcome research
treatment outcome research studies that use more rigorous experimental designs and greater control
Phase III
treatment outcome research that explores whether an efficacious intervention is effective in the clinic (sometimes called translational research)
Phase IV
treatment outcome research that continues to explore effectiveness but with a greater emphasis on efficiency. These studies identify the types of modifications or applications that are necessary or beneficial for delivering service in a cost-effective manner
Phase V
when the researcher reports a relationship between the intervention and the outcome (or progress) when no relationship (or progress) really exists.
Type 1 error
when the researcher reports that no relationship (or
improvement/progress) exists between the intervention and the outcome, when there really was a relationship or improvement
Type 2 error
Observation
Quantifying measurements
an abstract idea, theme, or subject matter that a researcher wants to measure. Because it is initially abstract, it must be defined.
Construct
The scales of measurement are:
- Nominal Scales
- Ordinal Scales
- Interval Scales
- Ratio Scales
used to categorize characteristics of subjects
Nominal scale
used to classify ranked categories
Ordinal scales
have equal distances between units of measurement but no absolute zero point (e.g., temperature in degrees Celsius)
Interval scales
Demonstrate equal distances between units of measurement and they have an absolute zero point (e.g., response time in milliseconds).
Ratio scales
There is almost always some error in measurement. Measurement error is the difference between the observed score and the true score.
Measurement error
Occurs when the instrument you are using either overestimates or underestimates the true score in one direction (consistently overestimates or underestimates)
Systematic Error
These errors occur by chance and can affect a subject’s score in an unpredictable manner.
Random error
Factors that can contribute to random error include, but are not limited to:
- Fatigue of the subject
- Environmental influences
- Inattention of the subject or rater
the degree of consistency with which an
instrument or rater measures a variable
Reliability
The ratio of the true score variance to the total variance
observed on an assessment
Reliability coefficient
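This ratio comes from classical test theory, where an observed score is modeled as true score plus error. A minimal sketch (the variance values are made-up numbers for illustration):

```python
# Classical test theory: observed score X = true score T + error E,
# so total variance = true-score variance + error variance.
true_var = 8.0    # hypothetical true-score variance
error_var = 2.0   # hypothetical random-error variance

total_var = true_var + error_var
reliability = true_var / total_var  # reliability coefficient

print(reliability)  # 0.8
```

A coefficient of 1.0 would mean all observed variance is true-score variance; the closer to 0, the more the scores are dominated by error.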
An assessment's reliability is empirically evaluated through the following methods:
- Test-retest reliability
- Split-half reliability
- Alternate forms of equivalency reliability
- Internal consistency
A metric indicating whether an assessment provides consistent results when it is administered on two different occasions
Test-retest reliability
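Test-retest consistency is typically quantified by correlating scores from the two administrations. A sketch using Pearson's r on made-up scores:

```python
def pearson_r(x, y):
    # Pearson correlation between two equal-length score lists
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [10, 12, 14, 16]  # hypothetical scores, first administration
time2 = [11, 13, 15, 17]  # same subjects, second administration

print(pearson_r(time1, time2))  # 1.0
```

Here the second scores track the first perfectly (each is one point higher), so the correlation is 1.0; real instruments fall somewhere below that.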
A technique used to assess the reliability of questionnaires by splitting the test items into two halves and correlating scores on the two halves
Split half reliability
When there are multiple versions of the same test, it is important to determine if each
version of the test will provide consistent results.
Parallel forms reliability
This is the extent to which the items that make up an assessment covary or correlate with each other. This may be referred to as the homogeneity of the assessment
Internal Consistency
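Internal consistency is commonly quantified with Cronbach's alpha, which rises as the items covary more strongly. A minimal sketch on made-up item scores:

```python
def cronbach_alpha(items):
    # items: one list per test item, each holding one score per subject
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-subject totals
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# three perfectly covarying items -> maximal homogeneity
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(cronbach_alpha(items))
```

For these perfectly correlated items alpha is 1 (up to floating-point rounding); uncorrelated items would drive it toward 0.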
would occur if a first treatment condition affected participant
performance on a second treatment condition
Carryover effect
occurs when a research participant’s performance is influenced by their awareness of being in a research study
Hawthorne effect
a change in data that occurs over the course of an experiment, arising from factors such as participant fatigue or growing familiarity with assessment and/or intervention materials
Order effect
When you have two or more raters who are assigning scores based on subject observation, there may be variations in the scores.
Inter-rater reliability
refers to test stimuli, methods, or procedures reflecting the assumptions that all populations have the same life experiences and have learned similar concepts and vocabulary.
Content bias
disparity between the language or dialect used by the examiner, the child, and/or the language or dialect expected in the child’s response.
Linguistic bias
means that the instrument being used measures what it is
supposed to measure
Validity
The assumption of validity of a measuring instrument based on its appearance as a reasonable measure of a given variable
Face validity
refers to how well the test items measure the characteristics or behaviors of interest
Content validity
refers to how well the measure correlates with an outside criterion
Criterion validity
Criterion validity includes two types of evidence:
- Concurrent validity
- Predictive validity
refers to how well the measure reflects a theoretical construct of the characteristic of interest
Construct validity
2 Measures of validity
- Sensitivity – one who has the condition will be classified as having the condition
- Specificity – one who does not have the condition will be classified as not having the condition
refers to how well a test detects a condition that is actually present
Test sensitivity
refers to how well a test detects that a condition is not present when it is actually not present
Test specificity
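Both quantities can be computed from a 2×2 table of test results against true status. A sketch on made-up screening data:

```python
# Hypothetical screening data: 1 = condition present, 0 = absent
actual   = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # true status
detected = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]  # test result

tp = sum(1 for a, d in zip(actual, detected) if a == 1 and d == 1)  # true positives
fn = sum(1 for a, d in zip(actual, detected) if a == 1 and d == 0)  # false negatives
tn = sum(1 for a, d in zip(actual, detected) if a == 0 and d == 0)  # true negatives
fp = sum(1 for a, d in zip(actual, detected) if a == 0 and d == 1)  # false positives

sensitivity = tp / (tp + fn)  # proportion of true cases detected
specificity = tn / (tn + fp)  # proportion of non-cases correctly cleared

print(sensitivity)  # 0.75
print(specificity)
```

In this made-up sample the test misses one of four true cases (sensitivity 0.75) and falsely flags one of six non-cases (specificity 5/6).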
There is some interrelationship between reliability and validity. What is it?
If a measurement is valid, meaning it measures what it is supposed to measure, it must also be relatively free from error, so validity implies reliability. The reverse does not hold: a reliable measurement is not necessarily valid, since it may consistently measure the wrong thing.
Can the rater be a source of error?
Yes :(