1st exam Flashcards
Case control study:
a study design that examines a group of people who have experienced an event (usually an adverse event) and a group of people who have not experienced the same event, and looks at how exposure to suspect (usually noxious) agents differed between the two groups. This type of study design is most useful for trying to ascertain the cause of rare events, such as rare cancers.
Cohort Study:
a non-experimental study design that follows a group of people (a cohort), and then looks at how events differ among people within the group. A study that examines a cohort, which differs in respect to exposure to some suspected risk factor (e.g. smoking), is useful for trying to ascertain whether exposure is likely to cause specified events (e.g. lung cancer).
Randomized Controlled Trial:
a trial in which participants are randomly assigned to two or more groups: at least one (the experimental group) receiving an intervention that is being tested and another (the comparison or control group) receiving an alternative treatment or placebo. This design allows assessment of the relative effects of interventions.
Meta-analysis:
A statistical technique that summarizes the results of several studies in a single weighted estimate, in which more weight is given to results of studies with more events and sometimes to studies of higher quality.
Systematic Review:
a review in which specified and appropriate methods have been used to identify, appraise, and summarize studies addressing a defined question. It may, but need not, involve meta-analysis. In Clinical Evidence, the term systematic review refers to a systematic review of RCTs unless specified otherwise.
A Scientific Theory
A scientifically accepted general principle, supported by a substantial body of evidence, that is put forth to explain observed facts.
Qualitative Approach
Assumes truth is subjective and relative to interpretation by the individual.
Data is captured in words from which patterns or themes are discerned.
Design format
Degree to which the investigator actively intervenes with the subjects
Experimental – randomization, intervention group, control group, high control
Quasi-experimental – an intervention but missing randomization or control group; overall a lower level of control.
Non-experimental – observational, report what is found; no intervention
Cross-sectional:
the question can be addressed by using information collected on subjects at a single point in time.
Longitudinal:
more than one measure taken over multiple time points.
Retrospective:
information that has already been collected.
Prospective:
information collected going forward.
Diagnostic Tests
Non-experimental and cross-sectional.
Determines usefulness of test in correctly identifying a pathology or impairment.
Random assignment is not a priority since subjects must have the condition under study.
Strongest design: individuals who appear to have the condition are evaluated with the test as well as with a second test that has been established as the gold standard.
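A gold-standard comparison is usually summarized in a 2×2 table; sensitivity and specificity (the standard accuracy indices, though not named on the card above) fall out of it directly. The counts below are made up for illustration:

```python
# Sketch: index test vs. gold standard in a 2x2 table.
# All counts are hypothetical illustration values, not from any study.
true_positive = 45   # index test positive, gold standard positive
false_positive = 5   # index test positive, gold standard negative
false_negative = 10  # index test negative, gold standard positive
true_negative = 40   # index test negative, gold standard negative

# Sensitivity: proportion of those WITH the condition the test detects.
sensitivity = true_positive / (true_positive + false_negative)
# Specificity: proportion of those WITHOUT the condition the test rules out.
specificity = true_negative / (true_negative + false_positive)

print(f"sensitivity = {sensitivity:.2f}")  # prints sensitivity = 0.82
print(f"specificity = {specificity:.2f}")  # prints specificity = 0.89
```

A test with both values near 1.0 is a good candidate substitute for the gold standard.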
Prognostic
Prospective/retrospective cohort or case control
Assess relationship between a factor and outcome
Non-experimental, but control through statistical adjustment
Cause and effect between factors and outcomes cannot be established, only inferred, when:
The factor preceded the outcome (longitudinal design)
The relationship between factor and outcome is strong
Findings are consistent with previous studies
Clinical Measures
Development of new or modification of existing measurement instruments
Nonexperimental
Often cross-sectional but can be longitudinal if objective is to detect change over time
Repeated administrations determine reliability
Comparison to a superior instrument determines validity
Interventions
Used to determine benefit/harm
Efficacy: desired outcome under ideal conditions
Effectiveness: desired outcome under clinical conditions
Treatment needs to clearly precede the outcome
Experimental & Quasi Experimental designs
Efficacy study:
measuring the extent to which an intervention produces a desired outcome under ideal conditions
Effectiveness study:
measuring the impact of an intervention under usual clinical conditions
Outcomes Research
Focuses on the impact of clinical practice in the real world, i.e., end results of care
Nonexperimental designs, primarily observational format with less control features
Results reflect real-world conditions with confounding conditions
Used as basis for assessing quality of care across settings and disciplines
Frequently retrospective in approach, may be cross-sectional or longitudinal
Multiple groups with differing characteristics can be used
Self-reported outcome measures are used for studies about disability, health status, satisfaction, or quality of life
Primary data
is collected from subjects in real-time
Secondary data
is collected during routine business or a prior research activity
Power
is the probability that a statistical test will identify a relationship or difference
Components of the Research Article
Introduction
Methods
Results
Discussion
Introduction
Background/brief lit review
Purpose statement /Hypothesis
Methods
Subjects Study Design Variables Techniques/methods Statistical analyses
Results
Results and only results
Discussion
Highlight main findings
Discuss how data fits in literature
Summary and Conclusion including clinical implications
Null hypotheses
are predictions that no difference or relationship between variables will be demonstrated based on the research intervention.
Research hypotheses
are predictions of what the investigator thinks will happen or what the relationship between x and y is.
Impact factor
a measure of the frequency with which the “average article” in a journal has been cited in a given period of time.
Variables
Characteristic of an individual, object, or environmental situation that may take on different values:
Dependent Variable:
(or response variable) what you measure in the clinical research study and what is affected during the study.
Independent Variable:
something you would like to evaluate with respect to how it affects something else.
Primary Independent Variable:
the independent variable the investigator is most interested in.
Secondary Independent Variable
a factor other than the primary independent variable that may influence the dependent variable.
“Extraneous” Variable: analogous to the definition for secondary independent variable.
Nominal:
classifies objects or characteristics but lacks rank order and a known equal distance between categories.
Ordinal:
classifies objects or characteristics in rank order but lacks a known equal distance between categories; may or may not have a natural zero point.
Interval:
classifies objects or characteristics in rank order with a known equal distance between categories but lacks a known empirical zero point.
Ratio:
classifies objects or characteristics in rank order with a known equal distance between categories and a known empirical zero point.
Cut-Off Scores
A cut-off score designates a positive or negative test outcome. This information can be used to classify individuals into groups such as minimal, moderate, or severe impairment.
Normative data
represent scores drawn from published literature. Normative data provide “normal” values for specific variables within a population. This type of research typically appears in validation studies and therefore may not represent the full range of outcomes clinicians may encounter; however, these data can provide approximate guidelines. Whenever possible, normative data are presented alongside data collected from other measures researchers or clinicians have used in the course of their work.
Norm-referenced standard:
scores are compared to a group’s performance in order to judge an individual’s performance, i.e., compared to scores from previously tested subjects (e.g., comfortable gait speed for 60-69-year-old males should be…).
Criterion-referenced standard:
scores are compared to an absolute standard in order to judge an individual’s performance; clinical examples include discharge criteria and the threshold for passing the physical therapy licensure exam.
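A norm-referenced comparison is often expressed as a z-score against published normative values. The sketch below uses made-up illustrative numbers, not actual norms:

```python
# Norm-referenced comparison: where does one patient's score fall
# relative to a normative group? All values here are hypothetical.
norm_mean = 1.33       # m/s, illustrative normative mean gait speed
norm_sd = 0.20         # m/s, illustrative normative standard deviation
patient_speed = 1.05   # m/s, this patient's comfortable gait speed

# z-score: how many normative SDs the patient sits from the norm mean
z = (patient_speed - norm_mean) / norm_sd
print(f"z = {z:.2f}")  # prints z = -1.40
```

A criterion-referenced judgment, by contrast, would simply compare `patient_speed` against a fixed threshold (e.g., a discharge criterion), with no reference group involved.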
Test-retest Reliability
Establishes that an instrument is capable of measuring a variable with consistency.
Clinical Bottom Line: If you are planning to use an instrument for individual decision-making, it is recommended that you use an instrument with an ICC > 0.9.
If you are planning to use the instrument to measure progress of a large group (as in research), an instrument with an ICC > 0.7 is acceptable.
Interrater Reliability
Determines variation between two or more raters who measure the same group of subjects.
Excellent Reliability: ICC > 0.75
Adequate Reliability: ICC 0.40 to 0.74
Poor Reliability: ICC < 0.40
Intrarater Reliability
Determines stability of data recorded by one individual across two or more trials.
Excellent Reliability: ICC > 0.75
Adequate Reliability: ICC 0.40 to 0.74
Poor Reliability: ICC < 0.40
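The ICC bands above translate directly into a small helper; the band labels follow the cards, while the function name is an illustrative choice, not from the source:

```python
def classify_icc(icc: float) -> str:
    """Map an ICC value to the reliability bands given on the cards:
    > 0.75 excellent, 0.40-0.74 adequate, < 0.40 poor."""
    if icc > 0.75:
        return "excellent"
    if icc >= 0.40:
        return "adequate"
    return "poor"

print(classify_icc(0.92))  # prints excellent
print(classify_icc(0.55))  # prints adequate
print(classify_icc(0.20))  # prints poor
```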
Responsiveness
The ability of a measure to detect change in the phenomenon of interest.
Standard Error of Measurement (SEM)
Clinical Bottom Line: The SEM is the amount of error that you can consider as measurement error.
SEM = standard deviation of the 1st test × √(1 − ICC)
Minimal Detectable Change (MDC)
Clinical Bottom Line: The MDC is the minimum amount of change in a patient’s score that ensures the change isn’t the result of measurement error.
Tells you whether a true change has occurred. If the change on an outcome measure exceeds the MDC, then you can be confident that the change in score is unlikely to have occurred without a true clinical change.
MDC = 1.96 × SEM × √2
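The SEM and MDC formulas chain together; a minimal sketch with hypothetical values for the first-test standard deviation and ICC:

```python
import math

# Hypothetical test-retest data: SD of the first test and its ICC.
sd_first_test = 4.0  # points, illustrative value
icc = 0.90           # illustrative test-retest ICC

# SEM = SD of the 1st test x sqrt(1 - ICC)
sem = sd_first_test * math.sqrt(1 - icc)

# MDC (at the 95% confidence level) = 1.96 x SEM x sqrt(2)
mdc = 1.96 * sem * math.sqrt(2)

print(f"SEM = {sem:.2f}")  # prints SEM = 1.26
print(f"MDC = {mdc:.2f}")  # prints MDC = 3.51
```

With these numbers, a patient would need to change by more than about 3.5 points before the change exceeds what measurement error alone could produce.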
Minimally Clinically Important Difference (MCID)
Clinical Bottom Line: The MCID is a published value of change in an instrument that indicates the minimum amount of change required for your patient to feel a difference in the variable you are measuring.
Floor Effects
Floor effects occur when a measure’s lowest score is unable to capture a patient’s level of ability. For example, a measure of caregiver depression may not be sensitive enough to detect low or intermittent levels of depression among caregivers.
Ceiling Effects
Ceiling effects occur when a measure’s highest score is unable to assess a patient’s level of ability. This may be particularly common for measures used over multiple occasions. For example, a patient’s pre-rehab score may be in range at the initial evaluation, but the patient’s ability exceeds the measure’s highest score over time. Therefore, it is unable to accurately assess progress as the patient improves.
Validity
The ability of a measure to capture what it is intended to capture.
Face Validity
An assumption that an instrument is valid based on its appearance (i.e., it is a reasonable measure of the variable being assessed).
Internal Consistency
The extent to which items in the same instrument all measure the same trait. Typically measured using Cronbach’s alpha.
Predictive Validity
Indicates that the outcomes of an instrument predict a future state or outcome.
Criterion Validity
Reflects the degree to which a measure’s scores are related to scores obtained with a reference (gold) standard; often indicates that the test could be used instead of the gold standard.
Convergent Validity
Convergent validity refers to the degree to which two measures demonstrate similar results. For example, a new measure may assess gait speed using a new technique. Validation of this new measure would include outcomes obtained from established measures of gait speed. The degree to which these two assessments of gait speed converge provides evidence of the new measure’s validity.
Discriminant Validity
Reflects the degree to which an instrument can distinguish between or among different concepts or constructs
Content Validity
The items that make up an instrument adequately sample the universe of possible items that compose the construct being measured.
Construct Validity
Establishes the ability of an instrument to measure an abstract concept and the degree to which the instrument reflects the theoretical components of it. Includes convergent and discriminant validity.
Research Validity:
the degree to which a study appropriately answers the question being asked; used to describe any study type
Internal Validity:
the degree to which a change in the outcome can be attributed to the experimental intervention rather than to extraneous factors
External Validity:
the degree to which research results may be applied to other individuals and circumstances outside of a study
Assignment
Problem: the process by which subjects are placed into two or more groups in a study; inadequate assignment procedures may threaten research validity (internal validity) by producing groups that are different from one another at the start of the study. Also referred to as allocation.
Solutions:
1. Use of randomization techniques to distribute characteristics equally
2. Statistical adjustments for baseline differences (not just a description of the characteristics)
3. Adequately defined inclusion/exclusion criteria
Attrition
Problem: refers to subjects who stop participating in a study for any reason; loss of subjects may threaten research validity (internal validity) by reducing the sample size and producing unequal groups. Also referred to as dropout or mortality.
Solutions:
1. Recruit more subjects to replace those lost
2. Document and report subject characteristics of those lost
3. Reexamine groups statistically following attrition of subjects to determine if they differ
4. Do not arbitrarily remove subjects to equalize groups following attrition (bias)
5. Statistical procedures can be used to account for dropouts (estimation of missing data and/or intention-to-treat analysis)
History
Problem: a threat to research validity (internal validity) characterized by events that occur during a study that are unrelated to the project, but may influence its results.
Solutions:
1. Randomize subjects to both a treatment and a control group so that you
distribute the “history threats” more equally
2. Schedule around predictable events that may influence outcomes
Instrumentation
Problem: a threat to research validity (internal validity) characterized by problems with the tools used to collect data that may influence the study outcomes.
Solutions:
1. Use validated instruments (preferably instruments whose validation results have been published in a peer-reviewed journal)
2. Calibrate against a known measure
3. Proper training of personnel- description of orientation process
4. Use of statistics to validate methods/instruments used
5. Maintain consistent conditions throughout the study; implement a
protocol for collection of data
Maturation
Problem: a threat to research validity (internal validity) characterized by the natural processes of change that occur over time in humans and that may influence a study’s results independent of any other factors; may include physical, emotional, or psychological progress and/or decline
Solutions:
1. Randomize subjects to both a treatment and a control group.
2. Study subjects at the same time each day
3. Adequate rest is provided between repeated measures to reduce
fatigue and/or loss of interest
4. Specific intervention techniques are provided in random order to avoid the practice or familiarity effect
Testing
Problem: a threat to research validity (internal validity) characterized by a subject’s change in performance due to growing familiarity with the testing or measurement procedure or to inconsistent implementation of the procedures by study personnel.
Solutions:
1. Expose subjects to several testing procedures in an effort to avoid a learning effect
2. Average scores of multiple measures
3. Clearly describe the testing procedures to avoid unwanted influences-
use a script
4. Test administrators should be trained and competent
Compensatory Equalization of Treatments
Problem: a threat to research validity (internal validity) characterized by the purposeful or inadvertent provision of additional encouragement or practice to subjects in the control (comparison) group in recognition that they are not receiving the experimental intervention.
Solutions:
- Mask/blind the investigator/physical therapist such that they do not know what the interventions are in each group
- Provide a clear and explicit protocol for intervention administration
- Eliminate or minimize communication about the intervention between investigators or physical therapists
- Separate groups to different locations receiving different interventions
Compensatory Rivalry or Resentful Demoralization
Problem: a threat to research validity (internal validity) characterized by subjects in the control (comparison) group who, in response to knowledge about group assignment, change their behavior in an effort to achieve the same benefit as subjects in the experimental group.
Solutions:
- Keep groups separated to eliminate/minimize communication between groups
- Mask/blind both subjects and investigators to the intervention
- Provide clear and strong instructions about the importance of adhering to the intervention
Diffusion or Imitation of Treatment
Problem: a threat to research validity (internal validity) characterized by a change in subject behavior that may occur as a result of communication among members of different study groups.
Solutions:
- Keep groups separated to eliminate/minimize communication between groups
- Mask/blind both subjects and investigators to the intervention
- Provide clear and strong instructions about the importance of adhering to the intervention
Statistical Regression to the Mean
Problem: a threat to research validity (internal validity) that may occur when subjects produce an extreme value on a single test administration; on subsequent testing, scores for these individuals will tend to move toward the mean value for the measure.
Solutions:
- Eliminate outliers from the baseline scores
- Take repeated baseline measures and average them in order to reduce extreme values
Investigator Bias
Results when investigators purposefully or inadvertently design, or interfere with, the study’s procedures such that the results systematically deviate from a true finding
Sources of Bias:
- Researchers responsible for enrolling subjects may respond to additional information by placing (or allocating) subjects into groups rather than following the preestablished assignment protocol.
- Researchers responsible for application of tests and measures may produce inaccurate results due to their knowledge of subjects’ group assignment or prior test results.
Threats to construct validity include:
- Construct under-representation: lack of sufficient definition of the variable
- Subject behavior changes in response to perceived or actual expectations of the investigators (experimenter expectancies)
- Interaction between multiple treatments or if the testing itself becomes a treatment
What is EBP?
EBP is the integration of clinical expertise, patient values, and the best research evidence into the decision making process for patient care.
Theoretical model
In clinical research a Theoretical model is often used to visually describe relationships among concepts and constructs; provides a framework from which predictions may be made and research conducted.
Quantitative Approach
Quantitative: The traditional scientific method.
Quantitative studies represent the vast majority of what you’ll draw upon as a clinician.
Assumes there is an objective truth that can be revealed by independent investigators.
Statistics are used to determine the answer.
Greater control (rigor) provides a higher expectation of quality.