Exam 2 Flashcards
Design classification degree of experimental control
- in a true experimental design, subjects are RANDOMLY assigned to at least 2 COMPARISON groups
- experiment enables control over most threats to INTERNAL VALIDITY and provides the strongest evidence for CAUSAL relationships
- randomized controlled trial (RCT) is the gold standard of true experimental design
Are quasi-experimental designs true experiments?
- NO
- because they lack random assignment, a comparison group, or both
Types of group assignment for design classifications
- completely randomized design
- randomized block design
- repeated-measures design
Completely randomized design group assignment
- between subject design
- subjects assigned to groups based on a randomization process
Randomized block design group assignment
- subjects classified according to an attribute (blocking variable), e.g., males vs. females
- then randomized to treatment groups within each block (e.g., males randomized to treatment and control, and females randomized to treatment and control); see the sketch below
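A minimal sketch of randomized block assignment (the subjects, block sizes, and group labels are made up for illustration): subjects are first grouped by the blocking variable (sex), then randomized to treatment or control within each block.

```python
import random

# hypothetical subjects with a blocking attribute (sex)
subjects = [
    {"id": 1, "sex": "M"}, {"id": 2, "sex": "M"}, {"id": 3, "sex": "M"}, {"id": 4, "sex": "M"},
    {"id": 5, "sex": "F"}, {"id": 6, "sex": "F"}, {"id": 7, "sex": "F"}, {"id": 8, "sex": "F"},
]

random.seed(1)  # reproducible example
for sex in ("M", "F"):
    block = [s for s in subjects if s["sex"] == sex]
    random.shuffle(block)                  # randomize order within the block
    half = len(block) // 2
    for s in block[:half]:
        s["group"] = "treatment"           # first half of the shuffled block
    for s in block[half:]:
        s["group"] = "control"             # second half

for s in subjects:
    print(s["id"], s["sex"], s["group"])
```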
Repeated-measures design group assignment
- within-subjects design
- subjects act as own control
Variation with number of independent variables/factors
- single-factor designs have one independent variable
- multi-factor designs have 2+ independent variables
Single-factor design (one-way design) for independent groups
- 1 independent variable is investigated
- 1 or more dependent variables
Pretest-Posttest control groups design
- RCT with 2 groups based on random assignment
- independent groups = treatment arms
- testing pre- and post-treatment
- changes in experimental group are attributable to the treatment
- establishes cause-and-effect relationship
2-group pretest-posttest design
- comparison group receives a second form of the intervention
- 2 experimental groups formed by random assignment
- used when a no-treatment control group is not feasible or ethical
- compares new treatment with standard care
- the pre-to-post difference can be quantified for each subject as a change ("delta") score; see the sketch below
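A minimal sketch of computing per-subject change ("delta") scores and comparing the mean change between the two arms; all values are made up for illustration.

```python
from statistics import mean

pre_new  = [20, 22, 19, 25]   # new-treatment arm, pretest scores
post_new = [28, 30, 25, 33]   # new-treatment arm, posttest scores
pre_std  = [21, 23, 20, 24]   # standard-care arm, pretest scores
post_std = [24, 26, 22, 27]   # standard-care arm, posttest scores

delta_new = [post - pre for pre, post in zip(pre_new, post_new)]
delta_std = [post - pre for pre, post in zip(pre_std, post_std)]

print("mean change, new treatment:", mean(delta_new))
print("mean change, standard care:", mean(delta_std))
```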
For GroupXTime interaction
- tests whether the change over time (pre vs. post) differs between the groups, i.e., compares the groups' changes to each other
- can also use a 2-way mixed design
- main effects: groups, time
- interaction: groupsXtime
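A minimal sketch of a 2-way mixed analysis for this design, assuming the third-party pingouin library is available and using made-up scores; it reports the group and time main effects and the group x time interaction.

```python
import pandas as pd
import pingouin as pg  # assumed available: pip install pingouin

data = pd.DataFrame({
    "id":    [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group": ["exercise"] * 6 + ["control"] * 6,   # between-subjects factor
    "time":  ["pre", "post"] * 6,                  # repeated (within-subjects) factor
    "score": [20, 29, 22, 31, 21, 27, 19, 21, 23, 24, 20, 22],
})

aov = pg.mixed_anova(data=data, dv="score", within="time",
                     subject="id", between="group")
print(aov)  # rows for the group effect, the time effect, and the group*time interaction
```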
Multi-group pretest-posttest control group design
- multiple intervention groups
- includes a control group
- conclude that treatment 1 is better than treatment 2 or vice versa AND that it is or is not better than no treatment
Internal validity with pretest and post-test designs
- strong internal validity
- initial EQUIVALENCE of groups can be established by pretest scores (important for inferring causality)
- SELECTION BIAS controlled because of random assignments
- HISTORY, MATURATION, TESTING, INSTRUMENTATION EFFECTS SHOULD AFFECT ALL GROUPS EQUALLY
Analysis of pretest-posttest designs
- often analyzed using CHANGE scores (diff between posttest and pretest)
- also can use analysis of covariance (ANCOVA) to compare posttest scores (using pretest scores as covariates)
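A minimal sketch of both analysis options, using made-up data and assuming the scipy and statsmodels libraries are available.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.DataFrame({
    "group": ["tx"] * 5 + ["ctrl"] * 5,
    "pre":   [20, 22, 19, 25, 21, 21, 23, 20, 24, 22],
    "post":  [28, 30, 25, 33, 29, 24, 26, 22, 27, 25],
})

# Option 1: compare CHANGE scores (posttest - pretest) between groups
df["change"] = df["post"] - df["pre"]
t, p = stats.ttest_ind(df.loc[df.group == "tx", "change"],
                       df.loc[df.group == "ctrl", "change"])
print(f"change scores: t = {t:.2f}, p = {p:.3f}")

# Option 2: ANCOVA on posttest scores with pretest as the covariate
ancova = smf.ols("post ~ pre + C(group)", data=df).fit()
print(ancova.summary().tables[1])  # the C(group) coefficient is the adjusted group effect
```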
Posttest only control group design
- same as pretest-posttest control group design, EXCEPT NO PRE-TEST
- used when dependent variables can only be assessed following treatment (i.e. length of stay in hospital)
- used when pretest is impractical or detrimental
- is an experimental design involving randomization and comparison groups (STRONG INTERNAL VALIDITY)
- assumes groups are equivalent prior to treatment (works best with large samples to increase probability of equivalency)
Multi-factor design for independent groups
- single factor designs have 1 independent variable (with 1+ levels), and do not account for interactions of several variables
- multi-factor designs have 2+ independent variables
Factorial Design
- incorporates 2+ independent variables, with subjects randomly assigned to various combinations of levels of the two variables
- two-way (two-factor) design has 2 independent variables
- three-way (three-factor) design has 3 independent variables
Repeated measures Design
- up to now considered 2 independent GROUPS
- experimental and control groups created by RANDOM ASSIGNMENT and by BLOCKING
- can also use repeated measures design where one group of subjects is tested under ALL CONDITIONS, each subject acting as their OWN CONTROL (aka within-subject design)
Advantage of repeated measures design
- subject differences are controlled
- differences between experimental and control groups are nullified because no groups used
- physiological and other factors remain CONSTANT throughout experiment
- subjects acting as their own controls provides most equivalent “Comparison group” possible
Disadvantages of repeated measures designs
- LEARNING/PRACTICE effects when one person repeats measurements over and over
- CARRYOVER effects when exposed to multiple treatment conditions (must allow enough time for dissipation of previous effects)
- may NOT be TRUE EXPERIMENTS because NO RANDOMIZED COMPARISON GROUPS
- however, if they incorporate randomization of the order of repeated treatments/interventions then can be considered experiment
Single-factor designs for repeated measures
- one-way repeated measures design
- one group of subjects is exposed to all levels of one independent variable
- resembles an experiment when the order in which each subject receives the conditions is randomized
Solution to problem of order effects
- randomize order of conditions/interventions for each subject so there is no bias in choosing order of testing
two-way design with 2 REPEATED MEASURES for multi-factor designs
- 2 repeated measures (= 2 independent variables, e.g., type of lift and orthosis)
- each person exposed to 4 test conditions (2-way design…2X2 design)
Mixed Design for multi-factor repeated measures
- 2 independent variables (e.g., exercise is the IND (between-subjects) factor (experimental and control), and time is the REPEATED factor (3 time periods during tests))
- 2 way design or 2X3 design
Multi-factor designs
- two-way design with 2 repeated measures
- mixed design
Group variable
- independent factor/variable
- because its 2 levels (e.g., experimental vs. control) are independent groups of subjects
Time variable
- is a repeated factor
- because measures at time 1, time 2, time 3, etc
Two-way factorial design
- incorporates TWO INDEPENDENT VARIABLES
- effect of INTENSITY (vigorous/moderate) on exercise behavior
- effect of LOCATION (home/community center) on exercise behavior
- 2X2 design means (2 independent variables and 2 levels of each independent variable = 4 groups total)
Main Effects of a two-way factorial design
- is there an effect of moderate versus vigorous exercise?
- is there an effect of exercising at home or in community?
- this examines the MAIN EFFECT of each independent variable
Interactions of Two-way factorial design
- can examine INTERACTION EFFECTS between 2 independent variables
- effect of 1 variable varies at different LEVELS of the second variable
- i.e. maybe moderate exercise is more effective in changing exercise behavior but only when performed at a community center
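A minimal sketch of how the main effects and the interaction in this 2X2 example could be tested with a two-way ANOVA, using made-up exercise-behavior scores and assuming the statsmodels library.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "intensity": ["moderate"] * 6 + ["vigorous"] * 6,
    "location":  (["home"] * 3 + ["community"] * 3) * 2,
    "behavior":  [4, 5, 4, 8, 9, 8, 5, 6, 5, 6, 5, 6],  # e.g., exercise sessions per week
})

# main effects of intensity and location plus the intensity x location interaction
model = smf.ols("behavior ~ C(intensity) * C(location)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```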
Randomized Block Design
- used when there is a concern that an extraneous factor such as GENDER might INFLUENCE DIFFERENCES BETWEEN GROUPS
- build the variable into the design as an independent variable
Quasi-experimental designs
- similar to experimental designs but lack random assignment, comparison group, or both
- may involve non-equivalent groups
- may be a reasonable alternative to RCT
- conclusions drawn must take into account biases of the sample
One-group designs pretest-posttest
- effect of treatment is determined by change in pre and post scores
- pretest –> intervention –> posttest
- vulnerable to threats to internal validity because no control group (i.e. history, maturation, testing)
One-way repeated measure design over time
- effect of treatment over time
- pretest –> intervention –> posttest 1 –> posttest 2
- no control group so internal validity threatened
Multi-group design pretest-posttest
- non-equivalent pretest-posttest control group design
- similar to pretest-posttest experimental design EXCEPT subjects not assigned to groups randomly (i.e. volunteers self-select groups)
- EXP: pretest-intervention-posttest
- CONTROL: pretest-nointervention-posttest
Multi-group design posttest only control group design
- static group comparison
- EXP: intervention –> posttest
- CON: no intervention –> posttest
- NOTICE NO PRETEST
Single subject designs
- draw conclusion on treatment effects based on 1 patient’s response
- controlled experimental approach
- independent variable is treatment
- dependent variable is target behavior (outcome)
- also called (N of 1 study, or time series designs)
Structure of a single subject study
- Repeated Measurements: Each session; observe trends
- At least 2 testing phases: Baseline and Intervention (target behavior is measured across both phases on multiple occasions)
- Baseline phase: state of target behavior over time in the absence of treatment (control conditions)
How do single subject designs differ from traditional experimental designs?
-Multiple assessments in baseline and intervention phases
Ethical issues regarding baseline conditions
- withholding treatment
When treatment starts, any change from baseline to intervention phase is attributed to what?
- the intervention
Baseline data
- comparison for evaluating potential cause and effect relationship between intervention and target behavior
- Baseline period = A
- Intervention period = B
- A-B design
Baseline characteristics
- 2 baseline data characteristics are important for interpreting clinical outcomes
- Stability (consistency of response over time): stable or variable
- Trend: accelerating or decelerating
Length of phases
- best to have equal phase length
- often 1 week per phase (take daily measurements, minimum of 3-4 per phase)
- greater number of data points easier it will be to identify trends
- often measures can be taken more frequently than daily if behavior changes rapidly
- More than a single session
Target Behaviors
- Choose clinically relevant outcome measures for a particular patient
- e.g. Strength, ROM, Gait speed, Balance measures, Pain
Limitations of A-B design
- Experiments can control for threats to internal validity
- To do this in the A-B single subject design is more challenging
- Other treatments/events (history effects)
- What other evidence can we include to strengthen design control?
- to increase confidence that treatment caused the changes in target behavior
Additional control for A-B design
- Replication of effects
- Repeat phases
- withdrawal designs: treatment vs. no treatment
- Withdrawing and reinstating baseline and treatment conditions
- Withdraw intervention and show that target behavior occurs only in presence of treatment
- 2nd baseline period (A-B-A design)
- Could also include a 2nd intervention phase (A-B-A-B design)
Visual Data Analysis
- Level (last data point of a phase to first data point of next phase)
- Trend (direction of change in a phase)
- Accelerating or decelerating
- Slope of a trend (rate of change in the data)
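A minimal sketch of putting numbers on these visual-analysis features (level change and the trend/slope in each phase) for a hypothetical A-B data set, assuming numpy is available.

```python
import numpy as np

baseline     = [10, 11, 10, 12, 11]   # phase A: repeated target-behavior measurements
intervention = [14, 16, 17, 19, 20]   # phase B

# Level: last data point of phase A to first data point of phase B
level_change = intervention[0] - baseline[-1]

# Trend/slope: rate of change within each phase (least-squares line)
slope_a = np.polyfit(np.arange(len(baseline)), baseline, 1)[0]
slope_b = np.polyfit(np.arange(len(intervention)), intervention, 1)[0]

print(f"level change A -> B: {level_change}")
print(f"baseline slope: {slope_a:.2f} per session, intervention slope: {slope_b:.2f} per session")
```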
Single Subject Data Generalization
- Single subject research can provide data for clinical decision making
- Not enough to show effect during intervention period on a single patient
- Must also be able to show changes in the target behavior will occur in other individuals
- Generalization: external validity for the single case
- Assume treatment will be effective in others with similar characteristics
Observational Designs
- no manipulation of variables as in experimental designs
- exploratory or descriptive
Exploratory Research
- systematic investigation of RELATIONSHIPS among variables (e.g., association of leg weakness and falls)
- not used to establish cause-and-effect relationships between variables
What are the 2 ways to conduct exploratory research?
- retrospectively and prospectively
Prospective conduction of exploratory research
- variables measured in the present and follow subjects in study
Retrospective conduction of exploratory research
- Use of data that have already been collected
- Medical records, databases
- Researcher can’t control data collection methods
- Prospective studies are more reliable than retrospective studies
Longitudinal research
- follow a cohort over time taking repeated measurements
- can observe growth and change in individuals over time
- often involve large cohorts followed over long periods of time (e.g., the Framingham Heart Study)
- threats to internal validity relate to repeat testing and attrition
Cross-sectional research
- gather data as a “snap shot” in time
- very efficient
- all subjects tested more or less at same time
Correlational Study
- foundation of exploratory studies is the process of correlation (degree of association)…covariation in data (extent to which one variable varies with another variable)
- purpose is to describe the nature of existing relationship among variables
- look at several variables at once to see which are related
- can make predictions (predictive correlation study) based on observed relationships between variables
- e.g., predicting cholesterol level from age, diet, gender, genetics
- Regression (the statistical procedure used for prediction); see the sketch below
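A minimal sketch of a correlation and a simple predictive regression for one of the variable pairs above (cholesterol vs. age), using made-up values and assuming scipy/numpy are available.

```python
import numpy as np
from scipy import stats

age         = np.array([35, 42, 50, 28, 61, 47, 55, 39])
cholesterol = np.array([180, 195, 210, 172, 230, 205, 220, 188])

r, p = stats.pearsonr(age, cholesterol)                    # degree of association
slope, intercept, *_ = stats.linregress(age, cholesterol)  # prediction equation

print(f"r = {r:.2f} (p = {p:.3f})")
print(f"predicted cholesterol = {intercept:.1f} + {slope:.2f} * age")
```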
Case-control study
- retrospective
- look at exposure to some sort of substance/condition (e.g., smoking vs. not smoking, exercise vs. no exercise)
- example Q: is there a relationship between heart disease and smoking
- group 1: heart disease
- group 2: no heart disease
- not randomized
Purpose of a case-control study
- to determine if the frequency of an exposure (e.g., poor nutrition) is different in cases and controls
- Choice of controls is critical: Match cases and controls for age, gender, SES etc
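The notes do not name a statistic here, but the usual measure of association in a case-control study is the odds ratio; a minimal sketch with made-up counts:

```python
# 2x2 table: exposure (smoking) by case/control status (hypothetical counts)
exposed_cases, unexposed_cases = 60, 40        # heart disease group
exposed_controls, unexposed_controls = 30, 70  # no heart disease group

# odds of exposure among cases divided by odds of exposure among controls
odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)
print(f"odds ratio = {odds_ratio:.2f}")
```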
Selection bias with case-control study
- choose cases and controls regardless of exposure history
- Beware of misclassification (i.e., subjects wrongly classified as cases or controls)
Observation bias with case-control study
- difference in the way info about disease or exposure is obtained from the groups
Interviewer bias with case-control study
- person collecting data elicits, records, or interprets info differently for cases than for controls
Recall bias with case-control study
- subjects remember exposure differently than the reality
Cohort Studies
- follows a group(s) over time (prospective)
- group 1: exercisers
- group 2: sedentary (matched)
- look at rate of falls over time
- not randomized
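A minimal sketch of the relative risk (the strength-of-association measure mentioned under causality below) for this cohort example, using made-up fall counts.

```python
fallers_exercise, n_exercise = 12, 100    # exercisers who fell / total exercisers followed
fallers_sedentary, n_sedentary = 30, 100  # matched sedentary subjects who fell / total followed

risk_exercise  = fallers_exercise / n_exercise
risk_sedentary = fallers_sedentary / n_sedentary
relative_risk  = risk_exercise / risk_sedentary

print(f"relative risk of falling (exercisers vs. sedentary) = {relative_risk:.2f}")
```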
Causality in observational studies
- RCT (experimental): cause and effect relationships
- case control and cohort studies do not involve experiments or manipulation of variables
- causation (cause and effect i.e. did the exposure cause disease) is established by other methods
Causality
- Establish a time sequence: exposure precedes disease
- Strength of association: relative risk
- Biologic credibility
- Consistency with other studies
- Dose-response relationship
- meeting 3 of the 5 criteria = a strong case for causation
Methodological studies and correlation
- use correlational methods to examine reliability and validity of measuring instruments
Historical studies and correlation
- reconstruct the past on the basis of archives and other records to suggest relationships of historical interest to a discipline
Survey
- a series of questions
- questionnaire (written or electronic)
Surveys can be used in what studies
- experimental
- exploratory
- descriptive
Interview
- ask questions and record answers
- structured and unstructured
Structured Interview
- standard set of questions
- same questions in same order to all subjects
- same response choices
Unstructured Interview
- less formal
- open ended
- conversational
- often used in qualitative studies
Questionnaires
- structured surveys
- self administered
- computerized or pen/pencil
- efficient as completed on subject’s own time
- reduced bias from interactions with an interviewer
- disadvantage is the potential for misunderstanding or interpreting questions
- mail, electronic distribution
Return rate on Questionnaires
- low return rates
- 30-60%
- limit external validity of results
Data collected via interview or questionnare are based on what
- self-reported data
- no direct observation of the subject's behavior by the researcher
- potential for bias or inaccuracy
- recall bias
Survey Design
- delineate the overall research question
- what are the objectives of the study?
- outline of the questionnaire (relate to objectives)
- review existing instruments
- write questions that address each of the objectives
What should you do with the first draft of a survey?
- distribute a draft to colleagues
- ask for feedback
- revise
- distribute again
- helps establish content validity of the instrument
- should also do a pilot test on a small sample (5-10)
Do surveys need a consent form?
- no
- completing and returning the questionnaire is taken as consent
Scales
- provide rating of degree to which subject possesses a characteristic/attitude/value
Likert Scale
- strongly agree, agree, neutral, disagree, strongly disagree
- Likert items coded 1-5
- calculate overall score by adding answers
- 1 item does not carry more weight than others (see the scoring sketch below)
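A minimal sketch of Likert scoring (items coded 1-5 and summed with equal weight), using one made-up respondent.

```python
coding = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]  # 5 items
total = sum(coding[r] for r in responses)   # every item carries equal weight
print(f"overall score: {total} out of {5 * len(responses)}")
```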
Types of scales
- Likert
- visual analog
Visual analog scale
- subject places a mark on a 100 mm line
Delphi technique
- experts complete multiple rounds of questionnaires (e.g., 3 rounds)
- researcher reviews and distributes findings after each round
- eventually come to consensus on an issue
- i.e. what should entry level knowledge be for a particular topic
Analysis of survey data
- code the data
- e.g. male = 1, female = 0
- fear of falling = 1, no fear = 0
descriptive statistics summarize responses
- mean age; years of education; etc
- categorical data: frequencies/percentages, e.g. 25 males (33% of sample)
- scores on a scale may be summed
- e.g. mean ABC score of 84%
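A minimal sketch of coding survey responses and summarizing them with descriptive statistics, using made-up data and assuming the pandas library is available.

```python
import pandas as pd

df = pd.DataFrame({
    "sex":             ["male", "female", "female", "male", "female"],
    "fear_of_falling": ["yes", "no", "yes", "no", "no"],
    "age":             [72, 68, 75, 70, 66],
})

# code categorical answers (male = 1, female = 0; fear = 1, no fear = 0)
df["sex_code"]  = (df["sex"] == "male").astype(int)
df["fear_code"] = (df["fear_of_falling"] == "yes").astype(int)

print("mean age:", df["age"].mean())
print(df["sex"].value_counts(normalize=True) * 100)  # percentage in each category
print("proportion reporting fear of falling:", df["fear_code"].mean())
```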
Systematic Reviews ask what
- extremely specific types of questions
Systematic Reviews
- a systematic review involves the application of scientific strategies, in ways that limit bias, to the assembly, critical appraisal, and then synthesis of all relevant studies that address a specific clinical question
Systematic Review Process
- see the process diagram on slide 4 of the Week 7 systematic reviews slides
Narrative vs. Systematic Reviews
- narrative: broad questions, search strategy and selection of articles not usually described, appraisal not always rigorous, conclusions usually qualitative/descriptive
- systematic: focused question, search strategy described in detail, rigorous selection based on specific criteria, very rigorous appraisal, conclusions may be qualitative or quantitative
Selection Criteria for Systematic Process
- subjects of the review are the studies
- specify inclusion/exclusion criteria
- based on types of studies, types of participants, types of interventions, types of outcome measures
Question for Systematic Process
- Question is very specific
- Well described purpose statement
grey literature
- unpublished studies
Evaluate Methodologic Quality
- evaluate quality of selected studies (critical review, record on a form, evaluate quality of design and data analysis)
Types of study Bias
- selection bias
- performance bias
- attrition bias
- detection bias
Selection Bias
- were subjects selected for and assigned to the control and experimental groups in a comparable, unbiased way?
Performance Bias
- systematic differences between groups in the care received or performance expected, apart from the intervention being studied (e.g., expecting better performance from one group than another)
Attrition Bias
- did one group have a higher # of dropouts than the other?
Detection Bias
- systematic differences between groups in how outcomes are detected or measured (e.g., unblinded outcome assessors)
Jadad Scale
- 3 questions
- was study randomized? (1 pt if yes)
- was study described as double blind? (1 pt yes)
- was there a description of withdrawals and dropouts? (1 pt if yes)
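A minimal sketch of tallying these three items for a hypothetical study (note that the full Jadad scale can award additional points for the appropriateness of randomization and blinding, which these notes do not cover).

```python
def jadad_items(randomized: bool, double_blind: bool, dropouts_described: bool) -> int:
    """One point for each 'yes' answer to the three screening questions."""
    return int(randomized) + int(double_blind) + int(dropouts_described)

print(jadad_items(randomized=True, double_blind=False, dropouts_described=True))  # 2
```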
PEDro Scale
- 11 items
- similar to the Jadad scale
Data Synthesis of Methodologic Quality
- Heterogeneity (dissimilarity) or homogeneity of the included studies (variability across studies)
- Composition of treatment groups: Inclusion /exclusion criteria
- Design of study: Including length of follow up
- Management of patients: Treatments provided, Presence of complications
Analysis/synthesis of findings
- overall conclusions based on quality of evidence obtained
- often summarize findings in a table
Meta-Analysis
- a name that is given to any review article in which the results of several independent studies are combined statistically to produce a single estimate of the effect of a particular intervention
Forest Plots
- Represents the overall result of the meta-analysis
- square is the outcome for that study [relative risk (RR)]; size of square relates to weighting of study based on sample size
- line represents confidence interval (CI) around the RR
- diamond is combined overall estimate of results [includes pooled point estimate (center of diamond) and CI (horizontal tips of diamond)]
- If a CI of a result crosses the line of no effect, then either a significant difference does not exist b/w the treatment and the control or the sample size was too small to show an effect
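A minimal sketch of how the pooled (diamond) estimate can be computed, using a fixed-effect inverse-variance average of log relative risks with made-up study results.

```python
import math

# (relative risk, standard error of log RR) for three hypothetical studies
studies = [(0.80, 0.15), (0.70, 0.20), (0.90, 0.10)]

weights    = [1 / se ** 2 for _, se in studies]   # bigger (more precise) studies get more weight
log_rrs    = [math.log(rr) for rr, _ in studies]
pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
pooled_se  = 1 / math.sqrt(sum(weights))

rr = math.exp(pooled_log)
lo, hi = (math.exp(pooled_log + z * pooled_se) for z in (-1.96, 1.96))
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```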
Homogeneity in Meta-Analysis
- results of each individual trial are mathematically compatible with the results of the other trials
If the CIs do not overlap in a meta-analysis
- heterogeneous study
- no common treatment effect across the studies
Descriptive Research describes what about populations?
- characteristics
- behaviors
- conditions
Descriptive Research
- may involve prospective or retrospective data collection
- design may be longitudinal or cross-sectional
- surveys and secondary analyses of clinical databases often used as data sources for analysis
Categories of descriptive research
- developmental research
- normative studies
- qualitative research
- descriptive surveys
- case studies / case reports
Developmental Research
- involves description of developmental change and sequencing of behaviors in people over time (i.e. motor development in children, lifespan)
- longitudinal methods involve collecting data over time…focus on natural history of a disease
- can also use cross-sectional methods and study different age groups at a point in time
Normative Studies
- purpose is to describe typical or standard values for characteristics of a population
- describes norms as a mean and a range of acceptable values
- norms are used as a basis for prescribing interventions
Qualitative Research
- describes how individuals perceive their own experiences within a specific social context (what it means to live with a SCI)
- helps us understand the patient’s view of the world (important in designing interventions)
- data collected by interviews and observation (participant observation & field observation: non-participant)
Descriptive Surveys
- often used as a source of data to collect info about a specific group
- to describe their characteristics, or risk for disease, or other attributes
Case Studies
- important for developing a clinical knowledge base
- in-depth description of a person’s condition or response to treatment
- case series involves observations in a number of similar cases
- often involve unusual diagnoses that are challenging
- may highlight avenues for future research
Case studies Format
- comprehensive description of the subject's background, present status, and responses to intervention
Introduction of Case Studies Format
- the introduction describes the background literature relevant to the patient's problem
Patient History of Case Studies Format
- problems
- symptoms
- prior treatments
- demographic and social info
Results of Case Studies Format
- patient’s response and any follow-up data
Discussion of Case Studies Format
- interpretation of outcomes and conclusions
Critically Appraised Topic (CAT)
- brief summary of a search and critical appraisal of literature on a clinical question
- standardized format
- provides statement of clinical relevance
- initiated by a patient encounter usually due to gap in knowledge
- searches for and appraises the best evidence
- summarizes evidence
- integrates evidence with clinical expertise
- suggests how the info can be applied to practice
- usually 1-2 pages, concise
When applying literature to patients what do you look at
- systematic review
- and then CAT
- both look at evidence that could apply literature to your patient
Systematic Review
- involves the application of scientific strategies, in ways that limit bias, to the assembly, critical appraisal, and synthesis of all relevant studies that address a specific clinical question
Format of a CAT
- Title
- Author/date
- Clinical scenario (description of case that prompted the question)
- Clinical question (PICO format)
- Clinical bottom line (summary of how results can be applied)
- Search history/strategy
- Citations
- Summary of the study/ies (design; sample; intervention; outcome measures; data analysis)
- Summary of the evidence (results summarized)
- Critical comments on the study (internal/external and statistical validity of the study)
Using CATs
- useful at point of care
- can be created out of case conferences
- CAT banks established by institutions
- limited shelf life as new evidence becomes available
- not as rigorous as a systematic review
- typically cites only 1-2 references and does not represent the full scope of the literature on a topic