Exam 2 Flashcards
What experimental design is not a true experiment and why?
- A quasi-experimental design
- Lacks randomization
- Lacks comparison groups
What is the gold standard of true experimental design?
Randomized Controlled Trial (RCT)
What occurs in a true experimental design?
- Subjects are randomly assigned to at least 2 comparison groups
- Experiment enables control over most threats to internal validity and provides the strongest evidence for causal relationships
What is a completely randomized design?
- Between subject design
* Subjects assigned to groups based on a randomization process
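A minimal sketch of how the random assignment in a completely randomized (between-subjects) design could be carried out; the subject IDs and group labels below are hypothetical:

```python
import random

# Hypothetical subject IDs; group labels are placeholders.
subjects = [f"S{i:02d}" for i in range(1, 21)]   # 20 enrolled subjects
groups = ["treatment", "control"]

random.seed(42)            # fixed seed so the allocation can be reproduced
random.shuffle(subjects)   # put subjects in random order

# Deal the shuffled subjects into the comparison groups in equal numbers.
assignment = {g: subjects[i::len(groups)] for i, g in enumerate(groups)}
for group, members in assignment.items():
    print(group, members)
```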
What is a randomized block design?
- Subjects classified according to an attribute (blocking variable)
- Then randomized to treatment groups
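A minimal sketch of a randomized block design: subjects are first classified by a hypothetical blocking variable (sex here), then randomized to treatment groups within each block:

```python
import random
from collections import defaultdict

# Hypothetical subjects with a blocking attribute (sex).
subjects = [("S01", "F"), ("S02", "M"), ("S03", "F"), ("S04", "M"),
            ("S05", "F"), ("S06", "M"), ("S07", "F"), ("S08", "M")]
groups = ["treatment", "control"]
random.seed(1)

# 1) Classify subjects into blocks by the blocking variable.
blocks = defaultdict(list)
for sid, sex in subjects:
    blocks[sex].append(sid)

# 2) Randomize to treatment groups within each block, so each group
#    ends up with a similar mix of the blocking attribute.
assignment = defaultdict(list)
for members in blocks.values():
    random.shuffle(members)
    for i, sid in enumerate(members):
        assignment[groups[i % len(groups)]].append(sid)

print(dict(assignment))
```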
What is a repeated-measures design?
- Within-subjects design (everyone gets same interventions)
* Subject acts as own control
How many independent variables do single-factor designs have?
*One independent variable
How many independent variables do multi-factor designs have?
*Two or more independent variables
What is a single-factor design?
- One way design
- 1 independent variable is investigated
- 1 or more dependent variables
- Classification is based on how many independent variables there are, not how many dependent variables
What occurs with an RCT with 2 groups based on random assignment?
- Pretest-posttest control group design
- Independent groups = treatment arms
- Testing pre and post treatment
- Changes in experimental group are attributable to the treatment
- Establishes cause and effect relationship
- Change in the experimental group = posttest result minus pretest result; this change score becomes the dependent variable
What occurs in a 2-group pretest posttest design?
- Comparison group receives a second form of the intervention
- 2 experimental groups formed by random assignment
- Control group is not feasible or ethical
- Compares new treatment with “standard care”
What is a multi-group pretest posttest control group design?
- Multiple intervention groups
* Includes a control group
What are pre-test post-test designs strong in?
*Strong in internal validity
How can the initial equivalence of groups be established?
*By pretest scores (important for inferring causality)
How is selection bias controlled in pretest posttest designs?
*Controlled because of random assignments
What should affect groups equally in pretest posttest designs?
*History, maturation, testing, instrumentation
How is the analysis of pretest posttest designs often done?
- Often analyzed using change scores
- difference between posttest and pretest
- Also can use analysis of covariance (ANCOVA) to compare posttest scores
- using pretest scores as covariate
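A minimal sketch of both analysis options on made-up pretest/posttest data: a t-test on change scores (scipy) and an ANCOVA comparing posttest scores with pretest as the covariate (statsmodels):

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical pretest/posttest scores for two randomly assigned groups.
df = pd.DataFrame({
    "group": ["tx"] * 5 + ["ctl"] * 5,
    "pre":  [40, 42, 38, 45, 41, 39, 44, 40, 43, 42],
    "post": [55, 58, 50, 60, 54, 44, 47, 42, 48, 45],
})

# Option 1: change scores (posttest minus pretest) compared between groups.
df["change"] = df["post"] - df["pre"]
t, p = stats.ttest_ind(df.loc[df.group == "tx", "change"],
                       df.loc[df.group == "ctl", "change"])
print(f"change-score t = {t:.2f}, p = {p:.3f}")

# Option 2: ANCOVA, comparing posttest scores by group while
# adjusting for pretest scores (pretest as covariate).
ancova = smf.ols("post ~ group + pre", data=df).fit()
print(ancova.summary())
```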
What is a posttest only control group design?
- Same as pre-test posttest control group design
- EXCEPT no pre-test
- Used when dependent variables can only be assessed following treatment
- e.g. length of stay in hospital (see example pg 199)
- Used when pretest is impractical or detrimental
- Is an experimental design involving randomization and comparison groups
- Strong internal validity
- Assumes groups are equivalent prior to treatment
- works best with large samples to increase probability of “equivalency”
What occurs in a multi-factor design for independent groups?
- Single-factor designs have 1 independent variable (with 1 or more levels) and do not account for interactions of several variables
- Multi-factor designs have 2 or more independent variables
What occurs in a factorial design?
- Factorial design incorporates two or more independent variables, with subjects randomly assigned to various combinations of levels of the two variables
- Two-way (two-factor) design has 2 independent variables
- Three-way (three-factor) design has 3 independent variables
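A minimal sketch of a two-way (2 x 2) factorial design, crossing two hypothetical independent variables and randomly assigning subjects to the four resulting combinations:

```python
import itertools
import random

# Two hypothetical independent variables, each with two levels.
exercise = ["strengthening", "stretching"]
frequency = ["2x/week", "4x/week"]

# A two-factor design crosses every level of one factor with every
# level of the other: 2 x 2 = 4 combinations (cells).
cells = list(itertools.product(exercise, frequency))
print(cells)

# Randomly assign hypothetical subjects across the four cells.
random.seed(0)
subjects = [f"S{i:02d}" for i in range(1, 13)]
random.shuffle(subjects)
assignment = {cell: subjects[i::len(cells)] for i, cell in enumerate(cells)}
for cell, members in assignment.items():
    print(cell, members)
```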
What is a survey?
- A series of questions
- interview
- questionnaire (written/electronic)
- Can be used in:
- experimental studies, exploratory studies, descriptive studies
What occurs in an interview?
- Ask questions and record answers
- Structured format
- Unstructured format
What occurs in a structured interview?
- Standard set of questions
- Same questions in same order to all subjects
- Same response choices
What occurs in an unstructured interview?
- Less formal
- Open ended
- Conversational
- Often used in qualitative studies
What occurs in questionnaires?
- Structured surveys
- Self-administered
- Computerized or pen/paper
- Efficient as completed on subject’s own time
- Reduced bias from interactions with an interviewer
- Disadvantage is the potential for misunderstanding or misinterpreting questions
- Mail, electronic distribution
- Low return rates (30-60%) limit external validity of results
What are data collected via interview or questionnaire based on?
- SELF REPORT!
- no direct observation by the researcher of subject’s behavior
- potential for bias or inaccuracy
- recall bias
What should be asked when making a survey design?
- Delineate the overall research question
- What are the objectives (guiding questions) of the study?
- These objectives focus the content of the questionnaire
- Outline of the questionnaire (relate to objectives)
- Review existing instruments
- Can they be adapted for my study?
- Write questions that address each of the objectives
What do you do when you distribute your preliminary draft of the survey to colleagues?
- Ask for feedback
- Revise
- Distribute again
- Helps establish content validity of the instrument
What size sample should you do a pilot test on?
- Small sample of 5-10 research subjects
- interview respondents for feedback
- revise
- retest
How do you select a sample for surveys?
- Define the target population, e.g. all PTs in Michigan
* Purchase mailing lists
How do you contact respondents?
- Cover letter
* Follow Up
What do scales of surveys provide?
*Provide a rating of the degree to which a subject possesses a characteristic/attitude/value
What are the scales of surveys?
- Likert:
- Strongly agree
- Agree
- Neutral
- Disagree
- Strongly disagree
- Likert scales coded 1-5
- Calculate overall score by adding answers
- 1 item does not carry more weight than others
- Visual analog scale
- Place a mark on the 100 mm line
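A minimal sketch of scoring a Likert scale as described above; the items and one respondent's answers are hypothetical, each answer is coded 1-5, and the overall score is an unweighted sum:

```python
# Likert coding from the card: SA=5, A=4, N=3, D=2, SD=1.
likert_code = {"SA": 5, "A": 4, "N": 3, "D": 2, "SD": 1}

# One hypothetical respondent's answers to a 5-item scale.
responses = ["SA", "A", "N", "A", "SD"]

item_scores = [likert_code[r] for r in responses]
total = sum(item_scores)   # unweighted sum: 5 + 4 + 3 + 4 + 1 = 17
print(f"item scores: {item_scores}, overall score: {total} out of {5 * len(responses)}")
```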
What is the Delphi Technique?
- Experts complete multiple rounds of questionnaires (e.g., 3 rounds)
- Researcher reviews and distributes findings after each round
- Eventually come to consensus on an issue
- e.g. what should entry level knowledge be for a particular topic
How do you do an analysis of survey data?
- Code the data, e.g. male = 1, female = 0
- Fear of falling = 1, no fear = 0
- SA = 5; A = 4; N = 3; D = 2; SD = 1
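A minimal sketch of the coding step, turning raw (made-up) survey answers into the numeric codes listed on the card:

```python
import pandas as pd

# Raw, hypothetical survey responses.
raw = pd.DataFrame({
    "sex": ["male", "female", "female", "male"],
    "fear_of_falling": ["fear", "no fear", "fear", "fear"],
    "q1": ["SA", "A", "D", "N"],
})

# Codes from the card: male = 1 / female = 0, fear = 1 / no fear = 0, SA..SD -> 5..1.
coded = pd.DataFrame({
    "sex": raw["sex"].map({"male": 1, "female": 0}),
    "fear_of_falling": raw["fear_of_falling"].map({"fear": 1, "no fear": 0}),
    "q1": raw["q1"].map({"SA": 5, "A": 4, "N": 3, "D": 2, "SD": 1}),
})
print(coded)
```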
What do descriptive statistics do?
- Summarize responses
- Mean age; years of education etc.
- Categorical data
- Frequency/percentages: 25 males (33% of sample)
- 40% SA; 30% A; 20% Neutral etc
- Scores on a scale may be summed, e.g. mean ABC = 84%
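A minimal sketch of these descriptive summaries (means for continuous data, frequencies/percentages for categorical data), computed on hypothetical coded responses with the standard library:

```python
from collections import Counter
from statistics import mean

# Hypothetical coded survey data.
ages = [34, 41, 29, 52, 47, 38]
sex = [1, 0, 0, 1, 0, 0]                  # 1 = male, 0 = female
q1 = ["SA", "A", "A", "N", "SA", "D"]     # Likert responses to one item

# Continuous data: report a mean.
print(f"mean age = {mean(ages):.1f} years")

# Categorical data: frequencies and percentages.
n_male = sum(sex)
print(f"{n_male} males ({100 * n_male / len(sex):.0f}% of sample)")
for answer, count in Counter(q1).items():
    print(f"{answer}: {count} ({100 * count / len(q1):.0f}%)")
```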
What does an IRB do?
- Must approve survey research
- Protection from psychological harm
- confidentiality
- Informed consent must be provided by participants
What is the best way for clinicians to seek evidence for interventions and assessments?
- Systematic Reviews
- Cochrane Collaboration
What should a study question be?
- Question is very specific
* Well described purpose statement
What is a narrative review?
*Addresses a broad question; the search strategies and selection of articles are not usually described. Appraisal is not always rigorous, and the conclusions are usually descriptive/qualitative
What is a systematic review like?
*The question is focused, and the search strategies/databases are often described in detail. The selection of articles is rigorous and based on specific criteria. Appraisal is very rigorous, and the conclusion may be qualitative or quantitative (meta-analysis)
What are the selection criteria of a systematic review?
- “Subjects” of the review are the studies
- Specify inclusion/exclusion criteria
- Based on:
- Types of studies
- Types of participants
- Types of interventions
- Types of outcome measures
What is the search strategy for a systematic review?
- Select keywords
- Identify resources
- Databases
- Grey literature (unpublished studies)
- Publication bias
- Conduct the search and retrieve relevant papers
How do you evaluate quality of selected studies?
- Critical review
- Record on a form
- Evaluate quality of design and data analysis
What are the types of study bias?
- Selection bias
- Performance bias
- Attrition bias
- Detection bias
What is the Jadad Rating scale?
- 3 questions
- Was the study randomized? (1 point if yes)
- Was the study described as double blind? (1 point if yes)
- Was there a description of withdrawals and dropouts? (1 point if yes)
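A minimal sketch of totaling the three items exactly as the card describes (1 point per "yes"); the example trial is hypothetical:

```python
def jadad_score(randomized: bool, double_blind: bool,
                withdrawals_described: bool) -> int:
    """Sum the three Jadad items: 1 point for each 'yes'."""
    return sum([randomized, double_blind, withdrawals_described])

# Hypothetical trial: randomized and double blind, dropouts not described.
print(jadad_score(True, True, False))   # -> 2
```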
How many items is the PEDro Scale?
*11 items
What is Data synthesis?
- Assess the heterogeneity (dissimilarity) or homogeneity of the included studies (variability across studies), based on:
- Composition of treatment groups (inclusion/exclusion criteria)
- Design of study (including length of follow up)
- Management of patients (treatments provided/Presence of complications)
What occurs in the analysis/synthesis of findings?
- Overall conclusions based on quality of evidence obtained
* Often summarize findings in a table
What is a forest plot?
- Represents the overall result of the meta-analysis
- square is the outcome for that study [relative risk (RR)]; size of square relates to weighting of study based on sample size
- line represents confidence interval (CI) around the RR
- diamond is combined overall estimate of results [includes pooled point estimate (center of diamond) and CI (horizontal tips of diamond)]
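A minimal sketch of the arithmetic behind the diamond: a fixed-effect, inverse-variance pooling of study relative risks on the log scale (the study RRs and CIs below are made up):

```python
import math

# Hypothetical per-study results: relative risk and its 95% CI.
studies = [
    {"rr": 0.80, "ci": (0.60, 1.07)},
    {"rr": 0.70, "ci": (0.55, 0.90)},
    {"rr": 0.95, "ci": (0.70, 1.30)},
]

# Work on the log scale; the SE is recovered from the CI width.
weights, weighted_logs = [], []
for s in studies:
    log_rr = math.log(s["rr"])
    se = (math.log(s["ci"][1]) - math.log(s["ci"][0])) / (2 * 1.96)
    w = 1 / se ** 2          # inverse-variance weight (relates to the square's size)
    weights.append(w)
    weighted_logs.append(w * log_rr)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"pooled RR = {math.exp(pooled_log):.2f} "
      f"(95% CI {math.exp(pooled_log - 1.96 * pooled_se):.2f} "
      f"to {math.exp(pooled_log + 1.96 * pooled_se):.2f})")
```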