Quantitative Research Flashcards
Definition of Quantitative Research
Research methods dealing with NUMBERS and anything MEASURABLE, applied in a SYSTEMATIC WAY to investigate phenomena and their relationships. Used to answer questions about relationships among measurable variables, with the intention to explain, predict and control a phenomenon.
Purpose of Quantitative vs Qualitative Research
Quantitative: Measuring outcomes. Generalize results from sample to population - How much?
Qualitative: Deep understanding of phenomenon (often exploratory) - What? Why?
Data Collection of Quantitative vs Qualitative Research
Quantitative: Standardized techniques (e.g. tests, scales, questionnaires).
Qualitative: Unstructured or semi-structured techniques (e.g. interviews, open ended questionnaires, focus groups) (i.e. less structure).
Data analysis of Quantitative vs Qualitative Research
Quantitative: Numerical comparisons and statistical inferences.
Qualitative: Themes from descriptions.
Research Question of Quantitative vs Qualitative Research
Quantitative: Clearly defined (PICO)
Qualitative: Not (always) clearly defined (PICo, SPIDER, SPICE)
Goal of Quantitative vs. Qualitative Research
Quantitative: Verify the theory, Test hypothesis.
Qualitative: Development of theory, hypothesis.
Key Characteristics of Quantitative Research
- Process is deductive (To test the idea/s)
- Data is numeric (To enable statistical analysis).
- Pre-specified methods are used (to ensure scientific rigour).
4 Steps of Quantitative Research
Theory - Hypothesis - Observation/test - Confirmation/rejection
You have a theory, from it you make a hypothesis, test the hypothesis, confirm or deny the hypothesis.
Key Objectives of Quantitative Research
- To describe (Impact/burden of the problem).
- To evaluate (Connection between the dependent and independent variable vs causation) (To test a treatment).
- To predict (Identify variables that predict outcomes).
- To compare (Identify differences between groups) (provide a base of evidence for practice).
Research Designs
Descriptive (PO)
- Survey/Case reports
- Qualitative
Analytical (PICO)
- Observational analytic
- Experimental
Descriptive Design
- Without an intervention (retrospective)
- Not to quantify relationships
- Reveal important findings - make a new hypothesis
- N (number of participants) can be small, but the number of variables can be large.
- Case reports, Case-series, single case design, qualitative studies and surveys (cross-sectional) studies.
Analytical Design
- Quantify relationship between two factors: Effect of intervention/exposure on outcome.
- Test hypothesis
- Measuring intervention/exposure (observational analytic design: case-control, cohort, cross-sectional…)
Or - Researcher manipulates intervention/exposure (experimental design: RCT).
Quasi Experimental
Test causality with sub-optimal variable control (when you cannot control every confounding factor).
- Before - after design
True Experimental
Test causality with optimal variable control (no confounding factors).
- Randomized Control Trial
Case-Study/Case-Series
- No control group
- Explore new treatment/topic on which limited knowledge exists
- Often qualitative; rare in quantitative research, used when not enough participants can be found (e.g. rare diseases).
Participant with condition of interest → Information about clinical outcome.
Case-Control Design
- Retrospective
- Two groups: one with the outcome of interest, one without. What might have caused the outcome? Compare the differences between the groups.
- Data already existing. Does not modify the data.
Advantages of Case-Control Design
- Quick and cheap
- Only feasible method for very rare disorders or those with long lag between exposure and outcome.
- Fewer subjects needed than cross-sectional.
Disadvantages of Case-Control Design
- Reliance on recall or records to determine exposure status.
- Confounders
- Selection of control groups is difficult
- Potential bias: recall, selection
- Have to trust that everything regarding the intervention was done correctly.
Cohort Design
- Adaptation of the RCT, for when you cannot do random sampling.
- Controlling confounding factors more important than random sampling.
- The results cannot be verified 100% because of confounding factors.
- Prospective (can also be retrospective)
Participants → Exposure to intervention → Outcome
Participants → No exposure to intervention → Outcome
Advantages of Cohort Design
- Ethically safe
- Subjects can be matched
- Can establish timing and directionality of events
- Eligibility criteria and outcome assessments can be standardized.
- Administratively easier and cheaper than RCT.
Disadvantages of Cohort Design
- Controls difficult to identify
- Exposure may be linked to a hidden confounder
- Blinding can be difficult
- No Randomization
- For rare diseases, large sample size or long follow-up necessary.
Cross-Sectional Design
- One time measurement
- No groups comparison
- No intervention
- Used to understand a phenomenon
- Which factors influence particular outcome
- Exploratory
- No causality
Participants → Measurement of outcomes and other factors at the same time
Advantages of Cross-Sectional Design
- Cheap and simple
- Ethically safe
Disadvantages of Cross-Sectional Design
- Establishes association at most, not causality
- Susceptible to recall bias
- Confounders may be unequally distributed
- Group size may be unequal
Before-After Design
- Prospective
- Assess and compare outcomes before and after intervention
- No control group, not comparing groups
- More than 5 participants
- Outcomes can be larger and more complex (compared with single case designs)
Participants → Assessment → Intervention → Outcome
Single Case Design
- Same as the Before-After Design, but with fewer participants (max 5).
- Simple outcome
- Can have more than one intervention
- Prospective
- Participants studied during multiple phases
Individual client → baseline evaluation → intervention → evaluation → intervention
Randomized Control Trial
- Experimental study
- Gold standard of research
- Random allocation of participants in groups (increased internal validity).
- 1 Experimental group (exposed to intervention) vs 1 control group (not exposed to intervention)
- Tests effectiveness of intervention (causality)
- Highly controlled
Participants → stratification → randomization → experimental group OR control group → outcome
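The stratification-then-randomization step above can be sketched in code. A minimal illustration (the participant data and the `stratified_allocation` helper are hypothetical, not from any trial software): participants are grouped by a stratum such as sex, and half of each stratum is randomly assigned to the experimental group, so the confounder is balanced across groups.

```python
import random

def stratified_allocation(participants, stratum_key, seed=None):
    """Randomly assign half of each stratum to the experimental group."""
    rng = random.Random(seed)
    groups = {"experimental": [], "control": []}
    strata = {}
    for p in participants:  # group participants by stratum value
        strata.setdefault(p[stratum_key], []).append(p)
    for members in strata.values():
        rng.shuffle(members)  # random order within the stratum
        half = len(members) // 2
        groups["experimental"].extend(members[:half])
        groups["control"].extend(members[half:])
    return groups

# Hypothetical sample: 10 female, 10 male participants
people = [{"id": i, "sex": "F" if i % 2 else "M"} for i in range(20)]
alloc = stratified_allocation(people, "sex", seed=1)
```

Each group ends up with 10 participants, 5 from each stratum, which is the point of stratifying before randomizing.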
Advantages of Randomized Control Trial
- Unbiased distribution of confounders
- Blinding more likely
- Randomization facilitates statistical analysis
Disadvantages of RCT
- Expensive (time and money)
- Volunteer bias
- Ethically problematic at times (e.g. dying participants in a clinical trial might receive a placebo when the actual intervention might help).
Randomized control trial vs Single Case Design
Major differences:
- Means by which the experimental control is achieved
- Number of participants
Both are scientifically credible when properly applied
RCT: Evaluate treatment effects by comparing two groups
SCD: Does this treatment work on this particular patient?
Independent Variable
Intervention
Dependent Variable
Variable that is being observed.
- Should only vary in response to the independent variable. (Has to be able to be modified by the independent variable).
Extraneous Variable
Same as confounding factors.
Need to control → isolate effect of the independent variable on the dependent variable.
Essential Elements to Experimental Design
(Randomized Control Trial)
- Random assignment
- Researcher-controlled manipulation of independent variable (No confounding factors)
- Researcher control of experimental setting, including control group
- Control of variance (sampling criteria, variables)
Validity of the Trial
Assessed by the following points:
- Comparability of groups at the beginning
- Large numbers (power calculation essential)
- Blinding of raters/assessors and statisticians
- No confounding factors
- Reliability of the measurements
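The "power calculation essential" point above can be illustrated with the standard normal-approximation formula for a two-arm trial comparing means: n per group = 2(z₁₋ₐ/₂ + z₁₋β)²σ²/Δ². A minimal sketch (the function name and example numbers are illustrative, not from the source):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate n per group to detect a mean difference `delta`
    with standard deviation `sd` (normal-approximation formula)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided significance level
    z_beta = z(power)           # desired statistical power
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# e.g. detecting a 5-point difference when SD is 10:
n = sample_size_per_group(delta=5, sd=10)  # 63 per group
```

Halving the detectable difference roughly quadruples the required sample size, which is why underpowered trials are a common validity threat.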
Potential Causes of Bias in Quantitative Research
- Researchers
- setting
- sample
- How groups were formed
- Measurement tools (Does the tool have good validity for the age group?)
- Data collection process
- Data and duration of study
- Statistical tests and analysis interpretation
Why is rigor important?
- Validity of the study depends on it.
- Striving for excellence in research and adherence to detail.
How to uphold rigor in research?
- Precise measurement tools, a representative sample, and a tightly controlled study design.
- Logical reasoning is essential.
- Precision, accuracy, detail and order required.
Internal Validity
- Degree to which the independent variable caused the outcome of the study.
- Are you actually measuring what you intended to measure?
- Avoiding confounding factors.
External Validity
Generalize results.
Can the outcome be generalized to the whole population?
Reliability
The accuracy and repeatability of the measured outcome (same results each time it is measured).
Reliable, not valid: dots clustered in one place, but not in the middle.
Valid, not reliable: dots spread evenly around the middle.
Neither valid nor reliable: dots scattered, off-center.
Both reliable and valid: dots clustered in one place, in the middle.
Reliability vs Validity
Reliability: Consistency of a measure.
Validity: Accuracy of a measure.