Module 3 - Quantitative Flashcards
Quantitative designs can be used for four key purposes:
(1) describing a phenomenon in detail, (2) explaining relationships and/or differences among variables, (3) predicting relationships and/or differences among variables, and (4) examining causality.
The researcher can choose from four main types of designs:
Descriptive, correlational, quasi-experimental, and experimental.
These types of designs are broadly categorized as either experimental (to determine causality) or nonexperimental (to describe, examine, predict).
Differences between experimental and nonexperimental quantitative designs
Experimental: The researcher plays an active role by manipulating the independent variable. The IV is the intervention, or “treatment,” that the researcher wants to test in a specific group of people in order to determine the effect that the IV has on the outcome of interest, known as the dependent variable (DV).
- The two main types of experimental design are true and quasi-experimental.
- Quasi-experimental designs are similar to true experimental designs in that they also involve manipulation of the IV, but they lack either randomization or a control group.
Nonexperimental: Lack of researcher manipulation. The researcher “observes” how the variables of interest occur naturally, without the researcher trying to change how the conditions normally exist.
Retrospective designs
What are they used for?
- Research designs in which researchers look back in time to determine possible causative factors; AKA ex post facto.
- Researchers start with the DV and look back in time to determine possible causative factors.
- Retrospective designs are considered to be “after the fact” because the DV has already occurred, which means that the IV cannot be manipulated and participants cannot be randomly assigned.
- Retrospective designs are often used in epidemiological studies.
- A retrospective study that starts with a group of people who already have the disease (the “cases”) and compares them to similar people without the disease (the “controls”) is called a case-control study.
Cross-sectional Designs
- Nonexperimental design used to gather data from a group of participants at only one point in time; designed to measure cause and outcome variables as each exists in a population or representative sample at one specific point in time.
- Cross-sectional designs provide a snapshot by collecting data about both the IV and DV at the same time.
- For example, a researcher is studying the effect of self-efficacy on condom use in adolescents. If a cross-sectional design is used, it would be difficult to determine whether increased self-efficacy caused increased condom use or whether increased condom use resulted in a sense of increased self-efficacy.
Cohort comparison designs
- Nonexperimental cross-sectional design in which more than one group is studied at the same time, so that conclusions about a variable over time can be drawn without the years of follow-up a longitudinal study would require.
- Cohort comparison designs allow researchers to draw conclusions about variables over time even though data were collected at only one point in time.
- A cohort design could be used to study the factors associated with condom use among high school freshmen as well as high school sophomores, juniors, and seniors.
Repeated Measures Design
- Research designs in which researchers measure subjects more than once over a short period of time.
- For example, a researcher wants to study the effect of exercise on blood pressure in older adults. The researcher first measures blood pressures on subjects for a baseline reading. Then subjects perform a cardio workout, and blood pressures are measured immediately after the workout and again 30 minutes later.
Longitudinal designs
Prospective designs
- Designs used to gather data about participants at more than one point in time.
- Longitudinal designs, which may be either experimental or nonexperimental designs, are sometimes called prospective designs, because they are studies that begin in the present and end in the future.
- Nonexperimental, prospective designs are commonly used in epidemiological studies where researchers begin by identifying presumed causes and then follow participants into the future to determine whether the hypothesized effects actually occur.
Panel Design
- Longitudinal design in which the same participants, drawn from the general population, provide data at multiple points over a long period, at specified intervals.
- The Nurses’ Health Study is an example of a large, prospective study done to investigate risk factors for chronic diseases in women. The original study in 1976 focused on investigating long-term consequences of oral contraceptive use, but the study is now in its third generation, with more than 275,000 nurses participating. Participants provide data every 2 years, with some study participants being surveyed since 1976.
Trend study
- A type of longitudinal design to gather data from different samples across time.
- Trend studies use nonexperimental designs to gather data about the variables of interest from different samples drawn from the target population across time.
- Since 1990, the YRBSS (Youth Risk Behavior Surveillance System) has surveyed representative samples of students in grades 9 through 12. Repeated every 2 years, the surveys provide data regarding the prevalence of six categories of health-related risk behaviors among high school students. By using different samples of teens, researchers are able to determine whether the rates of risky behaviors are increasing, decreasing, or staying the same over time.
Follow-up study
- A longitudinal design used to follow participants, selected for a specific characteristic or condition, into the future.
- Nonexperimental follow-up studies are also known as cohort studies. In a cohort study, the sample is assessed for risk factors and followed prospectively to examine disease outcomes.
- Follow-up studies may also be experimental in design. A researcher may design a supportive educational intervention to increase new mothers’ confidence in breastfeeding. The researcher would randomly assign mothers to either the new educational intervention group or the standard of care group.
Crossover designs
- A type of longitudinal study in which participants receive more than one experimental treatment and are then followed over time. Participants act as their own control group.
- Researchers manipulate the IV by randomizing the order in which the treatments are provided.
- An example of a crossover design is one in which the researcher is interested in determining whether relaxation techniques or exercise has a greater effect on reducing blood pressure. Some participants would be randomly assigned to receive training in relaxation techniques first, whereas others would be given instructions about exercise first.
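The order randomization described above can be sketched in Python (illustrative only; the treatment labels are taken from the example, and the function name is made up):

```python
import random

def assign_crossover_order(participant_ids, treatments=("relaxation", "exercise"), seed=0):
    """Give each participant every treatment, in an independently randomized order."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    schedule = {}
    for pid in participant_ids:
        order = list(treatments)
        rng.shuffle(order)  # the researcher manipulates the IV by randomizing order
        schedule[pid] = order
    return schedule

schedule = assign_crossover_order(["P1", "P2", "P3", "P4"])
```

Because every participant eventually receives both treatments, each acts as their own control.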
Causality
The relationship between a cause and its effect.
The cause variable has the ability or power to produce a specific effect or outcome.
Cause variable: IV; effect variable: DV.
Probability
- Likelihood or chance that an event will occur in a situation.
- Research related to human health and functioning results in assertions of probability, that is, how likely it is that the change in the DV was caused by the IV.
- Probability assertions leave open the possibility that there are other causes and factors affecting the result seen in the DV.
Control and manipulation
- Control refers to the ability to manipulate, regulate, or statistically adjust for the multitude of factors that can influence the DV.
- Manipulation is an important aspect of control in experimental designs and refers to the ability of researchers to control the IV.
- The IV is considered to be the intervention, or treatment, that is being tested in an experimental study.
Confounding and extraneous variables
- Confounding results when extraneous variables influence and distort the relationship between the IV and the DV so that the findings are not really reflecting the true relationship.
- Extraneous variables: Factors that interfere with the relationship between the independent and dependent variables; confounding variable; Z variable.
Bias
Systematic error in selection of participants, measurement of variables, and/or analysis of data that distorts the true relationship between IV and DV.
Randomization
Random sampling
Random assignment
- Randomization is an effective way to control extraneous variables.
- Randomization: The selection, assignment, or arrangement of elements by chance.
- Random sampling means that every member or element (e.g., charts) of the population of interest has an equal probability of being selected to be included in the study.
- Random assignment means that all participants in the sample (not the population) have an equal chance of being assigned to either the treatment or the control group.
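A minimal Python sketch (hypothetical data) of the difference between the two:

```python
import random

rng = random.Random(42)  # seeded so the sketch is reproducible

# Random sampling: every element of the population (e.g., charts) has an
# equal probability of being selected into the study.
population = [f"chart-{i}" for i in range(1000)]
sample = rng.sample(population, 20)

# Random assignment: every member of the sample (not the population) has an
# equal chance of landing in either the treatment or the control group.
shuffled = list(sample)
rng.shuffle(shuffled)
treatment, control = shuffled[:10], shuffled[10:]
```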
Between-groups and within-groups designs
- Between-groups design: Study design in which two or more separate groups of participants are compared.
- For example, a researcher who is studying condom use among adolescents may wish to know the practices of high school juniors and seniors as well as college freshmen and sophomores.
- Within-groups design: Comparisons are made about the same participants at two or more points in time or on two or more measures.
- For example, a researcher is interested in the effect of music therapy on patients’ levels of pain. Using a within-groups design, the researcher would measure the participants’ levels of pain before the intervention, conduct the intervention, and then measure pain levels after the intervention.
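The within-groups comparison in that example reduces to paired pre/post measurements on the same participants (hypothetical scores):

```python
# Hypothetical pain scores (0-10 scale) for five participants,
# measured before and after the music-therapy intervention.
pre = [7, 6, 8, 5, 7]
post = [5, 4, 6, 5, 5]

# Each participant is compared with themselves, not with another group.
differences = [after - before for before, after in zip(pre, post)]
mean_change = sum(differences) / len(differences)
# mean_change is negative when pain decreased on average
```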
Study validity
Ability to accept results as logical, reasonable, and justifiable based on the evidence presented.
- Two main types: internal and external.
Internal validity
The degree to which one can conclude that the independent variable produced changes in the dependent variable.
Statistical conclusion validity
The degree to which the results of the statistical analysis reflect the true relationship between the independent and dependent variables.
Construct validity
The degree to which the instruments used accurately measure the theoretical concepts they are intended to measure; validity is threatened when they do not.
External validity
The degree to which the results of the study can be generalized to other participants, settings, and times.
Threats to statistical conclusion validity
Low statistical power
- Low power is often due to a small sample size, which often happens in nursing research.
- A larger sample size increases the likelihood that a statistical test will be able to detect a small difference or relationship, reject the null hypothesis, and allow the researcher to accept that a true relationship or difference does exist.
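The effect of sample size on power can be sketched with a crude Monte Carlo simulation (illustrative Python; the effect size, alpha level, and test are assumptions, not from the text):

```python
import math
import random

def estimated_power(n_per_group, true_diff=0.3, sd=1.0, trials=2000, seed=1):
    """Crude Monte Carlo estimate of power for a two-sample z-test (illustrative)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, sd) for _ in range(n_per_group)]
        b = [rng.gauss(true_diff, sd) for _ in range(n_per_group)]
        se = math.sqrt(sd**2 / n_per_group + sd**2 / n_per_group)
        z = (sum(b) / n_per_group - sum(a) / n_per_group) / se
        if abs(z) > 1.96:  # two-sided test at alpha = .05
            rejections += 1
    return rejections / trials

# Power grows with sample size: the same small true difference is
# detected far more reliably with 200 per group than with 20.
small_n = estimated_power(20)
large_n = estimated_power(200)
```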
Low reliability of the measures
- Instruments that are not reliable interfere with researchers’ abilities to draw accurate conclusions about relationships between the IV and DV.
- Assess whether self-report instruments achieved an internal consistency reliability of .70 or higher; a test–retest reliability of .80 or higher; and, if more than one data collector was used, an interrater reliability of at least .90.
Lack of reliability of treatment implementation
- This can occur if different researchers or their assistants have implemented the treatment (IV) differently to different participants, or if the same researcher is inconsistent in implementing the treatment from one time to another.
Threats to Internal Validity
Attrition rate
Participant burden
- Selection bias: occurs when the change in the DV is a result of differences in the characteristics of participants rather than a result of the IV
- History: occurs when the DV may have been influenced by some event other than the IV that occurred during the course of the study.
- Maturation: occurs when participants change over the course of a study by growing or becoming more mature.
- Testing: occurs when a pretest influences the way participants respond on a posttest.
- Instrumentation: occurs when there are inconsistencies in data collection.
- Mortality: the loss of participants before the study is completed.
The term attrition rate refers to the rate at which participants drop out before a study is completed.
When planning a study, researchers must be mindful about the amount of effort and time required by participants to be in the study, which is known as participant burden.
Threats to Construct validity
Reactivity/Hawthorne effect
Double-blind experiment
- Inadequately defined constructs: each theoretical concept needs its own definition, known as a conceptual definition.
- Bias: refers to a systematic error in selection of participants, measurement of variables, or analysis.
- Confounding: a possible source of bias in a study in which an unmeasured, or extraneous, variable (the confounder) distorts the true relationship between the treatment and outcome variable.
- Reactivity: the relationship of the IV to the DV may be influenced by the behavior of the research participant.
- Experimenter expectancies: When researchers have expected or desired outcomes in mind, they may inadvertently affect how interventions are conducted and how they interact with participants.
Reactivity: The influence of participating in a study on the responses of participants;
Hawthorne effect: Participants’ behaviors may be affected by personal values or desires to please the experimenter; reactivity.
Double-blind experimental designs: Studies in which participants and researchers are unaware whether participants are receiving experimental interventions or standard care.
Threats to External Validity
- Effects of selection: when the sample does not represent the population.
- Interaction of treatment and selection of subjects: where the independent variable might not affect individuals the same way.
- Interaction of treatment and setting: when an intervention conducted in one setting cannot be generalized to a different setting.
- Interaction of treatment and history: when historical events affect the intervention.
Experimental designs
Designs involving random assignment to groups and manipulation of the independent variable.
Three features must be present: randomization, control, and manipulation.
Essential Components of Experimental Designs
- Randomization in experimental designs is used in two ways. One way requires researchers to randomly select participants from the target population. The other way is to randomly assign participants to groups.
- The second component of experimental designs is control, which is related to randomization. A control group, for comparison to the experimental group, is one strategy researchers use to control for extraneous variables.
- Researchers must be able to manipulate the independent variable (IV) for a design to be considered experimental.
Randomized controlled trials (RCTs)
Clinical experimental studies that typically involve large samples and are sometimes conducted at multiple sites.
(1) they involve a large number of participants;
(2) there are strict guidelines for including participants in a study;
(3) participants are randomly assigned to either the intervention or control group;
(4) participants in each group must be equivalent on key characteristics at baseline;
(5) the intervention is consistently implemented to all participants in the experimental group following a very rigidly defined protocol for implementation; and
(6) all participants in both groups are measured on the dependent variable (DV) using the same method of measurement at the same points in time.
Two-group pretest–posttest design
Participants are randomly assigned to the experimental or control group and are measured before and after the intervention; classic or true experiment.
Solomon four-group design
An experimental design involving four groups—some receive the intervention, others serve as controls; some are measured before and after the intervention, others are measured only after the intervention.
Multiple Experimental Groups Designs
Experimental designs using two or more experimental groups with one control group.
Factorial designs
Experimental designs allowing researchers to manipulate more than one intervention.
Researchers may compare multiple interventions (e.g., music and therapeutic touch combined) or multiple levels of interventions (e.g., music with therapeutic touch for 10 minutes, 15 minutes, or 20 minutes).
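Enumerating the conditions of such a design is just the Cartesian product of the factor levels (a sketch; the factors come from the example above):

```python
from itertools import product

# Hypothetical 2 x 3 factorial design: intervention type crossed with duration.
interventions = ["music", "music + therapeutic touch"]
durations_min = [10, 15, 20]

# Each combination of factor levels is one experimental condition ("cell").
cells = list(product(interventions, durations_min))
# a 2 x 3 design yields 6 conditions
```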
Crossover designs
Experimental designs that use two or more treatments; participants receive treatments in a random order.
Quasi-experimental designs
Quasi-experimental designs are similar to experimental designs in that they involve manipulation of the IV, but they do not have one of the other essential components of experimental designs.
They either lack randomization or a control group, which makes claims of cause and effect weaker than in true experimental designs.
(1) nonequivalent control group pretest–posttest, (2) one-group pretest–posttest, (3) one-group time series, and (4) preexperimental designs.
Nonequivalent Control Group Pretest–Posttest Designs
A quasi-experimental design where participants are not randomly assigned to treatment or comparison groups, but are measured before and after an intervention.
One-Group Pretest–Posttest Design (No Randomization)
A quasi-experimental design where only one group is measured before and after an intervention.
One-Group Time Series Designs
A quasi-experimental design where one group is measured prior to administering the intervention and then multiple times after the intervention.