Week 5 - Intro to Quant Research & Analysis Flashcards
What is involved in rigor of Qual research?
- trustworthiness
- credibility
- dependability
- confirmability
- transferability/fittingness
- authenticity
what is involved in the rigor of Quant research?
- reliability & validity
- internal validity
- reliability
- objectivity
- external validity
define validity
- the degree to which it can be inferred that an intervention (IV), rather than confounding factors, causes the observed outcome. Confounds can be fatal to a study.
- improves our ability to infer cause-and-effect relationships
- improves with control of extraneous factors
- has known threats
what are examples of validity?
- In a study—are the inferences made by the study accurate?
- In a measurement/tool—does the tool measure what it is supposed to measure?
t/f - Well Constructed Research Study Designs Avoid Multiple Threats to Strong Evidence
true - There are many tradeoffs and factors to consider when prioritizing design decisions to ensure study validity.
what are the 4 types of research validity?
- Internal
- External
- Statistical conclusion
- Construct
what are threats to internal validity?
- History
- Maturation
- Mortality
- Selection bias
- Testing
- Instrumentation
what is external validity?
- the degree to which study results can be generalized to settings or samples other than the group being studied.
- Concerns whether inferences about observed relationships will hold over variations in persons, setting or time. Relates to the generalizability of inferences – a critical concern for evidence-based nursing practice.
what are threats to external validity?
- Selection effects
- Reactive effects
- Measurement effects
t/f - Internal and external validity share an inverse relationship.
true
what is statistical conclusion validity?
- Type I and type II interpretation errors
- the degree to which inferences about relationships from a statistical analysis of the data are correct.
- concerns the validity of inferences that there truly is an empirical relationship, or correlation, between the presumed cause and the effect. The researcher’s job is to provide strong evidence that an observed relationship is real.
what are threats to statistical conclusion validity?
- Low statistical power
- Excessive homogeneity
- Lack of treatment fidelity
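Type I and Type II errors and low power can be made concrete with a small simulation. The sketch below is not from the course material: it is a hypothetical Monte Carlo experiment using a two-sample z-test on a difference in means, with illustrative function names and parameters.

```python
# Hypothetical simulation (not from the course material): estimating Type I
# and Type II error rates for a two-sample z-test on a difference in means.
import random

def reject_null(group_a, group_b, z_crit=1.96):
    """Reject H0 (no difference in means) when |z| exceeds the critical value."""
    n = len(group_a)
    mean_a = sum(group_a) / n
    mean_b = sum(group_b) / n
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n - 1)
    se = ((var_a + var_b) / n) ** 0.5  # standard error of the mean difference
    return abs(mean_a - mean_b) / se > z_crit

def rejection_rate(true_diff, n=30, trials=2000, seed=1):
    """Fraction of simulated studies in which H0 is rejected."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(true_diff, 1) for _ in range(n)]
        rejections += reject_null(a, b)
    return rejections / trials

type_1_rate = rejection_rate(true_diff=0.0)  # H0 true: each rejection is a Type I error
power = rejection_rate(true_diff=0.5)        # H0 false: 1 - power is the Type II error rate
```

With small samples the power stays well below 1, which is exactly the "low statistical power" threat: a real effect often fails to reach significance.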
what is construct validity?
-A fundamental criterion for the measurement process in
quantitative research
-the degree to which an abstraction or concept designed by researchers adequately represents higher-order constructs as intended; requires careful attention to what we call things (labels); if studies contain construct errors, the evidence is misleading.
- Involves the validity of inferences “from the observed persons, settings, and cause-and-effect operations included in the study to the constructs that these instances might represent”
what are threats to construct validity?
- reactivity to the study situation
- researcher expectancies
- novelty effects
- compensatory effects
- treatment diffusion/contamination
what is an example of a question you would ask for internal validity?
If there is a true statistical relationship between cause and effect, is it due to the independent variable, or is there an alternative explanation?
what is an example of a question you would ask for external validity?
Do inferences about relationships made in the study hold up over variations in persons, settings, contexts, and time? (generalizability)
what is an example of a question you would ask for statistical conclusion validity?
Does a true empirical relationship of cause-effect actually exist?
what is an example of a question to ask about construct validity?
What are the inferences from observed persons, settings, and cause-effect operations included in the study to the constructs these might represent?
what methods are used to control over participant characteristics in quant research?
- Randomization
- Crossover
- Homogeneity
- Stratification/Blocking
- Matching
- Statistical control
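The first method, simple randomization, can be sketched in a few lines. This is a hypothetical illustration (participant labels and the seed are made up): each participant is assigned to a group purely by chance, so preexisting characteristics are balanced across groups on average.

```python
# Hypothetical sketch of simple randomization: each participant is assigned
# to a group by chance, so confounders are balanced on average.
import random

def randomize(participants, seed=42):
    """Shuffle participants and split them evenly into two groups."""
    rng = random.Random(seed)  # fixed seed only for reproducibility in this example
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treatment, control = randomize(["P1", "P2", "P3", "P4", "P5", "P6"])
```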
define temporal ambiguity as a threat of internal validity
when looking at cause and effect, it is important to establish that the cause (IV) preceded the effect (DV). RCT study designs carefully plan for timing and collect data over a series of time points, exposures, and conditions; however, in correlational and cross-sectional designs this may not be clear (if data are largely measured at one point in time).
define selection bias as a threat to internal validity
- self selection and convenience sampling (vs randomization) such that retained individuals in a study distort truth estimates.
- encompasses biases resulting from preexisting differences between groups (Most problematic & frequent)
define history as a threat to internal validity
- concurrent events during the study period that are unrelated to the study but impact the DV (i.e. COVID-19)
- occurrence of external events that take place concurrently with the independent variable and that can affect the outcomes
define maturation as a threat to internal validity
- there are changes to the DV over time, such that it is no longer the same as when the study started.
- Refers to processes occurring during the study as a result of the passage of time rather than as a result of the independent variable. (relevant in health research – not just referring to age but rather to any changes that occur as a function of time)
define mortality/attrition in terms of a threat to internal validity
- differential loss of participants from different groups
- arises from attrition in groups being compared. If different kinds of people remain in the study in one group versus another, then these differences, rather than the independent variable, could account for observed differences on the outcomes.
define testing effect as a threat to internal validity
- when administration of tests or measures impacts the DV completely separate from the IV (i.e. fatigue outcomes and intensive testing requirements in cancer patients over several days)
- The effects of taking a pretest on people’s performances on a post test
define instrumentation as a threat to internal validity
- if the researcher changes the measure/tool/intervention during the course of the study (i.e. survey types)
- Changes in measuring instruments or methods of measurements between 2 points of the data collection
define selection effects as a threat to external validity
Pre-existing differences between groups in a study impact how study results can be applied to similar or different groups (sample representativeness).
define interactions between relationships and people as a threat to external validity
Unique interactions between people and the DV; unique interactions as a causal effect on treatment variation. Some interventions work better for different groups: cognitively impaired patients, non-English speakers, and groups with different cultural preferences, norms, and values.
define interactions between causal effects and treatment variation as a threat to external validity
Implementing an innovative treatment intervention or measurement may never again be possible (i.e. an enthusiastic project director) creating markedly different effects by alternative personnel in subsequent tests.
define low statistical power as a threat to statistical conclusion validity
statistical power is the ability of a research design and analytic strategy to detect true relationships among variables; a study with low power may fail to detect a real relationship (a Type II error).
define restriction of range (homogeneity) as a threat to statistical conclusion validity
control of extraneous variation to clarify relationships (i.e.: age ranges, diagnosis parameters); easy to do but it limits generalizability
define unreliable treatment implementation (poor intervention fidelity, treatment fidelity) as a threat to statistical conclusion validity
the intervention as delivered is not as powerful as planned in the proposal/publication — that is, the intervention is not faithful to its plan — leading to an erroneous conclusion of ineffectiveness. Examples:
- lack of standardization or use of procedural manuals
- poor protocol integrity and non-compliance
- inadequate education/training of personnel
- patient-centered protocols are not followed by the study participants (treatment adherence)
- intervention suppression through variable treatment conditions (scheduling/timing and staffing variability)
define reactivity to the study (hawthorne effect) as a threat to external validity
Participants react in unique ways they otherwise wouldn’t have because they know they are being directly watched and recorded.
define researcher expectancies as a threat to construct validity
- Nonverbal, verbal and behavioral cues that a researcher or staff may have on their participants. Options to correct are blinding the participants to the treatment, or researcher carefully notes staff and participant responses and makes corrections.
- Participants respond through subtle communication about desired outcomes.
define novelty effects as a threat to construct validity
when a treatment is new, participants and researchers might alter their behavior. It is a new innovation so individuals are especially enthusiastic OR participants are extremely skeptical and do not support the intervention.
define compensatory effects as a threat to construct validity
Compensatory effects can occur if HC staff try to compensate for the control group’s failure to receive a perceived beneficial treatment, thus becoming a compensatory rivalry intended to demonstrate they can do as well as those receiving a particular treatment.
define treatment diffusion/contamination as a threat to construct validity
Alternative treatment conditions can become blurred, which impedes good construct descriptions of the IV. This may occur when a participant in a control group condition receives services similar to those in the treatment condition. For example, the need to make accurate comparisons of smokers vs non-smokers in screening for a case-control study. Inclusion of participants who categorize themselves as ‘non-smokers’ but actually occasionally smoke ‘just on the weekends’ contaminates the validity of the entire group of true non-smokers.
what is a simple hypothesis
one in which a relationship exists between two variables: one is the independent variable (the cause) and the other is the dependent variable (the effect)
what is an example of a simple hypothesis
Smoking leads to cancer.
A higher rate of unemployment leads to crime.
what is a complex hypothesis
one in which a relationship exists among more than two variables — there are multiple independent and/or dependent variables
what is an example of a complex hypothesis
Smoking and other drug use lead to cancer, tension, chest infections, etc.
Higher rates of unemployment, poverty, and illiteracy lead to crimes such as dacoity.
what is a non-directional hypothesis?
A two-tailed non-directional hypothesis predicts that the independent variable will have an effect on the dependent variable, but the direction of the effect is not specified.
what is an example of a non-directional hypothesis
there will be a difference in how many numbers are correctly recalled by children and adults
what is a directional hypothesis
A one-tailed directional hypothesis predicts the nature of the effect of the independent variable on the dependent variable
what is an example of a directional hypothesis
adults will correctly recall more words than children
what is a statistical hypothesis (null)
The null hypothesis states that there is no relationship between the two variables being studied (one variable does not affect the other). It states results are due to chance and are not significant in terms of supporting the idea being investigated.
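Testing a null hypothesis can be made concrete with a worked example. The sketch below is hypothetical (all scores are made-up illustration data): it computes a pooled two-sample t statistic and rejects H0 ("the group means do not differ") when |t| exceeds the two-tailed critical value.

```python
# Hypothetical example: testing H0 ("the group means do not differ") with a
# pooled two-sample t statistic. All scores below are made-up illustration data.
import statistics

control = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7]
treatment = [5.9, 6.1, 5.7, 6.0, 5.8, 6.2, 5.9, 6.1]

n1, n2 = len(control), len(treatment)
pooled_var = ((n1 - 1) * statistics.variance(control)
              + (n2 - 1) * statistics.variance(treatment)) / (n1 + n2 - 2)
se = (pooled_var * (1 / n1 + 1 / n2)) ** 0.5
t = (statistics.mean(treatment) - statistics.mean(control)) / se

# Two-tailed critical value for alpha = 0.05 with df = 14 is about 2.145.
reject_h0 = abs(t) > 2.145
```

If |t| had fallen below the critical value, the result would be consistent with chance and H0 would not be rejected.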
what are the 3 research variables we need to know
- independent
- dependent
- extraneous (confounding)
what is the independent variable
The presumed cause (the intervention)
what is the dependent variable
The presumed effect (Outcome)
what is the extraneous (confounding) variable
contaminating factors that might obscure the relationship between the variables of central interest
what are the strengths of homogeneity
- Easy to achieve in all types of research
- Could enhance interpretability of relationships
what are the limitations of homogeneity
- Limits generalizability
- Requires knowledge of which variables to control
- Range restriction could lower statistical conclusion validity
what is intervention fidelity
concerns the extent to which the implementation of an intervention is faithful to its plan. There is growing interest in intervention fidelity and considerable advice on how to achieve it
what are the strengths of randomization
- Controls all possible sources of confounding variation, without the researcher needing to identify or measure them
- Considered the strongest method of controlling participant characteristics
what are the limitations of randomization
Constraints (ethical, practical) on which variables can be manipulated Possible artificiality of conditions Resistance to being randomized by many people
what are the strengths of statistical control
- Enhances ability to detect and interpret relationships
- Relatively economical means of controlling several confounding variables
what are the limitations of statistical control
- Requires knowledge of which variables to control, as well as measurement of those variables
- Requires some statistical sophistication
what is reliability
refers to the accuracy and consistency of information obtained in a study
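One common way to quantify reliability is test–retest consistency: correlating two administrations of the same instrument. The sketch below is hypothetical (the scores are made up); values of r near 1.0 suggest the instrument measures consistently over time.

```python
# Hypothetical sketch: test-retest reliability quantified as the Pearson
# correlation between two administrations of the same instrument.
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

time_1 = [12, 15, 9, 20, 14, 17]   # made-up scores at first administration
time_2 = [13, 14, 10, 19, 15, 16]  # made-up scores at retest
r = pearson_r(time_1, time_2)      # values near 1.0 suggest high reliability
```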