Research Design Flashcards
Misra et al. (2013)
conducted a cross-sectional survey design which recruited patients and the dentists who provided their treatment
patients and dentists were asked about (4)
general issues discussed
specific info given about oral health
what procedures were performed
further actions planned and agreed to
who recalled more info? dentists or patients
dentists
dentists recalled giving more (2)
dental health education than patients remembered
discussing more future actions than patients remembered
technical aspects (crowns/bridges) of the encounter were more often reported by both dentist and patient than were
psychological issues (pain/embarrassment)
both indicated higher levels of — discussion than of dental health education or agreed future actions
procedure
there was no relationship found between patient recall and
satisfaction
conclusions
dentist recall may be higher due to their control over the consultation structure
research
results from individual studies
evidence
cumulative results across studies
what is the synthesis of all valid research studies that answer a specific question?
evidence
what does evidence-based practice involve?
tracking down the available evidence, assessing validity, and using the best evidence to inform treatment decisions
in addition to a significant literature on a topic, clinicians must also consider (3)
clinical circumstances
experience and professional judgement
patient’s values and preferences
validity
“closeness to the truth”
the degree to which the design and method provide for accurate investigation of the event in question
as research reviewers you have the opportunity to assess both internal and external validity
examples of threats to internal validity (3)
selection bias
maturation
instrumentation
— — to groups addresses many threats to internal validity, but not all
random assignment
external validity is the ability to generalize findings: (3)
beyond the subjects in the study
beyond the environmental constraints of the current study
to other temporal periods
as controls increase (increasing internal validity) the generalizability of findings may
suffer (decreasing external validity)
research bias: definitions (2)
systematic error that causes a preference for one outcome over another
problematic or incomplete controls that result in skewed observations
pretrial bias (2)
selection bias
channeling bias
selection bias
procedure for selection of participants is different across groups
channeling bias
placing participants in study conditions according to prognosis, age, fragility, etc
interviewer bias
error introduced by researcher collecting data
chronology bias
historic control subject to changes in practice
recall bias
skewed or faulty recollection of events/associations
transfer bias
differential attrition across conditions
misclassification bias
problems with operational definition of grouping variables
performance bias
differences in clinical quality of intervention across providers
citation bias
comparative evidence limited by what is accessible
publication bias
previous evidence not available due to publication preferences
confounding bias
observed association due to some unknown variable
quantitative inquiry rooted in empiricism
only those phenomena which can be measured are “real”
measures are often numeric scales
qualitative inquiry based in hermeneutics
the interpretation of contextual meaning
measures are subjective and dependent upon perceptual biases
evidence hierarchy (strongest to weakest) (6)
systematic reviews and meta-analyses
randomized controlled trials
cohort studies
case-control studies
cross-sectional studies
case reports
experimental research is a type of — inquiry
quantitative
experimental research investigates
“cause”
experimental research
studies in which the researcher controls or manipulates the variables under investigation
observational research may be either
quantitative or qualitative
observational research is research without experimental
controls (may include comparison to natural groups)
observational research is sometimes called
“quasi-experimental”
observational research
designs provide for investigation of relationships, but not cause
variable
any factor relevant to a particular study
may be known or unknown
examples of variables (4)
age
ethnicity
socioeconomic status
disease history
independent variable
a factor or condition that changes naturally or is intentionally manipulated (e.g., a grouping variable) by the investigator to observe the effect
- known and controlled by the experimenter
- “causative factor”
dependent variable
an observed variable in an experiment or study for which changes are determined by the level of one or more independent variables
- a factor directly affected by another
- “response” or “outcome”
confounding variable
statistically, an extraneous variable that correlates significantly with both the dependent variable and the independent variable
a factor not considered or recognized (unmeasured) by the researcher that has significant impact on the dependent variable or outcome of interest
-“confounding influences” or “error”
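A minimal Python sketch (not from the source) of how a confounder works: a hidden variable Z drives both the "exposure" X and the "outcome" Y, so the two groups differ on the outcome even though the exposure has no effect at all. Variable names and parameters here are illustrative assumptions.

```python
import random

random.seed(0)
n = 10_000

# Z is the unmeasured confounder.
z = [random.random() for _ in range(n)]

# Exposure X is driven by Z (plus noise); it has NO direct effect on Y.
x = [1 if zi + random.gauss(0, 0.2) > 0.5 else 0 for zi in z]

# Outcome Y is driven by Z only.
y = [zi + random.gauss(0, 0.2) for zi in z]

# Crude comparison of mean outcome by exposure group.
mean_exposed = sum(yi for xi, yi in zip(x, y) if xi) / sum(x)
mean_unexposed = sum(yi for xi, yi in zip(x, y) if not xi) / (n - sum(x))

# The groups differ noticeably, yet the difference is due entirely to Z —
# this is the "observed association due to some unknown variable" above.
print(round(mean_exposed - mean_unexposed, 2))
```

The point of the sketch: a naive group comparison would misattribute the Y difference to X, which is exactly the error confounding bias describes.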
prospective research looks at
events that have not yet happened or constructs that have not yet been measured
retrospective research looks at
data that already exists
what is the strongest evidence for demonstrating cause and effect?
randomized controlled trial
why do we do random assignment?
it reduces the effect of bias due to intervening variables
-assumes that confounding conditions will be equally distributed across groups
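A minimal Python sketch (not from the source) of the assumption above: when participants are shuffled into arms at random, an unmeasured trait (here "age", chosen as an illustrative example) tends to be equally distributed across the groups. Sample sizes and ranges are assumptions.

```python
import random

random.seed(1)

# Hypothetical participant ages (an unmeasured background variable).
ages = [random.randint(18, 80) for _ in range(1000)]

# Random assignment: shuffle, then split into two arms.
random.shuffle(ages)
treatment, control = ages[:500], ages[500:]

mean_t = sum(treatment) / len(treatment)
mean_c = sum(control) / len(control)

# With random assignment, the group means on the background variable
# should be close — the trait is roughly balanced across arms.
print(round(abs(mean_t - mean_c), 1))
```

The same logic applies to any confounder, measured or not, which is why randomization addresses many (though not all) threats to internal validity.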
appraising a study: what to look for (6)
who were the participants?
how were they assigned to groups?
what were the study conditions?
what were the hypotheses?
how was the intervention assessed?
how were the data analyzed?