research midterm Flashcards
basic research
acquisition of new knowledge, may not have direct clinical implications
applied (clinical) research
advances clinical developments; answers questions with direct clinical application.
classified as explanatory, exploratory or descriptive.
types of clinical research
patient oriented
epidemiologic and behavioral studies
outcomes research and health sciences research
qualitative research
capture naturally occurring phenomena
quantitative research
logical and controlled relationship among variables
scientific method
systematic, empirical, controlled, critical
research process
identify question
design study
implement study
analyze data
share findings
experimental research
highest degree of validity
RCT is gold standard
efficacy
performance of intervention under ideal conditions
effectiveness
in real world conditions
PICO
P- population or problem
I- intervention, independent variable
C- comparison
O- outcomes
characteristics of a good research question
important
ethical
feasible
what is a theory
not an untested hunch; a set of interrelated concepts that specifies relationships among variables
purpose of theories
summarize existing knowledge
predict
stimulate development of new knowledge
provide a basis for asking questions in applied research
concepts
building blocks of theories
become variables
must be operationally defined
constructs
intangible concepts
not observable
measured by correlated behaviors
propositions
concepts integrated into generalized theory
state relationships between variables
models
simplification of theory
structural representation of concepts within phenomenon
deductive reasoning
theory to confirmation
top-down
few or no prior observations
broad to specific
inductive reasoning
observation to theory
bottom-up
starts with empirical observation
specific to broad
characteristics of good theories
rational
testable
economical
relevant
adaptable
ways of knowing
best to worst
scientific evidence
inductive/deductive reasoning
experience
authority
tradition
scientific method steps
question
hypothesis
experiment
results
conclusion
evidence based practice
integration of best research evidence with clinical expertise and patient values
hierarchy of evidence
most bias control to least
experimental designs
quasi-experimental
non-experimental
case report/anecdote
evidence is not enough
patient values and expectations
your own clinical experience
benefits and risks
circumstances and setting
pyramid of certainty
SR of RCT
RCT
SR of cohort studies
cohort studies
SR of case control studies
case control studies
case studies
clinical experience, expert opinion, mechanism based reasoning
3 pillars of EBP
evidence
expertise
pt values
clinical circumstances
space
cost
skill
process of EBP (5 a’s)
ask
acquire
appraise
apply
assess
researchers must
assure pt rights
practice honesty and integrity
justify project based on potential scientific value of results
conduct meaningful research
obligations
minimize effect of personal bias in measurement
never falsify or misrepresent data
avoid conflict of interest
publish findings
how does research differ from clinical practice?
intents
innovative
plan
guiding ethical principles
respect for persons
beneficence
justice
4 components of informed consent
disclosure
comprehension
voluntariness
competence
do participants have to be informed of control group?
yes
do participants have the right to switch groups?
yes, and should be offered treatment at end of trial.
when does informed consent start?
begins before data collection and continues for the duration of the study; consent is ongoing.
who are vulnerable populations?
prisoners
pregnant
children
developmental disability, mental illness
types of harm
physical- injury, side-effects, no improvement
economic
social
Nuremberg code
first formal guidelines
voluntary consent
competence of investigator
declaration of Helsinki
independent review of protocols
Belmont report
common rule
-respect for persons
-beneficence
-justice
national research act
clearly stated research design
informed consent
IRB
IRB members
at least 5
-not all same gender
-not all same professional group
-one member primarily concerned with nonscientific issues
-one public member
purpose of IRB
ensure respect for persons via informed consent
ensure beneficence via assessing risks and benefits
ensure justice via fair subject selection
ultimate goal: scientific truth
replication
appropriate power
validity in study design and analysis
nominal variables
no numerical order
dichotomous nominal variables
when there can only be two answers
ex. yes/no
ordinal variables
rank order, unequal intervals
interval variables
rank order, equal intervals, no true zero
ratio variables
interval scale with true zero, no negatives
if convert to lower scale…
… lose information
can you convert to higher scale?
no.
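The one-way scale conversion above can be sketched in Python. This is a hypothetical example (the age cut-points are assumed for illustration): converting ratio data (age in years) down to an ordinal scale (age group) discards information that cannot be recovered, which is why converting back up is impossible.

```python
ages = [19, 24, 37, 41, 65]  # ratio scale: true zero, equal intervals

def to_ordinal(age):
    # assumed cut-points, chosen only for illustration
    if age < 30:
        return "young"
    elif age < 60:
        return "middle"
    return "older"

groups = [to_ordinal(a) for a in ages]
print(groups)  # ['young', 'young', 'middle', 'middle', 'older']
# 19 and 24 both become "young": the exact ages are lost,
# so the ordinal data cannot be converted back to ratio data.
```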
reliability
reproducibility, consistency
can repeat on 2+ occasions
validity
accuracy, correctness
measurement error equation
observed score = true score +/- error
systematic error
always over/underestimate
random error
due to chance
unpredictable
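The measurement error equation and the two error types above can be illustrated with a small simulation. This is a hedged sketch with assumed numbers (a true score of 100, a miscalibrated instrument that always adds 2, and chance variation): averaging many observations cancels the random error and exposes the systematic error.

```python
import random

true_score = 100.0
systematic_error = 2.0  # always overestimates (e.g., a miscalibrated instrument)

random.seed(0)
observations = []
for _ in range(1000):
    random_error = random.gauss(0, 1.5)  # due to chance, unpredictable, mean 0
    # observed score = true score +/- error
    observations.append(true_score + systematic_error + random_error)

mean_observed = sum(observations) / len(observations)
# random error averages toward zero, so the mean reveals the systematic bias
print(round(mean_observed - true_score, 1))  # close to the systematic bias of 2.0
```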
regression to mean
extreme scores tend to move toward the mean on retest
because random measurement errors average out over repeated measurements
rater measurement error
error in perception or recording
instrument measurement error
not calibrated
variability of characteristic being observed error
something that is always changing
ex. blood glucose
relative reliability
ratio of variability between subjects to total variability in scores
(ICC and kappa)
absolute reliability
how much of a measured score is likely due to error
(standard error of the measurement)
acceptable ICC values
> 0.90 = best for clinical measures
0.80 = acceptable
0.75 = good
<0.75 = poor to moderate
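The ICC benchmarks above can be expressed as a small helper. This is a hypothetical function (not from any library) that simply mirrors the cut-points listed on this card:

```python
def rate_icc(icc):
    # assumed cut-points, taken directly from the flashcard above
    if icc > 0.90:
        return "best for clinical measures"
    if icc >= 0.80:
        return "acceptable"
    if icc >= 0.75:
        return "good"
    return "poor to moderate"

print(rate_icc(0.93))  # best for clinical measures
print(rate_icc(0.82))  # acceptable
print(rate_icc(0.60))  # poor to moderate
```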
how to improve reliability?
standardize measurement methods
take multiple measurements
train and test observers
calibrate and improve instruments
automate instruments
blind to reduce bias
choose sample with range of scores
test-retest reliability
instrument is capable of measuring a variable consistently
considerations for test-retest
interval between tests
carryover
testing effects
change over time
intra-rater reliability
within-rater
same rater
rater bias
influenced by memory of first score
inter-rater reliability
between raters
internal consistency
often used to evaluate scales
how well items reflecting the same construct yield similar results
split half reliability
divide and compare halves
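Split-half reliability can be sketched as follows. This is a toy example with assumed data: each row is one subject's responses to a 6-item scale; the odd-item totals are correlated with the even-item totals across subjects (real analyses typically also apply the Spearman-Brown correction, omitted here).

```python
# assumed toy data: 4 subjects x 6 items, each rated 1-5
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 4, 5, 5],
]

odd_totals = [sum(r[0::2]) for r in responses]   # items 1, 3, 5
even_totals = [sum(r[1::2]) for r in responses]  # items 2, 4, 6

def pearson(x, y):
    # plain Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson(odd_totals, even_totals), 2))  # ≈ 0.99: halves agree closely
```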
alternate forms reliability
2 versions of same instrument
same unit of measurement
issues affecting validity
levels of measurement
reliability
stability
baseline score
responsiveness
detect small but meaningful change
minimal detectable change (MDC)
real change (beyond measurement error); not necessarily clinically significant
minimal clinically important difference (MCID)
real; clinically significant
normally larger than MDC
measurement validity (3 items)
can the test
-discriminate
-evaluate
-predict
face validity and who it’s judged by
appears to test what is intended
judged by users of test after development
content validity and who it’s judged by.
adequately represent concept
typically questionnaire
measured by expert panel review
3 requirements for content validity
represent full scope of construct
number of items proportionate to importance of component
no irrelevant items
criterion-related validity
whether outcomes of a test can substitute for an established gold standard
concurrent validity
scores from new measure correlate with scores from well-established measure administered at same time
predictive validity
outcome of target test can be used to predict future criterion score
construct validity
ability to measure theoretical dimensions of construct
methods of construct validity
known groups method
convergence and divergence
factor analysis
population
persons, objects, events that meet specific criteria
target populations
larger populations to which results will be generalized
accessible population
actual population of subjects available to be chosen for study
sample
subgroup of the accessible population; allows results to be generalized to the population
inclusion criteria
what makes someone eligible
exclusion criteria
factors that preclude someone from being a subject even if they meet the inclusion criteria
sampling bias
systematically misrepresents population
conscious or unconscious
sampling error
randomly misrepresents population
probability samples
random selection
considered representative of population
can estimate sampling error
non-probability samples
non-random
not considered representative of population
cannot estimate sampling error
simple random sampling
random sampling
systematic sampling
select every nth person
stratified random sampling
specify a number from each category
cluster sampling
multilayer/stage
counties>city blocks>households>individuals
convenience sampling
basis on availability
volunteers
quota sampling
specify number from each category, but not random
purposive sampling
subjects hand-picked by specific criteria
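Three of the probability sampling methods above can be sketched in Python. This is a hypothetical example with an assumed accessible population of 20 subjects, drawing a sample of 4:

```python
import random

population = [f"subject_{i}" for i in range(20)]
random.seed(1)

# simple random sampling: every member has an equal chance of selection
simple = random.sample(population, 4)

# systematic sampling: select every nth person (n = 20 / 4 = 5)
n = len(population) // 4
systematic = population[::n]

# stratified random sampling: specify a number from each category,
# then select randomly within each stratum (assumed strata A and B)
strata = {"A": population[:10], "B": population[10:]}
stratified = [random.choice(group) for group in strata.values()]

print(len(simple), len(systematic), len(stratified))  # 4 4 2
```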