Research Design Flashcards
What is a theory?
A general principle or body of principles offered to explain a phenomenon.
Like Dalton’s atomic theory or Einstein’s theory of relativity
Range of nursing theories
Grand theories
Broadest scope, most abstract
Apply to all nursing activities
Mid-range theories
Narrower in scope
Bridge between grand theories & practice
Practice theories
Most narrow scope & least abstract
Jean Watson’s Caring Science Theory
Examines what caring consists of and its role in nursing
- read on if you desire
Caring can be effectively demonstrated & practiced only interpersonally.
Caring consists of carative factors that result in the satisfaction of certain human needs.
Effective caring promotes health & individual or family growth.
Caring responses accept a person not only as he or she is now but as what he or she may become.
A caring environment is one that offers the development of potential while allowing the person to choose the best action for himself or herself at a given point in time.
Caring is more “healthogenic” than is curing. A science of caring is complementary to the science of curing.
The practice of caring is central to nursing.
Conceptual Models
Represent a less formal attempt to explain phenomena than theories
Deal with abstractions, assembled in a coherent scheme
Just understand that implicitly or explicitly, studies should have a ____________ or ______________ framework.
theoretical; conceptual
What is the caveat with nursing theories?
Nursing “Grand Theories” evolved from efforts to establish nursing as a profession, separate from medicine.
Difficult to empirically test the aspirational, abstract grand theories, so less relevance to evidence-based practice.
A portion of a population is selected to represent the entire population …. what is this called?
Sampling
Eligibility criteria include
inclusion and exclusion criteria - specific characteristics that define the population
What are strata?
Subpopulations of a population, such as male and female
What is the target population?
The entire population of interest
What is a representative sample?
A sample whose key characteristics closely approximate those of the target population—a sampling goal in quantitative research
Representative samples are more easily achieved with …..
Probability sampling
Homogeneous populations
Larger samples
What is sampling bias?
The systematic over- or under-representation of segments of the population on key variables when the sample is not representative
What is a sampling error?
Differences between sample values and population values
E.g. population mean age = 65.6 yrs, sample mean age = 59.2 yrs
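The age example can be demonstrated with a quick simulation — a minimal sketch using a hypothetical population of ages, showing that a single sample’s mean rarely matches the population mean exactly:

```python
import random
import statistics

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical population of 10,000 ages (mean ~65.6 years)
population = [random.gauss(65.6, 12) for _ in range(10_000)]
pop_mean = statistics.mean(population)

# Draw one sample of 50 people and compare means
sample = random.sample(population, 50)
sample_mean = statistics.mean(sample)
sampling_error = sample_mean - pop_mean

print(f"population mean: {pop_mean:.1f}")
print(f"sample mean:     {sample_mean:.1f}")
print(f"sampling error:  {sampling_error:+.1f}")
```

Rerunning with a different seed changes the error; on average, larger samples produce smaller sampling errors.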
Difference between probability sampling and nonprobability sampling …..
One involves random selection of elements with each having an equal, independent chance of being selected
The other does not involve random selection of elements
Types of nonprobability sampling
Convenience sampling
Snowball (network) sampling
Quota sampling
Purposive sampling
Convenience sampling involves
selecting whoever or whatever is most accessible and conveniently available
Most widely used approach by quantitative researchers
Most vulnerable to sampling biases
Snowball Sampling
Referrals from other people already in a sample
Used to identify people with distinctive characteristics
Used by both quantitative and qualitative researchers; more common in qualitative
Quota Sampling
Convenience sampling within specified strata of the population
Enhances representativeness of sample
Infrequently used, despite being a fairly easy method of enhancing representativeness
Consecutive sampling involves ….
COME, LET’S GO, EVERYONE INSIDE — everyone who is here!
Involves taking all of the people from an accessible population who meet the eligibility criteria over a specific time interval, or for a specified sample size
A strong nonprobability approach for “rolling enrollment” type accessible populations
Risk of bias low unless there are seasonal or temporal fluctuations
Purposive (Judgmental) Sampling
Sample members are hand-picked by researcher to achieve certain goals
Used more often by qualitative than quantitative researchers
Can be used in quantitative studies to select experts or to achieve other goals
Types of Probability Sampling
Simple random sampling
Stratified random sampling
Cluster (multistage) sampling
Systematic sampling
Simple Random Sampling
Uses a sampling frame – a list of all population elements
Involves random selection of elements from the sampling frame
Example: a list of all households in Montgomery County, from which 500 households are randomly selected
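The Montgomery County example can be sketched in a few lines of Python (the 120,000 household IDs in the frame are a made-up number for illustration):

```python
import random

random.seed(42)  # fixed seed for a reproducible illustration

# Sampling frame: a list of ALL population elements (hypothetical IDs)
sampling_frame = [f"household_{i}" for i in range(1, 120_001)]

# Simple random sampling: each element has an equal, independent
# chance of being selected, and no element is chosen twice
selected = random.sample(sampling_frame, k=500)
```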
Stratified Sampling
Population is first divided into strata, then random selection is done from the stratified sampling frames
Enhances representativeness
Can sample proportionately or disproportionately from the strata
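Proportionate stratified sampling can be sketched like this (the strata sizes and the `proportionate_stratified_sample` helper are hypothetical, not from the source):

```python
import random

def proportionate_stratified_sample(strata, total_n, seed=0):
    """Randomly select from each stratum in proportion to its
    share of the population."""
    rng = random.Random(seed)
    pop_size = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        n = round(total_n * len(members) / pop_size)
        sample.extend(rng.sample(members, n))
    return sample

# Hypothetical population of 1,000 stratified by sex (600 F / 400 M)
strata = {
    "female": [f"F{i}" for i in range(600)],
    "male": [f"M{i}" for i in range(400)],
}
sample = proportionate_stratified_sample(strata, total_n=100)
# The sample mirrors the population split: 60 female, 40 male
```

For disproportionate sampling, each stratum’s n would instead be set directly (e.g., to oversample a small stratum).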
Cluster (Multistage) Sampling
Successive random sampling of units from larger to smaller units (e.g., states, then zip codes, then households)
Widely used in national surveys
Larger sampling error than in simple random sampling, but more efficient
Sample size adequacy is a key determinant of ___________ in quantitative research.
Sample size needs can and should be estimated through ________ for studies seeking causal inference.
sample quality; power analysis
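As a sketch of what a power analysis computes, the normal-approximation formula for comparing two group means gives the per-group sample size from the expected effect size (Cohen’s d), the significance level, and the desired power. (Dedicated tools give slightly larger answers because they use the t distribution; this is an illustration, not a full power analysis.)

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-group comparison
    of means (two-tailed), via the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-tailed
    z_beta = z.inv_cdf(power)           # value giving desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (d = 0.5) needs ~63 subjects per group;
# a small effect (d = 0.2) needs far more
medium = n_per_group(0.5)  # 63
small = n_per_group(0.2)   # 393
```

Note how sample size needs balloon as the expected effect shrinks — this is the link between sample size and statistical conclusion validity.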
The big question of data collection?
Do I collect new data specifically for research purposes, or do I use existing data (historical data, records, an existing data set)?
Major types of data collection methods?
Self-report; observation; biophysiologic measures
Major considerations in choosing a data collection approach ….
Do you want more open-ended data or more objective, quantifiable data? How obtrusive is the method?
Structured self-reports can be either
Interview schedule
Questions are prespecified but asked orally.
Either face-to-face or by telephone
Questionnaire
Questions prespecified in written form, to be self-administered by respondents
Advantages of Questionnaires (compared with interviews)
Lower costs
Possibility of anonymity, greater privacy
Lack of interviewer bias
Advantages of Interviews (Compared with Questionnaires)
Higher response rates
Appropriate for more diverse audiences
Opportunities to clarify questions or to determine comprehension
Opportunity to collect supplementary data through observation
What are scales used for?
Used to make fine quantitative discriminations among people with differing attitudes, perceptions, and traits
The Likert scale is an example - consists of several declarative statements (items) expressing viewpoints
Responses are on an agree/disagree continuum (usually 5 or 7 response options).
Responses to items are summed to compute a total scale score.
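Scoring a Likert scale is just summation, with negatively worded items reverse-coded first so that a higher total always means more of the attribute (the 4-item example below is hypothetical):

```python
def likert_total(responses, reverse_items=frozenset(), points=5):
    """Sum item responses into a total scale score.

    responses: one rating per item (1..points)
    reverse_items: indices of negatively worded items to reverse-code
    """
    return sum(
        (points + 1 - r) if i in reverse_items else r
        for i, r in enumerate(responses)
    )

# 4-item scale on a 5-point agree/disagree continuum;
# the item at index 3 is negatively worded, so its "2" becomes a "4"
score = likert_total([4, 5, 3, 2], reverse_items={3})  # 4 + 5 + 3 + 4 = 16
```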
Semantic Differential Scales - Require ratings of various concepts
Rating scales involve bipolar adjective pairs, with 7-point ratings.
Ratings for each dimension are summed to compute a total score for each concept.
What does a Visual Analog Scale do?
Measures subjective experiences (e.g., pain, nausea) on a 100 mm straight line
Response set biases
Biases reflecting the tendency of some people to respond to items in characteristic ways, independently of item content
Observational Rating Scales
Ratings are on a descriptive continuum, typically bipolar
Ratings can occur:
at specific intervals
upon the occurrence of certain events
after an observational session (global ratings)
Evaluation of Observational Methods
Excellent method for capturing many clinical phenomena and behaviors
Potential problem of reactivity when people are aware that they are being observed
Risk of observational biases—factors that can interfere with objective observation
Evaluation of Self Report Methods
Strong on directness
Allows access to information otherwise not available to researchers
But can we be sure participants actually feel or act the way they say they do?
Difference between in vivo and in vitro biophysiologic measurements
In vivo measurements occur on or within an organism’s body (e.g., blood pressure)
In vitro measurements are performed outside the organism’s body
Evaluation of biophysiologic measures
Strong on accuracy, objectivity, validity, and precision
May or may not be cost-effective for nurse researchers
Advanced skills may be needed for interpretation.
What is a psychometric assessment?
What are the key criteria?
An evaluation of the quality of a measuring instrument.
Key criteria in a psychometric assessment:
Reliability
Validity
An experimental research design contains what?
Intervention - Randomization - Control
Quasi-Experimental
Intervention
but missing randomization and/or control
Nonexperimental
No intervention
Observational or descriptive
May have random sampling, but this is not the same as random assignment
Within-subjects design - what is it?
The same people in the experiment are compared at different times or under different conditions
Between-subjects design
Different people are compared
Group A subjects take actual study drug
Group B subjects take placebo
What type of comparisons will be made to illuminate relationships? That is the question ….
Within-subjects
Between-subjects
Single blind and double blind
Single blind - subjects don’t know which group they are in
Double blind - neither researchers nor subjects know who is in which group
Prospective and Retrospective Data Collection
Prospective - looking forward
Retrospective - looking backward
Three key criteria for making causal inferences
The cause must precede the effect in time
There must be a demonstrated relationship between the cause and the effect
The relationship between the presumed cause and effect cannot be explained by a third variable
Biologic plausibility
Another criterion for causality - the causal relationship should be consistent with evidence from basic physiologic studies
Coherence
Another criterion for causality - evidence of the relationship between cause and effect should come from multiple sources
What type of designs offer the strongest evidence of whether a cause results in an effect?
Experimental Designs
Characteristics of a true experiment
Manipulation
Control
Randomization
Crossover design
Subjects are exposed to 2+ conditions in random order
subjects “serve as their own control”
Factorial
More than one independent variable is experimentally manipulated
What is treatment fidelity?
Also called intervention fidelity …
whether the treatment as planned was actually delivered and received
Quasi-experiments involve an intervention but lack ……
randomization or control group
If there is no intervention, this is called ………….
observational (nonexperimental) research
What are the two main categories of quasi-experiments?
Within-subject designs - one group is studied before and after the intervention
Nonequivalent control group designs
- those getting the intervention are compared with a nonrandomized comparison group
Cause probing questions for which manipulation is not possible are typically addressed with a ………….
correlational design
Correlational designs can be prospective or retrospective
Is all research cause-probing?
No
Some research is descriptive (like ascertaining the prevalence of a health problem)
Other research is descriptive correlational - the purpose is to describe whether variables are related, without ascribing a cause-and-effect connection
Cross sectional design
Data are collected at a single point in time across different strata or groups (e.g., age groups)
Longitudinal design
Data are collected two or more times during an extended period
Ways of controlling confounding variables
Achieving constancy of conditions
Control over environment, setting and time
Control over intervention via a formal protocol
A more ____________ sample may minimize confounders, but limits the ability to generalize outside the study
homogeneous
Inclusion and Exclusion criteria work to exclude what ?
Confounding variables
What are intrinsic factors?
Subject characteristics that can confound results; controlled through inclusion and exclusion criteria
Different methods of controlling intrinsic factors?
Randomization
Subjects as own controls (crossover design)
Homogeneity (restricting the sample)
Matching
Statistical control (e.g., analysis of covariance)
What is internal validity?
The extent to which it can be inferred that the independent variable caused or influenced the dependent variable
What is external validity?
The generalizability of the observed relationships to the target population
What is statistical conclusion validity?
The ability to detect true relationships statistically
Threats to internal validity
Temporal ambiguity - unclear whether the presumed cause occurred before the outcome
Selection threat (the single biggest threat in studies that do not use an experimental design)
What are history, maturation, and mortality threats?
These are all threats to internal validity
History threat - something else occurring at the same time as the causal factor
Maturation threat - processes that result simply from the passage of time
Mortality threat - loss of participants for whatever reason
Threats to external validity
Selection bias - sample selected for the study does not accurately represent the target population
Expectancy effect (Hawthorne effect) - makes effects observed in a study unlikely to be replicated in real life
Threats to Statistical Conclusion Validity
Low statistical power (e.g., sample too small)
TIP: If researchers show no difference in outcome measure (DV) between experimental & control groups, sample size may have been too small to detect difference!
Weakly defined “cause”—independent variable not powerful
Unreliable implementation of a treatment—low intervention fidelity
What is reliability?
The degree to which an instrument accurately and consistently measures the target attribute
Reliability coefficients range from ________ and are considered good/acceptable at 0.________ or more
0.00-1.0 ; 0.70
What are the 3 aspects of reliability that can be evaluated?
Stability
Internal Consistency
Equivalence
Stability involves
test-retest reliability
It is the extent to which scores are similar on two separate administrations of an instrument
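Test-retest reliability is typically quantified as the correlation between the two administrations — a minimal sketch with hypothetical scores from six people:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores from six people on two administrations,
# two weeks apart
time1 = [12, 18, 25, 30, 22, 15]
time2 = [13, 17, 27, 29, 21, 16]

test_retest_r = pearson_r(time1, time2)  # close to 1.0 → stable scores
```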
Internal Consistency is assessed by computing
coefficient alpha (Cronbach’s alpha); 0.70 or more is desirable
This is the most widely used approach to assessing reliability
What is internal consistency?
The extent to which all the items on an instrument are measuring the same unitary attribute
E.g., all items on an anxiety questionnaire should be aimed at assessing anxiety levels
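Coefficient alpha can be computed directly from its formula — k/(k−1) × (1 − Σ item variances / variance of total scores). A minimal sketch with made-up item data:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding all
    respondents' answers to that item."""
    k = len(item_scores)
    totals = [sum(answers) for answers in zip(*item_scores)]
    item_var_sum = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Three items answered by four respondents; items that rank
# respondents identically yield alpha = 1.0
consistent = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Items that disagree with the rest of the scale drag the coefficient down, which is why 0.70 or more is the usual benchmark.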
Equivalence is most relevant for ______________
structured observations
Assessed by comparing agreement between observations or ratings of two or more observers
Equivalence is the degree of similarity between alternative forms of an instrument or between multiple raters/observers using an instrument
Reliability is ______ in homogeneous than in heterogeneous subject samples.
lower
Reliability is ____________ in shorter than in longer multi-item scales.
lower
Reliability is necessary (but not sufficient) for validity.
True or false?
True
An instrument can be _____________ but not __________________ but it can’t be valid if it lacks _______________
reliable; valid; reliability
An instrument can be valid if it lacks reliability. True or false?
False.
An instrument can be reliable but not valid
What is validity?
The degree to which an instrument measures what it is supposed to measure
Four aspects of validity
Face validity
Content validity
Criterion-related validity
Construct validity
Face validity
Refers to whether the instrument looks as though it is an appropriate measure of the construct
Based on judgment; no objective criteria for assessment
Content validity is evaluated by _________
expert evaluation, often via the content validity index (CVI)
What is criterion-related validity?
The degree to which the instrument is related to an external criterion
Validity coefficient acceptable score
Calculated by analyzing the relationship between scores on the instrument and the criterion; .70 or higher is acceptable
Predictive validity
Predictive validity: the instrument’s ability to distinguish people whose performance differs on a future criterion (e.g., SAT is predictive of college GPA)
Concurrent validity
Concurrent validity: the instrument’s ability to distinguish individuals who differ on a present criterion (e.g., SAT & current GPA are positively correlated >.7)
Construct validity - what is it concerned with?
What is this instrument really measuring?
Does it adequately measure the construct of interest?
What are two ways of assessing construct validity?
Known-groups technique
Testing relationships based on theoretical predictions
E.g., a tool for fatigue scores high for patients receiving radiation therapy, low for healthy persons
Factor analysis
Statistical test to determine whether items load on a single construct
Criteria for Assessing/Screening Diagnostic Instruments
Sensitivity: the instrument’s ability to correctly identify a “case”—i.e., to diagnose a condition
Specificity: the instrument’s ability to correctly identify noncases, that is, to screen out those without the condition
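Both criteria reduce to simple proportions from a 2×2 table of screening results versus true status (the counts below are hypothetical):

```python
def sensitivity(true_pos, false_neg):
    """Proportion of actual cases the instrument correctly identifies."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of noncases the instrument correctly screens out."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screen: 90 of 100 true cases flagged,
# 160 of 200 noncases correctly ruled out
sens = sensitivity(true_pos=90, false_neg=10)   # 0.90
spec = specificity(true_neg=160, false_pos=40)  # 0.80
```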
What are some studies that involve an intervention?
Mixed Method
Clinical trials
Evaluation research
Nursing intervention research
Studies that do not involve an intervention
Outcomes research
Surveys
Secondary analyses
Methodologic research
Mixed Method Research
Research that integrates qualitative and quantitative data and strategies in a single study or coordinated set of studies
What are clinical trials?
Studies that develop clinical interventions and test their efficacy and effectiveness
May be conducted in four phases
What is Phase I of a clinical trial?
Finalizes the intervention (includes efforts to determine dose, assess safety, strengthen the intervention)
Phase II of a clinical trial
Seeks preliminary evidence of effectiveness—a pilot test; may use a quasi-experimental design
Phase III of a clinical trial
Fully tests the efficacy of the treatment via a randomized clinical trial (RCT), often in multiple sites; sometimes called an efficacy study
Phase IV of a clinical trial
Focuses on long-term consequences of the intervention and on generalizability; sometimes called an effectiveness study
What does evaluation research do?
Examines how well a specific program, practice, procedure, or policy is working
What does outcome analysis do?
Seeks preliminary evidence about program success
Outcomes research
Designed to document the quality and effectiveness of health care and nursing services
key concepts:
Structure of care (e.g., nursing skill mix)
Processes (e.g., clinical decision-making)
Outcomes (end results of patient care)
Survey research obtains information via
self-reports via face-to-face interviews, telephone interviews, or self-administered questionnaires
Survey research is better for an ___________ rather than an _________________ inquiry
extensive; intensive
What does a secondary analysis do?
Study that uses previously gathered data to address new questions
Can be undertaken with qualitative or quantitative data
Cost-effective; data collection is expensive and time-consuming
Secondary analyst may not be aware of data quality problems and typically faces “if only” issues (e.g., if only there was a measure of X in the dataset).
What does methodologic research do?
Studies that focus on the ways of obtaining, organizing, and analyzing data
Can involve qualitative or quantitative data
Examples:
Developing and testing a new data-collection instrument
Testing the effectiveness of stipends in facilitating recruitment
How confident are you going into this exam?
I’m very confident !!!!!!! I will be victorious