Research Design Flashcards

1
Q

What is a theory?

A

A general principle or body of principles offered to explain a phenomenon.

Like Dalton’s atomic theory or Einstein’s theory of relativity

2
Q

Range of nursing theories

A

Grand theories
Broadest scope, most abstract
Apply to all nursing activities

Mid-range theories
Narrower in scope
Bridge between grand theories & practice

Practice theories
Narrowest scope & least abstract

3
Q

Jean Watson’s Caring Science Theory

A

Examines caring in depth: what it consists of and its role in nursing

  • read on if you desire

Caring can be effectively demonstrated & practiced only interpersonally.
Caring consists of carative factors that result in the satisfaction of certain human needs.
Effective caring promotes health & individual or family growth.
Caring responses accept a person not only as he or she is now but as what he or she may become.
A caring environment is one that offers the development of potential while allowing the person to choose the best action for himself or herself at a given point in time.
Caring is more “healthogenic” than is curing. A science of caring is complementary to the science of curing.
The practice of caring is central to nursing.

4
Q

Conceptual Models

A

Represent a less formal attempt to explain phenomena than theories
Deal with abstractions, assembled in a coherent scheme

5
Q

Just understand that implicitly or explicitly, studies should have a ____________ or ______________ framework.

A

theoretical; conceptual

6
Q

What is the caveat with nursing theories?

A

Nursing “Grand Theories” evolved from efforts to establish nursing as a profession, separate from medicine.
The aspirational, abstract grand theories are difficult to test empirically, so they have less relevance to evidence-based practice.

7
Q

A portion of a population is selected to represent the entire population. What is this called?

A

Sampling

8
Q

Eligibility criteria include

A

Inclusion and exclusion criteria: the specific characteristics that define the population

9
Q

What are strata?

A

Subpopulations of a population - such as male and female

10
Q

What is the target population?

A

The entire population of interest

11
Q

What is a representative sample?

A

A sample whose key characteristics closely approximate those of the target population—a sampling goal in quantitative research

12
Q

Representative samples are more easily achieved with …..

A

Probability sampling
Homogeneous populations
Larger samples

13
Q

What is sampling bias?

A

The systematic over- or under-representation of segments of the population on key variables when the sample is not representative

14
Q

What is a sampling error?

A

Differences between sample values and population values

E.g., population mean age = 65.6 yrs; sample mean age = 59.2 yrs
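Sampling error is simple arithmetic on the two means. A minimal Python sketch, with all ages invented for illustration:

```python
import statistics

# Hypothetical ages for an entire target population (values invented for illustration)
population = [72, 58, 65, 80, 61, 67, 70, 55, 63, 75]
# A small, non-representative sample drawn from it
sample = [58, 61, 55, 63]

population_mean = statistics.mean(population)  # 66.6
sample_mean = statistics.mean(sample)          # 59.25

# Sampling error: the difference between the sample value and the population value
sampling_error = sample_mean - population_mean
print(f"sampling error = {sampling_error:.2f} yrs")
```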

15
Q

Difference between probability sampling and nonprobability sampling …..

A

Probability sampling involves random selection of elements, with each element having an equal, independent chance of being selected

Nonprobability sampling does not involve random selection of elements

16
Q

Types of nonprobability sampling

A

Convenience sampling
Snowball (network) sampling
Quota sampling
Purposive sampling

17
Q

Convenience sampling involves

A

Sampling whoever or whatever is most accessible and conveniently available

Most widely used approach by quantitative researchers
Most vulnerable to sampling biases

18
Q

Snowball Sampling

A

Referrals from other people already in a sample

Used to identify people with distinctive characteristics
Used by both quantitative and qualitative researchers; more common in qualitative

19
Q

Quota Sampling

A

Convenience sampling within specified strata of the population
Enhances representativeness of sample
Infrequently used, despite being a fairly easy method of enhancing representativeness

20
Q

Consecutive sampling involves ….

A

COME, LET’S GO, EVERYONE INSIDE, everyone who is here!!!!

Involves taking all of the people from an accessible population who meet the eligibility criteria over a specific time interval, or for a specified sample size
A strong nonprobability approach for “rolling enrollment” type accessible populations
Risk of bias low unless there are seasonal or temporal fluctuations

21
Q

Purposive (Judgmental) Sampling

A

Sample members are hand-picked by researcher to achieve certain goals

Used more often by qualitative than quantitative researchers
Can be used in quantitative studies to select experts or to achieve other goals

22
Q

Types of Probability Sampling

A

Simple random sampling
Stratified random sampling
Cluster (multistage) sampling
Systematic sampling

23
Q

Simple Random sampling

A

Uses a sampling frame – a list of all population elements
Involves random selection of elements from the sampling frame

Example: a list of all households in Montgomery County, from which 500 households are randomly selected
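That example can be sketched with the standard library’s `random.sample`, which draws without replacement so every element has an equal chance (the household IDs are made up):

```python
import random

# Hypothetical sampling frame: a list of every household in the population
sampling_frame = [f"household-{i}" for i in range(1, 10001)]  # 10,000 households

random.seed(42)  # fixed seed so the example is reproducible
# Simple random sampling: 500 elements drawn without replacement
sample = random.sample(sampling_frame, k=500)

print(len(sample))       # 500
print(len(set(sample)))  # 500 (no duplicates)
```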

24
Q

Stratified Sampling

A

Population is first divided into strata, then random selection is done from the stratified sampling frames
Enhances representativeness
Can sample proportionately or disproportionately from the strata
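A minimal sketch of proportionate stratified sampling (the population and the 10% sampling fraction are invented):

```python
import random
from collections import defaultdict

# Hypothetical population of (id, stratum) pairs, stratified by sex
population = [(i, "male" if i % 3 == 0 else "female") for i in range(1, 901)]

# Build a stratified sampling frame per stratum
strata = defaultdict(list)
for element_id, stratum in population:
    strata[stratum].append(element_id)

# Proportionate sampling: draw the same fraction (10%) from each stratum
random.seed(0)
fraction = 0.10
sample = []
for frame in strata.values():
    sample.extend(random.sample(frame, k=round(len(frame) * fraction)))

print(len(sample))  # 90 (60 female + 30 male, mirroring the population's 2:1 split)
```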

25
Q

Cluster (Multistage) Sampling

A

Successive random sampling of units from larger to smaller units (e.g., states, then zip codes, then households)
Widely used in national surveys
Larger sampling error than in simple random sampling, but more efficient
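A two-stage version can be sketched like this (the zip codes, households, and stage sizes are all invented):

```python
import random

# Hypothetical frame: 50 zip codes, each containing 200 households
households_by_zip = {
    f"zip-{z}": [f"zip-{z}/house-{h}" for h in range(1, 201)] for z in range(1, 51)
}

random.seed(1)
# Stage 1: randomly select 5 of the 50 clusters (zip codes)
chosen_zips = random.sample(list(households_by_zip), k=5)

# Stage 2: randomly select 20 households within each chosen cluster
sample = []
for z in chosen_zips:
    sample.extend(random.sample(households_by_zip[z], k=20))

print(len(sample))  # 100 households, but drawn from only 5 zip codes
```

Clustering is what makes the sampling error larger: the 100 households come from only 5 zip codes instead of being spread across all 50.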

26
Q

Sample size adequacy is a key determinant of ___________ in quantitative research.
Sample size needs can and should be estimated through ________ for studies seeking causal inference.

A

sample quality; power analysis
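A power analysis for a two-group comparison of means can be sketched with the textbook normal-approximation formula (this is a common approximation, not any particular software’s method):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for comparing two means,
    where effect_size is Cohen's d (normal-approximation formula)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed test
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (d = 0.5) at alpha = .05 and 80% power needs ~63 per group;
# smaller effects need much larger samples
print(n_per_group(0.5))  # 63
print(n_per_group(0.2))  # 393
```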

27
Q

The big question of data collection?

A

Do I collect new data specifically for research purposes, or do I use existing data (historical data, records, an existing data set)?

28
Q

Major types of data collection methods?

A

Self report; observation; biophysiologic measures

29
Q

Major considerations in choosing the data collection approach ….

A

Do you want more open-ended data or more objective, quantifiable data? How obtrusive is the method?

30
Q

Structured self reports can be either

A

Interview schedule
Questions are prespecified but asked orally.
Either face-to-face or by telephone

Questionnaire
Questions prespecified in written form, to be self-administered by respondents

31
Q

Advantages of Questionnaires (compared with interviews)

A

Lower costs
Possibility of anonymity, greater privacy
Lack of interviewer bias

32
Q

Advantages of Interviews (Compared with Questionnaires)

A

Higher response rates
Appropriate for more diverse audiences
Opportunities to clarify questions or to determine comprehension
Opportunity to collect supplementary data through observation

33
Q

What are scales used for?

A

Used to make fine quantitative discriminations among people with different attitudes, perceptions, or traits

The Likert scale is an example - consists of several declarative statements (items) expressing viewpoints
Responses are on an agree/disagree continuum (usually 5 or 7 response options).
Responses to items are summed to compute a total scale score.

Semantic differential scales - require ratings of various concepts
Rating scales involve bipolar adjective pairs, with 7-point ratings.
Ratings for each dimension are summed to compute a total score for each concept.
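Likert scoring is just summation, after reverse-scoring any negatively worded items. A sketch with invented responses and item names:

```python
# Hypothetical responses to a 5-item Likert scale
# (1 = strongly disagree ... 5 = strongly agree)
responses = {"item1": 4, "item2": 5, "item3": 2, "item4": 4, "item5": 3}
reverse_scored = {"item3"}  # e.g., a negatively worded statement

MAX_OPTION = 5
total = sum(
    (MAX_OPTION + 1 - score) if item in reverse_scored else score
    for item, score in responses.items()
)
print(total)  # 20: item3's 2 becomes a 4 before summing
```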

34
Q

What does a visual analog scale do?

A

Measures subjective experiences (e.g., pain, nausea) on a straight line measuring 100 mm

35
Q

Response set biases

A

Biases reflecting the tendency of some people to respond to items in characteristic ways, independently of item content

36
Q

Observational Rating Scales

A

Ratings are on a descriptive continuum, typically bipolar
Ratings can occur:
at specific intervals
upon the occurrence of certain events
after an observational session (global ratings)

37
Q

Evaluation of Observational Methods

A

Excellent method for capturing many clinical phenomena and behaviors
Potential problem of reactivity when people are aware that they are being observed
Risk of observational biases—factors that can interfere with objective observation

38
Q

Evaluation of Self Report Methods

A

Strong on directness
Allows access to information otherwise not available to researchers
But can we be sure participants actually feel or act the way they say they do?

39
Q

Difference between in vivo measurements and in vitro biophysiologic measurements

A

In vivo measurements are performed on or within the organism’s body (e.g., blood pressure)
In vitro measurements are performed outside the organism’s body

40
Q

Evaluation of biophysiologic measures

A

Strong on accuracy, objectivity, validity, and precision
May or may not be cost-effective for nurse researchers
Advanced skills may be needed for interpretation.

41
Q

What is a psychometric assessment?

What are the key criteria?

A

an evaluation of the quality of a measuring instrument.
Key criteria in a psychometric assessment:
Reliability
Validity

42
Q

An experimental research design contains what?

A

Intervention - Randomization - Control

43
Q

Quasi Experimental

A

Intervention

but missing randomization and control

44
Q

Nonexperimental

A

No intervention

Observational or descriptive
may have random sampling- but this is not the same as random assignment

45
Q

Within-subjects design - what is it?

A

The same people in the experiment are compared at different times or under different conditions

46
Q

Between subjects design

A

Different people are compared
Group A subjects take actual study drug
Group B subjects take placebo

47
Q

What type of comparisons will be made to illuminate relationships? Isn’t that the question …

A

Within subjects

Between subjects

48
Q

Single blind and double blind

A

Single blind - subjects don’t know which group they are in

Double blind - neither researchers nor subjects know who is in which group

49
Q

Prospective and Retrospective Data Collection

A

Prospective - looking forward

Retrospective - looking backward

50
Q

Three key criteria for making causal inferences

A

The cause must precede the effect in time
There must be a demonstrated relationship between the cause and the effect
The relationship between the presumed cause and effect cannot be explained by a third variable

51
Q

Biologic plausibility

A

Another criterion for causality - basically, the causal relationship should be consistent with evidence from basic physiologic studies

52
Q

Coherence

A

Another criterion for causality - evidence from multiple sources should converge in establishing the relationship between cause and effect

53
Q

What types of designs offer the strongest evidence of whether a cause results in an effect?

A

Experimental Designs

54
Q

Characteristics of a true experiment

A

Manipulation
Control
Randomization

55
Q

Crossover design

A

Subjects are exposed to 2+ conditions in random order

subjects “serve as their own control”

56
Q

Factorial

A

More than one independent variable is experimentally manipulated

57
Q

What is treatment fidelity?

A

Also called intervention fidelity …

whether the treatment as planned was actually delivered and received

58
Q

Quasi-experiments involve an intervention but lack ……

A

randomization or a control group

59
Q

If there is no intervention, this is called …

A

observational (nonexperimental) research

60
Q

What are the two main categories of quasi-experiments?

A

Within-subjects designs - one group is studied before and after the intervention

Nonequivalent control group designs
- those getting the intervention are compared with a nonrandomized comparison group

61
Q

Cause-probing questions for which manipulation is not possible are typically addressed with a …

A

correlational design

There are prospective and retrospective correlational designs

62
Q

Is all research cause-probing?

A

No

Some research is descriptive (like ascertaining the prevalence of a health problem)

Other research is descriptive correlational - the purpose is to describe whether variables are related, without ascribing a cause-and-effect connection

63
Q

Cross sectional design

A

Data are collected at a single point in time across different strata or groups (e.g., age groups)

64
Q

Longitudinal design

A

Data are collected two or more times during an extended period

65
Q

Ways of controlling confounding variables

A

Achieving constancy of conditions
Control over environment, setting and time
Control over intervention via a formal protocol

66
Q

A more ____________ sample may minimize confounders, but limits the ability to generalize outside the study

A

homogeneous

67
Q

Inclusion and exclusion criteria work to exclude what?

A

Confounding variables

68
Q

What are intrinsic factors?

A

Subject characteristics that can act as confounders; controlled through inclusion and exclusion criteria

69
Q

Different methods of controlling intrinsic factors?

A
Randomization
Subjects as own controls (crossover design)
Homogeneity (restricting sample)
Matching
Statistical control 
e.g., analysis of covariance
70
Q

What is internal validity?

A

the extent to which it can be inferred that the independent variable caused or influenced the dependent variable

71
Q

What is external validity?

A

the generalizability of the observed relationships to the target population

72
Q

What is statistical conclusion validity?

A

the ability to detect true relationships statistically

73
Q

Threats to internal validity

A

Temporal ambiguity - unclear whether the presumed cause occurred before the outcome

Selection threat (the single biggest threat in studies that do not use an experimental design)

74
Q

What are the history, maturation, and mortality threats?

A

These are all threats to internal validity

History threat - something else occurring at the same time as the causal factor

Maturation threat - processes that result simply from the passage of time

Mortality threat - loss of participants for whatever reason

75
Q

Threats to external validity

A

Selection bias - sample selected for the study does not accurately represent the target population

Expectancy effect (Hawthorne effect) - makes effects observed in a study unlikely to be replicated in real life.

76
Q

Threats to Statistical Conclusion Validity

A

Low statistical power (e.g., sample too small)
TIP: If researchers show no difference in outcome measure (DV) between experimental & control groups, sample size may have been too small to detect difference!
Weakly defined “cause”—independent variable not powerful
Unreliable implementation of a treatment—low intervention fidelity

77
Q

What is reliability?

A

The degree to which an instrument accurately and consistently measures the target attribute

78
Q

Reliability coefficients range from ________ and are considered good/acceptable at 0.________ or more

A

0.00 to 1.00; 0.70

79
Q

What are the 3 aspects of reliability that can be evaluated?

A

Stability
Internal Consistency
Equivalence

80
Q

Stability involves

A

test-retest reliability

It is the extent to which scores are similar on two separate administrations of an instrument

81
Q

Internal Consistency is assessed by computing

A

the coefficient alpha (Cronbach’s alpha; 0.70 or more is desirable)

This is the most widely used approach to assessing reliability
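Coefficient alpha can be computed directly from its formula: α = k/(k−1) × (1 − Σ item variances / variance of total scores). A sketch with invented item responses:

```python
from statistics import variance

# Hypothetical responses: rows = 5 respondents, columns = the k = 4 items of a scale
scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
]
k = len(scores[0])

item_variances = [variance(column) for column in zip(*scores)]
total_variance = variance([sum(row) for row in scores])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(round(alpha, 2))  # 0.95 here - above the 0.70 threshold
```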

82
Q

What is internal consistency?

A

The extent to which all the items on an instrument are measuring the same unitary attribute
E.g., the questions on an anxiety questionnaire should all be aimed at assessing anxiety levels

83
Q

Equivalence is most relevant for ______________

A

structured observations

Assessed by comparing agreement between observations or ratings of two or more observers

Equivalence is the degree of similarity between alternative forms of an instrument or between multiple raters/observers using an instrument

84
Q

Reliability is ______ in homogeneous than in heterogeneous subject samples.

A

lower

85
Q

Reliability is ____________ in shorter than in longer multi-item scales.

A

lower

86
Q

Reliability is necessary (but not sufficient) for validity.

True or false?

A

True

87
Q

An instrument can be _____________ but not __________________, but it can’t be valid if it lacks _______________

A

reliable; valid; reliability

88
Q

An instrument can be valid if it lacks reliability. True or false?

A

False.

An instrument can be reliable but not valid

89
Q

What is validity?

A

The degree to which an instrument measures what it is supposed to measure

90
Q

Four aspects of validity

A

Face validity
Content validity
Criterion-related validity
Construct validity

91
Q

Face validity

A

Refers to whether the instrument looks as though it is an appropriate measure of the construct

Based on judgment; no objective criteria for assessment

92
Q

Content validity is evaluated by _________

A

expert evaluation, often via the content validity index (CVI)

93
Q

What is criterion-related validity?

A

The degree to which the instrument is related to an external criterion

94
Q

Validity coefficient acceptable score

A

The validity coefficient is calculated by analyzing the relationship between scores on the instrument and the criterion (0.70 or higher is desirable)

95
Q

Predictive validity

A

Predictive validity: the instrument’s ability to distinguish people whose performance differs on a future criterion (e.g., SAT is predictive of college GPA)

96
Q

Concurrent validity

A

Concurrent validity: the instrument’s ability to distinguish individuals who differ on a present criterion (e.g., SAT & current GPA are positively correlated >.7)

97
Q

Construct validity - what is it concerned with?

A

What is this instrument really measuring?

Does it adequately measure the construct of interest?

98
Q

What are two ways of assessing construct validity?

A

Known-groups technique
Testing relationships based on theoretical predictions
E.g., a fatigue tool scores high for patients receiving radiation therapy and low for healthy persons

Factor analysis
Statistical test to determine whether items load on a single construct

99
Q

Criteria for Assessing/Screening Diagnostic instruments

A

Sensitivity: the instrument’s ability to correctly identify a “case”—i.e., to diagnose a condition

Specificity: the instrument’s ability to correctly identify noncases, that is, to screen out those without the condition
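Both criteria reduce to simple proportions from a 2×2 table of instrument results against true condition status (the counts below are invented):

```python
# Hypothetical screening results:
#                       condition present   condition absent
# instrument positive        TP = 45             FP = 10
# instrument negative        FN = 5              TN = 90
TP, FP, FN, TN = 45, 10, 5, 90

sensitivity = TP / (TP + FN)  # proportion of true cases correctly identified
specificity = TN / (TN + FP)  # proportion of noncases correctly screened out

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.90
```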

100
Q

What are some studies that involve an intervention?

A

Mixed Method
Clinical trials
Evaluation research
Nursing intervention research

101
Q

Studies that do not involve an intervention

A

Outcomes research
Surveys
Secondary analyses
Methodologic research

102
Q

Mixed Method Research

A

Research that integrates qualitative and quantitative data and strategies in a single study or coordinated set of studies

103
Q

What are clinical trials?

A

Studies that develop clinical interventions and test their efficacy and effectiveness
May be conducted in four phases

104
Q

What is Phase I of a clinical trial?

A

finalizes the intervention (includes efforts to determine dose, assess safety, strengthen the intervention)

105
Q

Phase II of a clinical trial

A

seeks preliminary evidence of effectiveness—a pilot test; may use a quasi-experimental design

106
Q

Phase III of a clinical trial

A

fully tests the efficacy of the treatment via a randomized clinical trial (RCT), often in multiple sites; sometimes called an efficacy study

107
Q

Phase IV of a clinical trial

A

focuses on long-term consequences of the intervention and on generalizability; sometimes called an effectiveness study

108
Q

What does evaluation research do?

A

Examines how well a specific program, practice, procedure, or policy is working

109
Q

What does outcome analysis do?

A

Seeks preliminary evidence about program success

110
Q

Outcomes research

A

Designed to document the quality and effectiveness of health care and nursing services

key concepts:
Structure of care (e.g., nursing skill mix)
Processes (e.g., clinical decision-making)
Outcomes (end results of patient care)

111
Q

Survey research obtains information via

A

Self-reports: face-to-face interviews, telephone interviews, or self-administered questionnaires

112
Q

Survey research is better for an ___________ rather than an _________________ inquiry

A

extensive; intensive

113
Q

What does a secondary analysis do?

A

Study that uses previously gathered data to address new questions
Can be undertaken with qualitative or quantitative data
Cost-effective; data collection is expensive and time-consuming
Secondary analyst may not be aware of data quality problems and typically faces “if only” issues (e.g., if only there was a measure of X in the dataset).

114
Q

What does methodologic research do?

A

Studies that focus on the ways of obtaining, organizing, and analyzing data
Can involve qualitative or quantitative data
Examples:
Developing and testing a new data-collection instrument
Testing the effectiveness of stipends in facilitating recruitment

115
Q

How confident are you going into this exam?

A

I’m very confident!!!!!!! I will be victorious