N224 - Definitions Flashcards
Quantitative designs:
Timing Limitations
Cross-sectional study:
- single point in time
- weakest design for claiming causality
- often measured through surveys and pre-existing data (e.g., lab results)
Retrospective:
- looking back in time
- stronger because there is more than one time point
- case control studies are often retrospective.
Prospective:
- going forward
- strongest because there is more than one time point and measurement happens before the outcome occurs (i.e., it wasn’t there, then it was)
Quantitative design:
Data collection
Self-reported data
- prone to recall bias, especially in retrospective studies
Quantitative design:
group limitations
Case control:
- defining the case population is difficult
- RECALL BIAS
Non-randomized groups:
- defining the comparison group is difficult
Randomized groups:
- considered strongest because differences between groups are assumed to be evened out by random assignment (see the sketch below)
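A minimal sketch in Python of why randomized groups are considered strongest: with hypothetical, made-up participants, random assignment tends to even out a baseline characteristic (age, here) across the intervention and control groups. All names and numbers are illustrative assumptions, not from any actual study.

```python
# Sketch: random assignment tends to balance baseline characteristics (hypothetical data)
import random
import statistics

random.seed(1)

# Hypothetical participants with one baseline characteristic: age
participants = [{"id": i, "age": random.randint(20, 65)} for i in range(200)]

# Randomly assign each participant to intervention or control
for p in participants:
    p["group"] = random.choice(["intervention", "control"])

# With a reasonably large sample, the mean ages of the two groups come out close,
# illustrating how differences between groups are assumed to be evened out.
for group in ("intervention", "control"):
    ages = [p["age"] for p in participants if p["group"] == group]
    print(group, "n =", len(ages), "mean age =", round(statistics.mean(ages), 1))
```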
Research capacity vs Research literacy
Research capacity:
- involves primarily the capacity to produce evidence
- involves skills such as searching, interpreting, and appraising evidence; designing a study to answer a research question; and conducting the study, including sampling, ethical approval, data collection, analysis, and interpretation
Skills that are required to have research capacity:
- ability to seek, interpret, appraise, and apply research
- ability to review the literature to develop a research problem
- ability to develop study protocols, sample, analyze, and report on findings
Research literacy:
- involves primarily the capacity to use evidence
- is a necessary skill to be able to perform EIP
- involves skills such as searching, interpreting, appraising, and applying evidence to practice
Grounded theory
- qualitative design aimed at developing a theory, grounded in the data, about a social process
Ethnography
- qualitative design aimed at understanding the culture and everyday practices of a group, often through fieldwork/immersion
Phenomenology
- qualitative design aimed at understanding the lived experience of a phenomenon
Appraising
- Appraising the SOURCE:
- What type of source am I reading?
- When/where published?
- Is it high-quality e.g., undergone peer review, conducted by credible researchers?
- Is there any reason to suspect that researcher bias could affect the study e.g., through a conflict of interest?
- Who funded the study? Does it pose a conflict of interest?
- How does it read? Logical? Over- or under-detailed? Biased?
- Appraising the LITERATURE REVIEW:
Quantity
- Does the literature review describe research findings from a variety of researchers, over time, and in broad and specific relation to the topic?
Quality
- Are the sources included in the literature review high quality? [consider whether they are scholarly and where they sit in the hierarchy of evidence]
- Does the literature review read as objective, e.g., having considered the breadth of literature and issues surrounding the topic?
- Do the authors integrate the research they are reviewing, or does it read as merely a list of evidence on the topic? It should synthesize the literature (if it doesn’t do this well, the argument for why the research needs to be done is usually unclear).
- Does it clearly identify a knowledge gap and justify the need for the research study?
- Appraising the RESEARCH QUESTION:
- Is it easy to identify the purpose of the study?
- Is the research question stated up front? Is the question answered by the study?
- Is the research question going to help address the knowledge gap, or does it seem to just replicate what other researchers have done? What is new about this study?
- Is this research problem relevant and important for nursing practice?
- Appraising the DATA, in particular the SAMPLE:
- Who participated in the study?
- In what ways did the recruitment strategies limit who participated?
- Did enough people participate to answer the research question? [this differs by approach: qualitative studies typically use smaller samples, while quantitative studies generally prefer larger samples]
- Were people sampled who could contribute meaningfully to answer the research question?
- What biases are present in the sample?
- Look at the demographic table: is any group disproportionately represented in the study population? e.g., mostly high-income, employed, or female participants
- How do the sample biases affect the application of the study findings? i.e., can they be applied only to certain practice areas or particular communities?
- What do the authors identify as sample limitations?
- Appraising the METHODS and RESULTS:
- Do the methods chosen seem to be useful to help collect data that can answer the research question?
- Did the authors clearly explain the data collection procedures, i.e., what they did to obtain participant data?
- Is there evidence of ethics approval?
- Do the results answer the research question?
- Appraising the DISCUSSION
- Do the authors link the findings to the literature and explain how the findings support, differ from, or add to previous literature?
- Do the authors identify what future research should be conducted?
- Do the authors describe the limitations of the study?
- Have the researchers overgeneralized the application of the study findings?
What distinguishes an experimental design from an observational design?
An experimental design involves manipulation of the environment or exposure; an observational design does not.
What would differentiate a quasi-experimental from an experimental study?
Quasi-experimental studies usually rely on naturally occurring differences, i.e., differences that are not due to intentional manipulation; therefore, they lack the control of a true experiment.
*Quasi-experimental studies also generally start with non-equivalent groups: intervention and comparison groups are not truly randomized.
The risk with non-equivalent groups is that you can’t rule out other confounding factors that could also account for the outcome (a sketch of this risk follows below).
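A minimal sketch in Python of the confounding risk with non-equivalent groups: the clinic names, the numbers, and the rule that the outcome depends only on staff experience are all hypothetical assumptions chosen to make the point visible.

```python
# Sketch: with non-equivalent groups, a baseline difference (the confounder) can
# fully explain the outcome difference, not the intervention (hypothetical data).
import statistics

# Outcome is constructed to depend only on years of experience, never on the intervention.
intervention_clinic = [{"experience": e, "outcome": 50 + 2 * e} for e in (10, 12, 15, 11)]
comparison_clinic = [{"experience": e, "outcome": 50 + 2 * e} for e in (2, 3, 4, 5)]

for name, group in (("intervention", intervention_clinic), ("comparison", comparison_clinic)):
    print(name,
          "mean experience:", statistics.mean(p["experience"] for p in group),
          "mean outcome:", statistics.mean(p["outcome"] for p in group))

# The intervention clinic scores higher, but because the groups were non-equivalent
# at baseline, experience (not the intervention) cannot be ruled out as the explanation.
```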
Give an example of a physiological or biological way to collect data
- blood pressure
- temperature
- hormone levels
- labs
The aim of good quantitative research design is to rule out competing explanations for the cause of the effect.
true
A researcher wanted to know whether a resiliency program improved staff morale.
• Which variable is the DV? The IV?
Staff morale is the DV (the outcome that changes; it depends on the program); the resiliency program is the IV.
IV = cause
DV = effect, the variable being explained
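A minimal sketch in Python of the IV/DV example above: the morale scores and group labels are hypothetical, and comparing mean morale (the DV) across the two levels of the IV (attended the program or not) is simply the most basic way to look at the relationship.

```python
# Sketch: IV = presumed cause (program attendance), DV = effect (morale score); hypothetical data
import statistics

morale = {
    "attended_program": [72, 68, 75, 80, 70],  # IV level 1
    "no_program": [60, 65, 58, 63, 61],        # IV level 2 (comparison)
}

for iv_level, dv_scores in morale.items():
    print(iv_level, "mean morale (DV):", statistics.mean(dv_scores))
```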
operationalization of variables:
Researchers provide definitions to specify how variables will be measured and rules to establish how differences in the measured outcomes will be interpreted
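A minimal sketch in Python of operationalization, reusing the staff morale example: the measurement rule (mean of five 1-5 Likert items) and the interpretation rule (a cutoff of 4.0 counts as high morale) are hypothetical definitions a researcher might specify, not taken from any actual instrument.

```python
# Sketch: operationalizing "staff morale" with an explicit measurement rule and
# an explicit rule for interpreting differences (hypothetical definitions).

def morale_score(likert_items):
    """Operational definition: staff morale = mean of five 1-5 Likert responses."""
    return sum(likert_items) / len(likert_items)

def interpret(score, high_cutoff=4.0):
    """Interpretation rule: scores at or above the cutoff are classed as high morale."""
    return "high morale" if score >= high_cutoff else "lower morale"

responses = [4, 5, 3, 4, 4]  # one hypothetical participant's answers
score = morale_score(responses)
print(score, interpret(score))
```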
recall bias
is a systematic error that occurs when participants do not remember previous events or experiences accurately or omit details: the accuracy and volume of memories may be influenced by subsequent events and experiences. … Pre-existing beliefs may also impact on recall of previous events.
reliability and validity:
Reliability is about the consistency of a measurement if repeated over and over; validity is about the accuracy of a measure (whether the tool measures what it claims to measure)
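A minimal sketch in Python of reliability as consistency: scores from the same hypothetical people, measured twice with the same tool, are correlated, and a high correlation suggests the tool measures consistently (test-retest reliability). The scores are made up; validity, i.e., whether the tool measures the right thing, cannot be checked this way and is not shown.

```python
# Sketch: test-retest reliability as the correlation between repeated measurements
# on the same people with the same tool (hypothetical scores).

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [70, 65, 80, 75, 60]  # scores at first measurement
time2 = [72, 66, 78, 74, 61]  # same people, same tool, measured again
print("test-retest reliability (r):", round(pearson(time1, time2), 2))
```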