Research Final Flashcards
What is the older EBP model?
3 circles
1- Clinical state & circumstances
2- Best research evidence
3- Patient preferences & actions
Overlapping section is clinical expertise.
What is the more recent EBP model?
4 circles
1- Health care system & service organization
2 - Relevant scientific evidence
3 - Clinical judgement
4 - Patients’ values, goals & preferences
What are the 5 stages of EBP?
- Ask
- Acquire
- Appraise & Interpret
- Apply
- Evaluate
What are the levels in the 6S pyramid?
Part of acquiring information in EBP.
Bottom to top:
Single studies
Synopses of single studies
Syntheses
Synopses of syntheses
Summaries
Systems
What is the proposed new EBP pyramid?
- Systematic reviews become the lens through which we understand the other studies.
Explain EBP for researchers
Data collection happens in the Acquire stage.
- As researchers and research teams, considerable background and literature searching is done in the Ask stage to make sure we have a good question.
- Appraise & Interpret: based on the evidence you found, you appraise it and then apply it. For researchers, this stage means analyzing what you've collected and then thinking about how it applies in practice.
What is a background question?
- Seek general knowledge
- About a single concept
- Who, what, where, when, how & why
E.g., terminology, general pathology, general info
What is a foreground question?
- Seek specific knowledge
- Bring together multiple concepts
- Inform clinical decisions
- Foreground questions often compare, often target a certain group, can be qualitative or quantitative, and are typically answered by peer-reviewed research.
- Do foreground questions always have to have a comparator? NO. Some might just look at one intervention and whether it causes a particular effect or not.
What is the framework for foreground questions overall?
FINER:
Feasible
Interesting
Novel
Ethical
Relevant
What are foreground question frameworks for quantitative?
PICO - patient/population, intervention, comparison, outcome
PICO-T - adds in time frame
What are foreground question frameworks for mixed methods?
SPIDER - sample, phenomenon of interest, design, evaluation, research type
What are foreground question frameworks for qualitative?
PEO - population/problem, exposure, outcomes/themes
SPICE - setting, perspective, intervention, comparison, evaluation
PS - patient/population, situation
What are foreground question frameworks for measurement?
COSMIN - Consensus-based standards for the selection of health measurement instruments.
What are the PICO and PICO-T used for?
Used for questions investigating intervention effectiveness
What is the COSMIN designed for?
COSMIN is designed to look at the development of instruments.
- Measurement: looks at the development of outcome measurement tools; psychometric-type studies, reliability and validity questions.
What is FINER used for?
Determining the quality of a question, rather than the structure of it.
Applicable to the researcher who is going out to collect data
Where will you find background research and what are the considerations?
Textbooks, websites, clinical experts
Consider authorship and recency
Where will you find foreground research and what are the considerations?
Research evidence (the 6S pyramid)
Consider database coverage, question frameworks, and search structure (let the question drive the search)
Types of quantitative research, some common research designs and evidence synthesis?
PICO - therapy, diagnosis, prognosis, etiology/harm
RCTs, cohort studies, case-control studies, measurement studies
Evidence - Systematic reviews/meta-analyses
What is an example of mixed methods research and evidence synthesis?
Program evaluation
Evidence - Mixed methods review, scoping reviews, clinical practice guidelines
Types of qualitative research, some common research designs and evidence synthesis
Lived experiences
Grounded theory, phenomenology, ethnography, case studies
Evidence - meta syntheses
What is a platform?
Platforms let you structure your searches and dig into the databases, the "pots" of research, that they pull from.
Key word vs Subject heading
A keyword is searched in the abstract or title.
A subject heading is a tag that the database has sorted for you.
These are complementary strategies; use both.
What are some ways qualitative research can support evidence for practice?
- When it presents new info
- When it discusses complex or nuanced situations
- When it can provide information that is context-specific
What is the purpose of qualitative research?
- Development of theory
- Study subjective experience, meaning and contextual aspects of human action and interaction
- Exploration and discovery
- Study individuals in their natural settings
- Not interested in generalizability
What are the origins of qualitative research?
Basis in anthropology, philosophy and sociology
What assumptions are made in qualitative research?
- Multiple realities
- Social reality is dynamic and contextual
How are qualitative research questions generally organized?
- Exploratory not explanatory
- Open not closed
- Focus on meaning
- How? What? Why?
What are the 5 traditions of qualitative research?
- Phenomenology
- Ethnography
- Grounded theory
- Critical theory
- Participatory action research
What is phenomenology and how is it conducted?
- Focus on lived experiences of a phenomenon
- We can only understand a phenomenon through the experiences of those living with it/experiencing it
- Researchers focus on understanding/describing the experiences of those living it
- Interviews a common approach to data collection (but not the only one)
- Do see focus groups
- You don't know in advance how many people you'll need; you can aim for data saturation (hard to do pragmatically, and it is judged after the fact, so it is hard to put in a research proposal)
Inductive vs Deductive reasoning
Inductive reasoning goes from specific observations to general conclusions; seen in phenomenology, where specific experiences lead to general themes.
Deductive reasoning goes from general premises to a specific conclusion.
What is ethnography and how is it conducted?
- Roots in anthropology
- Description & interpretation of a cultural or social group or system
- Fieldwork: prolonged observation of a group within a setting
- Immersion: the researcher becomes immersed in the setting.
- Fieldwork: prolonged observation in the natural setting (wherever that is, e.g., a classroom). "Prolonged" has historically meant a year or years. Observations can look different but are typically very detailed.
- If you are researching while participating, you can become so much a part of the group that it taints your understanding; this is the difference between observation and participant observation.
What is grounded theory and how is it conducted?
- Theory generation/construction as the focus
- Researchers typically do NOT immerse themselves in published literature in advance
- Common data collection: interviews, focus groups, observation
Construct theory from qual data
What is critical theory and how is it conducted?
- Focus on social realities that create barriers through dominant structures/processes. i.e., Challenging hegemony (dominant view of the world)
- Common data collection: Discourse analysis of policy documents; case study; interviews; observation
- About critique: understanding common understandings of issues and challenges, then placing them in the context of challenging some of those understandings, including the dominant political and social structures that might be influencing them.
What is participatory action research and how is it done?
- Engaging with those who are the focus of the research as research partners
- Action and understanding are inextricably linked
- Participatory approach from project formulation through to knowledge translation
- Mixed methods of data collection are common
- Assumes that the people who are meant to benefit from the research are active participants in the research from beginning to end; they design the research question with the team and are involved in every step.
- PAR is in line with the idea of multiple truths and realities
- Typically, the action is part of the research: you examine what changed as a result of that action and prepare a report based on it.
What 2 principles guide participant selection in qualitative research and how do you describe them?
Appropriateness & Adequacy
- Appropriateness: select participants with exposure to the focus of the research
- Adequacy: engaging enough participants, with a goal of saturation
Often need to consider the pragmatics
What is purposive sampling?
Participants are purposefully sought out; e.g., people in the first year of the program, aiming for a range of ages, academic backgrounds, and genders. Describe key characteristics as part of your sampling and ask people which categories they fit. You aren't turning people away, but you are checking whether the sample ticks the important boxes for your purpose.
Intentionally picking someone based on a characteristic/criterion.
What is snowball sampling?
Research participants are asked to assist researchers in identifying other potential subjects.
What is maximum variation?
Intent on getting as much variation within the group as you can; a kind of purposive sampling.
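The sampling strategies above (purposive and snowball) can be sketched as simple operations over a candidate pool. This is an illustrative sketch only; all names, fields, and criteria below are made up, not from the course material.

```python
def purposive_sample(candidates, criteria):
    """Purposive sampling: intentionally keep candidates who meet
    every criterion (each criterion is a predicate function)."""
    return [c for c in candidates if all(rule(c) for rule in criteria)]

def snowball_sample(seeds, referrals, rounds=2):
    """Snowball sampling: participants refer other potential
    participants; repeat for a fixed number of referral rounds."""
    sample = set(seeds)
    frontier = set(seeds)
    for _ in range(rounds):
        referred = {r for p in frontier for r in referrals.get(p, [])}
        frontier = referred - sample
        sample |= frontier
    return sample

# Hypothetical candidate pool for purposive sampling
candidates = [
    {"name": "A", "year": 1, "age": 22},
    {"name": "B", "year": 2, "age": 31},
    {"name": "C", "year": 1, "age": 45},
]
first_years = purposive_sample(candidates, [lambda c: c["year"] == 1])

# Hypothetical referral chains for snowball sampling
referrals = {"A": ["D"], "D": ["E"]}
reached = snowball_sample(["A"], referrals)
```

The purposive filter captures the idea of selecting on characteristics; the snowball loop captures participants identifying further participants.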
What is the DEJA Model and what is it used for?
Used for sample size determination
- Define – the sampling strategy (e.g., non-probability)
- Explain – how you are implementing the strategy (e.g., purposive)
- Justify – consider justification based on qualitative tradition, method of data collection, expected depth of data
- Apply – consider pragmatic issues that might affect number of participants
- Developed in response to critiques of “data saturation” as the determinant for sample size
What is the goal of qual research?
Description and often interpretation
What are codes?
Small chunks of data that you can label in some way
Generally, codes emerge from the data.
What comes after codes?
- Categories/Sub-themes
o Once you have a set of codes identified, and perhaps a short description of each, consider how the codes link to each other. Can they be clustered together in some way that makes sense?
- Themes
o Can codes or categories be clustered together in a "bigger picture" idea or theme?
What are the 4 ways to look at trustworthiness/rigour in qualitative data?
- Rigor/trustworthiness is the overarching theme of critical appraisal
- Credibility
- Transferability
- Dependability
- Confirmability
What is credibility and some strategies used?
- Establishing believability
- Strategies:
o Prolonged engagement
o Reflexivity
o Member checking
o Triangulation of data
What is prolonged engagement?
The researcher immerses themselves as an observer or participant observer for a prolonged amount of time (as in ethnography); you need a sense that they were with participants long enough to be believed.
What is reflexivity?
Critically examining your own beliefs and positionality (critical reflection); you want a sense that the researchers set themselves up for critical reflection. Did more than one person code/collect the data (more so triangulation), and have those people had critical discussions about what they're thinking/learning?
What is member checking?
Send the transcript back to the person you interviewed and ask them to review it to make sure it reflects what they meant to say (can be controversial because they may want to change the data).
What is triangulation?
Collecting different types of data, maybe from different types of people (e.g., profs and students).
What is transferability and some strategies used?
- Can study findings be transferred to other contexts?
- Strategies
o Dense description of participants/sampling strategies - who was recruited and how
o Findings described clearly and coherently with adequate supporting data to understand context and findings
What is dependability and some strategies used?
- Consistency between data and findings: do the quotes illustrate the points being made in the description of the findings?
- Procedural rigour
- Strategies:
o Audit trails
o Detailed description of methods: you want a sense of how they did the coding, who was involved, whether they had training, how often they met, and how many people were involved.
o Multiple researchers conducting analyses
o Triangulation of analyses (not super common)
What is an audit trail?
Record of all the transition/decision points you make once you have the data and are trying to interpret it.
What is confirmability and some strategies used?
- Analytic rigour/reducing bias in managing data
- Strategies:
o Rich description of findings (quotes support themes): do they share quotes that support the themes/categories, and do those quotes speak to how the findings are described?
o Audit trail re analysis
o Reflexive analyses: have they discussed in the methods section that they went through this sort of process?
o External audit: doesn't have to be someone outside the research team, but perhaps someone who wasn't involved in the coding. That person might ask, "Can I see how you developed these themes from this data?"
Confirmability questions how the study findings are supported by the data. It identifies any bias that may have been present. It’s the level to which the findings can be confirmed or corroborated. Confirmability is concerned with determining that data & interpretations of the findings are not made up by the researcher’s imagination, but clearly derived from the data.
Generalizability vs transferability
- Transferability is the term used in qual; generalizability is used in quant. Both talk about applicability: how can I apply this research to my setting/practice? Transferability focuses on the findings and the context: can we transfer the findings to a different context?
- Generalizability asks whether the sample in the quant study generalizes to the people I am working with. What are the characteristics of that sample?
What are the 4 parts of critical appraisal in qual research?
Research question
Design
Methods
Rigour
What are 4 types of foreground questions in quant research?
Treatment/Therapy - Does this intervention work? With whom? How well? For how long?
Diagnosis - Will this measurement help me understand the condition?
Prognosis - What is the likely outcome for this person/group of people?
Etiology/Harm - How does this intervention/exposure affect the development of the condition?
What are the 2 question styles in quant research?
Descriptive - how many? what factors?
Relational - Degree and/or direction of association? Causation?
What is an independent variable?
- Investigated as a potential agent of change in the dependent variable.
- Intervention/exposure
- Manipulated
What is a dependent variable?
- Thought to vary based on exposure to the independent variable
- Outcome of interest
- Measured NOT manipulated
What is an extraneous variable? Explain the 2 types.
- Not investigated but could influence the dependent variable
- Covariate: a characteristic separate from the IV that isn't of direct interest to the researcher but might be controlled for (e.g., by balancing the groups on it) or handled statistically.
- Confounder: can influence both the IV and DV, so the results of the experiment don't represent the true relationship between the two.
Quant study designs: Did the investigators decide receipt of the IV? What happens if you say yes?
Yes = experimental study
Did investigators randomly assign participants to receipt of the IV?
Yes = RCT
No = Non-randomized control trial (quasi-experimental study)
Quant study designs: Did the investigators decide receipt of the IV? What happens if you say no?
No = observational study
Do investigators compare participants on presence of the IV?
No = descriptive study
Yes = Analytical study
Do investigators look forwards or backwards in time from the intervention/exposure, or at a single point in time?
Leads to a cohort study (forwards), case-control study (backwards), or cross-sectional study (single point in time).
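The design decision tree above can be summarized as a small function. The parameter names are illustrative, not from the course material, but the branching follows the questions in the cards.

```python
def classify_design(investigators_assign_iv, random_assignment=False,
                    compare_groups=False, time_direction=None):
    """Classify a quantitative study design by walking the decision tree:
    assignment of the IV, randomization, group comparison, time direction."""
    if investigators_assign_iv:
        # Experimental: investigators decide who receives the IV
        if random_assignment:
            return "randomized controlled trial"
        return "non-randomized controlled trial (quasi-experimental)"
    # Observational: the IV is present naturally
    if not compare_groups:
        return "descriptive study"
    # Analytical: direction in time distinguishes the designs
    return {
        "forwards": "cohort study",
        "backwards": "case-control study",
        "single point": "cross-sectional study",
    }[time_direction]
```

For example, `classify_design(False, compare_groups=True, time_direction="backwards")` returns `"case-control study"`: an observational, analytical design that looks backwards from the outcome.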
What is an experimental study?
Investigators decide who receives the intervention/exposure.
What is a randomized control trial?
Each participant has the SAME chance of allocation to intervention/exposure
What is a non-randomized control trial?
Aka quasi-experimental study.
Each participant allocated to intervention/exposure based on non-random factors.
E.g., Might have different sites where we recruit participants from and different sites might be allocated different levels of the IV.
What is an observational study?
Investigators do not decide who receives the intervention/exposure.
More of a natural presence - it’s there naturally in certain people.
What is a descriptive study?
Demonstrates possible association between the intervention/exposure and the outcome in a single group.
E.g., here are some heavy smokers and many have lung cancer
Here are some highly active older adults and relatively few of them have falls.
1 group, 2 variables, hints at a relationship so is often a starting point for more research
What is an analytical study?
Explores reasons for possible association between the intervention/exposure and the outcome.
Investigators do control who they select to be the comparison group: the group is selected, not assigned.
Can match between groups to eliminate some of the confounders.
What is a cohort study?
Participants who do/do not receive are analyzed prospectively
Look at ppl who do or do not receive a treatment and if they develop certain types of outcomes.
What is a case-control study?
Participants who do/do not receive are analyzed retrospectively.
Outcome is the starting point and then you look backwards for exposure.
What is a cross-sectional study?
Participants who do/do not receive are analyzed at a single point in time.
Looking for relationship between exposure and outcome.
What do measurement studies look at?
Look at reliability/validity of certain tools
What do feasibility studies look at?
Looking at if an intervention is worth developing further
What are similarities between experimental & observational studies?
Both have independent variables
Both have comparison/control groups
Both have dependent variables
What are differences between experimental & observational studies?
Experimental
- Assigned
- Determined randomly or non-randomly
- Prospective
Observational
- Present/observed
- Exist/selected
- Prospective/retrospective/simultaneous
What is bias?
A systematic error that can distort measurement and/or affect investigations and their results.
Uncontrolled phenomenon in a study that could potentially invalidate the results.
Consider: direction & magnitude of bias
What is a source of bias in the sampling or group/allocation/selection stage?
Selection Bias
What is a source of bias in the intervention/exposure stage?
Performance bias
What are sources of bias in the outcome measurement stage?
Detection bias or attrition bias
What is a source of bias in the interpretation stage?
Reporting bias
What is selection bias?
Participants or groups in a study sample differ systematically from the population at baseline.
- How are participants recruited?
- Who is enrolled in the study?
- How are participants allocated to groups?
- How is the allocation sequence concealed?
What is performance bias?
Inadvertent systematic differences between groups in intervention/exposure to other salient factors.
- Are participants blinded?
- Are interventionists blinded?
- How are interventionists trained?
- Is treatment fidelity assessed?
There might be an inadvertent systematic difference in the amount of attention one group gets compared to the other; then maybe it's just the extra interaction affecting the DV.
What is fidelity?
Can we track the consistency of treatment across participants within an intervention?
The extent to which delivery of an intervention adheres to the protocol or program model originally developed.
What is detection bias?
Systematic differences between groups in how outcomes are measured.
- Are outcome measures objective/subjective?
- Are outcome assessors blinded?
A type of selection bias that results when one population is more likely to have the disease or condition detected than another because of increased testing, screening or surveillance in general.
What is attrition bias?
Systematic differences between groups in withdrawals
- Are outcome data for all participants reported?
- Are reasons for attrition examined?
Is there a trend, e.g., the intervention is so hard to tolerate that people can't handle it? Are outcome data for all participants reported? Have they told you what happened with the missing data? Was it analyzed?
A type of selection bias due to systematic differences between study groups in the number and the way participants are lost from a study.
What is reporting bias?
Systematic differences in reported vs unreported findings.
- Are all identified outcomes reported?
- Are statistically significant and non-significant effects reported and linked to analyses?
What is random sampling?
Choosing participants for the study randomly from the population of interest. Important for generalization: helpful if you want to apply findings to the larger group.
What is random allocation?
Randomly choose group/receipt of the IV. Important to detect true effects of intervention on outcome.
Only happens in experimental studies.
When you do an RCT what is random?
Which treatment people get. Random allocation is what is meant when people talk about RCTs.
Often random sampling isn’t possible in health research.
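The distinction between random sampling and random allocation can be illustrated in a short sketch; the population and sample sizes below are made up for illustration.

```python
import random

def random_sampling(population, n, seed=1):
    """Random sampling: draw participants from the population of
    interest, so findings can generalize to that population."""
    return random.Random(seed).sample(population, n)

def random_allocation(participants, seed=1):
    """Random allocation: give each participant the same chance of
    receiving the intervention, to detect true intervention effects."""
    shuffled = list(participants)
    random.Random(seed).shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # intervention, control

population = [f"person_{i}" for i in range(100)]
sample = random_sampling(population, 10)           # who is in the study
intervention, control = random_allocation(sample)  # who receives the IV
```

In an RCT only the second step is required; as the cards note, the first step (random sampling) is often not possible in health research, which is why generalizability must be judged from the sample's characteristics.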