CDM - Research (importance, process) Qualitative and Quantitative Flashcards

1
Q

What are the stages of the research process?

A

Title/Question (can indicate whether the study is quantitative or qualitative)

Abstract (overview of the whole study; its structure may briefly follow each step of the research process)

Background (reviews what is already known and shows the gap the paper is trying to address)

Purpose/Aims (explicitly states the aims; may be similar to the question)

Methodology (may be one large methods section covering the next categories; if quantitative, includes the outcomes being used and how they are measured)

Design (explains the research design)

Sampling (how sampling was carried out)

Ethics (how ethical requirements have been met)

Data Collection (goes through how the data was collected)

Data Analysis (shows how the data was analysed, e.g. which statistical analysis was used or how themes were coded)

Findings (goes through what the data analysis showed; may be presented in tables, graphs, etc.)

Discussion (how the findings relate to the question)

Implications (how the findings apply to nursing practice)

Conclusions (ties it all together)

Limitations (states the limits of the research)

2
Q

What’s evidence based practice?

A

The combination of clinical expertise (the proficiency and judgment a clinician acquires through experience), the conscientious, explicit and judicious use of current best evidence, and patient choice.

3
Q

Why are all 3 components of evidence based practice needed?

A
  • without expertise, practice risks being tyrannised by evidence, and if no evidence is available on a topic the expertise is needed instead
  • evidence may be inapplicable or inappropriate for an individual patient, so their choice and experience are needed to judge whether it applies
  • research without patient focus and choice may be too generalised
  • without evidence, practice risks becoming dangerously out of date
  • clinical expertise is itself a form of evidence and is often at least partly based on research, but without research practice becomes process reliant and ideas about cause and effect or outcomes may be mistaken (unsure whether outcomes are caused by the intervention used or just chance)
  • non research based evidence can easily be biased and lead to poor predictions, which leads to harm being done; treatments based on process (expert opinion) may actually do more harm than good, as research into outcomes has shown
  • be mindful that expert opinion can differ
  • if practice is based only on experience you won’t know what you don’t know, as experience only includes what you’ve seen, so you can’t make decisions by anecdote or experience alone
  • be mindful that a non-expert reading a review of the evidence with unbiased eyes is less likely to give a subjective account than an expert giving their opinion
4
Q

Key features of primary research

A
  • systematic approach (follows the research process)
  • discovers new knowledge
  • generalisable results
  • to get government funding, patients/the public must be involved (Health and Social Care Act commitment to patient experience), measuring it (quantitative) or exploring it in patients’ own words (qualitative)
  • can be quantitative (measurable outcome) or qualitative (focus on staff and patient experiences) or mixed methods (quantitative and qualitative in one paper)
5
Q

Key points of secondary research

A
  • finds and uses primary research
  • combines a number of primary research papers answering similar questions into one research study
  • e.g. systematic reviews, literature reviews, Cochrane reviews
6
Q

What can research do? (8)

A
  • categorise (creates boxes to sort things into; can explain what things belong together and how)
  • describe (observation as a form of collecting data; examines situations to establish the norm - what can be predicted to happen again under the same circumstances)
  • explain (a descriptive type that deals with complex issues; moves beyond getting the facts to making sense of the other elements involved)
  • evaluate (making judgments about quality, either as an absolute or comparatively; the methods used need to be relevant to the context and intentions)
  • compare (two or more contrasting cases are examined to highlight differences and similarities and understand them better)
  • correlate (not causation!! relationships are investigated to see how things relate to each other, measured as levels of association)
  • predict (can sometimes be done in areas where a correlation is already known - if a strong relationship was found previously, it can be predicted to exist again under the same conditions, meaning a predictable outcome)
  • control (once an event is understood it may be possible to control it, if you know what the cause and effect relationships are and can exert control over the vital ingredients - technology relies on the ability to control)
7
Q

Use of quantitative research?

A

Can numerically compare outcomes to see whether a treatment actually does good (not harm)
Helps to understand cause and effect

Involves systematic empirical investigation of quantitative properties and phenomena and their relationships by asking a narrow question and collecting numerical data to analyse using statistical methods

8
Q

Use of qualitative research?

A
  • can assess the quality of nursing care through patient experiences (experiences and interactions are hard to quantify)
  • a lot of qualitative work has led to a reduction in stereotyping
  • can show the meaning and significance of the experience of those who have a disease
  • helps us learn about patients’ experiences

Involves understanding human behaviour and the reasons that govern such behaviour by asking a broad question and collecting data in the form of words, images, videos etc. that is analysed by searching for themes (investigates a question without attempting to quantifiably measure variables or look for potential relationships between variables)

9
Q

What’s the importance of research to nursing?

A
  • practice (improves patient care and experience; nurses need to know what treatments work and what patients think; research can also help reduce bias, which impacts patient care, and shows cause and effect and people’s experiences)
  • professionalism (provides a scientific and distinct knowledge base for nursing; this knowledge base can help ensure the legal and professional standard of care is being met; the Code for nurses also states that nurses must use evidence and stay up to date with knowledge and evidence)
  • accountability (scientific justification for decisions; justifying decisions is part of meeting a duty of care; makes explicit the implicit decisions of nurses, as intuition isn’t sufficient; might also involve justifying why, in a certain situation, evidence/research/guidelines weren’t followed; it is hard to justify decisions without evidence, which is especially important when things haven’t gone to plan)
  • can show the social relevance of nursing (the difference nursing makes, through research about nursing; the fact that nurses carry out research also shows nursing as a distinct profession)
10
Q

What research can be used for evidence based practice?

A
  • needs to be up to date and critically appraised
  • can be primary, secondary, quantitative or qualitative research
  • research can’t be set up just to prove something (confirmation bias)
  • needs to be unbiased
  • can’t be non research based (that can lead to poor predictions and bias that cause harm)
  • national guidelines can be evidence; their strength depends on their use of research as evidence for recommendations, and they need to use the best and most up-to-date information available (they can highlight areas where more research is needed)
  • local guidelines can also be evidence but may be more out of date and less informed by evidence than national ones
  • research is essential for guidelines
  • need to evaluate the whole research paper; can’t just rely on the results if you don’t know about the quality of the process etc.
11
Q

Role of cost in evidence based practice

A
  • may need to appreciate cost
  • decisions should never be made on cost alone
  • making decisions on cost alone is often senseless and cruel, and it is right to object to this
12
Q

What’s a critique?

A
  • a systematic appraisal/review of something, usually covering its good and bad points
  • e.g. a critique of primary research; it is helpful to follow the steps of the research process, and there are tools to do this, e.g. CASP (one generic tool for qualitative research and multiple tools for quantitative research depending on the method used)
13
Q

What ethical approvals are needed for research?

A

All primary research needs ethical approval before starting
Secondary research and QI projects do not.
Primary research needs university research ethics approval
If involving NHS staff, Health Research Authority (HRA) approval is needed
If involving NHS patients, Health Research Authority NHS research ethics approval is needed
Trust approval is also needed from the Trust’s research and development (R&D) team

14
Q

What are the key points of a research title/question?

A
  • it should clearly address the gap in the research
  • it should focus the topic/practical problem (often from clinical practice) into an answerable question
  • gives an indication of the role the study will have
  • can indicate whether the study will be quantitative or qualitative
  • can show how the rest of the research process will be carried out and which decisions will be made
15
Q

What are PICO and PEO question structures?

A

PEO- population (and their problems), exposure (term used loosely), outcomes (or themes). Often used for qualitative research.

PICO- population (affected group), intervention (what is being done for the group), comparison (control group/comparison element, e.g. placebo, new vs current) and outcome (measurable if quantitative; what is hoped to be achieved/changed/measured). Often used for quantitative research.

16
Q

What are quantitative research questions?

A

Systematic empirical investigation of quantitative properties and phenomena and their relationships by asking a narrow question and collecting numerical data to analyse with statistical methods

17
Q

What are qualitative research questions?

A

Involve understanding human behaviour and the reasons that govern it by asking a broad question, collecting data in the form of words, images, video etc that is analysed searching for themes and patterns without attempting to quantifiably measure variables or look at relationships between variables.

18
Q

Why is the background of a paper important?

A
  • goes through current literature and what is already known
  • shows the gap the paper is trying to answer (the question comes from the gap)
  • explains why topic is important
19
Q

What are the key points of sampling?

A
  • sample must be representative of the population the study is addressing
  • large sample sizes are more representative of the population, which increases repeatability, reliability and internal and external validity
  • the sample will always differ from the population (sampling error)
20
Q

What’s probability sampling?

A
  • everyone in population has equal chance of being sampled
  • aims for representative sample
  • random sampling can help achieve this
21
Q

What’s non probability sampling?

A
  • non random
  • sample is chosen from population
  • inferences made about the population will be weaker, as we are less sure the sample is representative of the population, so findings might not be applicable to the whole population
22
Q

What are the different types of random sampling?

A
  • simple random sampling (everyone in the population is assigned a number and then numbers are chosen at random)
  • stratified random sampling (break the population down by characteristics, then use simple random sampling within each group - equal chance of being chosen within each group; the proportions taken from each group can be altered to match the population)
  • random cluster sampling (for when the population is spread out, e.g. across different GP surgeries: randomly select areas, then randomly select the sample from the chosen areas - equal chance of each area being chosen, then equal chance of being chosen within the area; see the sketch below)
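
A minimal sketch of these three approaches in Python, using the standard random module; the population size, strata split and sample sizes are invented purely for illustration:

    import random

    # Hypothetical sampling frame: 1,000 patient IDs (illustration only)
    population = list(range(1, 1001))

    # Simple random sampling: every member has an equal chance of selection
    simple_sample = random.sample(population, 50)

    # Stratified random sampling: split the population into strata (a made-up
    # split is used here), then sample randomly within each stratum,
    # proportionally to its share of the population
    strata = {
        "group_a": [p for p in population if p <= 700],
        "group_b": [p for p in population if p > 700],
    }
    stratified_sample = []
    for members in strata.values():
        n = round(50 * len(members) / len(population))
        stratified_sample.extend(random.sample(members, n))

    # Random cluster sampling: randomly choose clusters (e.g. GP surgeries),
    # then randomly sample participants within the chosen clusters
    clusters = [population[i:i + 100] for i in range(0, 1000, 100)]  # 10 clusters
    chosen_clusters = random.sample(clusters, 3)
    cluster_sample = [p for c in chosen_clusters for p in random.sample(c, 15)]
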
23
Q

What are parameters and statistics?

A

A parameter is a measurable quality being looked at in the population.
The corresponding quantity measured in the sample is a statistic.
The statistic (sample) is used to make inferences about the parameter (population), but there will always be a margin of error: the sample isn’t the whole population, so there will always be sampling error.
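
A minimal sketch of the parameter/statistic distinction in Python; the "population" values are simulated and purely illustrative:

    import random
    import statistics

    random.seed(1)

    # Hypothetical population: systolic blood pressures of 10,000 people
    population = [random.gauss(130, 15) for _ in range(10_000)]
    parameter = statistics.mean(population)   # population mean = parameter

    # A random sample of 50 people from that population
    sample = random.sample(population, 50)
    statistic = statistics.mean(sample)       # sample mean = statistic

    # The statistic estimates the parameter, but because the sample is not
    # the whole population the two will differ (sampling error)
    sampling_error = statistic - parameter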

24
Q

What are the problems with collecting data?

A

They relate to reliability, validity and avoiding bias

25
Q

What are the key points for data analysis in research process?

A
  • shows how data was analysed
  • quantitative research uses statistical analysis
  • qualitative research may use thematic analysis (e.g. with a phenomenological approach); it is good practice to follow and reference a specific approach
26
Q

Methods of data collection in qualitative research?

A

Social media monitoring (possible because the information is publicly available)
Observation
Focus groups
Interviews
Questionnaires/ surveys

27
Q

What are some issues with data collection in qualitative research?

A

Reliability (precision) - bias in data collection.
Validity (accuracy) - threats include researcher bias (the influence of the researcher on the data, as they decide the themes, bringing their assumptions and prejudices), reactivity bias (the influence the researcher has on respondents, such as asking leading questions) and respondent bias (respondents not answering honestly, perhaps to please the researcher or because it is a threatening/embarrassing topic)

28
Q

Is qualitative research generalisable?

A

Harder to generalise than quantitative research: if repeated it may give very different results, as it is based on a few people’s experiences at a particular time and everyone experiences things differently

29
Q

What knowledge does qualitative research create?

A

New knowledge
Often raises social, cultural and political concerns or info about bias.

30
Q

What are the methods of data analysis for qualitative research?

A

Descriptive analysis- describes what was said, synthesising individual accounts into themes

Content analysis/thematic analysis- describes content and interprets it; aims to reveal patterns and themes in the data; the researcher has an active role in deciding what the themes are, so it is susceptible to bias, e.g. Braun and Clarke

Narrative analysis- examines stories taking the context into account; often, but not always, includes themes, and can explore differences between narratives such as context and structure

31
Q

Key points of qualitative data analysis?

A

Analysis of non numeric data to make sense of it.
Carried out systematically to reduce bias.
Leave clear audit trail of what was done and how to reduce bias
Can use recognised process to reduce bias

32
Q

What are the steps of Braun and Clarke thematic/content analysis?

A

1 familiarise self with data
2 generate initial codes- highlight sentences/words
3 search for themes- sort codes into potential themes, with main themes and sub-themes
4 review themes- data within the same theme should cohere together and there should be clear, identifiable differences between themes
5 define and name themes- define the essence of each theme (what it means) and how the themes relate to the research question
6 produce report- provide evidence of the themes and include examples/extracts/quotes (a rough sketch of how steps 2, 3 and 6 might be organised follows below)
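
A very rough Python sketch of how steps 2, 3 and 6 could be organised; the quotes, codes and themes are invented for illustration, and in practice this is an interpretive, manual process rather than a programming task:

    # Step 2: hypothetical coded extracts (quote -> initial code)
    coded_extracts = {
        "I never knew who to ask": "uncertainty about staff roles",
        "The nurse explained everything clearly": "feeling informed",
        "I was left waiting for hours": "frustration with waiting",
    }

    # Step 3: candidate themes grouping related codes
    themes = {
        "communication": ["uncertainty about staff roles", "feeling informed"],
        "access to care": ["frustration with waiting"],
    }

    # Step 6: report each theme with supporting quotes as evidence
    for theme, codes in themes.items():
        quotes = [quote for quote, code in coded_extracts.items() if code in codes]
        print(theme, quotes)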

33
Q

What’s SPIDER for research?

A

Sample (smaller sample sizes used in qualitative- may not intend to be generalised to all of general public)

Phenomenon of Interest (how and why certain experiences, behaviours and decisions are occurring)

Design (study design influences robustness of study analysis and findings)

Evaluation (may include more subjective outcomes e.g views and attitudes)

Research type (can search for quantitative, qualitative or mixed methods)

34
Q

Key points of sampling for qualitative research?

A

Open to bias (can choose people who will say what you want them to say)
Small sample sizes (otherwise there is too much data) mean there can be a larger sampling error. If the population is homogenous a smaller sample may be okay, but if it is heterogenous, or for ethnography, a larger sample is needed
Sample size can be decided practically, when data saturation is reached, but that requires collecting and analysing data at the same time

35
Q

What’s data saturation?

A

It is when roughly no new information is being gathered (so the researcher stops recruiting), but it could also be due to poor interviewing not getting enough information from participants.

36
Q

What are the different types of non random/non probability sampling?

A

Purposive sampling- participants are chosen to suit the aim of the study, or the researcher can choose the most productive participants (most common)

Convenience sampling- based on participant availability and willingness to take part or where researcher already has access, likely to miss key informants and susceptible to volunteer bias

Quota sampling- recruit a certain quota that you hope will be proportional to the population of interest, e.g. 10 male, 4 female, 6 children

Theoretical sampling- as data is collected and analysed depending on what is found further participants are chosen to develop emerging theory (grounded theory design)

Key informant sampling- a key person knows people with relevant experience/requirements who may not otherwise be approached

Snowball sampling- one participant refers another potential participant to the study, and so on

37
Q

What are the types of qualitative research design?

A

Phenomenology and ethnography

No hierarchy based on design (unlike quant)

38
Q

What is ethnography? And a focussed ethnographic approach?

A

Looking at a group and the way the group sees the world - its culture, perspectives, practices, social interactions and behaviours - often by the researcher being embedded directly in the group and documenting findings, or by social media monitoring.
It is an anthropological research method.

A focussed ethnographic approach is used when the researcher has intimate familiarity with the context (such as already having insider status) and focuses on the particular rather than the general - a narrow focus of exploration. It accounts for the influence of culture.

39
Q

What is phenomenology?

A

Follows a systematic process- research that is carried out objectively but using subjective data.
Explores an individual’s (subjective) experience of an event and how they interpret it, through their own descriptions of it. Often done through interviews.

40
Q

What are the different types of phenomenology?

A

Descriptive- less common, simply records/describes views of participants

Interpretive- hermeneutic phenomenology; the researcher tries to make some sense of the data by creating some coherence or meaning, which could be sorting it into themes; the researcher decides the themes, so they shape the results.

41
Q

What are the problems with phenomenology?

A
  • risk of participants not sharing what they actually think
  • told after the fact so open to recall bias
  • uses interviews, so the researcher can easily influence answers (it is hard not to), e.g. by putting words in participants’ mouths/asking leading questions; the researcher needs some background information to do the interviews but might then have some bias before starting
42
Q

What are the different types of quantitative experimental designs?

A

Experimental- RCT (true experiment) and quasi experiments

Non experimental/observational- correlational, descriptive/observational (cohort study, case control study or cross sectional design)

43
Q

What are quasi experiments?

A

Experimental quantitative research design. Sometimes called an analytic cohort study or a clinical trial without random allocation.

Can be done with only a treatment group, or with a treatment and a control group, but the groups aren’t randomly assigned.

If there are two groups, both are tested at baseline, then the intervention is administered to one group, then both groups are tested again.

44
Q

What are the pros and cons of quasi experiments?

A

Cons- lower in the hierarchy of evidence, as there are greater threats to internal and external validity: because allocation is not random, samples are less representative (bigger sampling error), so it is not clear whether outcome differences are caused by differences between the groups, and it is not clear whether the findings will be generalisable to the population (as the sample is less representative)

Pros- needed because it is not always possible to randomly allocate some things to groups, and RCTs can sometimes be unethical in human beings/in healthcare

45
Q

What’s an RCT?

A

Randomised controlled trial.

Uses a control group; the sample is selected randomly and then participants are assigned to groups randomly. Uses double blinding- neither researcher nor participant knows who is in which group

46
Q

What are the aims of an RCT?

A

To examine effectiveness - a cause and effect relationship - and to show that the effect came after the cause (not that it was down to chance or a different variable).
E.g. that a new treatment is better than a placebo, the current treatment or a different new treatment

47
Q

What are the features of sampling for an RCT?

A

Random sampling and large sample sizes to reduce sampling error and improve reliability, repeatability and internal and external validity.
This means samples are more representative of the population, so findings are more likely to apply to the population, and if the groups are identical on relevant features, differences in outcomes can only be down to the treatment and not anything else.
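
A minimal sketch of random allocation to two equal groups in Python; the participant IDs and group size are hypothetical:

    import random

    participants = [f"P{i:03d}" for i in range(1, 101)]  # 100 hypothetical IDs

    random.shuffle(participants)              # put participants in a random order
    treatment_group = participants[:50]       # first half allocated to treatment
    control_group = participants[50:]         # second half allocated to control

    # With the allocation concealed and neither assessor nor participant told
    # which group they are in (double blinding), differences in outcomes can be
    # attributed to the intervention rather than to who ended up in which group.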

48
Q

What are the Pros and Cons of RCT?

A

Pros- gold standard of research (top level of primary research evidence) as can control for bias. Provides better quality evidence for cause and effect relationships than descriptive studies and expert opinion.

Cons- not always appropriate, can’t always be done for ethical reasons, sometimes not practical, not applicable for research without an intervention; in some cases a clinician may need to decide who gets an intervention (as opposed to random allocation)

49
Q

What’s attrition bias and how is it avoided?

A

Attrition bias is when participants are lost during a study. It might give biased results, as the sample size becomes smaller and features may no longer be proportionate across groups. The people lost to data collection may have been important sources of data, especially if lots of similar people drop out, which also has a greater impact on representativeness.

It is important to account for people who dropped out or didn’t want to participate. To mitigate it, over-recruit to avoid dropping below the required sample size.

50
Q

How do you work out how many people to recruit?

A

N1 = n/(1-d)

N1= adjusted sample size
d= drop out rate
n= required sample size
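
Worked example (figures invented for illustration): if the required sample size is n = 100 and a 20% drop-out rate is expected (d = 0.2), then N1 = 100 / (1 - 0.2) = 125, so about 125 participants should be recruited to end up with roughly 100.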

51
Q

What needs to be considered about data collection?

A

All measurements have to be carried out in exactly the same way (ideally using same equipment) as even objective measurements have room for error. A measurement protocol can be used to do this.

52
Q

What are non experimental quantitative research designs?

A

There is no intervention by the researcher; instead, measurable outcomes are observed. Often used in public health or epidemiology. Can be correlational or observational/descriptive.

53
Q

What’s correlational non experimental research?

A

Also called analytic.
Mainly observational, explores relationships, uses statistics, have to remember correlation doesn’t equal causation.

54
Q

What are the different types of descriptive/observational quantitative designs?

A

Cohort study
Case control study
Cross sectional design

55
Q

What’s a case control study?

A

Has no random allocation or intervention. One group has an outcome and the other group does not; the researchers then work backwards to see whether any potential factors relate to those who experienced the outcome versus those who didn’t. It is a retrospective study.

This is better when studying something rare, as otherwise the required sample size would be impractically large.

56
Q

What’s a cross sectional study design? And what are the different types?

A

Observe a population at one specific point of time.
If descriptive, it just describes the prevalence of an outcome at a given time.
If analytic, the prevalence of an outcome and an exposure are measured at the same time to measure association (e.g. risk ratio and odds ratio in epidemiology)
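
Worked example with invented numbers: suppose 100 exposed people, of whom 20 have the outcome, and 100 unexposed people, of whom 10 have the outcome. Risk in the exposed = 20/100 = 0.20; risk in the unexposed = 10/100 = 0.10; risk ratio = 0.20/0.10 = 2.0. Odds in the exposed = 20/80 = 0.25; odds in the unexposed = 10/90 ≈ 0.11; odds ratio = 0.25/0.11 ≈ 2.25. The same risk-ratio calculation gives the relative risk reported in cohort studies.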

57
Q

What’s a cohort study?

A

A longitudinal study with no intervention by the researcher. People with the exposure of interest are observed and followed up, and those who develop the outcome can be compared with those who don’t. The study can be done with just one exposed group or with two groups (an additional control group). The outcome might be observed in the control group too, due to other factors. Incidence and risk factors (relative risk) can be measured and compared.

58
Q

How does reliability relate to data collection?

A

The overall consistency of a measure. How close together repeated measurements would be to each other (not the target/true value)

Interrater reliability is whether two people using the same tool on the same thing at the same time would get the same measurement.

Intrarater reliability is whether the same test used over time by the same person would give reliable results

59
Q

What is validity and the different types?

A

Valid results are close to the true value/target (not necessarily each other)

Internal validity- (construct validity) the extent to which something measures what it is meant to measure and not anything else. The extent to which the results represent the truth.

External validity is the generalisability of the study.

Face validity is if experts think a new tool measures what it is meant to.

Content validity is the comprehensiveness of the tool.

Criterion validity is comparing the tool with the one that is currently taken to be best practice.

60
Q

What are the threats to external validity?

A

(The generalisability of the study)

Samples- non-random/self-selected samples affect internal and external validity. Samples need to be representative, so use random and ideally large samples (to decrease sampling bias); if the sample is not representative of the population, things that apply to the sample may not also apply to the population.

The Hawthorne effect- knowing you’re in an experiment can affect the results. This is why need blinding in experiments.

The fact that research can be very situationally specific or tightly controlled might make it less externally valid, as there might not be another situation to which it can be generalised (if it is very specific), or, if it is very controlled (e.g. an RCT), we can’t be sure it will apply in a less controlled environment.

61
Q

What are the threats to internal validity?

A

Confounding variables
Extraneous variables
Selection bias
Asking leading questions in interviews

62
Q

How does selection bias affect validity?

A

Poor sampling, e.g. non-random sampling, leads to a sample that isn’t representative of the population, which affects internal and external validity.

Attrition bias is a type of selection bias from participants dropping out of a study causing systematic differences between study groups and sample and population.

63
Q

What are extraneous variables?

A

Variables not intentionally being studied in the experiment that can affect the outcome (dependent variable). They can be controlled for e.g in an RCT

64
Q

What’s a confounding variable?

A

An unobserved exposure that is associated with the exposure of interest and could be a potential cause of the outcome of interest. Means that the outcome could be caused by this not the intervention. Can affect the dependent variable and is related to/can affect the independent variable.

65
Q

What are random and systematic error and how do they affect reliability and validity?

A

Random error (which can be balanced out) is a chance difference between what is measured/observed and what is true. It doesn’t affect all measurements the same (or at all). Measurements should be valid overall (close to the true value, if the error is balanced out) but not reliable (not close to each other).

Systematic error is a consistent/proportional difference between the observed value and the true value and affects all data points. Measurements will be reliable (close to each other) but not valid (not close to the true value)
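
A minimal simulation sketch of the distinction in Python; the "true" value and error sizes are invented for illustration:

    import random
    import statistics

    random.seed(0)
    true_value = 70.0  # hypothetical true weight in kg

    # Random error: readings scatter around the truth; not reliable (spread out),
    # but the mean is close to the true value once the error balances out
    random_error_readings = [true_value + random.gauss(0, 2.0) for _ in range(100)]

    # Systematic error: every reading is shifted the same way, e.g. scales
    # miscalibrated by +1 kg; reliable (close together) but not valid (off target)
    systematic_error_readings = [true_value + 1.0 + random.gauss(0, 0.1)
                                 for _ in range(100)]

    print(statistics.mean(random_error_readings))       # close to 70.0
    print(statistics.stdev(random_error_readings))      # relatively large spread
    print(statistics.mean(systematic_error_readings))   # about 71.0 (biased)
    print(statistics.stdev(systematic_error_readings))  # small spread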

66
Q

What’s bias?

A

Deviation from the truth in data collection; it exists in all studies, but we can aim to minimise it (some types can’t be removed completely). It can cause false or misleading conclusions.
It is a type of systematic error, as it affects all the results of the experiment.

67
Q

What are different types of bias?

A

Systematic error/bias.
Sampling bias
Attrition bias (people dropping out of study)
Longitudinal studies are biased if people die.
The place a study takes place can cause bias (e.g. DGH vs tertiary centre)
Measurement tools that aren’t assessed for reliability and validity can cause bias.
Qualitative bias
Publication bias

68
Q

What’s systematic bias?

A

Affects all results the same and cannot be managed/balanced out.
E.g. a scale miscalibrated to read 1 kg heavier

69
Q

What’s sampling bias?

A

Sampling bias/error.
The difference between a population and a sample. It can never be removed completely, as a sample will never be exactly the same as the population. It can be reduced with large sample sizes and random sampling; with non-random sampling some participants are more likely to be picked than others (non-probability), either by the researcher (selection bias) or through self-selection (volunteer bias)

70
Q

What bias can come from qualitative research?

A
  • leading questions
  • bad survey designs
  • participant recall bias (difference between reflection on action and reflection in action)
  • non random sample bias
  • researcher bias (researcher decides the themes) (impact of researcher on data)
  • respondent bias (not providing honest answers)
  • reactivity bias (influence of researcher on responses/respondents)
71
Q

What’s publication bias?

A

Often the only research that is found/used is research that ended up being published.
Especially impacts secondary research.

72
Q

What’s a case study an example of?

A

Qualitative research that looks at a single example of something in a lot of detail; it might include the process, experiences, events, information about the individual and more.

73
Q

What’s grounded theory?

A

Mainly used in qualitative research; it involves creating theories while the data is being collected- the theory is grounded in the data.