Research methods part 2 Flashcards
Types of surveys
Retrospective surveys- look back in time
Prospective surveys- explore patterns over time
Same people (cohort design)
Different people (trend design)
Cross-sectional designs: This is a survey that gives a snapshot in time.
Questions used need to be piloted first, what does this mean?
Test them out on a small group first and get them to tell us whether they work (do they understand the question?)
Questions of a questionnaire are called what, and what are the types?
items
Closed- Structured
Quantitative data
Simple Y/N answers
Easy to analyse
NO interpretation
Open- Unstructured
Qualitative data
Shades of grey
Depth
Interpretation required
Pitfalls of questions
Items must not:
◦ Be ambiguous
◦ Be double barrelled
◦ Be too general
◦ Contain acronyms & jargon
◦ Use double negatives
◦ Use leading questions
◦ Be too long
◦ Be directed to the wrong person
How would a researcher lead to bias in research?
Researcher bias- a researcher's individual views may influence the way they interpret the data. E.g. a study of the effect of sweets on tooth decay where the researcher is a dentist.
What is a sample?
a small part or quantity intended to show what the whole is like.
The sample is representative of the population.
Sampling methods for random
Random sampling- picking randomly from a population; can be impractical as not everyone is available to take part in the research
Systematic random sampling- Taking every 10th person for example can make it more manageable
Stratified random sampling- look at a population and take a sample from each group to make it representative
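The three random sampling methods above can be sketched in Python (the population, strata and sample sizes here are hypothetical, just to illustrate the idea):

```python
import random

# Hypothetical population of 100 people
population = [f"person_{i}" for i in range(1, 101)]

# Random sampling: pick 10 people purely at random
simple = random.sample(population, 10)

# Systematic random sampling: take every 10th person
systematic = population[::10]

# Stratified random sampling: split the population into groups (strata)
# and sample from each, so every group is represented
strata = {
    "under_30": population[:40],
    "30_to_60": population[40:80],
    "over_60": population[80:],
}
stratified = []
for group in strata.values():
    stratified.extend(random.sample(group, len(group) // 10))  # 10% of each stratum
```

Note that the stratified sample keeps the proportions of the strata, which is what makes it representative.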
Non-random sampling, what are the types?
These methods carry a risk of bias. Examples:
Convenience sampling- an easy, quick and inexpensive method of selecting a sample, but prone to bias through non-representativeness as it involves selecting the most easily accessible members of the research population.
Quota sampling- recruit a set number (quota) of participants, often from each subgroup of interest
Cluster sampling- take samples from different locations (clusters)
Sample size
◦ LARGE- survey; core purpose is to generalise
◦ SMALL- case study; aims at transferability
What is Randomisation?
Randomisation is a method used to randomly allocate participants to the different parts (arms) of a trial AFTER they have
been sampled.
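A minimal Python sketch of this kind of allocation (the participant names and the two arms are hypothetical):

```python
import random

def randomise(participants, arms=("control", "experimental")):
    """Randomly allocate already-sampled participants to trial arms."""
    shuffled = list(participants)
    random.shuffle(shuffled)
    allocation = {arm: [] for arm in arms}
    for i, person in enumerate(shuffled):
        # Deal the shuffled participants out to the arms in turn
        allocation[arms[i % len(arms)]].append(person)
    return allocation

groups = randomise([f"participant_{i}" for i in range(20)])
```

With 20 participants and two arms this gives two groups of 10, and each participant ends up in exactly one arm.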
What does it mean if a sample size is too big or too small?
A sample size which is larger than required is unethical because more people than necessary will be exposed to the
intervention.
A sample size which is too small is also unethical because an effect which may exist may not be demonstrated
statistically.
What is the control group and what is the experimental group?
Control group The control group is the group that receives the existing treatment, or placebo.
Experimental group The experimental group is the group that receives the new treatment.
What is double blinding?
Double Blinding With double blinding both the researcher and the research participants are unaware of who is in the
experimental group or the control group for the duration of the study. This is desirable but not possible where it is obvious
which group is which (say, with a radiotherapy technique); blinding is most often used in drug trials.
What is content analysis?
Content analysis: This is probably the simplest approach. The number of times that a word or concept occurs is counted (can
sometimes be done using a computer). As this reduces qualitative data to a numerical form it is not considered to be truly qualitative
by some researchers.
What is thematic analysis?
Thematic analysis: This approach is not just concerned with how often a word is used. The context in which the word is used is
important too. For instance if we analysed transcripts of interviews with patients about an intervention they were about to undergo
the word ‘worried’ might occur fairly regularly. The number of times it occurs might be misleading if taken in isolation, see the
examples below:
‘I was very worried about having a needle inserted in my arm’
‘the radiographer asked if I was worried about anything’
‘my wife was very worried as I was not very well that day’
In only one of the sentences above did the patient indicate that they were worried. If we only counted the number of times the word
occurred or if we said every interview contained the word, we might wrongly conclude that all three patients were worried.
What is Theoretical analysis?
Theoretical analysis: An approach where the researcher develops theories from the data which may be tested against existing
theories or against further analysis of the data. This is also known as analytic induction.
How could subjects lead to bias in research?
Subjects- Subject bias, also known as participant bias, is the tendency of participants (subjects) in an experiment to consciously or subconsciously act in a way in which they wouldn't usually. E.g. they know they are taking part in research looking into tooth decay over time, so they start looking after their teeth better than they usually would.
How would the Sampling design / method lead to bias in research?
Sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than others. It is also called ascertainment bias in medical fields. E.g. only people with healthy teeth can take part in a study about tooth decay over time, meaning those deemed to have unhealthy teeth are not part of the sample.
How would the research design affect bias?
As an example, a researcher designing a survey containing questions about health benefits may overlook limitations of the sample group: the group tested may have been all male, or all over a particular age, so the results cannot be generalised.
How would the research instrument affect bias?
One of the most common forms of measurement bias in quantitative investigations is instrument bias. A defective scale would generate instrument bias and invalidate the experimental process in a quantitative experiment.
The way you do the study will affect the results you get, so choosing a method that is not appropriate will bias the results. E.g. for tooth decay, getting children to rate their own tooth decay over time rather than collecting observational data.
Getting the children to rate it is not appropriate as they may not understand the rating scale, and subject bias applies (no one wants to rate their teeth as bad).
How could the data analysis affect bias?
Analytics bias is often caused by incomplete data sets and a lack of context around those data sets.
E.g. looking at tooth decay over time via an observational method when children go to the dentist, but some children missed more dentist appointments than others. This means there is missing data for certain children during the research period.
Another example is doing a mental health study on undergraduates during exam season, as they are more likely to rate that they are feeling low due to the stress of the exams. Without including the fact that it is exam season, you cannot conclude that all undergraduates feel sad all the time.
What is publication or reporting bias?
Publication or reporting bias
A sort of bias that influences research is publication bias, also known as reporting bias. It refers to a condition in which favourable outcomes are more likely to be reported than negative or null ones. The publication standards for research articles in a specific area frequently reflect this bias. Researchers sometimes choose not to disclose their outcomes if they believe the data do not support their theory.
What are prospective surveys?
Prospective surveys- explore patterns over time
Same people (cohort design)
Different people (trend design)
Retrospective surveys meaning?
Retrospective surveys- look back in time
Cross-sectional design meaning?
Cross-sectional designs: This is a survey that gives a snapshot in time.
What does Systematic random sampling mean?
Taking every 10th person for example can make it more manageable
What does Stratified random sampling mean?
Stratified random sampling- look at a population and take a sample from each group to make it representative