Data Collection Flashcards
You are a community pharmacy manager who is considering implementing a CAMs (complementary and alternative medicines) advice service in your pharmacy. What are TWO questions to ask before doing this?
- Is there a demand for this service?
- What will clients be willing to pay?
What factors need to be considered when sampling? What are the key issues?
Factors for sampling
- Denominator: what is the population of interest?
- Proportion: what fraction of that population should be sampled?
- How many: what sample size is needed?
- How to select: which sampling method will be used?
Key issues
- Representativeness
- Generalisability
- Validity
Give SIX examples of sampling methods. Explain each method briefly.
Simple random
- Every member of the population has an equal and fair chance of selection
- Allocate numbers; generate random numbers (e.g. online random number generator)
Convenience
- Common, cheap and pragmatic – but not random, so increased risk of bias
Snowball/network
- Existing subjects identify or recruit further subjects; useful for difficult-to-obtain (hard-to-reach) groups
Systematic
- Every nth subject from a list, starting at a random point
Stratified
- Divide population into groups or strata (e.g. by age, gender), then sample randomly within each stratum
Cluster
- Sample clusters at random, then sample within clusters (randomly); see the code sketch after this list
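The random methods above are easy to illustrate in code. Below is a minimal Python sketch of simple random, systematic and stratified sampling over a hypothetical frame of 1,000 patient IDs; the frame, sample sizes and age strata are invented purely for illustration (convenience and snowball sampling are non-random, so there is nothing equivalent to script for them).

```python
import random

# Hypothetical sampling frame: 1,000 patient IDs (illustrative only)
population = list(range(1, 1001))

# Simple random: every member has an equal chance of selection
simple_random = random.sample(population, k=50)

# Systematic: every nth subject from the list, starting at a random point
n = len(population) // 50          # sampling interval
start = random.randrange(n)
systematic = population[start::n]

# Stratified: divide the population into groups (two invented age strata here),
# then sample randomly within each stratum
strata = {
    "under_65": [p for p in population if p <= 600],
    "65_plus":  [p for p in population if p > 600],
}
stratified = {name: random.sample(members, k=25) for name, members in strata.items()}

print(len(simple_random), len(systematic), {k: len(v) for k, v in stratified.items()})
```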
What are the FIVE major types of error/bias? How can each be prevented?
- Selection bias: non-random selection for inclusion/treatment
> Largely prevented by randomisation
- Measurement bias: quality of measurement varies non-randomly
> Largely prevented by blinding
- Confounding: an association between two factors caused by a third factor
> For example, blood transfusions are associated with mortality
> But patients undergoing larger, longer operations require more blood
> The increased mortality may therefore be a consequence of the larger operations, not the transfusions
> Largely prevented by randomisation (a toy simulation of this example follows the list)
- Reverse causation: the apparent outcome actually causes the apparent exposure, rather than the other way round
- Chance: random error; the risk is reduced by an adequate sample size
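To make the confounding example concrete, here is a toy simulation (not from the source material): operation size drives both the chance of transfusion and the chance of death, while transfusion itself has no causal effect, yet the crude comparison still shows higher mortality among transfused patients. All probabilities are invented.

```python
import random

random.seed(0)

# Toy simulation of the transfusion/mortality example: operation size is
# the third factor (confounder) driving both transfusion and death.
patients = []
for _ in range(100_000):
    large_op = random.random() < 0.3                            # 30% have larger, longer operations
    transfused = random.random() < (0.7 if large_op else 0.1)   # bigger operations need more blood
    died = random.random() < (0.10 if large_op else 0.01)       # mortality depends only on op size
    patients.append((transfused, died))

def mortality(group):
    return sum(died for _, died in group) / len(group)

with_tx = [p for p in patients if p[0]]
without_tx = [p for p in patients if not p[0]]

# Transfusion looks "associated with mortality" even though it has no causal effect here
print(f"mortality with transfusion:    {mortality(with_tx):.3f}")
print(f"mortality without transfusion: {mortality(without_tx):.3f}")
```

Randomising who receives the exposure would break the link between operation size and transfusion, which is why randomisation largely prevents confounding.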
For reliability:
A) What is it?
B) Measurements should be …
C) All questions should be ….
D) Should be able to demonstrate …
E) How is it optimised in questionnaires?
F) What is poor reliability a result of?
A)
- Relates to the consistency of measurements
B)
- Measurements should be stable over time
C)
- All questions should be homogeneous – they should relate to the same ‘thing’ (see the sketch after this card for one way to check this)
D)
- Should be able to demonstrate equivalence – inter-rater and intra-rater
E)
- Clear and consistent questions
- Good readability
F)
- Poor reliability = error due to chance (‘random error’)
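Homogeneity of items is commonly quantified with Cronbach's alpha (internal consistency). The statistic is not named in these cards, so treat the following as a minimal sketch under that assumption; the Likert-scale responses are invented.

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha: one common measure of how homogeneous a set of
    questionnaire items is (values near 1 suggest the items relate to the
    same underlying 'thing')."""
    k = len(responses[0])                 # number of items
    items = list(zip(*responses))         # scores regrouped per item
    item_vars = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented 5-point Likert responses: four respondents, three items
demo = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
]
print(f"alpha = {cronbach_alpha(demo):.2f}")   # ~0.92 for this toy data
```

Inter-rater and intra-rater equivalence would be checked separately, e.g. with an agreement statistic such as Cohen's kappa.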
For validity:
A) What is measurement validity?
B) What is internal validity?
C) What is external validity?
D) Why does poor validity occur?
A)
- Is our instrument measuring what it is intended to measure?
B)
- Is the effect of an intervention a true effect? Or due to unrelated variables?
C)
- Generalisability to a wider population
- Goes back to representativeness of sample
D)
- Measurement error due to systematic error or bias
For validity in questionnaire/survey studies:
A) What is content validity? What are the types?
B) What is construct validity?
A)
- Does the survey cover the different aspects of the issue?
- Consensual validity: panel of experts agree on logic, balance and scope
- Face validity: “looks ok”
B)
- Highest level of validity
- Extent to which an instrument measures a theoretical ‘construct’
For response bias in questionnaire/survey studies:
A) What is non-response bias?
B) What is acquiescence bias?
C) What are demand characteristics?
D) What is extreme responding?
E) What is social desirability bias?
A)
- When the responses of participants differ systematically from the potential responses of non-responders
B)
- Respondent tends to agree with everything! Or with nothing…
C)
- Participants’ responses are influenced because they know they are part of a study (they respond to cues about what is expected)
D)
- Giving extremely positive or negative answers
- Opposite = neutral responding
E)
- Participant provides socially desirable ‘fake answers’