Module 12 Flashcards

1
Q

What are two major sources of error in epidemiologic research?

A

Random (or chance) errors (reliability)
Systematic errors, which all occur for a specific reason (validity, bias)

2
Q

What can cause a lack of precision?

A

Sampling error, imprecision in measurement, and variability of the data

3
Q

What is the result of lack of precision?

A

Wider confidence interval
Makes it more difficult to reject the null hypothesis

4
Q

How can you reduce random error d/t lack of precision?

A

Increase the sample size and/or increase the number of measurements and take the average of those measurements
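
A minimal sketch (not from the module; all numbers are made up) of how precision improves with a larger sample and with averaging repeated measurements:

```python
import math
import random
import statistics

random.seed(0)

def ci_halfwidth(sample):
    # Approximate 95% confidence-interval half-width for the sample mean
    # (normal approximation: 1.96 standard errors).
    return 1.96 * statistics.stdev(sample) / math.sqrt(len(sample))

# Hypothetical measurements: true mean 50, measurement SD 10.
for n in (25, 100, 400):
    sample = [random.gauss(50, 10) for _ in range(n)]
    print(f"n={n:4d}  CI half-width ~ {ci_halfwidth(sample):.2f}")
# Larger n -> smaller standard error -> narrower confidence interval.

# Averaging k repeated readings per subject reduces random measurement error:
for k in (1, 4):
    sample = [statistics.mean(random.gauss(50, 10) for _ in range(k)) for _ in range(100)]
    print(f"k={k} readings averaged  CI half-width ~ {ci_halfwidth(sample):.2f}")
```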

5
Q

What can systematic errors be caused by?

A

Selection bias, information bias, or confounding

6
Q

What occurs as a result of systematic error?

A

It changes the value of the results (odds ratio or relative risk) by moving it up or down

7
Q

How to reduce systematic bias

A

Prevent it when designing or carrying out the study, or adjust for confounding in the analysis
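
One common way to adjust for confounding in the analysis is to stratify on the confounder and pool the stratum-specific results, for example with a Mantel-Haenszel odds ratio. A minimal sketch with made-up counts (the confounder, strata, and numbers are hypothetical, not from the module):

```python
# Each stratum of a hypothetical confounder (e.g., age group) is a 2x2 table:
# (exposed cases, exposed controls, unexposed cases, unexposed controls).
strata = [
    (10, 20, 15, 60),   # stratum 1 (made-up counts)
    (40, 25, 30, 35),   # stratum 2 (made-up counts)
]

def crude_or(tables):
    # Odds ratio from the collapsed (unstratified) table.
    a, b, c, d = (sum(t[i] for t in tables) for i in range(4))
    return (a * d) / (b * c)

def mantel_haenszel_or(tables):
    # Pooled odds ratio adjusted for the stratification variable.
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

print(f"crude OR:    {crude_or(strata):.2f}")            # ~2.35
print(f"adjusted OR: {mantel_haenszel_or(strata):.2f}")  # ~1.91, with the confounding removed
```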

8
Q

What are the two components of validity?

A

Internal validity
External validity

9
Q

Characteristics of lack of internal validity

A

It’s the same as bias in a study.
It is systematic error.
It is affected by how the study was designed and carried out

10
Q

A study is said to have high internal validity when…

A

There has been proper selection of the study groups (little selection bias),
There is a lack of systematic error in measurement (little information bias),
And the results are not due to confounding

11
Q

High internal validity = ?

A

Low bias

12
Q

Definition of systematic errors?

A

Errors in how samples were selected or the quality of the data used

13
Q

What does external validity imply?

A

The ability to generalize beyond a set of observations to some universal statement

14
Q

What is external validity affected by?

A

The source population from which the sample is drawn and by the level of internal validity in the study

15
Q

Question asked regarding internal validity

A

Could the association be d/t bias (internal validity issues), i.e., selection bias and/or information bias?

16
Q

Question regarding external validity

A

To whom does this association apply? To what extent may the findings from the study be generalized to other pops?

17
Q

Which is more important: internal or external validity?

A

A study can have good internal validity (not much bias) but still not be very externally valid (limited generalizability to a source pop)
However, a study cannot be externally valid without being internally valid
In other words, if a study has a great deal of bias, you cannot accurately generalize to a universal statement, or to any population

18
Q

Definition of bias

A

Deviation of results or inferences from the truth, or processes leading to such deviation. Any trend in the collection, analysis, interpretation, publication, or review of data that can lead to conclusions that are systematically different from the truth.

19
Q

When does overestimation of association occur?

A

If the two comparison groups are more different in the study than they are in reality

20
Q

Overestimation of association in case-control studies

A

The odds of exposure to a risk factor are overestimated in the cases
The odds of exposure to a risk factor are underestimated in the controls
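
A toy illustration (hypothetical counts, not from the module) of how this inflates the odds ratio in a case-control study:

```python
def odds_ratio(exp_cases, unexp_cases, exp_controls, unexp_controls):
    # (odds of exposure in cases) / (odds of exposure in controls)
    return (exp_cases / unexp_cases) / (exp_controls / unexp_controls)

# Unbiased (true) counts: true OR = (50/50) / (40/60) = 1.5
print(odds_ratio(50, 50, 40, 60))

# Biased counts: exposure over-ascertained in the cases and under-ascertained in
# the controls, so the observed OR = (60/40) / (30/70) = 3.5 overstates the association.
print(odds_ratio(60, 40, 30, 70))
```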

21
Q

Overestimation of association in cohort studies

A

The incidence of the outcome in the exposed is overestimated
The incidence of the outcome in the unexposed is underestimated
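
A parallel toy example for a cohort study (made-up numbers, not from the module):

```python
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    # (incidence in the exposed) / (incidence in the unexposed)
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Unbiased (true) incidence: true RR = 0.30 / 0.20 = 1.5
print(relative_risk(30, 100, 20, 100))

# Biased ascertainment: the outcome is over-counted in the exposed and
# under-counted in the unexposed, so the observed RR = 0.36 / 0.15 = 2.4.
print(relative_risk(36, 100, 15, 100))
```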

22
Q

When does underestimation of the association occur?

A

If the two comparison groups are less different in the study than they are in reality

23
Q

Underestimation of the association in case-control studies

A

The odds of exposure to a risk factor are underestimated in the cases and overestimated in the controls

24
Q

Underestimation of the association in cohort studies

A

The incidence of the outcome is underestimated in the exposed and overestimated in the unexposed

25
Q

How bias occurs in descriptive studies

A

The prevalence (not just the prevalence ratio) in descriptive cross-sectional studies can be biased up or down.
This can be caused by selection bias of some sort (an improperly selected sample) or information bias (the data obtained from the sample are not correct).

26
Q

Definition of selection bias

A

A bias d/t differences in the manner in which study groups are formed: bias in how people are selected into the sample

27
Q

When does selection bias occur?

A

It arises when the relationship between exposure and dz is different for those who participate than for those who theoretically would be eligible for the study but do not participate

28
Q

Selection bias is not the same as...

A

Random sampling error

29
Q

Why is selection bias not the same as random sampling error?

A

They are both sampling problems, but for different reasons.
Sampling error is due to low sample size, but selection bias is d/t recruitment of the wrong ppl.
It cannot be corrected by increasing the sample size.

30
Q

Types of selection bias

A

Non-response bias
LTFU bias
Healthy worker effect
Exclusion bias
Incidence-prevalence bias
Hospital admission bias

31
Q

When does non-response bias occur?

A

When the response rate is low and non-respondents are systematically different than respondents

32
Q

Definition of response rate

A

The number of potential participants who actually took part in the study divided by the total number who were asked to take part (both those who accepted and those who refused); e.g., 600 participants out of 1,000 ppl asked = a 60% response rate

33
Q

When can volunteer bias occur?

A

When people volunteer for a study, as volunteers are often systematically different than the general pop. This occurs when sample participants have been self-selected

34
Q

Which type of sample involves some degree of volunteer bias?

A

Convenience (nonprobability) survey samples

35
Q

When does LTFU bias occur?

A

When ppl drop out of a study, and those who drop out are systematically different than those who remained

36
Q

In what two situations does healthy worker effect occur?

A

1. When an occupational group is compared to the general pop, which includes ppl too unhealthy to work
2. When "sicker ppl choose to work in environments where exposures are low, are excluded from being hired, or once hired, are transferred to less exposed jobs or leave work"

37
Q

What does healthy worker effect lead to?

A

An underestimate of the association

38
Q

When can exclusion bias occur?

A

If the exclusion criteria for participants are different for the comparison groups in the study

39
Q

When does hospital admission bias occur?

A

When hospitalized participants are used as a "healthy" comparison group, but actually are not representative of people without the dz being studied.
This can happen if hospitalization rates differ for the exposed and the unexposed.
The association between exposure and dz in the hospitalized sample may not reflect what it is in non-hospitalized ppl.

40
Q

Why can incidence-prevalence bias occur?

A

Because prevalence is influenced by the duration of the dz and whatever is influencing that, not just what causes one to get the dz in the first place.
Including participants who are prevalent cases can introduce bias, since what led to their survival (and hence inclusion in the study) may be something different than what caused them to get the dz.

41
Q

Common forms of selection bias in cross-sectional studies

A

Non-response bias
Volunteer bias
Incidence-prevalence bias

42
Q

Common types of selection bias in case-control studies

A

Non-response bias
Exclusion bias
Incidence-prevalence bias
Hospital admission bias

43
Q

More details about selection bias in case-control studies

A

Selection bias is more of a problem for case-control studies than for some other study designs.
It often has to do with how the controls are selected.

44
Q

How can bias occur in selection of controls in a case-control study?

A

Controls may be selected into the study based in some way on the exposure, so that controls do not represent the source pop in terms of exposure status.
As a result, the relationship between exposure and dz among participants in the study differs from what the relationship would have been among individuals in the pop of interest.

45
Q

How to reduce selection bias among controls

A

Recruit controls from the same source pop as the cases
If unsure of the exact sampling frame, attempt to draw controls from a variety of sources
Compare the prevalence of the exposure among controls with other sources to evaluate credibility

46
Q

How to reduce selection bias due to unrepresentative cases

A

Develop an explicit (objective) case definition
Enroll all cases in a defined time and region
Ensure that all medical facilities are thoroughly canvassed
Consider whether all cases require medical attention; consider possible strategies to ID where else the cases might be ascertained
Try to recruit incident (new) rather than prevalent (all) cases, if possible

47
Q

Common forms of selection bias in cohort studies

A

LTFU bias
Healthy worker effect

48
Q

How to minimize LTFU

A

Restricting the study participants to those likely to remain in the study
Collecting personal identifying info
Making periodic contact, and providing incentives
Recruiting a new cohort as time goes on to replace those LTFU (and who die)

49
Q

How to minimize healthy worker effect

A

Select a comparison group made up of a similar type of worker who was unexposed

50
Q

Definition of information bias

A

Results from systematic differences in the way data on exposure or outcome are obtained from the various study groups
It is a bias in terms of the data itself and its ascertainment
It can be introduced as a result of systematic measurement error in the assessment of the exposure and/or the dz

51
Q

Types of information bias

A

Recall bias
Reporting bias
---Social desirability bias
---Wish bias
Interviewer/abstractor bias
Surrogate interview bias
Exposure suspicion bias
Detection bias

52
Q

Definition of recall bias

A

Occurs when one group remembers the past differently (more or less accurately) than the other

53
Q

Definition of reporting bias

A

Can occur whenever any self-reported data are collected. This is any incorrect information given by the respondents for whatever reason, intentional or unintentional

54
Q

Definition of social desirability bias

A

Aka the Clever Hans Effect or Obsequiousness Bias. A type of reporting bias in which participants lie about the data in order to please the interviewers

55
Q

Wish bias definition

A

A type of reporting bias in which cases seek to show that the dz was not their fault. Such participants deny certain negative lifestyle exposures, or overestimate workplace exposures. This form of bias may be unconscious.

56
Q

Interviewer/abstractor bias or observer/interviewer bias

A

Occurs when interviewers probe more thoroughly for an exposure or outcome in one comparison group than in the other (or when they abstract data more thoroughly in one group than another)

57
Q

When can interviewer/abstractor bias occur?

A

In any type of unblinded or single-blinded study where there are two or more comparison groups:
-Case-control studies
-Cohort studies
-Clinical trials that are not double blinded

58
Q

How to prevent interviewer/abstractor bias and the placebo effect

A

In a double-blind study, neither the participants nor the investigators know who is receiving the active tx

59
Q

How to minimize interviewer/abstractor bias when double blinding is not possible in observational studies

A

If interviewers do not know the exposure status of the participants, there may be less likelihood of this sort of bias (a kind of blinding)
However, even if the interviewers are blinded as to who is exposed and who is not, the participants themselves know

60
Q

When does surrogate interview bias occur?

A

When participants are dead or unable to be interviewed, and surrogates are interviewed in their place

61
Q

Exposure suspicion bias definition

A

This type of information bias is closely related to, but not quite the same as, interviewer/abstractor bias. It occurs when knowledge of cases' and controls' outcome status determines how the exposure is assessed.

62
Q

Detection bias definition

A

Aka surveillance bias. When one pop or group is monitored more thoroughly over time than the other

63
Q

Common forms of information bias in cross-sectional studies

A

Reporting bias
Recall bias
Surrogate interview bias

64
Q

Common forms of information bias in case-control studies

A

Reporting bias
Recall bias
Interviewer/abstractor bias
Surrogate interview bias
Exposure suspicion bias

65
Q

Common forms of information bias in cohort studies

A

Reporting bias
Interviewer/abstractor bias
Detection or surveillance bias

66
Q

What is the result of information bias?

A

People are misclassified into the wrong groups
They should be exposed but are listed as unexposed, or vice versa
Or they should be diseased but are listed as not diseased, or vice versa
The right people are in your study, but under the wrong labels
This can occur in any study design

67
Q

What are two types of misclassification?

A

Non-differential (less serious)
Differential (more serious)

68
Q

Definition of non-differential misclassification

A

When the direction and extent of misclassification are equal between comparison groups. It occurs in both groups

69
Q

What does non-differential misclassification always bias?

A

The odds ratio, relative risk, prevalence ratio, or correlation coefficient toward the null

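A small worked illustration (hypothetical counts and misclassification rates, not from the module) of why equal exposure misclassification in cases and controls pulls the odds ratio toward 1:

```python
def odds_ratio(case_exp, case_unexp, ctrl_exp, ctrl_unexp):
    return (case_exp / case_unexp) / (ctrl_exp / ctrl_unexp)

def misclassify(n_exposed, n_unexposed, sensitivity=0.8, specificity=0.9):
    # sensitivity: fraction of the truly exposed recorded as exposed
    # specificity: fraction of the truly unexposed recorded as unexposed
    obs_exposed = n_exposed * sensitivity + n_unexposed * (1 - specificity)
    obs_unexposed = n_exposed * (1 - sensitivity) + n_unexposed * specificity
    return obs_exposed, obs_unexposed

# True counts (made up): true OR = (60/40) / (30/70) = 3.5
print(odds_ratio(60, 40, 30, 70))

# The same misclassification applied to cases and controls (non-differential):
case_exp, case_unexp = misclassify(60, 40)
ctrl_exp, ctrl_unexp = misclassify(30, 70)
print(round(odds_ratio(case_exp, case_unexp, ctrl_exp, ctrl_unexp), 2))  # ~2.41, pulled toward 1.0
```
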
70
Q

Differential misclassification definition

A

When the direction and/or extent of misclassification is not equal between comparison groups, or misclassification only occurs in one of the comparison groups

71
Q

What calculations can differential misclassification bias?

A

The odds ratio, relative risk, etc., in any direction

72
Q

Techniques to reduce information bias

A

Use memory aids to avoid recall bias
Validate exposures via medical records or blood tests as opposed to questioning the participant
Provide standardized training sessions and protocols for interviewers to avoid interviewer/abstractor bias
Use standardized data collection forms to avoid interviewer/abstractor bias
Blind or mask when possible in clinical trials

73
Q

What are key questions when evaluating an epidemiologic study in terms of selection bias?

A

What bias might have been introduced into the findings by the approach to selection of participants?
How were the comparison groups chosen?
Were selection criteria clearly defined?
What were the response rates?
Were all of the participants in the sample used in the analysis of the article? Why were some left out?

74
Q

What are key questions when evaluating an epi study in terms of information bias?

A

Is there any evidence for bias by misclassification of the outcome or the exposure (information bias)?
Is there likely to be recall or reporting bias?
Could there be any interviewer/abstractor bias?
Were the measurement and classification methods consistent for all participants?

75
Q

When to control selection and information bias

A

Selection and information bias can only be prevented and controlled during the design (planning) and conduct of a study, not later:
-Choice of a study population (selection)
-Methods of data collection (information)