Lecture 4 (Sep 27) Flashcards

1
Q

Misclassification

A

Very common error in health research

2
Q

Information (measurement) error leads to

A

misclassification

3
Q

Non-differential misclassification (the same in all study groups)

A

Usually weakens associations, i.e., brings effect estimates (RR, OR, AR) closer to the null value
But not always…
For example, if only 1.4% of participants are misclassified and the vast majority are classified correctly, the attenuation may be negligible (see the numeric sketch below)
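To make the "toward the null" point concrete, here is a minimal numeric sketch; the 2x2 counts and the 20% error rate are hypothetical choices, not from the lecture. Applying the same exposure misclassification rate to cases and controls pulls the odds ratio toward 1.

```python
# Hypothetical illustration: non-differential exposure misclassification
# (same error rate in cases and controls) attenuates the odds ratio toward 1.

def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

# True (error-free) table
a, b, c, d = 100, 50, 100, 150
print("True OR:", odds_ratio(a, b, c, d))          # (100*150)/(50*100) = 3.0

# Assume 20% of exposed are recorded as unexposed and vice versa,
# identically in cases and controls (non-differential).
p = 0.20
a_obs, c_obs = a * (1 - p) + c * p, c * (1 - p) + a * p
b_obs, d_obs = b * (1 - p) + d * p, d * (1 - p) + b * p
print("Observed OR:", round(odds_ratio(a_obs, b_obs, c_obs, d_obs), 2))  # ~1.86, closer to the null
```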

4
Q

Differential misclassification (different in different study groups)

A

Effect estimates may change in any direction, depending on the particular error
If too much of the data is misclassified, you may have no idea whether you can trust the estimate at all
Ex. mothers of low-birthweight infants may be biased to assume the outcome is their fault and therefore over-report the amount of pesticides they were exposed to (a sketch of this recall-bias scenario follows)
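A companion sketch for the recall-bias example, again with made-up numbers: when only cases over-report exposure, an association can appear where none exists, i.e., the estimate can move away from the null.

```python
# Hypothetical illustration: differential misclassification (recall bias).
# Exposure and outcome are truly unassociated (true OR = 1), but 30% of the
# truly unexposed CASES report themselves as exposed; controls recall accurately.

a, b, c, d = 50, 50, 100, 100          # exposed cases, exposed controls, unexposed cases, unexposed controls
true_or = (a * d) / (b * c)            # = 1.0
a_obs = a + 0.30 * c                   # over-reporting moves unexposed cases into the exposed cell
c_obs = c - 0.30 * c
obs_or = (a_obs * d) / (b * c_obs)     # = (80*100)/(50*70) ≈ 2.29
print("True OR:", true_or, "| Observed OR:", round(obs_or, 2))
```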

5
Q

Causal Inference

A

The process of determining whether a cause-and-effect relationship exists between two variables. It aims to answer questions like, “Does X cause Y?” For example, in medicine, researchers might ask, “Does a new drug cause better health outcomes?”
Building Blocks:
Measures of Disease Frequency, Various Study Designs
Results of Research:
Measures of Association

6
Q

Once you have calculated a measure of association, you need to determine if the observed association is

A

valid and if it is causal

7
Q

Research Evidence: Strong evidence is

A

Strong evidence is:

  1. Of the lowest possible random sampling error (a statistically significant exposure/outcome association)
  2. Based on a good design:
    Free of selection and information biases
    Under minimal influence of confounding (next session)
8
Q

Internal validity

A

refers to the extent to which a study or experiment accurately establishes a causal relationship between the treatment (or independent variable) and the observed outcome (or dependent variable), without being affected by other confounding variables or biases. In simpler terms, it addresses whether the effects observed in a study can be confidently attributed to the intervention or treatment itself, rather than to external factors or flaws in the research design.

9
Q

Generalizability

A

also known as external validity, refers to the extent to which the findings of a study can be applied or generalized beyond the specific conditions of the research. In other words, it addresses whether the results of a study hold true across different populations, settings, time periods, or variations in the treatment. Requires internal validity.

10
Q

If a study lacks internal validity, external validity

A

is irrelevant

11
Q

We do not compromise internal validity in an effort to

A

achieve external validity (generalizability)

12
Q

Internal validity is when

A

the effect estimated from the analytic sample is equal to the true causal effect in the study sample

13
Q

External validity is when

A

the true causal effect in the study sample is equal to the true causal effect in the target population.
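One compact way to line up the last two cards, using shorthand that is my own rather than the lecture's: write θ for a true causal effect and θ̂ for an estimate.

```latex
% Shorthand (assumed, not from the lecture):
% \hat{\theta}_{\text{analytic}}: effect estimated in the analytic sample
% \theta_{\text{study}}: true causal effect in the study sample
% \theta_{\text{target}}: true causal effect in the target population
\text{Internal validity: } \hat{\theta}_{\text{analytic}} = \theta_{\text{study}}
\qquad
\text{External validity: } \theta_{\text{study}} = \theta_{\text{target}}
```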

14
Q

Four Hallmarks of Health Studies

A
  1. A research question/plausible theory
  2. A well-thought-out design to address the research question
  3. Measurement of exposure and outcome
  4. Analysis to compare groups (measured association)
15
Q

Validity is…

A

Having fewer errors

Error = Measured value - True value

16
Q

Sources of error (CBC):

A

Chance (random sampling error)
Bias: systematic error in the selection of participants and/or measurement
Confounding (next week)

17
Q

Threats to Validity

A

Chance (random sampling error)
Bias: systematic error in the selection of participants and/or measurement
Confounding (next week)

18
Q

Random Sampling Error is not to be confused with

A

1) Random error in measurement
Information bias (previous session)
2) Randomization (a process in experimental studies)

19
Q

Type I error:

A

jumping the gun
Concluding that there is a treatment effect, or an exposure/outcome association (in the population from which you sampled), when there is NOT

20
Q

Type II error:

A

missing the boat
Failing to detect a REAL treatment effect or an exposure/outcome association (in the population from which you sampled)
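A small simulation can make the two error types tangible; the 5% significance level, the sample size of 30 per group, and the 0.5 SD effect are hypothetical choices, not from the lecture. Under a true null, roughly alpha of the tests "jump the gun"; when a real effect exists, the non-significant results are "missing the boat".

```python
# Hypothetical simulation of Type I and Type II error rates with a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_sims, n = 0.05, 2000, 30

type1 = type2 = 0
for _ in range(n_sims):
    # Scenario 1: no real difference -> any "significant" result is a Type I error
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    type1 += stats.ttest_ind(a, b).pvalue < alpha

    # Scenario 2: a real difference of 0.5 SD -> a non-significant result is a Type II error
    a, b = rng.normal(0, 1, n), rng.normal(0.5, 1, n)
    type2 += stats.ttest_ind(a, b).pvalue >= alpha

print("Type I rate  ≈", type1 / n_sims)   # close to alpha (about 0.05)
print("Type II rate ≈", type2 / n_sims)   # chance of missing the real effect at this sample size
```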

21
Q

Bias

A

Bias refers to a systematic error in the design or conduct of a study
When bias occurs in a study, the observed association between the exposure and the outcome will be different from the true association

Most biases relate to the study design and procedures and can be classified into two categories:

Selection bias
Information bias (due to measurement error, last week)

22
Q

Information Bias vs Selection Bias:

A

Information bias occurs when there is a systematic error in the way data on the variables (e.g., exposure or outcome) is measured, collected, or classified in a study. This can lead to incorrect or misleading information being used in the analysis, potentially affecting the results.
Selection Bias: Selection bias occurs when the participants included in a study (or excluded from it) are not representative of the target population. This can happen during the selection or retention of participants and leads to systematic differences between those who are studied and those who are not.

23
Q

Why does Selection Bias Happen?

A

It is NOT an error associated with random sampling (recall week 2); it arises from systematic error in how participants are selected into, or retained in, the study

24
Q

Types of Selection Bias

A

Inappropriate Control Selection (Control-Selection Bias) > case-control

Differential Participation > case-control, cohort

Differential Loss to Follow-Up > cohort, experimental

25
Q

Examples of Selection Bias

A

Volunteer bias
Non-response bias
Membership bias
Loss to follow-up bias

26
Q

Volunteer bias

A

Volunteers are more health-conscious or from a different socio-economic group
Differential exposure
Ex. studies of the effect of interventions for enhancing physical activity in older adults

27
Q

Non-response bias

A

Those suffering from the disease, or holding a particular belief, may be more or less likely to respond
Differential outcome

28
Q

Membership bias

A

Healthy worker effect
Ex. service in Vietnam appeared to reduce mortality rates (Crane et al., 1997)

29
Q

Loss to follow-up bias

A

In clinical trials or longitudinal studies, the sickest participants usually leave the study early

30
Q

Examples of Selection Bias

A

Berkson’s bias

31
Q

Berkson’s bias

A

When cases and controls for a study are recruited from hospitals and therefore are more likely than the general population to have comorbid conditions
For example, assume that two diseases, sickle cell anemia (exposure) and asbestosis (outcome), are not associated in the source population. However, individuals with both diseases are more likely to be hospitalized.
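A simulation in the spirit of that example, with prevalences and admission probabilities made up for illustration: two conditions that are independent in the source population become associated once we look only at hospitalized people.

```python
# Hypothetical simulation of Berkson's bias: the two conditions are independent
# in the population, but either one raises the chance of hospital admission.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
exposure = rng.random(n) < 0.05                     # e.g. sickle cell anemia, 5% prevalence
outcome = rng.random(n) < 0.05                      # e.g. asbestosis, 5% prevalence, independent
p_admit = 0.05 + 0.30 * exposure + 0.30 * outcome   # each condition adds to admission probability
hospitalized = rng.random(n) < p_admit

def odds_ratio(e, o):
    a, b = np.sum(e & o), np.sum(e & ~o)
    c, d = np.sum(~e & o), np.sum(~e & ~o)
    return (a * d) / (b * c)

print("OR in the source population:", round(odds_ratio(exposure, outcome), 2))   # about 1.0
print("OR among the hospitalized:  ",
      round(odds_ratio(exposure[hospitalized], outcome[hospitalized]), 2))       # clearly != 1
```

In this particular setup the spurious association among hospitalized patients comes out below 1; the direction of the distortion depends on how the admission probabilities combine.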

32
Q

A 2-year-long randomized controlled trial of medication use and weight loss in adults with obesity. Three treatment (exposure) groups: Metformin, Orlistat, or Placebo

A

Participants enrolled in this study with the intention and hope of losing weight

During the 2 year intervention, 10% of the Metformin group dropped out, 25% of the Orlistat group dropped out, and 35% of the Placebo group dropped out

The loss to follow-up also differed by weight loss (outcome) and side effects of the drugs (related to exposure)
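A sketch of why that pattern is a problem; the effect sizes and dropout rules below are invented for illustration, not from the lecture. When dropout depends on both the treatment arm and the outcome, comparing only the completers gives biased arm means even though the trial was randomized.

```python
# Hypothetical simulation: differential loss to follow-up in a randomized trial.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
arms = np.array(["metformin", "orlistat", "placebo"])
arm = rng.choice(arms, size=n)

true_mean = {"metformin": 5.0, "orlistat": 4.0, "placebo": 1.0}   # true mean weight loss (kg)
weight_loss = rng.normal(np.vectorize(true_mean.get)(arm), 3.0)

# Dropout differs by arm AND by outcome: people who are not losing weight quit more often.
base_drop = np.vectorize({"metformin": 0.10, "orlistat": 0.25, "placebo": 0.35}.get)(arm)
dropout = rng.random(n) < base_drop + 0.30 * (weight_loss < 0)

for a in arms:
    completers = (arm == a) & ~dropout
    print(f"{a:9s} true mean: {true_mean[a]:.1f} kg | "
          f"completer mean: {weight_loss[completers].mean():.2f} kg")
```

Completer means overstate weight loss in every arm, and more so in the placebo arm, so the apparent drug-versus-placebo difference is distorted.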

33
Q

Oral Contraceptive Pills (OCP) and Deep Vein Thrombosis (DVT)

A