Summarising the Evidence Flashcards
What is a cause of disease?
The causation of many diseases comprises a combination of genetic susceptibility and environmental exposure which leads, over a period of time, to the emergence of clinical disease.
A cause of disease is an exposure (which may be genetic or environmental) that produces disease. Epidemiological studies identify such causes by comparing diseased individuals with healthy individuals. One obvious prerequisite for this approach to succeed is that there is heterogeneity in exposure in the population studied; if everyone is exposed then the effects of that exposure cannot be assessed. If everyone in the UK smoked 20 cigarettes a day, then all cases and all controls would be smokers of 20 cigarettes per day and there would be no obvious association between cigarette smoking and lung cancer.
If the association between causes and disease were really a simple one-to-one, all-or-nothing relationship, then what would epidemiological studies be expected to find?
If the association between causes and disease were really a simple one-to-one, all-or-nothing relationship, then epidemiological studies would be expected, for example, to demonstrate that smoking causes lung cancer by showing that all exposed individuals have the disease and all unexposed individuals do not. In reality, of course, this is never the case, for reasons that include the following:
- Few if any diseases arise from a single cause, and some exposures cause more than one disease. Aside from single gene defects, the relationship between exposure and disease is usually far from absolute.
- Epidemiological studies will inevitably misclassify both disease and exposure status, resulting in a weakening of the apparent relation between them.
- It takes time (latency) for exposures to lead to disease, so some exposed individuals who will in time develop disease will appear to be healthy when a study is carried out.
What do epidemiological studies usually do?
What epidemiological studies usually do is identify risk factors, or exposures that increase the risk of disease. There may be many risk factors for a given disease, e.g. for asthma the list of identified risk factors includes allergy, birth order, maternal allergy, dust mite exposure, vaccinations, antibiotic use, maternal age, smoking, obesity, dietary antioxidant intake etc. Not all individuals are necessarily exposed to any or all of these factors, but in general, those who are exposed carry a higher risk of disease than those who are not.
For what reasons might risk factors for disease not necessarily cause the disease?
Risk factors for disease are not necessarily causes of disease because of the following:
- Chance: An apparent association between an exposure and a disease can arise by chance (a false positive association). Conventional statistical methods are designed to limit the likelihood of this to the 5% level, but false positive associations will still arise, particularly in studies in which multiple comparisons or hypothesis tests have been carried out (see the worked example after this list). This is easy to detect, and a judgement about its potential relevance can be made, when authors declare all the analyses they have carried out in their published report. It is much harder when multiple testing has gone on covertly in the process of producing a paper but is not acknowledged in the published report.
- Indirect causation: The risk factor identified may act as a correlate of a true cause, there being in fact no independent relation between the risk factor and the disease. For example, carrying a box of matches is probably a risk factor for lung cancer, but only because most people who carry matches are cigarette smokers. The effects of poverty or deprivation on disease are also an example of indirect causation – having no money is not a health hazard in itself, but the consequences of having no money, such as homelessness or starvation, clearly are.
- Reverse causation: An exposure may be associated with disease because the disease causes the exposure, rather than vice versa. For example, the association between asthma mortality and use of the asthma drug fenoterol reported in New Zealand could have arisen because asthmatic patients at high risk of death were more likely to be prescribed fenoterol, rather than because of any harmful effect of the drug in patients with asthma.
- Confounding: The association between risk factor and disease may arise or be otherwise distorted because of a relation between the risk factor and a confounding variable, which is itself related to disease occurrence. We have looked at confounding in section 5 – you will recall that confounding can create, accentuate, reduce or reverse a relationship between an exposure and disease.
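As a rough worked example of the multiple-comparisons point above: if 20 independent associations are each tested at the 5% significance level when no true associations exist, the probability of obtaining at least one false positive result is 1 - 0.95^20 ≈ 0.64, i.e. about a 64% chance of at least one spurious 'significant' finding.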
It is often difficult to distinguish these various influences in observational studies, and it is often the case that, even after the most critical evaluation of the available data, it is not possible to tell whether an identified risk factor is actually a cause of disease. If the results of independent studies all point to a similar answer, that consistency supports the idea that a causal association exists (see the Bradford Hill criteria later). However, what often happens in practice is that different studies produce broadly similar but individually different results, not all of which are statistically significant, particularly for relationships that are relatively weak. In these circumstances it is appropriate to combine the results of the different studies, pooling their statistical power (meta-analysis) to draw evidence from multiple sources into a single combined summary statistic.
However, even the most consistent evidence of association between an exposure and a disease does not exclude indirect causation, reverse causation, confounding or other bias. To resolve this, it is necessary to carry out intervention studies to determine whether changing exposure changes the risk of disease. Since it is unethical to deliberately expose people to something that is thought to be harmful (for example, a randomised trial to see if smoking causes lung cancer), this usually involves reducing or abating the exposure, to see if this reduces disease risk. Until such evidence is available, judging whether a particular exposure causes a disease requires inference about likely causation from the available evidence.
In 1965 Bradford Hill outlined nine basic criteria for determining causation that are still widely used - list these criteria.
- Strength: Strong associations are more likely to represent true cause than weak ones. It is also probably true that weak associations are more likely to have arisen from error or bias than strong ones.
- Consistency: A risk factor for a disease that is consistently identifiable as such in different populations is more likely to be a true cause than one that is not. Although the presence of other necessary component causes is a prerequisite of consistency, it is nevertheless true that, in terms of inference, one repetition of a finding in a different study is probably worth a thousand subgroup reassessments from a single study.
- Specificity: This criterion requires that a particular exposure should lead to a single disease, not to multiple diseases. It would be interesting to know what Bradford Hill would have made of the evidence now available on the health effects of smoking.
- Temporality: The cause must precede the onset of disease.
- Biological gradient: An exposure-response relationship makes it more likely that a particular exposure causes disease. In logical terms this seems likely, but there are many reasons why an exposure-response relationship may be difficult to demonstrate.
- Plausibility: The cause-effect relation should be plausible on biological grounds. This was not the case for many years for smoking and lung cancer. Plausibility helps, therefore, but is not necessary and may even be misleading.
- Coherence: The observed association should fit in with what is already known about the disease. This is a very similar criterion to plausibility.
- Experimental evidence: Demonstration that intervening in the exposure influences disease outcome is clearly strong evidence of causation. This requires an intervention study, ideally a randomised controlled trial, but it is unethical to conduct one if the exposure is a suspected cause of disease; hence the intervention usually has to reduce the exposure, to see whether this reduces disease.
- Analogy: If similar cause effect relationships exist, a new example is perhaps more credible. Finding analogies is perhaps more of a challenge than a help to those involved in trying to identify causes.
These criteria are now perhaps subject to rather more qualification, by both experience and principle, than when they were first proposed, but they are still widely quoted and used (particularly in medico-legal work) and provide some support and insight into the likely validity of associations.
What do we mean by a ‘cause’ of a disease?
A cause of disease is an exposure (which may be genetic or environmental) that produces disease.
How do epidemiological studies try to identify causes of disease?
Epidemiological studies try to identify causal factors by comparing diseased people with healthy people.
In order for an epidemiological study to be able to identify the cause of disease what key factor is required in the population?
In order for an epidemiological study to have any chance of determining the cause of disease, we need heterogeneity of exposure in the population studied.
True or False - few diseases arise from a single cause.
True.
True or False - some exposures ‘cause’ multiple diseases.
True.
What is meant by the term ‘risk factor’?
Risk factor = exposure that increases the risk of disease.
Individuals exposed have a higher risk of disease.
For what reasons may a risk factor not necessarily be a cause of disease?
- Chance
- Indirect Causation
- Reverse Causation
- Confounding
Describe how chance may mean that a risk factor is not necessarily the cause of disease.
With a significance threshold of P = 0.05, about 5% of tests will give false positive results when there is no true association. This becomes a particular problem with multiple testing or data dredging. It is less of a concern when the authors are open about the comparisons they have carried out, but in some papers this is not obvious.
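A minimal simulation sketch of this point (the numbers of tests and repetitions are illustrative assumptions, not taken from the notes): even with no true associations, carrying out many comparisons at the 5% level makes at least one 'significant' false positive very likely.

```python
# Illustrative sketch: false positives from multiple testing under the null hypothesis.
import random

random.seed(1)

n_repeats = 10_000   # hypothetical number of simulated analyses
n_tests = 20         # hypothetical number of comparisons per analysis
alpha = 0.05

with_false_positive = 0
for _ in range(n_repeats):
    # Under the null hypothesis, each p-value is uniformly distributed on (0, 1).
    p_values = [random.random() for _ in range(n_tests)]
    if any(p < alpha for p in p_values):
        with_false_positive += 1

# Expect roughly 1 - 0.95**20, i.e. about 0.64.
print(f"Proportion of analyses with at least one false positive: {with_false_positive / n_repeats:.2f}")
```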
Describe how indirect causation may mean that a risk factor is not necessarily the cause of disease.
For example:
Risk factor A may be related to risk factor B, and it is actually risk factor B that is causing disease.
e.g. poverty may appear to be a cause of ill health, but having no money is not a health hazard in itself; the actual hazards are the consequences of poverty, such as starvation or homelessness.
Describe how confounding factors may mean that a risk factor is not necessarily the cause of disease.
Confounding factors are independently associated with both the exposure you are looking at and the disease.
e.g. we may see an association between coffee drinking and lung cancer, but this is actually because coffee drinking is independently related to smoking.
You can test for confounding by using stratified analysis - e.g. looking at our association between coffee drinking and lung cancer in our smokers and non-smokers separately and seeing if that relationship was maintained.
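A minimal sketch of that stratified check, using made-up counts (none of these numbers come from a real study): the crude odds ratio for coffee drinking and lung cancer is raised, but within each smoking stratum the odds ratio is about 1, which points to smoking as a confounder.

```python
# Illustrative sketch (made-up counts): comparing a crude odds ratio with
# stratum-specific odds ratios to check for confounding by smoking.

def odds_ratio(a, b, c, d):
    """2x2 table: a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

# Hypothetical counts for coffee drinking (exposure) and lung cancer (disease),
# stratified by smoking status (the suspected confounder): (a, b, c, d).
strata = {
    "smokers": (60, 240, 20, 80),
    "non-smokers": (2, 98, 6, 294),
}

# Crude analysis: collapse the strata and ignore smoking altogether.
a, b, c, d = [sum(t[i] for t in strata.values()) for i in range(4)]
print(f"Crude OR: {odds_ratio(a, b, c, d):.2f}")   # about 2.6

# Stratified analysis: if the stratum-specific ORs sit near 1 while the crude
# OR is raised, the crude association is explained by the confounder.
for name, (a, b, c, d) in strata.items():
    print(f"OR among {name}: {odds_ratio(a, b, c, d):.2f}")   # about 1.0 in each stratum
```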
Describe how reverse causation may mean that a risk factor is not necessarily the cause of disease.
The disease may cause risk factor A rather than the risk factor causing the disease.
How do we determine cause from epidemiological studies when there are so many factors to take into consideration?
- Bradford Hill developed nine criteria in 1965
- Beware though - even the most consistent evidence doesn’t exclude indirect causation, reverse causation or confounding
What is usually accepted as a strong association in a study?
An odds ratio or a risk ratio of 2 or more. However, this is not a magic number and should not be trusted blindly. It is also important not to discount small effects: well-designed and well-conducted studies can detect small effect sizes.
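A small sketch of how these two measures are calculated from a 2x2 table (the counts below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Illustrative sketch (hypothetical cohort counts): risk ratio and odds ratio from a 2x2 table.
exposed_cases, exposed_total = 30, 1000
unexposed_cases, unexposed_total = 10, 1000

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
risk_ratio = risk_exposed / risk_unexposed

odds_exposed = exposed_cases / (exposed_total - exposed_cases)
odds_unexposed = unexposed_cases / (unexposed_total - unexposed_cases)
odds_ratio = odds_exposed / odds_unexposed

print(f"Risk ratio: {risk_ratio:.2f}")   # 3.00 - 'strong' by the rule-of-thumb threshold of 2
print(f"Odds ratio: {odds_ratio:.2f}")   # 3.06 - close to the risk ratio because the disease is rare
```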
Is specificity a perfect indicator of causality?
No. For example, smoking causes more than just lung cancer, and aspirin is effective for a wide range of diseases. Do not discount non-specific effects, but be cautious.
Summarise causation.
It is difficult to assess causation in epidemiological studies - we can identify risk factors and, to a certain extent, use the Bradford Hill criteria to assess whether causation is likely. Really, what we need is an intervention study to 'prove' causation.
What is the purpose of systematic reviews?
One of the difficulties of determining cause and effect, and indeed other associations in epidemiology and clinical medicine, is that of synthesising evidence from diverse sources and from studies of different design and sample size.
For example, during the 1980s reports began to appear in the literature suggesting that passive exposure to cigarette smoke may increase the risk of lung cancer in non-smokers. Most of the early studies were carried out in small samples of people, and although they found odds ratios that were increased, few were statistically significant.
Some of the early studies carried out in European populations seemed to indicate that there was some real increase in the risk of lung cancer amongst non-smokers married to a smoker, but also that the estimated magnitude of the effect varied greatly between studies, and that almost none of the individual results were statistically significant. How can information from such diverse sources be combined?
The aim of systematic reviews and meta-analysis is to give a complete and balanced overview of the available evidence concerning one particular question. The question may relate to causation, but could also relate to the efficacy of a treatment in different clinical trials, or to many other topics. In fact, the statistical methods used in meta-analysis are relatively simple; the hard part of the process is making sure that all of the relevant data have been collected. This is the function of a systematic review.
What is a systematic review?
The definition of a systematic review given by Cook et al is “the application of scientific strategies that limit bias to the systematic assembly, critical appraisal, and synthesis of all relevant studies on a specific topic.” This paper also presents guidelines for performing systematic reviews and meta-analysis.
The underlying principle is that a rigorous methodology is used to ensure that the question being asked and the studies to be reviewed are defined carefully, and that a thorough review of all the literature is carried out. The Cochrane Collaboration is a body that aims to produce systematic reviews of a number of clinical questions and to update its reviews regularly. Already the Cochrane Library contains hundreds of systematic reviews.
Describe the steps that must be carried out in a systematic review.
How each step of the systematic review is to be carried out needs to be documented in a protocol. These steps are:
- Framing the question - for example, does exposure to cigarette smoke increase the risk of lung cancer in non-smokers?
- Identifying the relevant literature - unlike a narrative literature review, where we look at just a selection of papers, for a systematic review we have to look at all of the available evidence that has tried to answer the specific question. This requires a very detailed search strategy and is very time consuming. It usually generates thousands of pieces of literature, which then have to be sorted to see whether they are relevant to our question of whether cigarette smoke exposure increases the risk of lung cancer in non-smokers.
- Assessing the methodological quality of the included studies - This is an important aspect of any systematic review. Some studies are poorly conducted and/or reported, and therefore it is questionable as to whether these studies should inform practice. The method of assessment you can use to judge the quality of an individual study depends on the type of study you are assessing. What you are trying to assess is whether the results are credible or valid. You can do this by identifying the strengths and weaknesses of the study, and determining whether the study is at a high risk of bias.
- Summarising the evidence and interpreting findings - Once we have identified and obtained all of the relevant literature on our research question, we can then summarise all of the information. This can be done either qualitatively using a descriptive format, for example, ‘when looking at the included studies, it does not appear that exposure to cigarette smoke consistently increases the risk of lung cancer in non-smokers’; or quantitatively, using a statistical method called meta-analysis.
What is meta-analysis?
A meta-analysis can be used to calculate an 'average' measure of an effect by assembling the quantitative results from several studies together. Each study in the meta-analysis has a measure of effect, such as an odds ratio, which tells the researcher whether there was, on average, an increase or decrease in the risk of getting lung cancer after being exposed to cigarette smoke. The main aim of a meta-analysis is to improve the precision of these measures of effect by statistically combining the data from multiple studies and creating a new, single measure of effect, called the pooled result or summary statistic. Meta-analyses cannot be carried out using SPSS; STATA, however, can carry out fixed and random effects analyses, test for heterogeneity, and produce graphs of the individual and pooled estimates.
A meta-analysis is a formal way of synthesising the results of multiple studies and uses statistical techniques to pool those results. One definition of a meta-analysis is “a systematic review that employs statistical methods to combine and summarise the results of several studies.”
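As a rough sketch of the pooling step (the odds ratios and confidence intervals below are invented for illustration, and this shows only a simple fixed-effect, inverse-variance approach rather than everything a package such as STATA does):

```python
# Illustrative sketch (invented study results): fixed-effect, inverse-variance
# pooling of odds ratios on the log scale.
import math

# Each hypothetical study contributes an odds ratio and its 95% confidence interval.
studies = [
    {"or": 1.5, "ci": (0.9, 2.5)},
    {"or": 1.2, "ci": (0.7, 2.1)},
    {"or": 2.0, "ci": (1.0, 4.0)},
]

weights = []
weighted_log_ors = []
for s in studies:
    log_or = math.log(s["or"])
    # Standard error recovered from the width of the CI on the log scale.
    se = (math.log(s["ci"][1]) - math.log(s["ci"][0])) / (2 * 1.96)
    w = 1 / se ** 2   # inverse-variance weight: more precise studies count for more
    weights.append(w)
    weighted_log_ors.append(w * log_or)

pooled_log_or = sum(weighted_log_ors) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
lower = math.exp(pooled_log_or - 1.96 * pooled_se)
upper = math.exp(pooled_log_or + 1.96 * pooled_se)

print(f"Pooled OR: {math.exp(pooled_log_or):.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

The pooled estimate sits between the individual study estimates but has a narrower confidence interval, which is exactly the gain in precision that motivates combining studies.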