Fallacies Flashcards
Argument from fallacy
Assumes that if an argument for some conclusion is fallacious then the conclusion itself is false.
Argument from fallacy is the formal fallacy of analyzing an argument and inferring that, since it contains a fallacy, its conclusion must be false (K. S. Pope (2003), "Logical Fallacies in Psychology: 21 Types", Fallacies & Pitfalls in Psychology). It is also called argument to logic (argumentum ad logicam), fallacy fallacy, or fallacist's fallacy.
Fallacious arguments can arrive at true conclusions, so inferring that a conclusion must be false because an argument for it is fallacious is itself an invalid inference.
Form
It has the general argument form:
If P, then Q.
P is a fallacious argument.
Therefore, Q is false.

Or, schematically: c, since A; A is fallacious; therefore, ¬c.

Thus, it is a special case of denying the antecedent where the antecedent, rather than being a proposition that is false, is an entire argument that is fallacious. A fallacious argument, just as with a false antecedent, can still have a consequent that happens to be true. The fallacy is in concluding that the consequent of a fallacious argument has to be false.
That the argument is fallacious only means that the argument cannot succeed in proving its consequent (John Woods, The Death of Argument: Fallacies in Agent Based Reasoning, Springer 2004, pp. XXIII–XXV). But showing how one argument in a complex thesis is fallaciously reasoned does not necessarily invalidate the proof; the complete proof could still logically imply its conclusion if that conclusion is not dependent on the fallacy:
Examples:
Tom: All cats are animals. Ginger is an animal. This means Ginger is a cat.
Bill: Ah, you just committed the affirming the consequent logical fallacy. Sorry, you are wrong, which means that Ginger is not a cat.
Tom: OK – I’ll prove I’m English – I speak English so that proves it.
Bill: But Americans and Canadians, among others, speak English too. You have committed the package-deal fallacy, assuming that speaking English and being English always go together. That means you are not English.
Both of Bill’s rebuttals are arguments from fallacy, because Ginger may or may not be a cat, and Tom may or may not be English. Of course, the mere fact that one can invoke the argument from fallacy against a position does not automatically “prove” one’s own position either, as this would itself be yet another argument from fallacy. An example of this false reasoning follows:
Joe: Bill’s assumption that Ginger is not a cat uses the argument from fallacy. Therefore, Ginger absolutely must be a cat.
An argument using fallacious reasoning is capable of being consequentially correct.
Base rate fallacy
Making a probability judgement based on conditional probabilities, without taking into account the effect of prior probabilities.
The base rate fallacy, also called base rate neglect or base rate bias, is an error that occurs when the conditional probability of some hypothesis H given some evidence E is assessed without taking into account the prior probability ("base rate") of H and the total probability of the evidence E. The conditional probability can be expressed as P(H|E), the probability of H given E. The base rate error happens when the values of sensitivity and specificity, which depend only on the test itself, are used in place of positive predictive value and negative predictive value, which depend on both the test and the baseline prevalence of the event.
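Formally, the correct assessment follows Bayes' theorem, which weights the test characteristics by the base rate:

P(H|E) = P(E|H) × P(H) / P(E), where P(E) = P(E|H) × P(H) + P(E|¬H) × P(¬H)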
Example
In a city of 1 million inhabitants there are 100 terrorists and 999,900 non-terrorists. To simplify the example, it is assumed that the only people in the city are inhabitants. Thus, the base rate probability of a randomly selected inhabitant of the city being a terrorist is 0.0001, and the base rate probability of that same inhabitant being a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automatic facial recognition software. The software has two failure rates of 1%:
The false negative rate: If the camera scans a terrorist, a bell will ring 99% of the time, and it will fail to ring 1% of the time.
The false positive rate: If the camera scans a non-terrorist, a bell will not ring 99% of the time, but it will ring 1% of the time.
Suppose now that an inhabitant triggers the alarm. What is the chance that the person is a terrorist? In other words, what is P(T|B), the probability that a terrorist has been detected given the ringing of the bell? Someone making the 'base rate fallacy' would infer that there is a 99% chance that the detected person is a terrorist. Although the inference seems to make sense, it is actually bad reasoning, and a calculation below will show that the chances they are a terrorist are actually near 1%, not near 99%.
The fallacy arises from confusing the natures of two different failure rates. The ‘number of non-bells per 100 terrorists’ and the ‘number of non-terrorists per 100 bells’ are unrelated quantities. One does not necessarily equal the other, and they don’t even have to be almost equal. To show this, consider what happens if an identical alarm system were set up in a second city with no terrorists at all. As in the first city, the alarm sounds for 1 out of every 100 non-terrorist inhabitants detected, but unlike in the first city, the alarm never sounds for a terrorist. Therefore 100% of all occasions of the alarm sounding are for non-terrorists, but a false negative rate cannot even be calculated. The ‘number of non-terrorists per 100 bells’ in that city is 100, yet P(T|B) = 0%. There is zero chance that a terrorist has been detected given the ringing of the bell.
Imagine that the city's entire population of one million people pass in front of the camera. About 99 of the 100 terrorists will trigger the alarm, and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, among which about 99 will be terrorists. So the probability that a person triggering the alarm is actually a terrorist is only about 99 in 10,098, which is less than 1%, and far below our initial guess of 99%.
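The arithmetic can be checked directly (a minimal Python sketch using only the numbers given above):

# Base rates and error rates from the example above
population = 1_000_000
terrorists = 100
non_terrorists = population - terrorists

true_positive_rate = 0.99   # the bell rings for 99% of terrorists
false_positive_rate = 0.01  # the bell rings for 1% of non-terrorists

# Expected counts when the whole population passes the camera
alarms_from_terrorists = terrorists * true_positive_rate           # ~99
alarms_from_non_terrorists = non_terrorists * false_positive_rate  # ~9,999

# P(terrorist | bell), by Bayes' theorem, as a ratio of expected counts
p_terrorist_given_bell = alarms_from_terrorists / (
    alarms_from_terrorists + alarms_from_non_terrorists)
print(p_terrorist_given_bell)  # ~0.0098, i.e. just under 1%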
The base rate fallacy is so misleading in this example because there are many more non-terrorists than terrorists. If, instead, the city had about as many terrorists as non-terrorists, and the false-positive rate and the false-negative rate were nearly equal, then the probability of misidentification would be about the same as the false-positive rate of the device. These special conditions hold sometimes: as for instance, about half the women undergoing a pregnancy test are actually pregnant, and some pregnancy tests give about the same rates of false positives and of false negatives. In this case, the rate of false positives per positive test will be nearly equal to the rate of false positives per nonpregnant woman. This is why it is very easy to fall into this fallacy: by coincidence it gives the correct answer in many common situations.
In many real-world situations, though, particularly problems like detecting criminals in a largely law-abiding population, the small proportion of targets in the large population makes the base rate fallacy very applicable. Even a very low false-positive rate will result in so many false alarms as to make such a system useless in practice.
Findings in psychology
In experiments, people have been found to prefer individuating information over general information when the former is available.
In some experiments, students were asked to estimate the grade point averages (GPAs) of hypothetical students. When given relevant statistics about GPA distribution, students tended to ignore them if given descriptive information about the particular student, even if the new descriptive information was obviously of little or no relevance to school performance. This finding has been used to argue that interviews are an unnecessary part of the college admissions process because interviewers are unable to pick successful candidates better than basic statistics.
Psychologists Daniel Kahneman and Amos Tversky attempted to explain this finding in terms of a simple rule or “heuristic” called representativeness. They argued that many judgements relating to likelihood, or to cause and effect, are based on how representative one thing is of another, or of a category. Richard Nisbett has argued that some attributional biases like the fundamental attribution error are instances of the base rate fallacy: people underutilize “consensus information” (the “base rate”) about how others behaved in similar situations and instead prefer simpler dispositional attributions.
Kahneman considers base rate neglect to be a specific form of extension neglect.
Conjunction fallacy
Assumption that an outcome simultaneously satisfying multiple conditions is more probable than an outcome satisfying a single one of them.
The conjunction fallacy is a formal fallacy that occurs when it is assumed that specific conditions are more probable than a single general one.
The most often-cited example of this fallacy originated with Amos Tversky and Daniel Kahneman (Tversky & Kahneman, 1982, 1983):
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable?
Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.
90% of those asked chose option 2. However, the probability of two events occurring together (in "conjunction") is always less than or equal to the probability of either one occurring alone: formally, for two events A and B, Pr(A ∧ B) ≤ Pr(A) and Pr(A ∧ B) ≤ Pr(B).
For example, even choosing a very low probability of Linda being a bank teller, say Pr(Linda is a bank teller) = 0.05, and a high probability that she would be a feminist, say Pr(Linda is a feminist) = 0.95, then, assuming independence, Pr(Linda is a bank teller and Linda is a feminist) = 0.05 × 0.95 = 0.0475, which is lower than Pr(Linda is a bank teller).
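The inequality is easy to verify numerically (a small Python sketch; the two probabilities are the illustrative values assumed above):

# Illustrative probabilities from the example above
p_teller = 0.05
p_feminist = 0.95

# Under independence, the probability of the conjunction is the product,
# and it can never exceed either conjunct on its own.
p_both = p_teller * p_feminist
assert p_both <= p_teller and p_both <= p_feminist
print(p_both)  # 0.0475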
Tversky and Kahneman argue that most people get this problem wrong because they use a heuristic called representativeness to make this kind of judgment: Option 2 seems more "representative" of Linda based on the description of her, even though it is clearly mathematically less likely (Tversky & Kahneman (1983)).
In other demonstrations, they argued that a specific scenario seemed more likely because of representativeness, but each added detail would actually make the scenario less and less likely. In this way it could be similar to the misleading vividness or slippery slope fallacies. More recently, Kahneman has argued that the conjunction fallacy is a type of extension neglect (Kahneman (2003)).
Joint versus separate evaluation
In some experimental demonstrations, the conjoint option is evaluated separately from its basic option. In other words, one group of participants is asked to rank-order the likelihood that Linda is a bank teller, a high school teacher, and several other options, and another group is asked to rank-order whether Linda is a bank teller and active in the feminist movement versus the same set of options (without "Linda is a bank teller" as an option). In this type of demonstration, different groups of subjects still rank Linda as a bank teller and active in the feminist movement more highly than Linda as a bank teller.
Separate evaluation experiments preceded the earliest joint evaluation experiments, and Kahneman and Tversky were surprised when the effect was still observed under joint evaluation (Kahneman (2011), chapter 15).
In separate evaluation the term conjunction effect may be preferred.
Criticism of the Linda problem
Critics such as Gerd Gigerenzer and Ralph Hertwig criticized the Linda problem on grounds such as its wording and framing. The question of the Linda problem may violate conversational maxims in that people assume that the question obeys the maxim of relevance. Gigerenzer argues that some of the terminology used has polysemous meanings, alternatives of which he claimed were more "natural". He argues that one meaning of probable, "what happens frequently", corresponds to the mathematical probability people are supposed to be tested on, but other meanings, "what is plausible" and "whether there is evidence", do not (Gigerenzer (1996); Hertwig & Gigerenzer (1999)). The term "and" has even been argued to have relevant polysemous meanings (Mellers, Hertwig & Kahneman (2001)). Many techniques have been developed to control for this possible misinterpretation, but none of them has dissipated the effect (Moro (2009); Tentori & Crupi (2012)).
Many variations in wording of the Linda problem were studied by Tversky and Kahneman. If the first option is changed to obey conversational relevance, i.e., "Linda is a bank teller whether or not she is active in the feminist movement", the effect is decreased, but the majority (57%) of the respondents still commit the conjunction error. If the probability is changed to frequency format (see the debiasing section below), the effect is reduced or eliminated. However, studies exist in which indistinguishable conjunction fallacy rates have been observed with stimuli framed in terms of probabilities versus frequencies (see, for example, Tentori, Bonini & Osherson (2004) or Wedell & Moro (2008)).
The wording criticisms may be less applicable to the conjunction effect in separate evaluation (Gigerenzer (1996)). The "Linda problem" has been studied and criticized more than other types of demonstration of the effect (some described below) (Kahneman (2011), ch. 15; Kahneman & Tversky (1996); Mellers, Hertwig & Kahneman (2001)).
Other demonstrations
Policy experts were asked to rate the probability that the Soviet Union would invade Poland, and the United States would break off diplomatic relations, all in the following year. They rated it on average as having a 4% probability of occurring. Another group of experts was asked to rate the probability simply that the United States would break off relations with the Soviet Union in the following year. They gave it an average probability of only 1%.
In an experiment conducted in 1980, respondents were asked the following:
Suppose Bjorn Borg reaches the Wimbledon finals in 1981. Please rank order the following outcomes from most to least likely.
Borg will win the match
Borg will lose the first set
Borg will lose the first set but win the match
Borg will win the first set but lose the match
On average, participants rated “Borg will lose the first set but win the match” more highly than “Borg will lose the first set”.
In another experiment, participants were asked:
Consider a regular six-sided die with four green faces and two red faces. The die will be rolled 20 times and the sequence of greens (G) and reds (R) will be recorded. You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you choose appears on successive rolls of the die.
RGRRR
GRGRRR
GRRRRR
65% of participants chose the second sequence, though option 1 is contained within it and is shorter than the other options. In a version where the $25 bet was only hypothetical the results did not significantly differ. Tversky and Kahneman argued that sequence 2 appears “representative” of a chance sequence (compare to the clustering illusion).
Debiasing
Drawing attention to set relationships, using frequencies instead of probabilities, and/or thinking diagrammatically sharply reduce the error in some forms of the conjunction fallacy (Tversky & Kahneman (1983); Gigerenzer (1991); Hertwig & Gigerenzer (1999); Mellers, Hertwig & Kahneman (2001)).
In one experiment the question of the Linda problem was reformulated as follows:
There are 100 persons who fit the description above (that is, Linda’s). How many of them are:
Bank tellers? __ of 100
Bank tellers and active in the feminist movement? __ of 100
Whereas previously 85% of participants gave the wrong answer (bank teller and active in the feminist movement), in experiments done with this questioning none of the participants gave a wrong answer (Gigerenzer (1991)).
Masked man fallacy
The substitution of identical designators in a true statement can lead to a false one.
The masked man fallacy is a fallacy of formal logic in which substitution of identical designators in a true statement can lead to a false one.
One form of the fallacy may be summarized as follows:
Premise 1: I know who X is.
Premise 2: I do not know who Y is.
Conclusion: Therefore, X is not Y.
The problem arises because Premise 1 and Premise 2 can be simultaneously true even when X and Y refer to the same person. Consider the argument, “I know who my father is. I do not know who the thief is. Therefore, my father is not the thief.” The premises may be true and the conclusion false if the father is the thief but the speaker does not know this about his father. Thus the argument is a fallacious one.
The name of the fallacy comes from the example, “I do not know who the masked man is”, which can be true even though the masked man is Jones, and I know who Jones is.
If someone says, "I do not know who the masked man is", this leaves open the possibility that he knows the person who is in fact the masked man, without knowing that he is the masked man. The masked man fallacy overlooks this possibility.
Note that the following similar argument is valid:
X is Z
Y is not Z
Therefore, X is not Y
But this is because being something is different from knowing (or believing, etc.) something.
Affirming a disjunct
Concluding that one disjunct of a logical disjunction must be false because the other disjunct is true. A or B; A; therefore not B.
The formal fallacy of affirming a disjunct, also known as the fallacy of the alternative disjunct or a false exclusionary disjunct, occurs when a deductive argument takes the following logical form:
A or B
A
Therefore, it is not the case that B
Explanation
The fallacy lies in concluding that one disjunct must be false because the other disjunct is true; in fact they may both be true because “or” is defined inclusively rather than exclusively. It is a fallacy of equivocation between the operations OR and XOR.
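The difference between the two readings of "or" can be made concrete by enumerating truth values (a minimal Python sketch):

from itertools import product

# Truth table for inclusive OR versus exclusive XOR
for a, b in product([True, False], repeat=2):
    inclusive = a or b
    exclusive = a != b  # XOR: true when exactly one disjunct is true
    print(a, b, inclusive, exclusive)

# In the row where A and B are both true, "A or B" still holds (inclusive),
# so affirming A licenses no conclusion about B.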
Affirming the disjunct should not be confused with the valid argument known as the disjunctive syllogism.
Example
The following argument indicates the invalidity of affirming a disjunct:
Max is a cat or Max is a mammal.
Max is a cat.
Therefore, Max is not a mammal.
This inference is invalid. If Max is a cat then Max is also a mammal. (Remember “or” is defined in an inclusive sense not an exclusive sense.)
The car is red or the car is large.
The car is red.
Therefore, the car is not large.
The above argument also demonstrates the fallacy: the car could be both red and large.
Affirming the consequent
The antecedent in an indicative conditional is claimed to be true because the consequent is true.
If A, then B; B, therefore A.
Affirming the consequent, sometimes called converse error or fallacy of the converse, is a formal fallacy of inferring the converse from the original statement. The corresponding argument has the general form:
If P, then Q.
Q.
Therefore, P.
An argument of this form is invalid, i.e., the conclusion can be false even when statements 1 and 2 are true. Since P was never asserted as the only sufficient condition for Q, other factors could account for Q (while P was false).
To put it differently, if P implies Q, the only valid inference that can be drawn is that ¬Q implies ¬P (where ¬P and ¬Q designate the negations of P and Q). Symbolically:

(P ⇒ Q) ⇔ (¬Q ⇒ ¬P)
The name affirming the consequent derives from the premise Q, which affirms the “then” clause of the conditional premise.
Examples
One way to demonstrate the invalidity of this argument form is with a counterexample with true premises but an obviously false conclusion. For example:
If Bill Gates owns Fort Knox, then he is rich.
Bill Gates is rich.
Therefore, Bill Gates owns Fort Knox.
Owning Fort Knox is not the only way to be rich. Any number of other ways exist to be rich.
However, one can affirm with certainty that if Bill Gates is not rich (¬Q), then Bill Gates does not own Fort Knox (¬P).
Arguments of the same form can sometimes seem superficially convincing, as in the following example:
If I have the flu, then I have a sore throat.
I have a sore throat.
Therefore, I have the flu.
But having the flu is not the only cause of a sore throat, since many illnesses cause a sore throat, such as the common cold or strep throat.
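Invalidity of the form can also be verified mechanically, by searching the truth table for a counterexample row (a minimal Python sketch):

from itertools import product

def implies(p, q):
    return (not p) or q

# Affirming the consequent: premises "P implies Q" and "Q", conclusion "P".
# The form is invalid if some assignment makes both premises true
# while the conclusion is false.
counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]
print(counterexamples)  # [(False, True)]: P false, Q true refutes the form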
Denying the antecedent
The consequent in an indicative conditional is claimed to be false because the antecedent is false.
If A, then B; not A, therefore not B.
Denying the antecedent, sometimes also called inverse error or fallacy of the inverse, is a formal fallacy of inferring the inverse from the original statement. It is committed by reasoning in the form:
If P, then Q.
Not P.
Therefore, not Q.
Arguments of this form are invalid. Informally, this means that arguments of this form do not give good reason to establish their conclusions, even if their premises are true.
The name denying the antecedent derives from the premise “not P”, which denies the “if” clause of the conditional premise.
One way to demonstrate the invalidity of this argument form is with a counterexample with true premises but an obviously false conclusion. For example:
If Queen Elizabeth is an American citizen, then she is a human being.
Queen Elizabeth is not an American citizen.
Therefore, Queen Elizabeth is not a human being.
That argument is obviously bad, but arguments of the same form can sometimes seem superficially convincing, as in the following example offered, with apologies for its lack of logical rigour, by Alan Turing in the article "Computing Machinery and Intelligence":

"If each man had a definite set of rules of conduct by which he regulated his life he would be no better than a machine. But there are no such rules, so men cannot be machines."

However, men could still be machines that do not follow a definite set of rules. Thus this argument (as Turing intends) is invalid.
It is possible that an argument that denies the antecedent could be valid, if the argument instantiates some other valid form. For example, if the claims P and Q express the same proposition, then the argument would be trivially valid, as it would beg the question. In everyday discourse, however, such cases are rare, typically only occurring when the “if-then” premise is actually an “if and only if” claim (i.e., a biconditional/equality). For example:
If I am President of the United States, then I can veto Congress.
I am not President.
Therefore, I cannot veto Congress.
The above argument is not valid, but would be if the first premise ended thus: "…and if I can veto Congress, then I am the U.S. President" (as is in fact true). More to the point, the validity of the new argument stems not from denying the antecedent, but from modus tollens (denying the consequent).
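The same truth-table search used above shows why the biconditional reading rescues the argument (a minimal Python sketch):

from itertools import product

def iff(p, q):
    return p == q

# Denying the antecedent with a biconditional premise:
# premises "P if and only if Q" and "not P", conclusion "not Q".
counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if iff(p, q) and (not p) and q
]
print(counterexamples)  # []: no counterexample exists, so this form is valid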
Existential fallacy
An argument has a universal premise and a particular conclusion.
The existential fallacy, or existential instantiation, is a formal fallacy. In the existential fallacy, we presuppose that a class has members when we are not supposed to do so; that is, when we should not assume existential import.
An existential fallacy is committed in a medieval categorical syllogism when it has two universal premises and a particular conclusion, since the existence of at least one member of the class is not established by the premises.
In modern logic, the presupposition that a class has members is seen as unacceptable. In 1905, Bertrand Russell wrote an essay entitled "The Existential Import of Propositions", in which he called this Boolean approach "Peano's interpretation".
The fallacy does not occur in enthymemes, where hidden premises required to make the syllogism valid assume the existence of at least one member of the class.
One central concern of the Aristotelian tradition in logic is the theory of the categorical syllogism. This is the theory of two-premised arguments in which the premises and conclusion share three terms among them, with each proposition containing two of them. It is distinctive of this enterprise that everybody agrees on which syllogisms are valid. The theory of the syllogism partly constrains the interpretation of the forms. For example, it determines that the A form has existential import, at least if the I form does. For one of the valid patterns (Darapti) is:
Every C is B
Every C is A
So, some A is B
This is invalid if the A form lacks existential import, and valid if it has existential import. It is held to be valid, and so we know how the A form is to be interpreted. One then naturally asks about the O form; what do the syllogisms tell us about it? The answer is that they tell us nothing. This is because Aristotle did not discuss weakened forms of syllogisms, in which one concludes a particular proposition when one could already conclude the corresponding universal. For example, he does not mention the form:
No C is B
Every A is C
So, some A is not B
If people had thoughtfully taken sides for or against the validity of this form, that would clearly be relevant to the understanding of the O form. But the weakened forms were typically ignored.
Affirmative conclusion from a negative premise
When a categorical syllogism has a positive conclusion, but at least one negative premise.
Affirmative conclusion from a negative premise (illicit negative) is a formal fallacy that is committed when a categorical syllogism has a positive conclusion, but one or two negative premises.
For example:
No fish are dogs, and no dogs can fly, therefore all fish can fly.
The only thing that can be properly inferred from these premises is that some things that are not fish cannot fly, provided that dogs exist.
Or:
We don’t read that trash. People who read that trash don’t appreciate real literature. Therefore, we appreciate real literature.
It is a fallacy because any valid form of categorical syllogism that asserts a negative premise must have a negative conclusion.
Appeal to probability
Takes something for granted because it would probably be the case.
An appeal to probability (or appeal to possibility) is the logical fallacy of taking something for granted because it would probably be the case, (or might possibly be the case). Inductive arguments lack deductive validity and must therefore be asserted or denied in the premises.
Example
A fallacious appeal to possibility:
Something can go wrong.
Therefore, something will go wrong.
A deductively valid argument would be explicitly premised on Murphy's law (see also modal logic).
Anything that can go wrong, will go wrong.
Something can go wrong.
Therefore, something will go wrong.
Fallacy of exclusive premises
A categorical syllogism that is invalid because both of its premises are negative.
The fallacy of exclusive premises is a syllogistic fallacy committed in a categorical syllogism that is invalid because both of its premises are negative.
Example of an invalid EOO-4 syllogism:
E Proposition: No mammals are fish.
O Proposition: Some fish are not whales.
O Proposition: Therefore, some whales are not mammals.
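A set-based counterexample (a small Python sketch; the toy members are assumed for illustration) shows that the premises can be true while the conclusion is false:

mammals = {"whale", "dog"}    # toy classes, assumed for illustration
fish = {"trout", "shark"}
whales = {"whale"}

assert mammals.isdisjoint(fish)               # No mammals are fish.
assert any(f not in whales for f in fish)     # Some fish are not whales.
print(any(w not in mammals for w in whales))  # False: "some whales are
                                              # not mammals" does not follow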
Fallacy of four terms
A categorical syllogism that has four terms.
The fallacy of four terms is the formal fallacy that occurs when a syllogism has four (or more) terms rather than the requisite three. This form of argument is thus invalid.
Explanation
Categorical syllogisms always have three terms:
Major premise: All fish have fins.
Minor premise: All goldfish are fish.
Conclusion: All goldfish have fins.
Here, the three terms are: “goldfish”, “fish”, and “fins”.
Using four terms invalidates the syllogism:
Major premise: All fish have fins.
Minor premise: All goldfish are fish.
Conclusion: All humans have fins.
The premises do not connect “humans” with “fins”, so the reasoning is invalid. Notice that there are four terms: “fish”, “fins”, “goldfish” and “humans”. Two premises are not enough to connect four different terms, since in order to establish connection, there must be one term common to both premises.
In everyday reasoning, the fallacy of four terms occurs most frequently by equivocation: using the same word or phrase but with a different meaning each time, creating a fourth term even though only three distinct words are used:
Major premise: Nothing is better than eternal happiness.
Minor premise: A ham sandwich is better than nothing.
Conclusion: A ham sandwich is better than eternal happiness.
The word “nothing” in the example above has two meanings, as presented: “nothing is better” means the thing being named has the highest value possible; “better than nothing” only means that the thing being described has some value. Therefore, “nothing” acts as two different words in this example, thus creating the fallacy of four terms.
Another, trickier example of equivocation:
Major premise: The pen touches the paper.
Minor premise: The hand touches the pen.
Conclusion: The hand touches the paper.
This is more clear if one uses “is touching” instead of “touches”. It then becomes clear that “touching the pen” is not the same as “the pen”, thus creating four terms: “the hand”, “touching the pen”, “the pen”, “touching the paper”. A correct form of this statement would be:
Major premise: All that touches the pen, touches the paper.
Minor premise: The hand touches the pen.
Conclusion: The hand touches the paper.
Now the term "the pen" has been eliminated, leaving three terms. The argument is now valid but unsound, because the major premise is untrue.
The fallacy of four terms also applies to syllogisms that contain five or six terms.
Reducing terms
Sometimes a syllogism that is apparently fallacious because it is stated with more than three terms can be translated into an equivalent, valid three term syllogism. For example:
Major premise: No humans are immortal.
Minor premise: All Greeks are people.
Conclusion: All Greeks are mortal.
This EAE-1 syllogism apparently has five terms: “humans”, “people”, “immortal”, “mortal”, and “Greeks”. However it can be rewritten as a standard form AAA-1 syllogism by first substituting the synonymous term “humans” for “people” and then by reducing the complementary term “immortal” in the first premise using the immediate inference known as obversion (that is, “No humans are immortal.” is equivalent to “All humans are mortal.”).
Classification
The fallacy of four terms is a syllogistic fallacy. Types of syllogism to which it applies include statistical syllogism, hypothetical syllogism, and categorical syllogism, all of which must have exactly three terms. Because it applies to the argument’s form, as opposed to the argument’s content, it is classified as a formal fallacy.
Equivocation of the middle term is a frequently cited source of a fourth term being added to a syllogism; both of the equivocation examples above affect the middle term of the syllogism. Consequently, this common error has been given its own name: the fallacy of the ambiguous middle. An argument that commits the ambiguous middle fallacy blurs the line between formal and informal fallacies; however, it is usually considered an informal fallacy because the argument's form appears valid.
Illicit major
A categorical syllogism that is invalid because its major term is not distributed in the major premise but distributed in the conclusion.
Illicit minor
A categorical syllogism that is invalid because its minor term is not distributed in the minor premise but distributed in the conclusion.
Illicit minor is a formal fallacy committed in a categorical syllogism that is invalid because its minor term is undistributed in the minor premise but distributed in the conclusion.
This fallacy has the following argument form:
All A are B.
All A are C.
Therefore, all C are B.
Example:
All cats are felines.
All cats are mammals.
Therefore, all mammals are felines.
The minor term here is mammal, which is not distributed in the minor premise "All cats are mammals", because this premise only says something about some mammals (i.e., that they are cats). However, in the conclusion "All mammals are felines", mammal is distributed (the conclusion says something about all mammals). The conclusion is shown to be false by any mammal that is not a feline; for example, a dog.
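The counterexample can be phrased in terms of sets (a small Python sketch; the toy members are assumed for illustration):

cats = {"tabby", "siamese"}      # toy members, assumed for illustration
felines = cats | {"lion"}
mammals = cats | {"dog", "whale"}

assert cats <= felines           # All cats are felines.
assert cats <= mammals           # All cats are mammals.
print(mammals <= felines)        # False: "all mammals are felines" fails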
Example:
Pie is good.
Pie is unhealthy.
Thus, all good things are unhealthy.
Negative conclusion from affirmative premises
When a categorical syllogism has a negative conclusion but affirmative premises.
Negative conclusion from affirmative premises is a syllogistic fallacy committed when a categorical syllogism has a negative conclusion yet both premises are affirmative. The inability of affirmative premises to reach a negative conclusion is usually cited as one of the basic rules of constructing a valid categorical syllogism.
Statements in syllogisms can be identified as the following forms:
a: All A is B. (affirmative)
e: No A is B. (negative)
i: Some A is B. (affirmative)
o: Some A is not B. (negative)
The rule states that a syllogism in which both premises are of form a or i (affirmative) cannot reach a conclusion of form e or o (negative). Exactly one of the premises must be negative to construct a valid syllogism with a negative conclusion. (A syllogism with two negative premises commits the related fallacy of exclusive premises.)
Example (invalid aae form):
Premise: All colonels are officers.
Premise: All officers are soldiers.
Conclusion: Therefore, no colonels are soldiers.
The aao-4 form is perhaps more subtle as it follows many of the rules governing valid syllogisms, except it reaches a negative conclusion from affirmative premises.
Invalid aao-4 form:
All A is B.
All B is C.
Therefore, some C is not A.
This is valid only if A is a proper subset of B and/or B is a proper subset of C. However, this argument reaches a faulty conclusion if A, B, and C are equivalent. In the case that A = B = C, the conclusion of the following simple aaa-1 syllogism would contradict the aao-4 argument above:
All B is A.
All C is B.
Therefore, all C is A.
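A quick set check confirms that the aao-4 form fails when the three classes coincide (a minimal Python sketch):

A = B = C = {1, 2, 3}    # three equivalent classes, assumed for illustration

assert A <= B and B <= C                   # both aao-4 premises hold
some_C_not_A = any(x not in A for x in C)
print(some_C_not_A)                        # False: "some C is not A" fails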
Fallacy of the undistributed middle
The middle term in a categorical syllogism is not distributed.
The fallacy of the undistributed middle is a formal fallacy that is committed when the middle term in a categorical syllogism is not distributed in either the minor premise or the major premise. It is thus a syllogistic fallacy.
Classical formulation
In classical syllogisms, all statements consist of two terms and are in the form of “A” (all), “E” (none), “I” (some), or “O” (some not). The first term is distributed in A statements; the second is distributed in O statements; both are distributed in E statements; and none are distributed in I statements.
The fallacy of the undistributed middle occurs when the term that links the two premises is never distributed.
In this example, the distributed term in each statement is marked:

All Z is B (Z distributed, B undistributed)
All Y is B (Y distributed, B undistributed)
Therefore, all Y is Z (Y distributed, Z undistributed)

B is the common term between the two premises (the middle term) but is never distributed, so this syllogism is invalid.
Also, a related rule of logic is that anything distributed in the conclusion must be distributed in at least one premise.
All Z is B
Some Y is Z
Therefore, all Y is B

The middle term, Z, is distributed here, but Y is distributed in the conclusion and not in any premise, so this syllogism is invalid.
Pattern
The fallacy of the undistributed middle takes the following form:
All Z is B
Y is B
Therefore, Y is Z
Here, B is the middle term, and it is not distributed in the major premise, “all Z is B”.
It may or may not be the case that “all Z is B,” but this is irrelevant to the conclusion. What is relevant to the conclusion is whether it is true that “all B is Z,” which is ignored in the argument. The fallacy is similar to affirming the consequent and denying the antecedent. However, the fallacy may be resolved if the terms are exchanged in either the conclusion or in the first co-premise. Indeed, from the perspective of first-order logic, all cases of the fallacy of the undistributed middle are, in fact, examples of affirming the consequent or denying the antecedent, depending on the structure of the fallacious argument.
Examples
For example:
All students carry backpacks.
My grandfather carries a backpack.
Therefore, my grandfather is a student.

Stated with its hidden co-premise made explicit, the argument runs: all students carry backpacks; my grandfather carries a backpack; everyone who carries a backpack is a student; therefore, my grandfather is a student.
The middle term is the one that appears in both premises — in this case, it is the class of backpack carriers. It is undistributed because neither of its uses applies to all backpack carriers. Therefore it can’t be used to connect students and my grandfather — both of them could be separate and unconnected divisions of the class of backpack carriers. Note below how “carries a backpack” is truly undistributed:
grandfather is someone who carries a backpack; student is someone who carries a backpack
Specifically, the structure of this example results in affirming the consequent.
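In set terms, the counterexample is immediate (a small Python sketch; the names are toy members assumed for illustration):

students = {"alice", "bob"}    # toy members, assumed for illustration
backpack_carriers = students | {"grandfather"}

assert students <= backpack_carriers       # All students carry backpacks.
assert "grandfather" in backpack_carriers  # My grandfather carries a backpack.
print("grandfather" in students)           # False: the conclusion fails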
However, if the latter two statements were switched, the syllogism would be valid:
All students carry backpacks.
My grandfather is a student.
Therefore, my grandfather carries a backpack.
In this case, the middle term is the class of students, and the first use clearly refers to ‘all students’. It is therefore distributed across the whole of its class, and so can be used to connect the other two terms (backpack carriers, and my grandfather). Again, note below that “student” is distributed:
grandfather is a student and thus carries a backpack
Argument from ignorance
Assuming that a claim is true because it has not been proven false or cannot be proven false.
Argument from repetition
Signifies that it has been discussed extensively until nobody cares to discuss it anymore.
Ad nauseam is a Latin term for something unpleasurable that has continued "to the point of nausea" ("ad nauseam" definitions from Dictionary.com). For example, the sentence, "This topic has been discussed ad nauseam", signifies that the topic in question has been discussed extensively, and that those involved in the discussion have grown tired of it.
Etymology
This term is defined by the American Heritage Dictionary as "to a disgusting or ridiculous degree; to the point of nausea".

Argumentum ad nauseam, or argument from repetition, or argumentum ad infinitum, is an argument made repeatedly (possibly by different people) until nobody cares to discuss it any more. This may sometimes, but not always, be a form of proof by assertion.
Argument from silence
Where the conclusion is based on the absence of evidence, rather than the existence of evidence.
An argument from silence (also called argumentum a silentio in Latin) is generally a conclusion drawn based on the absence of statements in historical documents ("argumentum e silentio, noun phrase", The Oxford Essential Dictionary of Foreign Terms in English, ed. Jennifer Speake, Berkley Books, 1999; John Lange, "The Argument from Silence", History and Theory, Vol. 5, No. 3 (1966), pp. 288-301). In the field of classical studies, it often refers to the deduction from the lack of references to a subject in the available writings of an author to the conclusion that he was ignorant of it ("silence, the argument from", The Concise Oxford Dictionary of the Christian Church, ed. E. A. Livingstone, Oxford University Press, 2006).
Thus in historical analysis with an argument from silence, the absence of a reference to an event or a document is used to cast doubt on the event not mentioned. While most historical approaches rely on what an author's works contain, an argument from silence relies on what the book or document does not contain. This approach thus uses what an author "should have said" rather than what is available in the author's extant writings (Historical Evidence and Argument by David P. Henige, 2005, ISBN 978-0-299-21410-4, page 176; Seven Pillories of Wisdom by David R. Hall, 1991, ISBN 0-86554-369-0, pages 55-56).
Historical analysis
An argument from silence can be convincing when mentioning a fact can be seen as so natural that its omission is a good reason to assume ignorance. For example, while the editors of the Yerushalmi and the Bavli mention the other community, most scholars believe these documents were written independently. Louis Jacobs writes, "If the editors of either had had access to an actual text of the other, it is inconceivable that they would not have mentioned this. Here the argument from silence is very convincing." ("Talmud", A Concise Companion to the Jewish Religion, Louis Jacobs, Oxford University Press, 1999.)
Errietta Bissa, professor of Classics at the University of Wales, flatly states that arguments from silence are not valid (Governmental Intervention in Foreign Trade in Archaic and Classical Greece by Errietta M. A. Bissa, ISBN 90-04-17504-0, page 21: "This is a fundamental methodological issue on the validity of arguments from silence, where I wish to make my position clear: arguments from silence are not valid."). David Henige states that, although risky, such arguments can at times shed light on historical events.

Yifa has pointed out the perils of arguments from silence: they are often less than conclusive, since the lack of references to a compilation of a set of monastic codes by contemporaries, or even by disciples, does not mean that it never existed. This is well illustrated by the case of Changlu Zongze's "Rules of Purity", a code of monastic conduct which he wrote for the Chan monastery in 1103. A contemporary who wrote a preface to a collection of his writings neglected to mention his code, and neither his biographies, nor the documents of the Transmission of the Lamp, nor the Pure Land documents (which exalt him) refer to Zongze's collection of a monastic code. However, a copy of the code in which the author identifies himself exists (The Origins of Buddhist Monastic Codes in China by Yifa, 2002, ISBN 0-8248-2494-6, page 32: "an argumentum ex silencio is hardly conclusive").
Frances Wood based her controversial book Did Marco Polo Go to China? on arguments from silence. Wood argued that Marco Polo never went to China and fabricated his accounts because he failed to mention elements from the visual landscape such as tea, did not record the Great Wall, and neglected to record practices such as foot-binding. She argued that no outsider could spend 15 years in China and not observe and record these elements. Most historians disagree with Wood's reasoning (Historical Evidence and Argument by David P. Henige, 2005, ISBN 978-0-299-21410-4, page 176).
Legal aspects
Jed Rubenfeld, professor of law at Yale Law School, has shown an example of the difficulty in applying arguments from silence in constitutional law, stating that although arguments from silence can be used to draw conclusions about the intent of the Framers of the US Constitution, their application can lead to two different conclusions and hence they cannot be used to settle the issues (Jed Rubenfeld, "Rights of Passage: Majority Rule in Congress", Duke Law Journal, 1996, Section B: Arguments from silence: "From this silence one can draw clear plausible inferences about the Framers' intent. The only difficulty is that one can draw two different inferences…. The truth is that the argument from silence is not dispositive").
In the context of Morocco's Truth Commission of 1999 regarding torture and secret detentions, Wu and Livescu state that the fact that someone remained silent is no proof of their ignorance about a specific piece of information. They point out that the absence of records about the torture of prisoners under the secret detention program is no proof that such detentions did not involve torture, or that some detentions did not take place (Human Rights, Suffering, and Aesthetics in Political Prison Literature by Yenna Wu, Simona Livescu, 2011, ISBN 0-7391-6741-3, pages 86-90).
Begging the question
Leaving unstated a premise that is essentially the conclusion of the argument itself, so that it may be taken as given.
Begging the question (Latin petitio principii, "assuming the initial point") is a type of informal fallacy in which an implicit premise would directly entail the conclusion. Begging the question is one of the classic informal fallacies in Aristotle's Prior Analytics, where it is studied in Prior Analytics II, 64b, 34 – 65a, 9 and considered a material fallacy. Some modern authors consider begging the question to be a species of circulus in probando (Latin, "circle in proving"), or circular reasoning, which Aristotle discusses in Prior Analytics II, 57b, 18 – 59b, 1 (see, for example, Bradley Dowden, "Fallacies", in the Internet Encyclopedia of Philosophy). Were the missing premise stated, it would render the argument viciously circular: while never persuasive, arguments of the form "A therefore A" are logically valid, because asserting the premise while denying the self-same conclusion is a direct contradiction. In general, validity only guarantees that the conclusion must follow given the truth of the premises; absent that, a valid argument proves nothing, since the conclusion may or may not follow from faulty premises. In this particular form, moreover, it is self-evident that the conclusion is false if and only if the premise is false (see logical equivalence and logical equality). The reason petitio principii is considered a fallacy is thus not that the inference is invalid (any statement is indeed equivalent to itself), but that the argument can be deceptive: a statement cannot prove itself, and a premise must have a different source of reason, ground, or evidence for its truth from that of the conclusion (Lander University, "Petitio Principii").
In modern usage, English speakers are prone to use "beg the question" as a way of saying "raises the question". However, the former denotes a failure to explicitly raise an essential premise, so that it may be taken as given, whereas the latter simply functions as a segue for whatever comes to mind.
Definition
The fallacy of petitio principii, or "begging the question", is committed "when a proposition which requires proof is assumed without proof"; in order to charitably entertain the argument, it must be taken as given "in some form of the very proposition to be proved, as a premise from which to deduce it" (Welton (1905), 279). In other words, the conclusion, taken as given, serves as the very means of reaching it.
When the fallacy of begging the question is committed in a single step, it is sometimes called a hysteron proteron (Davies (1915), 572; Welton (1905), 280-282), as in the statement "Opium induces sleep because it has a soporific quality" (Welton (1905), 281). Such fallacies may not be immediately obvious due to the use of synonyms or synonymous phrases; one way to beg the question is to make a statement first in concrete terms, then in abstract ones, or vice-versa. Another is to "bring forth a proposition expressed in words of Saxon origin, and give as a reason for it the very same proposition stated in words of Norman origin" (Gibson (1908), 291), as in this example: "To allow every man an unbounded freedom of speech must always be, on the whole, advantageous to the State, for it is highly conducive to the interests of the community that each individual should enjoy a liberty perfectly unlimited of expressing his sentiments" (Richard Whately, Elements of Logic (1826), quoted in Gibson (1908), 291).
When the fallacy of begging the question is committed in more than one step, some authors consider it circulus in probando, or reasoning in a circle. However, there is no fallacy if the missing premise is acknowledged, and if it is not, there is no circle.
"Begging the question" can also refer to an argument in which the unstated premise is essential to, but not identical with, the conclusion, or is "controversial or questionable for the same reasons that typically might lead someone to question the conclusion" (Kahane and Cavender (2005), 60).
History
The term was translated into English from Latin in the 16th century. The Latin version, petitio principii, can be interpreted in different ways. Petitio (from peto), in the post-classical context in which the phrase arose, means "assuming" or "postulating", but in the older classical sense means "petition", "request", or "beseeching". Principii, genitive of principium, means "beginning", "basis", or "premise" (of an argument). Literally petitio principii means "assuming the premise" or "assuming the original point", or, alternatively, "a request for the beginning or premise"; that is, the premise depends on the truth of the very matter in question.
The Latin phrase comes from the Greek en archei aiteisthai in Aristotle’s Prior Analytics II xvi:
Begging or assuming the point at issue consists (to take the expression in its widest sense) in failing to demonstrate the required proposition. But there are several other ways in which this may happen; for example, if the argument has not taken syllogistic form at all, he may argue from premises which are less known or equally unknown, or he may establish the antecedent by means of its consequents; for demonstration proceeds from what is more certain and is prior. Now begging the question is none of these. If, however, the relation of B to C is such that they are identical, or that they are clearly convertible, or that one applies to the other, then he is begging the point at issue…. begging the question is proving what is not self-evident by means of itself…either because predicates which are identical belong to the same subject, or because the same predicate belongs to subjects which are identical.

Thomas Fowler believed that petitio principii would be more properly called petitio quæsiti, which is literally "begging the question" (Fowler, Thomas (1887), The Elements of Deductive Logic, Ninth Edition (p. 145), Oxford, England: Clarendon Press).
Related fallacies
Circular reasoning is a fallacy in which “the reasoner begins with what he or she is trying to end up with”. The individual components of a circular argument can be logically valid because if the premises are true, the conclusion must be true, and will not lack relevance. However, circular reasoning is not persuasive because, if the conclusion is doubted, the premise which leads to it will also be doubted.
Begging the question is similar to the complex question or fallacy of many questions: questioning that presupposes something that would not be acceptable to everyone involved. For example, “Is Mary wearing a blue or a red dress?” is fallacious because it artificially restricts the possible responses to a blue or red dress. If the person being questioned wouldn’t necessarily consent to those constraints, the question is thus fallacious.
Modern usage
Many English speakers use "begs the question" to mean "raises the question", "impels the question", or even "invites the question", and follow that phrase with the question raised (see definitions at Wiktionary and at The Free Dictionary (accessed 30th May 2011); each source gives both definitions), for example, "this year's deficit is half a trillion dollars, which begs the question: how are we ever going to balance the budget?" Philosophers and many grammarians deem such usage incorrect (Follett (1966), 228; Kilpatrick (1997); Martin (2002), 71; Safire (1998); Brians, Common Errors in English Usage: Online Edition (full text of book: 2nd Edition, November 2008, William, James & Company) (accessed 1 July 2011)). Academic linguist Mark Liberman recommends avoiding the phrase entirely, noting that because of shifts in usage in both Latin and English over the centuries, the relationship of the literal expression to its intended meaning is unintelligible and therefore it is now "such a confusing way to say it that only a few pedants understand the phrase."
Circular reasoning
When the reasoner begins with what he or she is trying to end up with.
Circular reasoning (also known as paradoxical thinking or circular logic), is a logical fallacy in which “the reasoner begins with what he or she is trying to end up with”. The individual components of a circular argument will sometimes be logically valid because if the premises are true, the conclusion must be true, and will not lack relevance. Circular logic cannot prove a conclusion because, if the conclusion is doubted, the premise which leads to it will also be doubted. Begging the question is a form of circular reasoning.
Circular reasoning is often of the form: “a is true because b is true; b is true because a is true.” Circularity can be difficult to detect if it involves a longer chain of propositions.
Academic Douglas Walton used the following example of a fallacious circular argument:
Wellington is in New Zealand.
Therefore, Wellington is in New Zealand.
He notes that, although the argument is deductively valid, it cannot prove that Wellington is in New Zealand because it contains no evidence that is distinct from the conclusion. The context – that of an argument – means that the proposition does not meet the requirement of proving the statement; thus it is a fallacy. He proposes that the context of a dialogue determines whether a circular argument is fallacious: if it forms part of an argument, then it is. Citing Cederblom and Paulsen (1986:109), Hugh G. Gauch observes that non-logical facts can be difficult to capture formally:
“Whatever is less dense than water will float, because whatever is less dense than water will float” sounds stupid, but “Whatever is less dense than water will float, because such objects won’t sink in water” might pass.
Circular reasoning and the problem of induction
Joel Feinberg and Russ Shafer-Landau note that “using the scientific method to judge the scientific method is circular reasoning”. Scientists attempt to discover the laws of nature and to predict what will happen in the future, based on those laws. However, per David Hume’s problem of induction, science cannot be proven inductively by empirical evidence, and thus science cannot be proven scientifically. An appeal to a principle of the uniformity of nature would be required to deductively necessitate the continued accuracy of predictions based on laws that have only succeeded in generalizing past observations. But as Bertrand Russell observed, “The method of ‘postulating’ what we want has many advantages; they are the same as the advantages of theft over honest toil”.
Circular cause and consequence
Where the consequence of the phenomenon is claimed to be its root cause.
Continuum fallacy
Improperly rejecting a claim for being imprecise.
The continuum fallacy is an informal fallacy closely related to the sorites paradox, or paradox of the heap; it is also called the fallacy of the beard (David Roberts, “Reasoning: Other Fallacies”), the line-drawing fallacy, the bald man fallacy, the fallacy of the heap, the fallacy of grey, or the sorites fallacy. The fallacy causes one to erroneously reject a vague claim simply because it is not as precise as one would like it to be. Vagueness alone does not necessarily imply invalidity.
The fallacy appears to demonstrate that two states or conditions cannot be considered distinct (or do not exist at all) because between them there exists a continuum of states. According to the fallacy, differences in quality cannot result from differences in quantity.
There are clearly reasonable and clearly unreasonable cases in which objects either belong or do not belong to a particular group of objects based on their properties. We are able to take them case by case and designate them as such even in the case of properties which may be vaguely defined. The existence of hard or controversial cases does not preclude our ability to designate members of particular kinds of groups.
Relation with sorites paradox
Narrowly speaking, the sorites paradox refers to situations where there are many discrete states (classically between 1 and 1,000,000 grains of sand, hence 1,000,000 possible states), while the continuum fallacy refers to situations where there is (or appears to be) a continuum of states, such as temperature – is a room hot or cold? Whether any continua exist in the physical world is the classic question of atomism, and while Newtonian physics models the world as continuous, in modern quantum physics, notions of continuous length break down at the Planck length, and thus what appear to be continua may, at base, simply be very many discrete states.
For the purpose of the continuum fallacy, one assumes that there is in fact a continuum, though this is generally a minor distinction: in general, any argument against the sorites paradox can also be used against the continuum fallacy. One argument against the fallacy is based on the simple counterexample: there do exist bald people and people who aren’t bald. Another argument is that for each degree of change in states, the degree of the condition changes slightly, and these “slightly”s build up to shift the state from one category to another. For example, perhaps the addition of a grain of rice causes the total group of rice to be “slightly more” of a heap, and enough “slightly”s will certify the group’s heap status – see fuzzy logic.
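The fuzzy-logic rejoinder can be made concrete. Below is a minimal Python sketch of graded membership in “heap”: the midpoint and width are arbitrary illustrative choices, not canonical values, but the curve shows how each grain changes the degree of heap-ness only slightly while enough grains still shift the category decisively.

import math

def heap_membership(grains: int, midpoint: int = 10_000, width: int = 2_500) -> float:
    """Degree (0.0 to 1.0) to which a pile of `grains` counts as a heap."""
    # A logistic curve: one grain moves membership only slightly, but the
    # "slightly"s accumulate into a clear change of category.
    return 1.0 / (1.0 + math.exp(-(grains - midpoint) / width))

for n in (1, 100, 5_000, 10_000, 50_000):
    print(f"{n:>6} grains -> heap membership {heap_membership(n):.3f}")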
Examples
Fred can never be called bald
Fred can never be called bald. Fred isn’t bald now. However, if he loses one hair, that won’t make him go from not bald to bald either. If he loses one more hair after that, this loss of a second hair also does not make him go from not bald to bald. Therefore, no matter how much hair he loses, he can never be called bald.
The heap
The fallacy can be described in the form of a conversation:
Q: Does one grain of wheat form a heap?
A: No.
Q: If we add one, do two grains of wheat form a heap?
A: No.
Q: If we add one, do three grains of wheat form a heap?
A: No.
…
Q: If we add one, do one hundred grains of wheat form a heap?
A: No.
Q: Therefore, no matter how many grains of wheat we add, we will never have a heap. Therefore, heaps don’t exist!
Correlation proves causation
A faulty assumption that correlation between two variables implies that one causes the other.
Correlation does not imply causation (cum hoc propter hoc, Latin for “with this, because of this”) is a phrase used in science and statistics to emphasize that a correlation between two variables does not necessarily imply that one causes the other. Many statistical tests calculate correlation between variables. A few go further and calculate the likelihood of a true causal relationship; examples are the Granger causality test and convergent cross mapping.
The opposite assumption, that correlation proves causation, is one of several questionable cause logical fallacies by which two events that occur together are taken to have a cause-and-effect relationship. This fallacy is also known as cum hoc ergo propter hoc, Latin for “with this, therefore because of this”, and “false cause”. A similar fallacy, that an event that follows another was necessarily a consequence of the first event, is sometimes described as post hoc ergo propter hoc (Latin for “after this, therefore because of this”).
In a widely studied example, numerous epidemiological studies showed that women who were taking combined hormone replacement therapy (HRT) also had a lower-than-average incidence of coronary heart disease (CHD), leading doctors to propose that HRT was protective against CHD. But randomized controlled trials showed that HRT caused a small but statistically significant increase in risk of CHD. Re-analysis of the data from the epidemiological studies showed that women undertaking HRT were more likely to be from higher socio-economic groups (ABC1), with better-than-average diet and exercise regimens. The use of HRT and decreased incidence of coronary heart disease were coincident effects of a common cause (i.e. the benefits associated with a higher socioeconomic status), rather than cause and effect, as had been supposed.
As with any logical fallacy, identifying that the reasoning behind an argument is flawed does not imply that the resulting conclusion is false. In the instance above, if the trials had found that hormone replacement therapy caused a decrease in coronary heart disease, but not to the degree suggested by the epidemiological studies, the assumption of causality would have been correct, although the logic behind the assumption would still have been flawed.
Usage
In logic, the technical use of the word “implies” means “is a sufficient circumstance for.” This is the meaning intended by statisticians when they say that causation is not certain. Indeed, p implies q has the technical meaning of logical implication: if p then q, symbolized as p → q. That is, “if circumstance p is true, then q necessarily follows.” In this sense, it is always correct to say “correlation does not imply causation.”
However, in casual use, the word “imply” loosely means suggests rather than requires. The idea that correlation and causation are connected is certainly true; where there is causation, there is likely to be correlation. Indeed, correlation is used when inferring causation; the important point is that such inferences are made only after correlations have been confirmed as real and all plausible causal relationships have been systematically explored using large enough data sets.
Edward Tufte, in a criticism of the brevity of “correlation does not imply causation,” deprecates the use of “is” to relate correlation and causation (as in “Correlation is not causation”), arguing that while such a statement is not inaccurate, it is incomplete. While it is not the case that correlation is causation, simply stating their nonequivalence omits information about their relationship. Tufte suggests that the shortest true statement that can be made about causality and correlation is one of the following:
“Empirically observed covariation is a necessary but not sufficient condition for causality.”
“Correlation is not causation but it sure is a hint.”
General pattern
For any two correlated events A and B, the following relationships are possible:
A causes B;
B causes A;
A and B are consequences of a common cause, but do not cause each other;
There is no connection between A and B, the correlation is coincidental.
Less clear-cut correlations are also possible. For example, causality is not necessarily one-way; in a predator-prey relationship, predator numbers affect prey, but prey numbers, i.e. food supply, also affect predators.
The cum hoc ergo propter hoc logical fallacy can be expressed as follows:
A occurs in correlation with B.
Therefore, A causes B.
In this type of logical fallacy, one draws a premature conclusion about causality after observing only a correlation between two or more factors. Generally, if one factor (A) is observed only to be correlated with another factor (B), it is sometimes taken for granted that A is causing B, even when no evidence supports this. This is a logical fallacy because there are at least five possibilities:
A may be the cause of B.
B may be the cause of A.
Some unknown third factor C may actually be the cause of both A and B.
There may be a combination of the above three relationships. For example, B may be the cause of A at the same time as A is the cause of B (contradicting that the only relationship between A and B is that A causes B). This describes a self-reinforcing system.
The “relationship” is a coincidence, or so complex or indirect that it is more effectively called a coincidence (i.e. two events occurring at the same time that have no direct relationship to each other besides the fact that they are occurring at the same time). A larger sample size helps to reduce the chance of a coincidence, unless there is a systematic error in the experiment.
In other words, there can be no conclusion made regarding the existence or the direction of a cause and effect relationship only from the fact that A and B are correlated. Determining whether there is an actual cause and effect relationship requires further investigation, even when the relationship between A and B is statistically significant, a large effect size is observed, or a large part of the variance is explained.
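To see the third-factor possibility concretely, here is a minimal Python simulation (all variables invented for illustration): a hidden common cause C generates both A and B, and A and B come out correlated at roughly r = 0.5 even though neither influences the other.

import random

random.seed(0)
n = 10_000
c = [random.gauss(0, 1) for _ in range(n)]   # hidden common cause
a = [ci + random.gauss(0, 1) for ci in c]    # A depends only on C
b = [ci + random.gauss(0, 1) for ci in c]    # B depends only on C

def pearson_r(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

# Correlated (~0.5) despite there being no A -> B or B -> A link at all.
print(f"r(A, B) = {pearson_r(a, b):.2f}")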
Examples of illogically inferring causation from correlation
B causes A (reverse causation)
The more firemen fighting a fire, the bigger the fire is observed to be.
Therefore firemen cause an increase in the size of a fire.
In this example, the correlation between the number of firemen at a scene and the size of the fire does not imply that the firemen cause the fire. Firemen are sent according to the severity of the fire: if there is a large fire, a greater number of firemen are sent. It is therefore the fire that causes the firemen to arrive at the scene, and the above conclusion is false.
A causes B and B causes A (bidirectional causation)
Increased pressure is associated with increased temperature.
Therefore pressure causes temperature.
The ideal gas law, PV = nRT, relates pressure and temperature (along with volume and the amount of gas), so the two properties are directly correlated. For a fixed volume and mass of gas, an increase in temperature will cause an increase in pressure; likewise, increased pressure will cause an increase in temperature. This demonstrates bidirectional causation. The conclusion that pressure causes temperature is true, but it is not logically guaranteed by the premise.
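A short sketch makes the bidirectionality concrete: the same equation can be solved for either variable, so neither direction of “cause” is privileged by the correlation itself. The constants and values below are illustrative, for one mole of gas at a fixed volume.

R = 8.314      # gas constant, J/(mol*K)
n_mol = 1.0    # amount of gas, fixed
V = 0.0224     # volume in m^3, fixed

def pressure_from_temperature(T_kelvin: float) -> float:
    # PV = nRT, read as temperature determining pressure
    return n_mol * R * T_kelvin / V

def temperature_from_pressure(P_pascal: float) -> float:
    # the same law, read in the other direction
    return P_pascal * V / (n_mol * R)

P = pressure_from_temperature(273.15)              # ~101 kPa
print(f"P at 273.15 K: {P:.0f} Pa")
print(f"T at that P:   {temperature_from_pressure(P):.2f} K")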
Third factor C (the common-causal variable) causes both A and B
The following examples deal with a lurking variable: a hidden third variable that affects both of the correlated variables (for example, the fact that it is summer in Example 3). A difficulty often also arises where the third factor, though fundamentally different from A and B, is so closely related to A and/or B as to be confused with them, or very difficult to scientifically disentangle from them (see Example 4).
Example 1
Sleeping with one’s shoes on is strongly correlated with waking up with a headache.
Therefore, sleeping with one’s shoes on causes headache.
The above example commits the correlation-implies-causation fallacy, as it prematurely concludes that sleeping with one’s shoes on causes headache. A more plausible explanation is that both are caused by a third factor, in this case going to bed drunk, which thereby gives rise to a correlation. So the conclusion is false.
Example 2
Young children who sleep with the light on are much more likely to develop myopia in later life.
Therefore, sleeping with the light on causes myopia.
This scientific example comes from a study at the University of Pennsylvania Medical Center. Published in the May 13, 1999 issue of Nature, the study received much coverage at the time in the popular press (CNN, May 13, 1999, “Night-light may lead to nearsightedness”). However, a later study at Ohio State University did not find a link between infants sleeping with the light on and the development of myopia. It did find a strong link between parental myopia and the development of child myopia, also noting that myopic parents were more likely to leave a light on in their children’s bedroom (Ohio State University Research News, March 9, 2000, “Night lights don’t lead to nearsightedness, study suggests”). In this case, the cause of both conditions is parental myopia, and the above-stated conclusion is false.
Example 3
As ice cream sales increase, the rate of drowning deaths increases sharply.
Therefore, ice cream consumption causes drowning.
This example fails to recognize the importance of time of year and temperature in relation to ice cream sales. Ice cream is sold at a much greater rate during the hot summer months than during colder times, and it is during these hot summer months that people are more likely to engage in activities involving water, such as swimming. The increased drowning deaths are simply caused by greater exposure to water-based activities, not by ice cream. The stated conclusion is false.
Example 4
A hypothetical study shows a relationship between test anxiety scores and shyness scores, with a statistical r value (strength of correlation) of +.59 (Carducci, Bernard J., The Psychology of Personality: Viewpoints, Research, and Applications, 2nd edition, Wiley-Blackwell, 2009).
Therefore, it may be simply concluded that shyness, in some part, causally influences test anxiety.
However, as is often encountered in psychological studies, another variable, a “self-consciousness score,” is discovered that has a stronger correlation (+.73) with shyness. This suggests a possible “third variable” problem; moreover, when three such closely related measures are found, it further suggests that each may have bidirectional tendencies (see “bidirectional causation,” above), being a cluster of correlated values each influencing one another to some extent. Therefore, the simple conclusion above may be false.
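One conventional probe of such a third-variable problem is the first-order partial correlation: the shyness–anxiety correlation with self-consciousness held fixed. The sketch below reuses the two r values quoted above; the third correlation (self-consciousness vs. test anxiety) is not reported in the example, so the 0.65 used here is a purely hypothetical value for illustration. If the partial correlation shrinks toward zero, the third variable accounts for much of the original relationship.

import math

r_shy_anx = 0.59   # shyness vs. test anxiety (quoted above)
r_sc_shy = 0.73    # self-consciousness vs. shyness (quoted above)
r_sc_anx = 0.65    # self-consciousness vs. test anxiety (HYPOTHETICAL)

# First-order partial correlation of shyness and anxiety,
# controlling for self-consciousness.
partial = (r_shy_anx - r_sc_shy * r_sc_anx) / math.sqrt(
    (1 - r_sc_shy ** 2) * (1 - r_sc_anx ** 2)
)
print(f"r(shyness, anxiety | self-consciousness) = {partial:.2f}")  # ~0.22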
Example 5
Since the 1950s, both the atmospheric CO2 level and obesity levels have increased sharply.
Hence, atmospheric CO2 causes obesity.
Richer populations tend to eat more food and consume more energy, so rising prosperity since the 1950s is a plausible common cause of both increasing CO2 emissions and increasing obesity levels; the correlation reflects a shared driver, not a causal link from CO2 to obesity.
Example 6
HDL (“good”) cholesterol is negatively correlated with incidence of heart attack.
Therefore, taking medication to raise HDL will decrease the chance of having a heart attack.
Further research (Ornish, Dean, “Cholesterol: The good, the bad, and the truth”, retrieved 3 June 2011) has called this conclusion into question. Instead, it may be that other underlying factors, like genes, diet, and exercise, affect both HDL levels and the likelihood of having a heart attack; it is possible that medicines may affect the directly measurable factor, HDL levels, without affecting the chance of heart attack.
Coincidence
With a decrease in the wearing of hats, there has been an increase in global warming over the same period.
Therefore, global warming is caused by people abandoning the practice of wearing hats.
A similar example is used by the parody religion Pastafarianism to illustrate the logical fallacy of assuming that correlation equals causation.
Relation to the Ecological fallacy
There is a relation between this subject matter and the ecological fallacy, described in a 1950 paper by William S. Robinson. Robinson showed that ecological correlations, where the statistical object is a group of persons (e.g. an ethnic group), do not show the same behaviour as individual correlations, where the objects of inquiry are individuals: “The relation between ecological and individual correlations which is discussed in this paper provides a definite answer as to whether ecological correlations can validly be used as substitutes for individual correlations. They cannot.” (…) “(a)n ecological correlation is almost certainly not equal to its corresponding individual correlation.”
Determining causation
David Hume argued that causality is based on experience, and experience is similarly based on the assumption that the future models the past, which in turn can only be based on experience – leading to circular logic. He concluded that causality is not based on actual reasoning: only correlation can actually be perceived (David Hume, Stanford Encyclopedia of Philosophy).
In order for a correlation to be established as causal, the cause and the effect must be connected through an impact mechanism in accordance with known laws of nature.
Intuitively, causation seems to require not just a correlation, but a counterfactual dependence. Suppose that a student performed poorly on a test and guesses that the cause was his not studying. To prove this, one thinks of the counterfactual: the same student writing the same test under the same circumstances, but having studied the night before. If one could rewind history and change only one small thing (making the student study for the exam), then causation could be observed (by comparing version 1 to version 2). Because one cannot rewind history and replay events after making small controlled changes, causation can only be inferred, never exactly known. This is referred to as the Fundamental Problem of Causal Inference: it is impossible to directly observe causal effects (Paul W. Holland, “Statistics and Causal Inference”, Journal of the American Statistical Association, Vol. 81, No. 396, Dec. 1986, pp. 945-960).
A major goal of scientific experiments and statistical methods is to approximate as well as possible the counterfactual state of the world (Judea Pearl, Causality: Models, Reasoning, and Inference, Cambridge University Press, 2000). For example, one could run an experiment on identical twins who were known to consistently get the same grades on their tests. One twin is sent to study for six hours while the other is sent to the amusement park. If their test scores suddenly diverged by a large degree, this would be strong evidence that studying (or going to the amusement park) had a causal effect on test scores. In this case, correlation between studying and test scores would almost certainly imply causation.
Well-designed experimental studies replace equality of individuals, as in the previous example, with equality of groups. This is achieved by randomization of the subjects into two or more groups. Although not a perfect system, the likelihood of the groups being equal in all aspects rises with the number of subjects placed randomly in the treatment/placebo groups. From the significance of the difference between the effect of the treatment and that of the placebo, one can conclude how likely it is that the treatment has a causal effect on the disease. This likelihood can be quantified in statistical terms by the p-value.
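As one concrete illustration of how randomization licenses such a p-value, a permutation test asks how often random relabeling of the subjects alone would produce a group difference at least as large as the one observed. The sketch below uses invented data with an assumed treatment effect; it is a simplified stand-in for the analysis of a real trial.

import random

random.seed(1)
treated = [random.gauss(1.0, 2.0) for _ in range(50)]   # assumed effect of +1.0
placebo = [random.gauss(0.0, 2.0) for _ in range(50)]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treated) - mean(placebo)
pooled = treated + placebo

# How often does a random split of the same subjects look as extreme?
extreme, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:50]) - mean(pooled[50:])
    if abs(diff) >= abs(observed):
        extreme += 1

print(f"observed difference: {observed:.2f}")
print(f"permutation p-value: {extreme / trials:.4f}")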
When experimental studies are impossible and only pre-existing data are available, as is usually the case in economics, for example, regression analysis can be used. Factors other than the potential causative variable of interest are controlled for by including them as regressors in addition to the regressor representing the variable of interest. False inferences of causation due to reverse causation (or wrong estimates of the magnitude of causation due to the presence of bidirectional causation) can be avoided by using explanators (regressors) that are necessarily exogenous: physical explanators like rainfall amount (as a determinant of, say, futures prices); lagged variables whose values were determined before the dependent variable’s value was determined; instrumental variables for the explanators (chosen based on their known exogeneity); and so on. Spurious correlation due to mutual influence from a third, common causative variable is harder to avoid: the model must be specified such that there is a theoretical reason to believe that no such underlying causative variable has been omitted. In particular, underlying time trends of both the dependent variable and the independent (potentially causative) variable must be controlled for by including time as another independent variable.
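The closing point about time trends can be shown in a few lines. In the synthetic example below, X and Y both drift upward over time but are otherwise unrelated; regressing Y on X alone yields a sizable spurious coefficient, while adding time as a second regressor drives it toward zero. All data and names here are invented for illustration.

import numpy as np

rng = np.random.default_rng(42)
t = np.arange(200, dtype=float)
x = 0.05 * t + rng.normal(0, 1, 200)   # trends upward; has no effect on y
y = 0.05 * t + rng.normal(0, 1, 200)   # trends upward for its own reasons

def ols_coefs(y, *regressors):
    """Ordinary least squares with an intercept; returns the coefficients."""
    X = np.column_stack([np.ones_like(y), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

naive = ols_coefs(y, x)          # [intercept, x]
controlled = ols_coefs(y, x, t)  # [intercept, x, time]
print(f"coefficient on x, no controls:   {naive[1]:.3f}")       # spuriously large
print(f"coefficient on x, time included: {controlled[1]:.3f}")  # near zero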
Use of correlation as scientific evidence
Much scientific evidence is based upon correlations of variables that tend to occur together. Scientists are careful to point out that correlation does not necessarily mean causation. The assumption that A causes B simply because A correlates with B is a logical fallacy; it is not a legitimate form of argument. However, sometimes people commit the opposite fallacy: dismissing correlation entirely, as if it did not even suggest causation. This would dismiss a large swath of important scientific evidence.
In conclusion, correlation is an extremely valuable type of scientific evidence in fields such as medicine, psychology, and sociology. But first correlations must be confirmed as real, and then every plausible causal relationship must be systematically explored. In the end, correlation can be used as powerful evidence for a cause-and-effect relationship between a treatment and a benefit, a risk factor and a disease, or a social or economic factor and various outcomes. But it is also one of the most abused types of evidence, because it is easy and even tempting to come to premature conclusions based upon the preliminary appearance of a correlation.
Equivocation
The misleading use of a term with more than one meaning.
Equivocation (“to call by the same name”) is classified as an informal logical fallacy. It is the misleading use of a term with more than one meaning or sense (by glossing over which meaning is intended at a particular time). It generally occurs with polysemic words (words with multiple meanings).
It is often confused with amphibology (amphiboly), the fallacy of ambiguous sentences; however, equivocation is ambiguity arising from the misleading use of a word, while amphiboly is ambiguity arising from the misleading use of punctuation or syntax.
Examples
Fallacious reasoning
Equivocation is the use of a term several times in a syllogism (a logical chain of reasoning), giving the term a different meaning each time. For example:
A feather is light.
What is light cannot be dark.
Therefore, a feather cannot be dark.
In this use of equivocation, the word “light” is first used as the opposite of “heavy”, but then used as a synonym of “bright” (the fallacy usually becomes obvious as soon as one tries to translate this argument into another language). Because the “middle term” of this syllogism is not one term, but two separate ones masquerading as one (all feathers are indeed “not heavy”, but it is not true that all feathers are “bright”), this type of equivocation is actually an example of the fallacy of four terms.
Semantic shift
The fallacy of equivocation is often used with words that have a strong emotional content and many meanings. These meanings often coincide within proper context, but the fallacious arguer performs a semantic shift, slowly changing the context by treating distinct meanings of the term as equivalent.
In the English language, one common equivocation is with the word “man”, which can mean both “member of the species Homo sapiens” and “male member of the species Homo sapiens”. The following sentence is a well-known equivocation:
“Do women need to worry about man-eating sharks?”, in which “man-eating” is construed to mean a shark that devours only male human beings.
Switch-referencing
This occurs where the referent of a word or expression in a second sentence is different from that in the immediately preceding sentence, especially where a change in referent has not been clearly identified.
Metaphor
All jackasses have long ears.
Carl is a jackass.
Therefore, Carl has long ears.
Here the equivocation is the metaphorical use of “jackass” to imply a stupid or obnoxious person instead of a male donkey.
“Better than nothing”
Margarine is better than nothing.
Nothing is better than butter.
Therefore, margarine is better than butter.
Here the equivocation is on “nothing”: in the first premise it means “not having anything at all”, while in the second it means “no thing whatsoever”.
Specific types of equivocation fallacies
See main articles: False attribution, Fallacy of quoting out of context, No true Scotsman, Shifting ground fallacy.
Ecological fallacy
Inferences about the nature of specific individuals are based solely upon aggregate statistics collected for the group to which those individuals belong.
Etymological fallacy
Reasoning that the original or historical meaning of a word or phrase is necessarily similar to its actual present-day meaning.
The etymological fallacy is a genetic fallacy that holds, erroneously, that the present-day meaning of a word or phrase should necessarily be similar to its historical meaning; this is a linguistic misconception (Kenneth G. Wilson (1993), The Columbia Guide to Standard American English, article “Etymological Fallacy”). An argument constitutes an etymological fallacy if it makes a claim about the present meaning of a word based exclusively on its etymology. Identifying the fallacy does not, however, show that etymology is irrelevant in any way, nor does it attempt to prove as much.
A variant of the etymological fallacy involves looking for the “true” meaning of words by delving into their etymologies, or claiming that a word should be used in a particular way because it has a particular etymology. A similar concept is that of false friends.
Prerequisites
An etymological fallacy becomes possible when a word has changed its meaning over time. Such changes can include a shift in scope (narrowing or widening of meanings) or of connotation (amelioration or pejoration). In some cases, meanings can also shift completely, so that the etymological meaning has no evident connection to the current meaning.
For example:
The word hound originally simply meant “dog” in general. This usage is now archaic or poetic only, and hound now almost exclusively refers to dogs bred for hunting in particular.
The meaning of a word may change to connote higher status, as when knight, originally “servant” like German Knecht, came to mean “military knight” and subsequently “someone of high rank”.
Conversely, the word knave originally meant “boy” and only gradually acquired its meaning of “person of low, despicable character”.
The word lady derives from Old English hlæf-dige (“loaf-digger; kneader of bread”), and lord from hlafweard (“loaf-ward; ensurer, provider of bread”). No connection with bread is retained in the current meaning of either word.
Examples
Not every change in meaning provokes an etymological fallacy; but such changes are frequently the basis of inaccurate arguments.
From the fact that logos is Greek for “word”, Stuart Chase concluded in his book The Tyranny of Words that logic was mere manipulation of words.
Some dictionaries of old languages do not distinguish glosses (meanings) from etymologies, as when an Old English word is defined as “one who sits on the same rowing bench; companion”. Here the only attested meaning is the second one, while the first is simply the word’s etymology.
The word gyp for “cheat” has been described as offensive because it is probably derived from Gypsy.
The word apologize comes from the Greek word ἀπολογία (apologia), which originally meant only “a speech in defence”. Later it began to carry the sense of expressing remorse, or “saying sorry”, as well as explaining or defending, in some contexts. The word eventually came to be used mainly for expressions of regret, because words of remorse would often accompany explanations, or at least some defence or justification. But some feel today that a “full apology”, in keeping with the word’s original etymology, should always include explanations, while others feel that an apology should only be an expression of remorse (“2.1 Core Elements of an Apology”, in A Time for Apologies: The Legal and Ethical Implications of Apologies in Civil Cases, Leslie H. Macleod & Associates, retrieved 24 April 2012).
Phrases such as to grow smaller or to climb down have been criticised for being incoherent, based on the “true” meanings of grow and climb.
Pitfalls
While the assumption that a word may still be used etymologically can be fallacious, the conclusion from such reasoning is not necessarily false. Some words can retain their meaning for many centuries, with extreme cases like mouse, which denoted the same animal in the Proto-Indo-European language several thousand years ago.
Fallacy of composition
Assuming that something true of part of a whole must also be true of the whole.
The fallacy of composition arises when one infers that something is true of the whole from the fact that it is true of some part of the whole (or even of every proper part). For example: “This fragment of metal cannot be fractured with a hammer; therefore the machine of which it is a part cannot be fractured with a hammer.” This is clearly fallacious, because many machines can be broken apart without any of their parts being fracturable.
This fallacy is often confused with the fallacy of hasty generalization, in which an unwarranted inference is made from a statement about a sample to a statement about the population from which it is drawn.
The fallacy of composition is the converse of the fallacy of division. The fallacy of composition is also known as the “un-ecological fallacy.”
Examples
If someone stands up out of their seat at a baseball game, they can see better. Therefore, if everyone stands up they can all see better.
If a runner runs faster, she can win the race. Therefore if all the runners run faster, they can all win the race.
An important example from economics is the Paradox of thrift: if one household saves money, it can consume more in the future. Therefore if all households save money, they can all consume more in the future.
In voting theory, the Condorcet paradox demonstrates a fallacy of composition: even if all voters have rational preferences, the collective choice induced by majority rule is not transitive, and hence not rational. The fallacy of composition occurs if, from the rationality of the individuals, one infers that society can be equally rational. The point generalizes beyond aggregation via majority rule to any reasonable aggregation rule, demonstrating that the aggregation of individual preferences into a social welfare function is fraught with severe difficulties (see Arrow’s impossibility theorem and social choice theory).
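The Condorcet cycle can be verified mechanically. The sketch below uses the standard three-voter textbook profile and checks each pairwise majority contest; every individual ranking is transitive, yet the majority preference comes out cyclic.

from itertools import combinations

ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a majority of ballots rank candidate x above candidate y."""
    votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return votes > len(ballots) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")
# Prints: A over B, C over A, B over C -- a cycle, so the collective
# preference is intransitive even though every individual's is transitive.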
Modo hoc fallacy
The modo hoc (or “just this”) fallacy is the informal error of assigning meaning to an existent based on the constituent properties of its material makeup while omitting the matter’s arrangement. For instance, metaphysical naturalism states that matter and motion are all that comprise man, but it cannot be assumed that the characteristics inherent in the elements and physical reactions that make up man ultimately and solely define man’s meaning: a cow which is alive and well and a cow which has been chopped up into meat are the same matter, but it is obviously the arrangement of that matter that clarifies those different situational meanings.
Fallacy of division
Assuming that something true of a thing must also be true of all or some of its parts.
A fallacy of division occurs when one reasons that something true of a thing must also be true of all or some of its parts.
An example:
A Boeing 747 can fly unaided across the ocean.
A Boeing 747 has jet engines.
Therefore, one of its jet engines can fly unaided across the ocean.
The converse of this fallacy is called fallacy of composition, which arises when one fallaciously attributes a property of some part of a thing to the thing as a whole. Both fallacies were addressed by Aristotle in Sophistical Refutations.
Another example:
1. Functioning brains think.
2. Functioning brains are nothing but the neurons that they are composed of.
3. If functioning brains think, then the individual neurons in them think.
4. Individual neurons do not think.
5. Functioning brains do not think. (From 3 & 4)
6. Functioning brains think and functioning brains do not think. (From 1 & 5)
Since the premises entail a contradiction (6), at least one of the premises must be false. We may diagnose the problem as located in premise 3, which quite plausibly commits the fallacy of division.
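The inconsistency is mechanical enough to check by brute force. Abbreviating “functioning brains think” as B and “individual neurons think” as N (a simplification that sets premise 2 aside), a truth-table search confirms that premises 1, 3, and 4 cannot all be true at once.

from itertools import product

def implies(p, q):
    # material conditional: "if p then q"
    return (not p) or q

for B, N in product([True, False], repeat=2):
    p1 = B                # premise 1: functioning brains think
    p3 = implies(B, N)    # premise 3: if brains think, the neurons in them think
    p4 = not N            # premise 4: individual neurons do not think
    if p1 and p3 and p4:
        print(f"consistent assignment found: B={B}, N={N}")
        break
else:
    print("no assignment satisfies premises 1, 3, and 4 together")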
An application: Famously and controversially, in the philosophy of the Greek Anaxagoras (at least as it is discussed by the Roman atomist Lucretius), it was assumed that the atoms constituting a substance must themselves have the salient observed properties of that substance: so atoms of water would be wet, atoms of iron would be hard, atoms of wool would be soft, etc. This doctrine is called homeomeria, and it plainly depends on the fallacy of division.
If a system as a whole has some property that none of its constituents has (or perhaps, it has it but not as a result of some constituent having that property), this is sometimes called a strongly emergent property of the system.
False dilemma
Two alternative statements are held to be the only possible options when in reality there are more.
A false dilemma (also called the fallacy of the false alternative, false dichotomy, the either-or fallacy, fallacy of false choice, black-and-white thinking, or the fallacy of exhaustive hypotheses) is a type of informal fallacy that involves a situation in which limited alternatives are considered, when in fact there is at least one additional option. The options may be a position that is between two extremes (such as when there are shades of grey) or may be completely different alternatives. The opposite of this fallacy is the argument to moderation.
A false dilemma can arise intentionally, when the fallacy is used in an attempt to force a choice (such as, in some contexts, the assertion that “if you are not with us, you are against us”). But the fallacy can also arise simply from the accidental omission of additional options rather than from deliberate deception.
In the community of philosophers and scholars, many believe that “unless a distinction can be made rigorous and precise it isn’t really a distinction.”Jacques Derrida (1991) Afterword: Toward An Ethic of Discussion, published in the English translation of Limited Inc., pp.123-4, 126 An exception is analytic philosopher John Searle, who called it an incorrect assumption which produces false dichotomies.Searle, John. (1983) The Word Turned Upside Down. The New York Review of Books, Volume 30, Number 16, October 27, 1983. Searle insists that “it is a condition of the adequacy of a precise theory of an indeterminate phenomenon that it should precisely characterize that phenomenon as indeterminate; and a distinction is no less a distinction for allowing for a family of related, marginal, diverging cases.” Similarly, when two options are presented, they are often, though not always, two extreme points on some spectrum of possibilities; this can lend credence to the larger argument by giving the impression that the options are mutually exclusive, even though they need not be. Furthermore, the options in false dichotomies are typically presented as being collectively exhaustive, in which case the fallacy can be overcome, or at least weakened, by considering other possibilities, or perhaps by considering a whole spectrum of possibilities, as in fuzzy logic.
Examples
Morton’s Fork
Morton’s Fork, a choice between two equally unpleasant options, is often a false dilemma. The phrase originates from an argument for taxing English nobles:
“Either the nobles of this country appear wealthy, in which case they can be taxed for good; or they appear poor, in which case they are living frugally and must have immense savings, which can be taxed for good.”Evans, Ivor H. (1989). Brewer’s Dictionary of Phrase & Fable, 14th edition, Harper & Row. ISBN 0-06-016200-7.
This is a false dilemma and a catch-22, because it fails to allow for the possibility that nobles who appear wealthy may in fact lack liquid assets, as well as the possibility that those who appear poor genuinely have no savings to tax.
False choice
The presentation of a false choice often reflects a deliberate attempt to eliminate the middle ground on an issue. A common argument against noise pollution laws involves a false choice: it might be argued that in New York City noise should not be regulated, because if it were, the city would change drastically for the worse. This argument assumes that, for example, a bar must be shut down to prevent it from causing disturbing levels of noise after midnight. It ignores the fact that the bar could simply lower its noise levels, or install soundproofing to keep the noise from transmitting excessively onto others’ properties; and even this framing is a false choice, since the noise could also be coming from the patrons outside the bar.
Black-and-white thinking
In psychology, a phenomenon related to the false dilemma is black-and-white thinking. Many people routinely engage in black-and-white thinking, an example of which is someone who labels other people as all good or all bad (AJ Giannini, “Use of fiction in therapy”, Psychiatric Times 18(7):56-57, 2001).
Falsum in uno, falsum in omnibus
The Latin phrase falsum in uno, falsum in omnibus, which roughly translates as “false in one thing, false in everything”, is fallacious insofar as someone found to be wrong about one thing is presumed to be wrong about some other thing entirely (Lynch, Jack (2008). Deception and Detection in Eighteenth-Century Britain. Ashgate Publishing, p. 73). Arising in Roman courts, the principle meant that if a witness was proved false in some parts of his testimony, any further statements were also regarded as false unless they were independently corroborated. As a general principle of reasoning, however, it is a fallacy: an initial false statement is no guarantee that further statements are false. (A single false premise does suffice to make a deductive argument unsound, but that shows the argument fails, not that its conclusion is false.) This is a special case of the association fallacy.
The status of falsum in uno, falsum in omnibus as a fallacy is independent of whether it is wise or unwise to use it as a legal rule, with witnesses being held liable for perjury if parts of their court testimony are false.
If-by-whiskey
An argument that supports both sides of an issue by using terms that are selectively emotionally sensitive.
In political discourse, if-by-whiskey is a relativist fallacy where the response to a question is contingent on the questioner’s opinions and use of words with strong positive or negative connotations (e.g., terrorist as negative and freedom fighter as positive). An if-by-whiskey argument implemented through doublespeak appears to affirm both sides of an issue, and agrees with whichever side the listener supports, in effect, taking a position without taking a position. A similar idiom is “all things to all people”, which is often used as a negative term in politics.
Canonical example
The label if-by-whiskey refers to a 1952 speech by Noah S. “Soggy” Sweat, Jr., a young lawmaker from the U.S. state of Mississippi, on the subject of whether Mississippi should continue to prohibit (which it did until 1966) or finally legalize alcoholic beverages.
The American columnist William Safire popularized the term in his column in The New York Times, but wrongly attributed it to Florida Governor Fuller Warren. He corrected this reference in his book Safire’s Political Dictionary, on page 337.
Fallacy of many questions (loaded question)
Someone asks a question that presupposes something that has not been proven or accepted by all the people involved. This fallacy is often used rhetorically, so that the question limits direct replies to those that serve the questioner’s agenda.
A loaded question is a question which contains a controversial or unjustified assumption (e.g., a presumption of guilt).
Aside from being an informal fallacy depending on usage, such questions may be used as a rhetorical tool: the question attempts to limit direct replies to those that serve the questioner’s agenda. The traditional example is the question “Have you stopped beating your wife?” Whether the respondent answers yes or no, he admits to having a wife and to having beaten her at some time in the past. These facts are presupposed by the question, which in this case is also an entrapment, because it narrows the respondent to a single answer; thus the fallacy of many questions has been committed. The fallacy relies upon context for its effect: the fact that a question presupposes something does not in itself make the question fallacious. Only when some of these presuppositions are not necessarily agreed to by the person being asked does the argument containing them become fallacious. Hence the same question may be loaded in one context but not in another. For example, the previous question would not be loaded if it were asked during a trial in which the defendant had already admitted to beating his wife (Douglas N. Walton, Informal Logic: A Handbook for Critical Argumentation, Cambridge University Press, 1989, ISBN 0-521-37925-3, pp. 36-37).
This fallacy should be distinguished from that of begging the question, which offers a premise whose plausibility depends on the truth of the proposition asked about, and which is often an implicit restatement of the proposition (Fallacy: Begging the Question, The Nizkor Project. Retrieved January 22, 2008).
The term “loaded question” is sometimes used to refer to loaded language that is phrased as a question. This type of question does not necessarily contain a fallacious presupposition, but rather this usage refers to the question having an unspoken and often emotive implication. For example, “Are you a murderer?” would be such a loaded question, as “murder” has a very negative connotation. Such a question may be asked merely to harass or upset the respondent with no intention of listening to their reply, or asked with the full expectation that the respondent will predictably deny it.
Defense
A common way out of this argument is not to answer the question (e.g., with a simple ‘yes’ or ‘no’), but to challenge the assumption behind the question. To use an earlier example, a good response to the question “Have you stopped beating your wife?” would be “I have never beaten my wife”. This removes the ambiguity of the expected response, thereby nullifying the tactic. However, askers of such questions have learned to get around this response by accusing the one who answers of dodging the question. A rhetorical question such as “Then please explain how I could possibly have beaten a wife I’ve never had?” can be an effective antidote to this further tactic, placing the burden on the deceptive questioner either to expose the tactic or to stop the line of inquiry. In many cases a short answer is important: “I neither did then, nor do I now” is a good example of how to answer the question without letting the asker interrupt and reshape the response.
Historical examples
Madeleine Albright (U.S. Ambassador to the U.N.) claims to have answered a loaded question (and later regretted not challenging it instead) on 60 Minutes on 12 May 1996. Lesley Stahl asked, regarding the effects of UN sanctions against Iraq, “We have heard that a half million children have died. I mean, that is more children than died in Hiroshima. And, you know, is the price worth it?”
Madeleine Albright: “I think that is a very hard choice, but the price, we think, the price is worth it.”
She later wrote of this response
I must have been crazy; I should have answered the question by reframing it and pointing out the inherent flaws in the premise behind it. … As soon as I had spoken, I wished for the power to freeze time and take back those words. My reply had been a terrible mistake, hasty, clumsy, and wrong. … I had fallen into a trap and said something that I simply did not mean. That is no one’s fault but my own.
President Bill Clinton, the moderator in a town meeting discussing the topic “Race in America”, in response to a participant’s argument that the issue was not affirmative action but “racial preferences”, asked the participant a loaded question: “Do you favor the United States Army abolishing the affirmative-action program that produced Colin Powell? Yes or no?”
For another example, the 2009 New Zealand corporal punishment referendum asked:
“Should a smack as part of good parental correction be a criminal offence in New Zealand?”
Murray Edridge, of Barnardos New Zealand, criticized the question as “loaded and ambiguous” and claimed “the question presupposes that smacking is a part of good parental correction”.
On June 13, 2012, National Basketball Association (NBA) commissioner David Stern asked radio personality Jim Rome the loaded question, “Have you stopped beating your wife yet?” in making a point about his feelings about Rome’s interview.
Ludic fallacy
The belief that the outcomes of non-regulated random occurrences can be encapsulated by a statistic; a failure to take into account unknown unknowns in determining the probability of an event’s taking place.
The ludic fallacy is a term coined by Nassim Nicholas Taleb in his 2007 book The Black Swan. “Ludic” is from the Latin ludus, meaning “play, game, sport, pastime” (D.P. Simpson, Cassell’s Latin and English Dictionary, New York: Hungry Minds, 1987, p. 134). It is summarized as “the misuse of games to model real-life situations” (Black Swans, the Ludic Fallacy and Wealth Management, François Sicart). Taleb explains the fallacy as “basing studies of chance on the narrow world of games and dice” (Nassim Taleb, The Black Swan, New York: Random House, 2007, p. 309).
It is a central argument of the book: a rebuttal of the mathematical models used to predict the future, as well as an attack on the idea of applying naïve and simplified statistical models in complex domains. According to Taleb, statistics works only in some domains, such as casinos, in which the odds are visible and defined. Taleb’s argument centers on the idea that predictive models are based on platonified forms, gravitating towards mathematical purity and failing to take some key ideas into account:
It is impossible to be in possession of all the information.
Very small unknown variations in the data could have a huge impact. Taleb does differentiate his idea from that of mathematical notions in chaos theory, e.g. the butterfly effect.
Theories and models based on empirical data are flawed, as they cannot anticipate events that have never taken place before and for which no conclusive explanation or account can be provided.
Examples
Example 1: Suspicious coin
One example given in the book is the following thought experiment. There are two people:
Dr John, who is regarded as a man of science and logical thinking.
Fat Tony, who is regarded as a man who lives by his wits.
A third party asks them, “assume a fair coin is flipped 99 times, and each time it comes up heads. What are the odds that the 100th flip would also come up heads?”
Dr John says that the odds are not affected by the previous outcomes, so the odds must still be 50:50.
Fat Tony says that the odds of the coin coming up heads 99 times in a row are so low (less than 1 in 6.33 × 10^29) that the initial assumption that the coin had a 50:50 chance of coming up heads is most likely incorrect.
The ludic fallacy here is to assume that in real life the rules from the purely hypothetical model (where Dr John is correct) apply. Would a reasonable person bet on black on a roulette table that has come up red 99 times in a row (especially as the reward for a correct guess is so low when compared with the probable odds that the game is fixed)?
In classical terms, highly statistically significant (unlikely) events should make one question one’s model assumptions. In Bayesian statistics, this can be modelled by using a prior distribution for one’s assumptions on the fairness of the coin, then Bayesian inference to update this distribution.
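To make the Bayesian reading concrete, the following is a minimal sketch in Python (the choice of a Beta prior and the illustrative numbers are assumptions for this example, not something given in Taleb’s text). It updates a prior over the coin’s heads-probability after the 99 observed heads and reports the predictive probability of another head:

```python
# Minimal sketch of Bayesian updating on a coin's fairness (illustrative
# assumptions: a Beta(a, b) prior over p = P(heads)). By Beta-Binomial
# conjugacy, after seeing `heads` heads in `flips` flips the posterior is
# Beta(a + heads, b + flips - heads), and the predictive probability that
# the next toss lands heads is the posterior mean.

def posterior_heads_probability(a: float, b: float, heads: int, flips: int) -> float:
    return (a + heads) / (a + b + flips)

# Dr John reasons as if the coin's fairness were beyond question;
# Fat Tony's scepticism resembles a uniform Beta(1, 1) prior over biases.
print(posterior_heads_probability(1, 1, heads=99, flips=99))  # ~0.990
```

Under a uniform prior, 99 straight heads push the predictive probability of another head to roughly 99%, which is Fat Tony’s intuition made quantitative.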
Example 2: Job interview
A man considers going to a job interview. He recently studied statistics and utility theory in college and performed well in the exams. Considering whether to take the interview, he tries to calculate the probability he will get the job versus the cost of the time spent.
This young job seeker forgets that real life has more variables than the small set he has chosen to estimate. Even with a low probability of success, a really good job may be worth the effort of going to the interview. Will he enjoy the process of the interview? Will his interview technique improve regardless of whether he gets the job or not? Even the statistics of the job business are non-linear. What other jobs could come the man’s way by meeting the interviewer? Might there be a possibility of a very high pay-off in this company that he has not thought of?
Example 3: Stock returns
Any decision theory based on a fixed universe or model of possible outcomes ignores and minimizes the impact of events which are “outside model.” For instance, a simple model of daily stock market returns may include extreme moves such as Black Monday but might not model the market breakdowns following the 2011 Japanese tsunami and its consequences. A fixed model considers the “known unknowns,” but ignores the “unknown unknowns.”
Relation to Platonicity
The ludic fallacy is a specific case of the more general problem of Platonicity defined by Taleb as:
the focus on those pure, well-defined, and easily discernible objects like triangles, or more social notions like friendship or love, at the cost of ignoring those objects of seemingly messier and less tractable structures.
Fallacy of the single cause
It is assumed that there is one, simple cause of an outcome when in reality it may have been caused by a number of only jointly sufficient causes.
The fallacy of the single cause, also known as complex cause, causal oversimplification, causal reductionism, and reduction fallacy, is a fallacy of questionable cause that occurs when it is assumed that there is a single, simple cause of an outcome when in reality it may have been caused by a number of only jointly sufficient causes.
It can be logically reduced to: X occurred after Y; therefore, Y caused X (even though A, B, C, etc. also contributed to causing X).
Often after a tragedy it is asked, “What was the cause of this?” Such language implies that there is one cause, when instead there were probably a large number of contributing factors. However, having produced a list of several contributing factors, it may be worthwhile to look for the strongest of the factors, or a single cause underlying several of them. A need for simplification may be perceived in order to make the explanation of the tragedy operational, so that responsible authorities can be seen to have taken action.
For instance, after a school shooting, editorialists debate whether it was caused by the shooter’s parents, violence in media, stress on students, or the accessibility of guns. In fact, many different causes—including some of those—may all have necessarily contributed. Similarly, the music industry might claim that peer-to-peer file sharing is the cause of a loss in profit whereas factors such as a growing videogame market and economic depression are also likely to be major factors.
Causal oversimplification is a specific kind of false dilemma where conjoint possibilities are ignored. In other words, the possible causes are assumed to be “A or B or C” when “A and B and C” or “A and B and not C” (etc.) are not taken into consideration.
A notable scientific example of what can happen when this kind of fallacy is identified and resolved is the development in economics of the Coase theorem.
False attribution
An advocate appeals to an irrelevant, unqualified, unidentified, biased or fabricated source in support of an argument.
The fallacy of a false attribution occurs when an advocate appeals to an irrelevant, unqualified, unidentified, biased or fabricated source in support of an argument. A contextomy is a type of false attribution.
A more deceptive and difficult-to-detect version of a false attribution is where a fraudulent advocate goes so far as to fabricate a source, such as creating a fake website, in order to support a claim. For example, the “Levitt Institute” was a fake organisation created in 2009 solely for the purpose of (successfully) fooling the Australian media into reporting that Sydney was Australia’s most naive city (Deception Detection Deficiency, Media Watch).
A particular case of misattribution is the Matthew effect: a quotation is often attributed to someone more famous than the real author. This leads the quotation to be more famous, but the real author to be forgotten (see also: obliteration by incorporation).
Argument to moderation
Assuming that the compromise between two positions is always correct.
Argument to moderation (Latin: argumentum ad temperantiam; also known as from middle ground, false compromise, gray fallacy and the golden mean fallacy) is an informal fallacy which asserts that the truth can be found as a compromise between two opposite positions. This fallacy’s opposite is the false dilemma.
As Vladimir Bukovsky puts it, the middle ground between the Big Lie of Soviet propaganda and the truth is a lie, and one should not be looking for a middle ground between disinformation and information (Vladimir Bukovsky, The Wind Returns: Letters of a Russian Traveler (Russian edition: Буковский В. К. И возвращается ветер. Письма русского путешественника), Moscow, 1990, ISBN 5-235-01826, p. 345). According to him, people from the Western pluralistic civilization are more prone to this fallacy because they are used to resolving problems by making compromises and accepting alternative interpretations, unlike Russians, who look for the absolute truth.
An individual demonstrating this false-compromise fallacy implies that the positions being considered represent extremes of a continuum of opinions, that such extremes are always wrong, and that the middle ground is always correct (Fallacy: Middle Ground, The Nizkor Project, accessed 29 November 2012). This is not always the case. Sometimes only X or Y is acceptable, with no middle ground possible. Additionally, the middle-ground fallacy allows any position to be invalidated, even those that have been reached by previous applications of the same method; all one must do is present yet another, radically opposed position, and the middle-ground compromise will be forced closer to that position. In politics, this is part of the basis behind Overton window theory.
It is important to note that this does not mean the middle ground position is a bad strategy, or even incorrect; only that the fact that it is moderate cannot be used as evidence of its truthfulness.
Examples
“Some would say that hydrogen cyanide is a delicious and necessary part of the human diet, but others claim it is a toxic and dangerous substance. The truth must therefore be somewhere in between.”
“Bob says we should buy a computer. Sue says we shouldn’t. Therefore, the best solution is to compromise and buy half a computer.”
“Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.” — Stan Kelly-Bootle
“The choice of 48 bytes as the ATM cell payload size, as a compromise between the 64 bytes proposed by parties from the United States and the 32 bytes proposed by European parties; the compromise was made for entirely political reasons, since it did not technically favor either of the parties.”
“The fact that one is confronted with an individual who strongly argues that slavery is wrong and another who argues equally strongly that slavery is perfectly legitimate in no way suggests that the truth must be somewhere in the middle.” (Susan T. Gardner (2009). Thinking Your Way to Freedom: A Guide to Owning Your Own Practical Reasoning. Temple University Press.)
Gambler’s fallacy
The incorrect belief that separate, independent events affect the likelihood of another random event. If a coin flip lands on heads (x) times in a row, the belief that it is “due to land on tails” is incorrect.
The gambler’s fallacy, also known as the Monte Carlo fallacy (because its most famous example happened in a Monte Carlo casino in 1913; see the Fallacy Files blog, “What happened at Monte Carlo in 1913”), and also referred to as the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, future deviations in the opposite direction are then more likely.
An example: coin-tossing
The gambler’s fallacy can be illustrated by considering the repeated toss of a fair coin. With a fair coin, the outcomes in different tosses are statistically independent and the probability of getting heads on a single toss is exactly 1/2 (one in two). It follows that the probability of getting two heads in two tosses is 1/4 (one in four) and the probability of getting three heads in three tosses is 1/8 (one in eight).
Now suppose that we have just tossed four heads in a row, so that if the next coin toss were also to come up heads, it would complete a run of five successive heads. Since the probability of a run of five successive heads is only 1/32 (one in thirty-two), a person subject to the gambler’s fallacy might believe that this next flip is less likely to be heads than to be tails. However, this is not correct; that belief is a manifestation of the gambler’s fallacy. The event of 5 heads in a row and the event of “first 4 heads, then a tails” are equally likely, each having probability 1/32. Given that the first four tosses turn up heads, the probability that the next toss is a head is in fact 1/2.
While the probability of a run of five heads is only 1/32 = 0.03125, that is its probability only before the coin is first tossed. After the first four tosses the results are no longer unknown, so their probabilities are 1. The fallacy lies in reasoning that the next toss is more likely to be a tail than a head due to the past tosses, i.e., that a run of luck in the past somehow influences the odds in the future.
Explaining why the probability is 1/2 for a fair coin
We can see from the above that, if one flips a fair coin 21 times, the probability of 21 heads is 1 in 2,097,152. However, the probability of flipping a head after having already flipped 20 heads in a row is simply 1/2. This is an application of Bayes’ theorem.
This can also be seen without knowing that 20 heads have occurred for certain (without applying Bayes’ theorem). Consider the following two probabilities, assuming a fair coin:
probability of 20 heads, then 1 tail = 0.5^20 × 0.5 = 0.5^21
probability of 20 heads, then 1 head = 0.5^20 × 0.5 = 0.5^21
The probability of getting 20 heads then 1 tail, and the probability of getting 20 heads then another head, are both 1 in 2,097,152. Therefore, it is equally likely to flip 21 heads as it is to flip 20 heads and then 1 tail when flipping a fair coin 21 times. Furthermore, each of these sequences is exactly as likely as any other 21-flip combination that can be obtained (there are 2,097,152 in total); every 21-flip combination has probability 0.5^21, or 1 in 2,097,152. From these observations, there is no reason to assume at any point that a change of luck is warranted based on prior trials (flips), because every outcome observed will always have been as likely as the other outcomes that were not observed for that particular trial, given a fair coin. Therefore, just as Bayes’ theorem shows, the result of each trial comes down to the base probability of the fair coin: 1/2.
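The same conclusion can be checked by brute force. The sketch below (an illustrative Python enumeration, not part of the original text) lists all 2^21 equally likely sequences of 21 fair-coin flips and counts how the sequences beginning with 20 heads end:

```python
from itertools import product

# Enumerate all 2^21 equally likely sequences of 21 fair-coin flips and,
# among those that begin with 20 heads, count how many end in a head.
n = 21
begin_with_20_heads = 0
end_in_head_too = 0
for seq in product("HT", repeat=n):
    if all(flip == "H" for flip in seq[:n - 1]):
        begin_with_20_heads += 1
        if seq[-1] == "H":
            end_in_head_too += 1

# Exactly 2 of the 2,097,152 sequences begin with 20 heads, and exactly
# 1 of those ends in a head, so the conditional probability is 1/2.
print(end_in_head_too / begin_with_20_heads)  # 0.5
```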
Other examples
There is another way to emphasize the fallacy. As already mentioned, the fallacy is built on the notion that previous failures indicate an increased probability of success on subsequent attempts. This is, in fact, the inverse of what actually happens, even with a fair chance of a successful event, given a set number of iterations. Assume a fair 16-sided die, where a win is defined as rolling a 1, and assume a player is given 16 rolls to obtain at least one win. The low winning odds are just to make the change in probability more noticeable. The probability of having at least one win in the 16 rolls is:
1 − P(rolling no ones in 16 rolls) = 1 − (15/16)^16 ≈ 64.39%
However, assume now that the first roll was a loss (a 15/16 = 93.75% chance of that). The player now has only 15 rolls left and, according to the fallacy, should have a higher chance of winning since one loss has occurred. His chances of having at least one win are now:
1 − (15/16)^15 ≈ 62.02%
Simply by losing one toss the player’s probability of winning dropped by about 2 percentage points. By the time this reaches 5 losses (11 rolls left), his probability of winning on one of the remaining rolls will have dropped to ~50%. The player’s odds for at least one win in those 16 rolls have not increased given a series of losses; his odds have decreased because he has fewer iterations left to win. In other words, the previous losses in no way contribute to the odds of the remaining attempts, but there are fewer remaining attempts to gain a win, which results in a lower probability of obtaining it.
The player becomes more likely to lose in a set number of iterations as he fails to win, and eventually his probability of winning will again equal the probability of winning a single toss, when only one toss is left: 6.25% in this instance.
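A short computation makes the point directly. This sketch (illustrative Python, mirroring the numbers above) shows that the probability of at least one win depends only on how many rolls remain:

```python
# Probability of at least one win (rolling a 1 on a fair 16-sided die)
# given the number of rolls remaining: past losses do not raise the
# chance of a win; they only shrink the number of chances left.
def p_at_least_one_win(rolls_left: int, sides: int = 16) -> float:
    return 1 - (1 - 1 / sides) ** rolls_left

for rolls_left in (16, 15, 11, 1):
    print(rolls_left, f"{p_at_least_one_win(rolls_left):.2%}")
# 16 -> 64.39%, 15 -> 62.02%, 11 -> 50.83%, 1 -> 6.25%
```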
Some lottery players will choose the same numbers every time, or intentionally change their numbers, but both are equally likely to win any individual lottery draw. Copying the numbers that won the previous lottery draw gives an equal probability, although a rational gambler might attempt to predict other players’ choices and then deliberately avoid these numbers. Low numbers (below 31 and especially below 12) are popular because people play birthdays as their so-called lucky numbers; hence a win in which these numbers are over-represented is more likely to result in a shared payout.
A joke told among mathematicians demonstrates the nature of the fallacy. When flying on an aircraft, a man decides to always bring a bomb with him. “The chances of an aircraft having a bomb on it are very small,” he reasons, “and certainly the chances of having two are almost none!” A similar example is in the book The World According to Garp when the hero Garp decides to buy a house a moment after a small plane crashes into it, reasoning that the chances of another plane hitting the house have just dropped to zero.
Reverse fallacy
The reversal is also a fallacy (not to be confused with the inverse gambler’s fallacy) in which a gambler may instead decide, after a consistent tendency towards tails, that tails are more likely out of some mystical preconception that fate has thus far allowed for consistent results of tails. Believing the odds to favor tails, the gambler sees no reason to change to heads. Again, the fallacy is the belief that the “universe” somehow carries a memory of past results which tend to favor or disfavor future outcomes.
Caveats
In most illustrations of the gambler’s fallacy and the reversed gambler’s fallacy, the trial (e.g. flipping a coin) is assumed to be fair. In practice, this assumption may not hold.
For example, if one flips a fair coin 21 times, then the probability of 21 heads is 1 in 2,097,152 (see above). If the coin is fair, then the probability of the next flip being heads is 1/2. However, because the odds of flipping 21 heads in a row are so slim, it may well be that the coin is somehow biased towards landing on heads, or that it is being controlled by hidden magnets, or similar (Martin Gardner, Entertaining Mathematical Puzzles, Dover Publications, pp. 69–70). In this case, the smart bet is “heads”, because Bayesian inference from the empirical evidence (21 “heads” in a row) suggests that the coin is likely to be biased toward “heads”, contradicting the general assumption that the coin is fair.
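As a rough illustration of that Bayesian reading (the prior odds and the assumed size of the bias below are made-up numbers, not taken from Gardner or the text), one can compare a “fair coin” hypothesis against a “biased coin” hypothesis after seeing 21 heads:

```python
# Sketch: posterior probability that the coin is biased after 21 heads,
# comparing H_fair (P(heads) = 0.5) against H_biased (P(heads) = 0.9).
# The prior (1 biased coin in 10,000) and the 0.9 bias are assumptions.
prior_biased = 1e-4
heads = 21

likelihood_fair = 0.5 ** heads    # ~4.8e-7
likelihood_biased = 0.9 ** heads  # ~0.109

posterior_biased = (prior_biased * likelihood_biased) / (
    prior_biased * likelihood_biased + (1 - prior_biased) * likelihood_fair
)
print(f"{posterior_biased:.1%}")  # ~95.8%
```

Even a tiny prior suspicion of bias comes to dominate once 21 heads in a row have been observed, which is why the smart bet is “heads”.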
The opening scene of the play Rosencrantz and Guildenstern Are Dead by Tom Stoppard discusses these issues as one man continually flips heads and the other considers various possible explanations.
Childbirth
Instances of the gambler’s fallacy being applied to childbirth can be traced all the way back to 1796, in Pierre-Simon Laplace’s A Philosophical Essay on Probabilities. Laplace wrote of the ways in which men calculated their probability of having sons: “I have seen men, ardently desirous of having a son, who could learn only with anxiety of the births of boys in the month when they expected to become fathers. Imagining that the ratio of these births to those of girls ought to be the same at the end of each month, they judged that the boys already born would render more probable the births next of girls.” In short, the expectant fathers feared that if more sons were born in the surrounding community, then they themselves would be more likely to have a daughter (Barron, G. and Leider, S. (2010). The role of experience in the gambler’s fallacy. Journal of Behavioral Decision Making, 23, 117–129).
Some expectant parents believe that, after having multiple children of the same sex, they are “due” to have a child of the opposite sex. While the Trivers–Willard hypothesis predicts that birth sex is dependent on living conditions (i.e. more male children are born in “good” living conditions, while more female children are born in poorer living conditions), the probability of having a child of either gender is still generally regarded as 50/50.
Monte Carlo Casino
The most famous example of the gambler’s fallacy occurred in a game of roulette at the Monte Carlo Casino on August 18, 1913, when the ball fell in black 26 times in a row (“Roulette”, in The Universal Book of Mathematics: From Abracadabra to Zeno’s Paradoxes, by David Darling, John Wiley & Sons, 2004, p. 278). This was an extremely uncommon occurrence, although no more nor less common than any of the other 67,108,863 sequences of 26 reds or blacks. Gamblers lost millions of francs betting against black, reasoning incorrectly that the streak was causing an “imbalance” in the randomness of the wheel, and that it had to be followed by a long streak of red.
Non-examples of the fallacy
There are many scenarios where the gambler’s fallacy might superficially seem to apply but actually does not. When the outcomes of different events are not independent, the probability of future events can change based on the outcome of past events (see statistical permutation). Formally, the system is said to have memory. An example of this is cards drawn without replacement. If an ace is drawn from a deck and not reinserted, the next draw is less likely to be an ace and more likely to be of another rank. The odds of drawing another ace, assuming that the first ace was the first card drawn and that there are no jokers, have decreased from 4/52 (7.69%) to 3/51 (5.88%), while the odds for each other rank have increased from 4/52 (7.69%) to 4/51 (7.84%). This type of effect is what allows card counting systems to work (for example, in the game of blackjack).
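The arithmetic behind those figures is straightforward; this sketch (illustrative Python) computes the same odds with exact fractions:

```python
from fractions import Fraction

# Drawing without replacement gives the system "memory": removing an
# ace changes the odds of every rank on the next draw. (Fraction
# auto-reduces, so 4/52 prints as 1/13 and 3/51 prints as 1/17.)
p_ace_first = Fraction(4, 52)             # 4/52 ~ 7.69%
p_ace_after_one_ace = Fraction(3, 51)     # 3/51 ~ 5.88%
p_other_rank_after_ace = Fraction(4, 51)  # 4/51 ~ 7.84%

for label, p in [("ace on first draw", p_ace_first),
                 ("ace after one ace removed", p_ace_after_one_ace),
                 ("a given other rank after one ace removed", p_other_rank_after_ace)]:
    print(f"{label}: {p} = {float(p):.2%}")
```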
The reversed gambler’s fallacy may appear to apply in the story of Joseph Jagger, who hired clerks to record the results of roulette wheels in Monte Carlo. He discovered that one wheel favored nine numbers and won large sums of money until the casino started rebalancing the roulette wheels daily. In this situation, the observation of the wheel’s behavior provided information about the physical properties of the wheel rather than its “probability” in some abstract sense, a concept which is the basis of both the gambler’s fallacy and its reversal. Even a biased wheel’s past results will not affect future results, but the results can provide information about what sort of results the wheel tends to produce. However, if it is known for certain that the wheel is completely fair, then past results provide no information about future ones.
The outcome of future events can be affected if external factors are allowed to change the probability of the events (e.g., changes in the rules of a game affecting a sports team’s performance levels). Additionally, an inexperienced player’s success may decrease after opposing teams discover his weaknesses and exploit them. The player must then attempt to compensate and randomize his strategy. Such analysis is part of game theory.
Non-example: unknown probability of event
When the probability of repeated events is not known, outcomes may not be equally probable. In the case of coin tossing, as a run of heads gets longer and longer, the likelihood that the coin is biased towards heads increases. If one flips a coin 21 times in a row and obtains 21 heads, one might rationally conclude that there is a high probability of bias towards heads, and hence conclude that future flips of this coin are also highly likely to be heads. In fact, Bayesian inference can be used to show that, when the long-run proportions of the different outcomes are unknown but exchangeable (meaning that the random process from which they are generated may be biased, but is equally likely to be biased in any direction) and previous observations demonstrate the likely direction of the bias, the outcome which has occurred most often in the observed data is the most likely to occur again (O’Neill, B. and Puza, B.D. (2004). Dice have no memories but I do: A defence of the reverse gambler’s belief. Reprinted in abridged form as O’Neill, B. and Puza, B.D. (2005). In defence of the reverse gambler’s belief. The Mathematical Scientist 30(1), pp. 13–16).
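One simple exchangeable model makes this concrete: under a uniform prior over the coin’s unknown heads-probability, the predictive probability of another head after k heads in n flips is (k + 1)/(n + 2), Laplace’s rule of succession. (This particular model is offered here as an illustrative sketch; the cited paper’s own derivation may differ.)

```python
from fractions import Fraction

# Laplace's rule of succession: with a uniform prior over the unknown
# heads-probability p, P(next head | k heads in n flips) = (k+1)/(n+2).
def rule_of_succession(k: int, n: int) -> Fraction:
    return Fraction(k + 1, n + 2)

print(rule_of_succession(21, 21))  # 22/23 ~ 0.957: after 21 straight
                                   # heads, another head is the best bet.
```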
Psychology behind the fallacy
Origins
The gambler’s fallacy arises out of a belief in the law of small numbers, the erroneous belief that small samples must be representative of the larger population. According to the fallacy, “streaks” must eventually even out in order to be representative (Burns, B.D. and Corpus, B. (2004). Randomness and inductions from streaks: “Gambler’s fallacy” versus “hot hand”. Psychonomic Bulletin and Review, 11, 179–184). Amos Tversky and Daniel Kahneman first proposed that the gambler’s fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic, which states that people evaluate the probability of a certain event by assessing how similar it is to events they have experienced before, and how similar the events surrounding those two processes are. According to this view, “after observing a long run of red on the roulette wheel, for example, most people erroneously believe that black will result in a more representative sequence than the occurrence of an additional red” (Tversky & Kahneman, 1974), so people expect that a short run of random outcomes should share properties of a longer run, specifically in that deviations from average should balance out. When people are asked to make up a random-looking sequence of coin tosses, they tend to make sequences where the proportion of heads to tails stays closer to 0.5 in any short segment than would be predicted by chance (insensitivity to sample size); Kahneman and Tversky interpret this to mean that people believe short sequences of random events should be representative of longer ones (Tversky & Kahneman, 1971). The representativeness heuristic is also cited as an explanation of the related clustering illusion, in which people see streaks of random events as non-random even though such streaks are actually much more likely to occur in small samples than people expect.
The gambler’s fallacy can also be attributed to the mistaken belief that gambling (or even chance itself) is a fair process that can correct itself in the event of streaks, otherwise known as the just-world hypothesis (Rogers, P. (1998). The cognitive psychology of lottery gambling: A theoretical review. Journal of Gambling Studies, 14, 111–134). Other researchers believe that individuals with an internal locus of control - that is, people who believe that gambling outcomes are the result of their own skill - are more susceptible to the gambler’s fallacy because they reject the idea that chance could overcome skill or talent (Sundali, J. and Croson, R. (2006). Biases in casino betting: The hot hand and the gambler’s fallacy. Judgment and Decision Making, 1, 1–12).
Variations of the gambler’s fallacy
Some researchers believe that there are actually two types of gambler’s fallacy: Type I and Type II. Type I is the “classic” gambler’s fallacy, in which individuals believe that a certain outcome is “due” after a long streak of another outcome. Type II gambler’s fallacy, as defined by Gideon Keren and Charles Lewis, occurs when a gambler underestimates how many observations are needed to detect a favorable outcome (such as watching a roulette wheel for a length of time and then betting on the numbers that appear most often). Detecting a bias that will lead to a favorable outcome takes an impractically large amount of time and is very difficult, if not impossible, to do; therefore people fall prey to the Type II gambler’s fallacy (Keren, G. and Lewis, C. (1994). The two fallacies of gamblers: Type I and Type II. Organizational Behavior and Human Decision Processes, 60, 75–89). The two types are different in that Type I wrongly assumes that gambling conditions are fair and perfect, while Type II assumes that the conditions are biased, and that this bias can be detected after a certain amount of time.
Another variety, known as the retrospective gambler’s fallacy, occurs when individuals judge that a seemingly rare event must come from a longer sequence than a more common event does. For example, people believe that an imaginary sequence of die rolls is more than three times as long when a set of three 6’s is observed as opposed to when there are only two 6’s. This effect can be observed in isolated instances, or even sequentially. A real-world example: when a teenager becomes pregnant after having unprotected sex, people assume that she has been engaging in unprotected sex for longer than someone who has been engaging in unprotected sex but is not pregnant (Oppenheimer, D.M. and Monin, B. (2009). The retrospective gambler’s fallacy: Unlikely events, constructing the past, and multiple universes. Judgment and Decision Making, 4, 326–334).
Relationship to hot-hand fallacy
Another psychological perspective states that the gambler’s fallacy can be seen as the counterpart to basketball’s hot-hand fallacy. In the hot-hand fallacy, people tend to predict the same outcome as the last event (positive recency) - that a high scorer will continue to score. In the gambler’s fallacy, however, people predict the opposite outcome of the last event (negative recency) - for example, that since the roulette wheel has landed on black the last six times, it is due to land on red next. Ayton and Fischer have theorized that people display positive recency for the hot-hand fallacy because the fallacy deals with human performance, and people do not believe that an inanimate object can become “hot”. Human performance is not perceived as random, and people are more likely to continue streaks when they believe that the process generating the results is nonrandom. Usually, when a person exhibits the gambler’s fallacy, they are more likely to exhibit the hot-hand fallacy as well, suggesting that one construct is responsible for both fallacies.
The difference between the two fallacies is also represented in economic decision-making. A study by Huber, Kirchler, and Stockl (2010) examined how the hot hand and the gambler’s fallacy are exhibited in the financial market. The researchers gave their participants a choice: they could either bet on the outcome of a series of coin tosses, use an “expert” opinion to sway their decision, or choose a risk-free alternative instead for a smaller financial reward. Participants turned to the “expert” opinion to make their decision 24% of the time based on their past experience of success, which exemplifies the hot-hand. If the expert was correct, 78% of the participants chose the expert’s opinion again, as opposed to 57% doing so when the expert was wrong. The participants also exhibited the gambler’s fallacy, with their selection of either heads or tails decreasing after noticing a streak of that outcome. This experiment helped bolster Ayton and Fischer’s theory that people put more faith in human performance than they do in seemingly random processes.
Neurophysiology
While the representativeness heuristic and other cognitive biases are the most commonly cited cause of the gambler’s fallacy, research suggests that there may be a neurological component to it as well. Functional magnetic resonance imaging has revealed that, after losing a bet or gamble (“riskloss”), the frontoparietal network of the brain is activated, resulting in more risk-taking behavior. In contrast, there is decreased activity in the amygdala, caudate and ventral striatum after a riskloss. Activation in the amygdala is negatively correlated with gambler’s fallacy - the more activity exhibited in the amygdala, the less likely an individual is to fall prey to the gambler’s fallacy. These results suggest that gambler’s fallacy relies more on the prefrontal cortex (responsible for executive, goal-directed processes) and less on the brain areas that control affective decision-making.
The desire to continue gambling or betting is controlled by the striatum, which supports a choice-outcome contingency learning method. The striatum processes the errors in prediction and the behavior changes accordingly. After a win, the positive behavior is reinforced and after a loss, the behavior is conditioned to be avoided. In individuals exhibiting the gambler’s fallacy, this choice-outcome contingency method is impaired, and they continue to make risks after a series of losses.
Possible solutions
The gambler’s fallacy is a deep-seated cognitive bias and therefore very difficult to eliminate. For the most part, educating individuals about the nature of randomness has not proven effective in reducing or eliminating any manifestation of the fallacy. Participants in an early study by Beach and Swensson (1967) were shown a shuffled deck of index cards with shapes on them, and were told to guess which shape would come next in a sequence. The experimental group of participants was informed about the nature and existence of the gambler’s fallacy, and was explicitly instructed not to rely on “run dependency” to make their guesses. The control group was not given this information. Even so, the response styles of the two groups were similar, indicating that the experimental group still based its choices on the length of the run sequence. Clearly, instructing individuals about randomness is not sufficient to lessen the gambler’s fallacy.
It does appear, however, that an individual’s susceptibility to the gambler’s fallacy decreases with age. Fischbein and Schnarch (1997) administered a questionnaire to five groups: students in grades 5, 7, 9, and 11, and college students specializing in teaching mathematics. None of the participants had received any prior education regarding probability. The question was, “Ronni flipped a coin three times and in all cases heads came up. Ronni intends to flip the coin again. What is the chance of getting heads the fourth time?” The results indicated that the older the students were, the less likely they were to answer with “smaller than the chance of getting tails”, which would indicate a negative recency effect. 35% of the 5th graders, 35% of the 7th graders, and 20% of the 9th graders exhibited the negative recency effect. Only 10% of the 11th graders answered this way, however, and none of the college students did. Fischbein and Schnarch therefore theorized that an individual’s tendency to rely on the representativeness heuristic and other cognitive biases can be overcome with age.
Another possible solution that could be seen as more proactive comes from Roney and Trick, Gestalt psychologists who suggest that the fallacy may be eliminated as a result of grouping. When a future event (e.g., a coin toss) is described as part of a sequence, no matter how arbitrarily, a person will automatically consider the event as it relates to the past events, resulting in the gambler’s fallacy. When a person considers every event as independent, however, the fallacy can be greatly reduced.
In their experiment, Roney and Trick told participants that they were betting on either two blocks of six coin tosses, or on two blocks of seven coin tosses. The fourth, fifth, and sixth tosses all had the same outcome, either three heads or three tails. The seventh toss was grouped with either the end of one block, or the beginning of the next block. Participants exhibited the strongest gambler’s fallacy when the seventh trial was part of the first block, directly after the sequence of three heads or tails. Additionally, the researchers pointed out how insidious the fallacy can be - the participants that did not show the gambler’s fallacy showed less confidence in their bets and bet fewer times than the participants who picked “with” the gambler’s fallacy. However, when the seventh trial was grouped with the second block (and was therefore perceived as not being part of a streak), the gambler’s fallacy did not occur.
Roney and Trick argue that a solution to gambler’s fallacy could be, instead of teaching individuals about the nature of randomness, training people to treat each event as if it is a beginning and not a continuation of previous events. This would prevent people from gambling when they are losing in the vain hope that their chances of winning are due to increase.