Fallacies Flashcards

1
Q

Argument from fallacy

A

Assumes that if an argument for some conclusion is fallacious then the conclusion itself is false.

Argument from fallacy is the formal fallacy of analyzing an argument and inferring that, since it contains a fallacy, its conclusion must be false. [K. S. Pope (2003), “Logical Fallacies in Psychology: 21 Types”, Fallacies & Pitfalls in Psychology] It is also called argument to logic (argumentum ad logicam), the fallacy fallacy, or the fallacist’s fallacy.

Fallacious arguments can arrive at true conclusions, so inferring the falsity of the conclusion from the fallaciousness of the argument is an error of relevance.

Form

It has the general argument form:

If P, then Q.
P is a fallacious argument.
Therefore, Q is false.
In schematic form:

C, since argument A.
A is fallacious.
Therefore, ¬C.

Thus, it is a special case of denying the antecedent where the antecedent, rather than being a proposition that is false, is an entire argument that is fallacious. A fallacious argument, just as with a false antecedent, can still have a consequent that happens to be true. The fallacy lies in concluding that the consequent of a fallacious argument has to be false.

That the argument is fallacious only means that the argument cannot succeed in proving its consequent. [John Woods, The Death of Argument: Fallacies in Agent-Based Reasoning, Springer, 2004, pp. XXIII–XXV] But showing how one argument in a complex thesis is fallaciously reasoned does not necessarily invalidate the proof; the complete proof could still logically imply its conclusion if that conclusion is not dependent on the fallacy.

Examples:

Tom: All cats are animals. Ginger is an animal. This means Ginger is a cat.
Bill: Ah, you just committed the affirming the consequent logical fallacy. Sorry, you are wrong, which means that Ginger is not a cat.
Tom: OK – I’ll prove I’m English – I speak English so that proves it.
Bill: But Americans and Canadians, among others, speak English too. You have committed the package-deal fallacy, assuming that speaking English and being English always go together. That means you are not English.
Both of Bill’s rebuttals are arguments from fallacy, because Ginger may or may not be a cat, and Tom may or may not be English. Of course, the mere fact that one can invoke the argument from fallacy against a position does not automatically “prove” one’s own position either, as this would itself be yet another argument from fallacy. An example of this false reasoning follows:

Joe: Bill’s assumption that Ginger is not a cat uses the argument from fallacy. Therefore, Ginger absolutely must be a cat.
An argument using fallacious reasoning is capable of being consequentially correct.
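This last point can be checked mechanically; a minimal truth-table sketch (not part of the original example) showing that the premises of an invalid form leave the conclusion undetermined:

```python
# Affirming the consequent ("if P then Q; Q; therefore P") is invalid,
# yet its conclusion is sometimes true: the premises leave P open.
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

rows = [(p, q) for p, q in product([False, True], repeat=2)
        if implies(p, q) and q]                 # rows where both premises hold
conclusion_sometimes_true = any(p for p, _ in rows)
conclusion_sometimes_false = any(not p for p, _ in rows)
print(conclusion_sometimes_true, conclusion_sometimes_false)  # True True
```

Both values are True: the premises are consistent with the conclusion being true and with it being false, which is exactly why rejecting the conclusion outright is a fallacy.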

2
Q

Base rate fallacy

A

Making a probability judgement based on conditional probabilities, without taking into account the effect of prior probabilities.

The base rate fallacy, also called base rate neglect or base rate bias, is an error that occurs when the conditional probability of some hypothesis H given some evidence E is assessed without taking into account the prior probability (“base rate”) of H and the total probability of the evidence E. The conditional probability can be expressed as P(H|E), the probability of H given E. The base rate error happens when the values of sensitivity and specificity, which depend only on the test itself, are used in place of positive predictive value and negative predictive value, which depend on both the test and the baseline prevalence of the event.

Example

In a city of 1 million inhabitants there are 100 terrorists and 999,900 non-terrorists. To simplify the example, it is assumed that the only people in the city are inhabitants. Thus, the base rate probability of a randomly selected inhabitant of the city being a terrorist is 0.0001, and the base rate probability of that same inhabitant being a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automatic facial recognition software. The software has two failure rates of 1%:

The false negative rate: If the camera scans a terrorist, a bell will ring 99% of the time, and it will fail to ring 1% of the time.
The false positive rate: If the camera scans a non-terrorist, a bell will not ring 99% of the time, but it will ring 1% of the time.
Suppose now that an inhabitant triggers the alarm. What is the chance that the person is a terrorist? In other words, what is P(T|B), the probability that a terrorist has been detected given the ringing of the bell? Someone making the base rate fallacy would infer that there is a 99% chance that the detected person is a terrorist. Although the inference seems to make sense, it is actually bad reasoning, and the calculation below will show that the chance they are a terrorist is actually near 1%, not near 99%.

The fallacy arises from confusing the natures of two different failure rates. The ‘number of non-bells per 100 terrorists’ and the ‘number of non-terrorists per 100 bells’ are unrelated quantities. One does not necessarily equal the other, and they don’t even have to be almost equal. To show this, consider what happens if an identical alarm system were set up in a second city with no terrorists at all. As in the first city, the alarm sounds for 1 out of every 100 non-terrorist inhabitants detected, but unlike in the first city, the alarm never sounds for a terrorist. Therefore 100% of all occasions of the alarm sounding are for non-terrorists, but a false negative rate cannot even be calculated. The ‘number of non-terrorists per 100 bells’ in that city is 100, yet P(T|B) = 0%. There is zero chance that a terrorist has been detected given the ringing of the bell.

Imagine that the city’s entire population of one million people pass in front of the camera. About 99 of the 100 terrorists will trigger the alarm, and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, among which about 99 will be terrorists. So the probability that a person triggering the alarm is actually a terrorist is only about 99 in 10,098, which is less than 1%, and very far below our initial guess of 99%.
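The same figures follow from Bayes’ theorem; a quick sketch of the calculation (all numbers taken from the example above):

```python
# Bayes' theorem applied to the alarm example.
population = 1_000_000
terrorists = 100

p_t = terrorists / population        # base rate P(T) = 0.0001
p_bell_given_t = 0.99                # alarm rings for a terrorist
p_bell_given_not_t = 0.01            # false positive rate

# Total probability that the bell rings for a random inhabitant.
p_bell = p_bell_given_t * p_t + p_bell_given_not_t * (1 - p_t)

# Posterior P(T|B): probability of a terrorist given the bell.
p_t_given_bell = p_bell_given_t * p_t / p_bell
print(round(p_t_given_bell, 4))  # 0.0098, i.e. just under 1%
```

Substituting the camera’s 99% hit rate for this posterior is precisely the base rate fallacy.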

The base rate fallacy is so misleading in this example because there are many more non-terrorists than terrorists. If, instead, the city had about as many terrorists as non-terrorists, and the false-positive rate and the false-negative rate were nearly equal, then the probability of misidentification would be about the same as the false-positive rate of the device. These special conditions hold sometimes: as for instance, about half the women undergoing a pregnancy test are actually pregnant, and some pregnancy tests give about the same rates of false positives and of false negatives. In this case, the rate of false positives per positive test will be nearly equal to the rate of false positives per nonpregnant woman. This is why it is very easy to fall into this fallacy: by coincidence it gives the correct answer in many common situations.

In many real-world situations, though, particularly problems like detecting criminals in a largely law-abiding population, the small proportion of targets in the large population makes the base rate fallacy very applicable. Even a very low false-positive rate will result in so many false alarms as to make such a system useless in practice.

Findings in psychology

In experiments, people have been found to prefer individuating information over general information when the former is available.

In some experiments, students were asked to estimate the grade point averages (GPAs) of hypothetical students. When given relevant statistics about GPA distribution, students tended to ignore them if given descriptive information about the particular student, even if the new descriptive information was obviously of little or no relevance to school performance. This finding has been used to argue that interviews are an unnecessary part of the college admissions process because interviewers are unable to pick successful candidates better than basic statistics.

Psychologists Daniel Kahneman and Amos Tversky attempted to explain this finding in terms of a simple rule or “heuristic” called representativeness. They argued that many judgements relating to likelihood, or to cause and effect, are based on how representative one thing is of another, or of a category. Richard Nisbett has argued that some attributional biases like the fundamental attribution error are instances of the base rate fallacy: people underutilize “consensus information” (the “base rate”) about how others behaved in similar situations and instead prefer simpler dispositional attributions.

Kahneman considers base rate neglect to be a specific form of extension neglect.

3
Q

Conjunction fallacy

A

Assumption that an outcome simultaneously satisfying multiple conditions is more probable than an outcome satisfying a single one of them.

The conjunction fallacy is a formal fallacy that occurs when it is assumed that specific conditions are more probable than a single general one.

The most often-cited example of this fallacy originated with Amos Tversky and Daniel Kahneman: [Tversky & Kahneman (1982, 1983)]

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.
90% of those asked chose option 2. However, the probability of two events occurring together (in “conjunction”) is always less than or equal to the probability of either one occurring alone; formally, for two events A and B, Pr(A and B) ≤ Pr(A), and Pr(A and B) ≤ Pr(B).

For example, even choosing a very low probability of Linda being a bank teller, say Pr(Linda is a bank teller) = 0.05, and a high probability that she would be a feminist, say Pr(Linda is a feminist) = 0.95, then, assuming independence, Pr(Linda is a bank teller and Linda is a feminist) = 0.05 × 0.95 = 0.0475, lower than Pr(Linda is a bank teller).
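The inequality is easy to check numerically; a minimal sketch using the illustrative probabilities above (independence is assumed, as in the text):

```python
# Conjunction rule: Pr(A and B) never exceeds Pr(A) or Pr(B).
p_bank_teller = 0.05    # illustrative: Linda is a bank teller
p_feminist = 0.95       # illustrative: Linda is a feminist

# Assuming independence, the joint probability is the product.
p_both = p_bank_teller * p_feminist
assert p_both <= p_bank_teller and p_both <= p_feminist
print(round(p_both, 4))  # 0.0475
```

However high Pr(feminist) is pushed, multiplying by it can only shrink the joint probability.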

Tversky and Kahneman argue that most people get this problem wrong because they use a heuristic called representativeness to make this kind of judgment: option 2 seems more “representative” of Linda based on the description of her, even though it is clearly mathematically less likely. [Tversky & Kahneman (1983)]

In other demonstrations they argued that a specific scenario seemed more likely because of representativeness, even though each added detail actually made the scenario less and less likely. In this way it is similar to the misleading vividness or slippery slope fallacies. More recently Kahneman has argued that the conjunction fallacy is a type of extension neglect. [Kahneman (2003)]

Joint versus separate evaluation

In some experimental demonstrations the conjoint option is evaluated separately from its basic option. In other words, one group of participants is asked to rank order the likelihood that Linda is a bank teller, a high school teacher, and several other options, and another group is asked to rank order whether Linda is a bank teller and active in the feminist movement versus the same set of options (without “Linda is a bank teller” as an option). In this type of demonstration, different groups of subjects still rank order Linda as a bank teller and active in the feminist movement more highly than Linda as a bank teller.

Separate evaluation experiments preceded the earliest joint evaluation experiments, and Kahneman and Tversky were surprised when the effect was still observed under joint evaluation. [Kahneman (2011), chapter 15]

In separate evaluation the term conjunction effect may be preferred.

Criticism of the Linda problem

Critics such as Gerd Gigerenzer and Ralph Hertwig have criticized the Linda problem on grounds such as its wording and framing. The question may violate conversational maxims, in that people assume the question obeys the maxim of relevance. Gigerenzer argues that some of the terminology used has polysemous meanings, alternatives of which he claimed were more “natural”. He argues that one meaning of probable, “what happens frequently”, corresponds to the mathematical probability people are supposed to be tested on, but other meanings, “what is plausible” and “whether there is evidence”, do not. [Gigerenzer (1996); Hertwig & Gigerenzer (1999)] The term “and” has even been argued to have relevant polysemous meanings. [Mellers, Hertwig & Kahneman (2001)] Many techniques have been developed to control for this possible misinterpretation, but none of them has dissipated the effect. [Moro (2009); Tentori & Crupi (2012)]

Many variations in wording of the Linda problem were studied by Tversky and Kahneman. If the first option is changed to obey conversational relevance, i.e., “Linda is a bank teller whether or not she is active in the feminist movement”, the effect is decreased, but the majority (57%) of the respondents still commit the conjunction error. If the probability is changed to frequency format (see the debiasing section below) the effect is reduced or eliminated. However, studies exist in which indistinguishable conjunction fallacy rates have been observed with stimuli framed in terms of probabilities versus frequencies. [See, for example, Tentori, Bonini & Osherson (2004) or Weddell & Moro (2008)]

The wording criticisms may be less applicable to the conjunction effect in separate evaluation. [Gigerenzer (1996)] The “Linda problem” has been studied and criticized more than other types of demonstration of the effect (some described below). [Kahneman (2011), ch. 15; Kahneman & Tversky (1996); Mellers, Hertwig & Kahneman (2001)]

Other demonstrations

Policy experts were asked to rate the probability that the Soviet Union would invade Poland, and the United States would break off diplomatic relations, all in the following year. They rated it on average as having a 4% probability of occurring. Another group of experts was asked to rate the probability simply that the United States would break off relations with the Soviet Union in the following year. They gave it an average probability of only 1%.

In an experiment conducted in 1980, respondents were asked the following:

Suppose Bjorn Borg reaches the Wimbledon finals in 1981. Please rank order the following outcomes from most to least likely.

Borg will win the match
Borg will lose the first set
Borg will lose the first set but win the match
Borg will win the first set but lose the match
On average, participants rated “Borg will lose the first set but win the match” more highly than “Borg will lose the first set”.

In another experiment, participants were asked:

Consider a regular six-sided die with four green faces and two red faces. The die will be rolled 20 times and the sequence of greens (G) and reds (R) will be recorded. You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you choose appears on successive rolls of the die.

RGRRR
GRGRRR
GRRRRR
65% of participants chose the second sequence, though option 1 is contained within it and is shorter than the other options. In a version where the $25 bet was only hypothetical the results did not significantly differ. Tversky and Kahneman argued that sequence 2 appears “representative” of a chance sequence (compare to the clustering illusion).

Debiasing

Drawing attention to set relationships, using frequencies instead of probabilities, and/or thinking diagrammatically sharply reduce the error in some forms of the conjunction fallacy. [Tversky & Kahneman (1983); Gigerenzer (1991); Hertwig & Gigerenzer (1999); Mellers, Hertwig & Kahneman (2001)]

In one experiment the question of the Linda problem was reformulated as follows:

There are 100 persons who fit the description above (that is, Linda’s). How many of them are:

Bank tellers? __ of 100
Bank tellers and active in the feminist movement? __ of 100
Whereas previously 85% of participants gave the wrong answer (bank teller and active in the feminist movement), in experiments with this formulation none of the participants gave a wrong answer. [Gigerenzer (1991)]

4
Q

Masked man fallacy

A

The substitution of identical designators in a true statement can lead to a false one.

The masked man fallacy is a fallacy of formal logic in which substitution of identical designators in a true statement can lead to a false one.

One form of the fallacy may be summarized as follows:

Premise 1: I know who X is.
Premise 2: I do not know who Y is.
Conclusion: Therefore, X is not Y.
The problem arises because Premise 1 and Premise 2 can be simultaneously true even when X and Y refer to the same person. Consider the argument, “I know who my father is. I do not know who the thief is. Therefore, my father is not the thief.” The premises may be true and the conclusion false if the father is the thief but the speaker does not know this about his father. Thus the argument is a fallacious one.

The name of the fallacy comes from the example, “I do not know who the masked man is”, which can be true even though the masked man is Jones, and I know who Jones is.

If someone were to say, “I do not know the masked man,” it implies, “If I do know the masked man, I do not know that he is the masked man.” The masked man fallacy omits the implication.

Note that the following similar argument is valid:

X is Z
Y is not Z
Therefore, X is not Y
But this is because being something is different from knowing (or believing, etc.) something.

5
Q

Affirming a disjunct

A

Concluding that one disjunct of a logical disjunction must be false because the other disjunct is true. A or B; A; therefore not B.

The formal fallacy of affirming a disjunct, also known as the fallacy of the alternative disjunct or a false exclusionary disjunct, occurs when a deductive argument takes the following logical form:

A or B
A
Therefore, it is not the case that B
Explanation

The fallacy lies in concluding that one disjunct must be false because the other disjunct is true; in fact they may both be true because “or” is defined inclusively rather than exclusively. It is a fallacy of equivocation between the operations OR and XOR.

Affirming the disjunct should not be confused with the valid argument known as the disjunctive syllogism.
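The inclusive reading of “or” can be verified exhaustively; a minimal truth-table sketch:

```python
# "A or B" together with "A" does not entail "not B": with inclusive
# "or", both disjuncts may be true at once.
from itertools import product

counterexamples = [
    (a, b)
    for a, b in product([False, True], repeat=2)
    if (a or b) and a and b   # both premises hold, yet B is also true
]
print(counterexamples)  # [(True, True)] - the conclusion "not B" fails here
```

The single row (True, True) is exactly the case an exclusive-or reading would wrongly rule out.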

Example

The following argument indicates the invalidity of affirming a disjunct:

Max is a cat or Max is a mammal.
Max is a cat.
Therefore, Max is not a mammal.
This inference is invalid. If Max is a cat then Max is also a mammal. (Remember “or” is defined in an inclusive sense not an exclusive sense.)

The car is red or the car is large.
The car is red.
Therefore, the car is not large.
The above argument likewise commits the fallacy: the car could be both red and large.

6
Q

Affirming the consequent

A

The antecedent in an indicative conditional is claimed to be true because the consequent is true.
If A, then B; B, therefore A.

Affirming the consequent, sometimes called converse error or fallacy of the converse, is a formal fallacy of inferring the converse from the original statement. The corresponding argument has the general form:

If P, then Q.
Q.
Therefore, P.
An argument of this form is invalid, i.e., the conclusion can be false even when statements 1 and 2 are true. Since P was never asserted as the only sufficient condition for Q, other factors could account for Q (while P was false).

To put it differently, if P implies Q, the only valid inference that can be made is that non-Q implies non-P. (Non-P and non-Q designate the negations of P and Q.) Symbolically:

(P ⇒ Q) ⇔ (non-Q ⇒ non-P)

The name affirming the consequent derives from the premise Q, which affirms the “then” clause of the conditional premise.
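Both claims above (the invalidity of the form and the contrapositive equivalence) can be checked by enumerating all truth assignments; a brief sketch:

```python
# Contrapositive equivalence and invalidity of affirming the consequent.
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# (P => Q) is logically equivalent to (not Q => not P) in every row.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)

# But premises "P => Q" and "Q" do not force P: here both premises
# hold while P is false.
counterexample = [(p, q) for p, q in product([False, True], repeat=2)
                  if implies(p, q) and q and not p]
print(counterexample)  # [(False, True)]
```

The row (False, True) is the abstract form of the Fort Knox example below: Q (rich) true, P (owns Fort Knox) false.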

Examples

One way to demonstrate the invalidity of this argument form is with a counterexample with true premises but an obviously false conclusion. For example:

If Bill Gates owns Fort Knox, then he is rich.
Bill Gates is rich.
Therefore, Bill Gates owns Fort Knox.
Owning Fort Knox is not the only way to be rich. Any number of other ways exist to be rich.

However, one can affirm with certainty that if Bill Gates is not rich (non-Q), then Bill Gates does not own Fort Knox (non-P).

Arguments of the same form can sometimes seem superficially convincing, as in the following example:

If I have the flu, then I have a sore throat.
I have a sore throat.
Therefore, I have the flu.
But having the flu is not the only cause of a sore throat, since many illnesses cause a sore throat, such as the common cold or strep throat.

7
Q

Denying the antecedent

A

The consequent in an indicative conditional is claimed to be false because the antecedent is false.
If A, then B; not A, therefore not B.

Denying the antecedent, sometimes also called inverse error or fallacy of the inverse, is a formal fallacy of inferring the inverse from the original statement. It is committed by reasoning in the form:

If P, then Q.
Not P.
Therefore, not Q.
Arguments of this form are invalid. Informally, this means that arguments of this form do not give good reason to establish their conclusions, even if their premises are true.

The name denying the antecedent derives from the premise “not P”, which denies the “if” clause of the conditional premise.

One way to demonstrate the invalidity of this argument form is with a counterexample with true premises but an obviously false conclusion. For example:

If Queen Elizabeth is an American citizen, then she is a human being.
Queen Elizabeth is not an American citizen.
Therefore, Queen Elizabeth is not a human being.
That argument is obviously bad, but arguments of the same form can sometimes seem superficially convincing, as in the following example offered, with apologies for its lack of logical rigour, by Alan Turing in the article “Computing Machinery and Intelligence”:

“If each man had a definite set of rules of conduct by which he regulated his life he would be no better than a machine. But there are no such rules, so men cannot be machines.”

However, men could still be machines that do not follow a definite set of rules. Thus this argument (as Turing intends) is invalid.

It is possible that an argument that denies the antecedent could be valid, if the argument instantiates some other valid form. For example, if the claims P and Q express the same proposition, then the argument would be trivially valid, as it would beg the question. In everyday discourse, however, such cases are rare, typically only occurring when the “if-then” premise is actually an “if and only if” claim (i.e., a biconditional/equality). For example:

If I am President of the United States, then I can veto Congress.
I am not President.
Therefore, I cannot veto Congress.
The above argument is not valid, but would be if the first premise ended thus: “…and if I can veto Congress, then I am the U.S. President” (as is in fact true). More to the point, the validity of the new argument stems not from denying the antecedent but from modus tollens (denying the consequent).
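The contrast between the invalid form and its biconditional repair can also be checked exhaustively; a short sketch:

```python
# Denying the antecedent: "P => Q" and "not P" do not force "not Q".
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

bad = [(p, q) for p, q in product([False, True], repeat=2)
       if implies(p, q) and not p and q]     # premises hold, yet Q is true
print(bad)  # [(False, True)]

# With a biconditional premise ("P if and only if Q"), "not P" does
# entail "not Q": no assignment satisfies the premises with Q true.
assert all(not q for p, q in product([False, True], repeat=2)
           if (p == q) and not p)
```

The counterexample row (False, True) corresponds to Queen Elizabeth: not an American citizen, yet a human being.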

8
Q

Existential fallacy

A

An argument has a universal premise and a particular conclusion.

The existential fallacy is a formal fallacy. In the existential fallacy, we presuppose that a class has members when we are not entitled to do so; that is, when we should not assume existential import.

An existential fallacy is committed in a medieval categorical syllogism when it has two universal premises and a particular conclusion: the particular conclusion assumes that at least one member of the class exists, which is not established by the premises.

In modern logic, the presupposition that a class has members is seen as unacceptable. In 1905, Bertrand Russell wrote an essay entitled “The Existential Import of Propositions”, in which he called this Boolean approach “Peano’s interpretation”.

The fallacy does not occur in enthymemes, where hidden premises required to make the syllogism valid assume the existence of at least one member of the class.

One central concern of the Aristotelian tradition in logic is the theory of the categorical syllogism. This is the theory of two-premised arguments in which the premises and conclusion share three terms among them, with each proposition containing two of them. It is distinctive of this enterprise that everybody agrees on which syllogisms are valid. The theory of the syllogism partly constrains the interpretation of the forms. For example, it determines that the A form has existential import, at least if the I form does. For one of the valid patterns (Darapti) is:

Every C is B
Every C is A
So, some A is B
This is invalid if the A form lacks existential import, and valid if it has existential import. It is held to be valid, and so we know how the A form is to be interpreted. One then naturally asks about the O form; what do the syllogisms tell us about it? The answer is that they tell us nothing. This is because Aristotle did not discuss weakened forms of syllogisms, in which one concludes a particular proposition when one could already conclude the corresponding universal. For example, he does not mention the form:

No C is B
Every A is C
So, some A is not B
If people had thoughtfully taken sides for or against the validity of this form, that would clearly be relevant to the understanding of the O form. But the weakened forms were typically ignored.
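The existential-import point about Darapti can be made concrete with sets; a minimal sketch in which the class C is empty:

```python
# Darapti: "Every C is B; every C is A; therefore some A is B."
# With an empty C, both premises are vacuously true, but the
# conclusion fails: the form needs C to have at least one member.
C = set()
A = {"x"}
B = {"y"}

premise1 = C <= B            # every C is B (vacuously true)
premise2 = C <= A            # every C is A (vacuously true)
conclusion = bool(A & B)     # some A is B
print(premise1, premise2, conclusion)  # True True False
```

Granting Darapti as valid therefore amounts to reading the A form with existential import, as the passage above explains.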

9
Q

Affirmative conclusion from a negative premise

A

When a categorical syllogism has a positive conclusion, but at least one negative premise.

Affirmative conclusion from a negative premise (illicit negative) is a formal fallacy that is committed when a categorical syllogism has a positive conclusion, but one or two negative premises.

For example:

No fish are dogs, and no dogs can fly, therefore all fish can fly.
The only thing that can be properly inferred from these premises is that some things that are not fish cannot fly, provided that dogs exist.

Or:

We don’t read that trash. People who read that trash don’t appreciate real literature. Therefore, we appreciate real literature.
It is a fallacy because any valid forms of categorical syllogism that assert a negative premise must have a negative conclusion.

10
Q

Appeal to probability

A

Takes something for granted because it would probably be the case.

An appeal to probability (or appeal to possibility) is the logical fallacy of taking something for granted because it would probably be the case, (or might possibly be the case). Inductive arguments lack deductive validity and must therefore be asserted or denied in the premises.

Example

A fallacious appeal to possibility:

Something can go wrong.
Therefore, something will go wrong.

A deductively valid argument would be explicitly premised on Murphy’s law (see also modal logic):

Anything that can go wrong, will go wrong.
Something can go wrong.
Therefore, something will go wrong.

11
Q

Fallacy of exclusive premises

A

A categorical syllogism that is invalid because both of its premises are negative.

The fallacy of exclusive premises is a syllogistic fallacy committed in a categorical syllogism that is invalid because both of its premises are negative.

Example of an invalid EOO-4 syllogism:

E Proposition: No mammals are fish.
O Proposition: Some fish are not whales.
O Proposition: Therefore, some whales are not mammals.

12
Q

Fallacy of four terms

A

A categorical syllogism that has four terms.

The fallacy of four terms is the formal fallacy that occurs when a syllogism has four (or more) terms rather than the requisite three. This form of argument is thus invalid.

Explanation

Categorical syllogisms always have three terms:

Major premise: All fish have fins.
Minor premise: All goldfish are fish.
Conclusion: All goldfish have fins.
Here, the three terms are: “goldfish”, “fish”, and “fins”.

Using four terms invalidates the syllogism:

Major premise: All fish have fins.
Minor premise: All goldfish are fish.
Conclusion: All humans have fins.
The premises do not connect “humans” with “fins”, so the reasoning is invalid. Notice that there are four terms: “fish”, “fins”, “goldfish” and “humans”. Two premises are not enough to connect four different terms, since in order to establish connection, there must be one term common to both premises.

In everyday reasoning, the fallacy of four terms occurs most frequently by equivocation: using the same word or phrase but with a different meaning each time, creating a fourth term even though only three distinct words are used:

Major premise: Nothing is better than eternal happiness.
Minor premise: A ham sandwich is better than nothing.
Conclusion: A ham sandwich is better than eternal happiness.
The word “nothing” in the example above has two meanings, as presented: “nothing is better” means the thing being named has the highest value possible; “better than nothing” only means that the thing being described has some value. Therefore, “nothing” acts as two different words in this example, thus creating the fallacy of four terms.

Another, trickier example of equivocation:

Major premise: The pen touches the paper.
Minor premise: The hand touches the pen.
Conclusion: The hand touches the paper.
This is clearer if one uses “is touching” instead of “touches”: “touching the pen” is not the same as “the pen”, so there are four terms: “the hand”, “touching the pen”, “the pen”, and “touching the paper”. A correct form of this statement would be:

Major premise: All that touches the pen, touches the paper.
Minor premise: The hand touches the pen.
Conclusion: The hand touches the paper.
Now the term “the pen” has been eliminated, leaving three terms. This argument is now valid but unsound, because the major premise is untrue.

The fallacy of four terms also applies to syllogisms that contain five or six terms.

Reducing terms

Sometimes a syllogism that is apparently fallacious because it is stated with more than three terms can be translated into an equivalent, valid three term syllogism. For example:

Major premise: No humans are immortal.
Minor premise: All Greeks are people.
Conclusion: All Greeks are mortal.
This syllogism apparently has five terms: “humans”, “people”, “immortal”, “mortal”, and “Greeks”. However, it can be rewritten as a standard-form AAA-1 syllogism by first substituting the synonymous term “humans” for “people” and then by reducing the complementary term “immortal” in the first premise using the immediate inference known as obversion (that is, “No humans are immortal.” is equivalent to “All humans are mortal.”).

Classification

The fallacy of four terms is a syllogistic fallacy. Types of syllogism to which it applies include statistical syllogism, hypothetical syllogism, and categorical syllogism, all of which must have exactly three terms. Because it applies to the argument’s form, as opposed to the argument’s content, it is classified as a formal fallacy.

Equivocation of the middle term is a frequently cited source of a fourth term being added to a syllogism; both of the equivocation examples above affect the middle term of the syllogism. Consequently this common error itself has been given its own name: the fallacy of the ambiguous middle. An argument that commits the ambiguous middle fallacy blurs the line between formal and informal fallacies, however it is usually considered an informal fallacy because the argument’s form appears valid.

13
Q

Illicit major

A

A categorical syllogism that is invalid because its major term is undistributed in the major premise but distributed in the conclusion.

14
Q

Illicit minor

A

A categorical syllogism that is invalid because its minor term is undistributed in the minor premise but distributed in the conclusion.

Illicit minor is a formal fallacy committed in a categorical syllogism that is invalid because its minor term is undistributed in the minor premise but distributed in the conclusion.

This fallacy has the following argument form:

All A are B.
All A are C.
Therefore, all C are B.
Example:

All cats are felines.
All cats are mammals.
Therefore, all mammals are felines.
The minor term here is “mammals”, which is not distributed in the minor premise “All cats are mammals”, because that premise only says something about some mammals (namely, that they are cats). In the conclusion “All mammals are felines”, however, “mammals” is distributed (the claim is about all mammals). The conclusion is shown to be false by any mammal that is not a feline; for example, a dog.

Example:

Pie is good.
Pie is unhealthy.
Thus, all good things are unhealthy.
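The feline example above can be replayed with toy sets (the individual animal names are illustrative assumptions of mine):

```python
# Individuals standing in for the three classes in the example.
felines = {"Tibbles", "Leo"}
mammals = {"Tibbles", "Leo", "Rex"}   # Rex is a dog
cats    = {"Tibbles", "Leo"}

# Both premises are true:
assert cats <= felines               # All cats are felines.
assert cats <= mammals               # All cats are mammals.

# ...but the conclusion "All mammals are felines" is false:
assert not (mammals <= felines)

# The counterexample is exactly the mammal that is not a feline.
counterexample = mammals - felines
assert counterexample == {"Rex"}
```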

15
Q

Negative conclusion from affirmative premises.

A

When a categorical syllogism has a negative conclusion but affirmative premises.

Negative conclusion from affirmative premises is a syllogistic fallacy committed when a categorical syllogism has a negative conclusion yet both premises are affirmative. The inability of affirmative premises to reach a negative conclusion is usually cited as one of the basic rules of constructing a valid categorical syllogism.

Statements in syllogisms can be identified as the following forms:

a: All A is B. (affirmative)
e: No A is B. (negative)
i: Some A is B. (affirmative)
o: Some A is not B. (negative)
The rule states that a syllogism in which both premises are of form a or i (affirmative) cannot reach a conclusion of form e or o (negative). Exactly one of the premises must be negative to construct a valid syllogism with a negative conclusion. (A syllogism with two negative premises commits the related fallacy of exclusive premises.)

Example (invalid aae form):

Premise: All colonels are officers.
Premise: All officers are soldiers.
Conclusion: Therefore, no colonels are soldiers.
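The quality rule described above can be encoded as a small check over the four statement forms (the function name and encoding are my own, not from the source):

```python
# a and i are affirmative forms; e and o are negative forms.
AFFIRMATIVE = {"a", "i"}
NEGATIVE    = {"e", "o"}

def violates_rule(premise1, premise2, conclusion):
    """True if the syllogism draws a negative conclusion
    from two affirmative premises."""
    return (premise1 in AFFIRMATIVE
            and premise2 in AFFIRMATIVE
            and conclusion in NEGATIVE)

# The invalid aae example above (colonels / officers / soldiers):
assert violates_rule("a", "a", "e")
# The invalid aao-4 form:
assert violates_rule("a", "a", "o")
# A valid AAA-1 (Barbara) does not trip the rule:
assert not violates_rule("a", "a", "a")
# With one negative premise, a negative conclusion is permitted:
assert not violates_rule("a", "e", "o")
```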
The aao-4 form is perhaps more subtle as it follows many of the rules governing valid syllogisms, except it reaches a negative conclusion from affirmative premises.

Invalid aao-4 form:

All A is B.
All B is C.
Therefore, some C is not A.
This is valid only if A is a proper subset of B and/or B is a proper subset of C. However, this argument reaches a faulty conclusion if A, B, and C are equivalent. In the case that A = B = C, the conclusion of the following simple aaa-1 syllogism would contradict the aao-4 argument above:

All B is A.
All C is B.
Therefore, all C is A.
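The failure of aao-4 when A = B = C can be shown directly with sets (toy values of my own choosing):

```python
# When the three terms name the same class, both premises hold...
A = B = C = {"x", "y"}
assert A <= B and B <= C             # All A is B; all B is C.

# ...but "some C is not A" is false: C - A is empty.
assert C - A == set()

# With proper subsets, the aao-4 conclusion does hold:
A2, B2, C2 = {"x"}, {"x", "y"}, {"x", "y", "z"}
assert A2 < B2 < C2                  # proper-subset premises
assert C2 - A2 != set()              # some C is not A
```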

16
Q

Fallacy of the undistributed middle

A

The middle term in a categorical syllogism is not distributed.

The fallacy of the undistributed middle is a formal fallacy that is committed when the middle term in a categorical syllogism is not distributed in either the minor premise or the major premise. It is thus a syllogistic fallacy.

Classical formulation

In classical syllogisms, all statements consist of two terms and are in the form of “A” (all), “E” (none), “I” (some), or “O” (some not). The first term is distributed in A statements; the second is distributed in O statements; both are distributed in E statements; and none are distributed in I statements.
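The distribution rules above can be tabulated directly (the encoding is my own: each form maps to whether its subject and predicate terms are distributed):

```python
# (subject distributed?, predicate distributed?) for each statement form.
DISTRIBUTES = {
    "A": (True,  False),   # All S is P: subject only
    "E": (True,  True),    # No S is P: both
    "I": (False, False),   # Some S is P: neither
    "O": (False, True),    # Some S is not P: predicate only
}

# Spot-check against the rules stated in the text:
assert DISTRIBUTES["A"] == (True, False)
assert DISTRIBUTES["O"] == (False, True)
assert DISTRIBUTES["E"] == (True, True)
assert DISTRIBUTES["I"] == (False, False)
```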

The fallacy of the undistributed middle occurs when the term that links the two premises is never distributed.

In this example, the subject of each “All” statement is distributed, but the middle term B never is:

All Z is B
(All) Y is B
Therefore
(All) Y is Z
B is the common term between the two premises (the middle term) but is never distributed, so this syllogism is invalid.

Also, a related rule of logic is that anything distributed in the conclusion must be distributed in at least one premise.

All Z is B
Some Y is Z
Therefore
All Y is B
The middle term, Z, is distributed in the first premise, but Y is distributed in the conclusion and not in any premise, so this syllogism is invalid.

Pattern

The fallacy of the undistributed middle takes the following form:

All Z is B
Y is B
Therefore, Y is Z
This can be represented graphically as an argument map, with the two premises grouped in a green box and the conclusion indicated above them.

Here, B is the middle term, and it is not distributed in the major premise, “all Z is B”.

It may or may not be the case that “all Z is B,” but this is irrelevant to the conclusion. What is relevant to the conclusion is whether it is true that “all B is Z,” which is ignored in the argument. The fallacy is similar to affirming the consequent and denying the antecedent. However, the fallacy may be resolved if the terms are exchanged in either the conclusion or in the first co-premise. Indeed, from the perspective of first-order logic, all cases of the fallacy of the undistributed middle are, in fact, examples of affirming the consequent or denying the antecedent, depending on the structure of the fallacious argument.
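The pattern above can be checked with toy sets (the values are my own): two subsets of the same middle class need not overlap at all.

```python
B = {1, 2, 3, 4}   # the middle term's class
Z = {1, 2}         # All Z is B
Y = {3, 4}         # (All) Y is B

# Both premises hold...
assert Z <= B and Y <= B

# ...yet the conclusion "Y is Z" fails: Y and Z are
# disjoint slices of B, connected only through the
# undistributed middle term.
assert not (Y <= Z)
assert Y.isdisjoint(Z)
```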

Examples

For example:

All students carry backpacks.
My grandfather carries a backpack.
Therefore, my grandfather is a student.
The same argument with its implicit co-premise made explicit:

All students carry backpacks.
My grandfather carries a backpack.
Everyone who carries a backpack is a student.
Therefore, my grandfather is a student.

The middle term is the one that appears in both premises — in this case, it is the class of backpack carriers. It is undistributed because neither of its uses applies to all backpack carriers. Therefore it can’t be used to connect students and my grandfather — both of them could be separate and unconnected divisions of the class of backpack carriers. Note below how “carries a backpack” is truly undistributed:

grandfather is someone who carries a backpack; student is someone who carries a backpack
Specifically, the structure of this example results in affirming the consequent.

However, if the latter two statements were switched, the syllogism would be valid:

All students carry backpacks.
My grandfather is a student.
Therefore, my grandfather carries a backpack.
In this case, the middle term is the class of students, and the first use clearly refers to ‘all students’. It is therefore distributed across the whole of its class, and so can be used to connect the other two terms (backpack carriers, and my grandfather). Again, note below that “student” is distributed:

grandfather is a student and thus carries a backpack

17
Q

Argument from ignorance

A

Assuming that a claim is true because it has not been proven false or cannot be proven false.

18
Q

Argument from repetition

A

An argument repeated until it has been discussed so extensively that nobody cares to discuss it anymore.

Ad nauseam is a Latin term for something unpleasant that has continued “to the point of nausea”. “ad nauseam” definitions from Dictionary.com For example, the sentence “This topic has been discussed ad nauseam” signifies that the topic in question has been discussed extensively, and that those involved in the discussion have grown tired of it.

Etymology

This term is defined by the American Heritage Dictionary as:

Argumentum ad nauseam, or argument from repetition, or argumentum ad infinitum, is an argument made repeatedly (possibly by different people) until nobody cares to discuss it any more. This may sometimes, but not always, be a form of proof by assertion.

19
Q

Argument from silence

A

Where the conclusion is based on the absence of evidence, rather than the existence of evidence.

An argument from silence (also called argumentum ex silentio in Latin) is generally a conclusion drawn based on the absence of statements in historical documents.”argumentum e silentio noun phrase” The Oxford Essential Dictionary of Foreign Terms in English. Ed. Jennifer Speake. Berkley Books, 1999.John Lange, The Argument from Silence, History and Theory, Vol. 5, No. 3 (1966), pp. 288-301 In the field of classical studies, it often refers to the deduction from the lack of references to a subject in the available writings of an author to the conclusion that he was ignorant of it.”silence, the argument from”. The Concise Oxford Dictionary of the Christian Church. Ed. E. A. Livingstone. Oxford University Press, 2006.

Thus in historical analysis with an argument from silence, the absence of a reference to an event or a document is used to cast doubt on the event not mentioned. While most historical approaches rely on what an author’s works contain, an argument from silence relies on what the book or document does not contain. This approach thus uses what an author “should have said” rather than what is available in the author’s extant writings.Historical evidence and argument by David P. Henige 2005 ISBN 978-0-299-21410-4 page 176.Seven Pillories of Wisdom by David R. Hall 1991 ISBN 0-86554-369-0 pages 55-56.

Historical analysis

An argument from silence can be convincing when mentioning a fact can be seen as so natural that its omission is a good reason to assume ignorance. For example, while the editors of Yerushalmi and Bavli mention the other community, most scholars believe these documents were written independently. Louis Jacobs writes, “If the editors of either had had access to an actual text of the other, it is inconceivable that they would not have mentioned this. Here the argument from silence is very convincing.”“Talmud”. A Concise Companion to the Jewish Religion. Louis Jacobs. Oxford University Press, 1999.

Errietta Bissa, professor of Classics at the University of Wales, flatly states that arguments from silence are not valid.Governmental intervention in foreign trade in archaïc and classical Greece by Errietta M. A. Bissa ISBN 90-04-17504-0 page 21: “This is a fundamental methodological issue on the validity of arguments from silence, where I wish to make my position clear: arguments from silence are not valid.” David Henige states that, although risky, such arguments can at times shed light on historical events. Yifa has pointed out the perils of arguments from silence: although no references to the “Rules of purity” monastic codes of 1103 appear in the Transmission of the Lamp or any of the Pure Land documents, a copy of the code in which the author identifies himself exists.The origins of Buddhist monastic codes in China by Yifa, Zongze 2002 ISBN 0-8248-2494-6 page 32: “an argumentum ex silencio is hardly conclusive”

Yifa points out that arguments from silence are often less than conclusive; for example, the lack of references by contemporaries or even disciples to a compilation of a set of monastic codes does not mean that it never existed. This is well illustrated by the case of Changlu Zongze’s “Rules of purity”, which he wrote for the Chan monastery in 1103.

One of his contemporaries, who wrote a preface to a collection of his writings, neglected to mention his code. Neither his biographies, nor the documents of the Transmission of the Lamp, nor the Pure Land documents (which exalt him) refer to Zongze’s collection of a monastic code. However, a copy of the code in which the author identifies himself exists.The origins of Buddhist monastic codes in China by Yifa, Zongze 2002 ISBN 0-8248-2494-6 page 32.

Frances Wood based her controversial book Did Marco Polo go to China? on arguments from silence. Wood argued that Marco Polo never went to China and fabricated his accounts because he failed to mention elements from the visual landscape such as tea, did not record the Great Wall, and neglected to record practices such as foot-binding. She argued that no outsider could spend 15 years in China and not observe and record these elements. Most historians disagree with Wood’s reasoning.Historical evidence and argument by David P. Henige 2005 ISBN 978-0-299-21410-4 page 176.

Legal aspects

Jed Rubenfeld, professor of Law at Yale Law School, has shown an example of the difficulty in applying arguments from silence in constitutional law, stating that although arguments from silence can be used to draw conclusions about the intent of the Framers of the US Constitution, their application can lead to two different conclusions and hence they cannot be used to settle the issues.Jed Rubenfeld, Rights of Passage: Majority Rule in Congress, Duke Law Journal, 1996, Section B: Arguments from silence: “From this silence one can draw clear plausible inferences about the Framers’ intent. The only difficulty is that one can draw two different inferences…. The truth is that the argument from silence is not dispositive.”

In the context of Morocco’s Truth Commission of 1999 regarding torture and secret detentions, Wu and Livescu state that the fact that someone remained silent is no proof of their ignorance about a specific piece of information. They point out that the absence of records about the torture of prisoners under the secret detention program is no proof that such detentions did not involve torture, or that some detentions did not take place.Human Rights, Suffering, and Aesthetics in Political Prison Literature by Yenna Wu, Simona Livescu 2011 ISBN 0-7391-6741-3 pages 86-90.

20
Q

Begging the question

A

Providing what is essentially the conclusion of the argument as one of its premises.

Begging the question (Latin petitio principii, “assuming the initial point”) is a type of informal fallacy in which an implicit premise would directly entail the conclusion. Begging the question is one of the classic informal fallacies in Aristotle’s Prior Analytics, where it is studied in Prior Analytics II, 64b, 34 – 65a, 9 and considered a material fallacy. Some modern authors consider begging the question to be a species of circulus in probando (Latin, “circle in proving”), or circular reasoning, which Aristotle explains in Prior Analytics II, 57b, 18 – 59b, 1; see, for example, Bradley Dowden, “Fallacies” in Internet Encyclopedia of Philosophy. If the missing premise were made explicit, it would render the argument viciously circular; and while never persuasive, arguments of the form “A therefore A” are logically valid, because asserting the premise while denying the self-same conclusion is a direct contradiction. In general, validity only guarantees that the conclusion must follow given the truth of the premises. Absent that, a valid argument proves nothing: the conclusion may or may not follow from faulty premises (although in this particular example it is self-evident that the conclusion is false if and only if the premise is false; see logical equivalence and logical equality). The reason petitio principii is considered a fallacy is not that the inference is invalid (because any statement is indeed equivalent to itself), but that the argument can be deceptive. A statement cannot prove itself; a premise must have a different source of reason, ground or evidence for its truth from that of the conclusion (Lander University, “Petitio Principii”).

In modern usage, English speakers are prone to use “beg the question” as a way of saying “raises the question”. However, the former denotes a failure to explicitly state an essential premise, so that it may be taken as given, whereas the latter simply functions as a segue for whatever comes to mind.

Definition

The fallacy of petitio principii, or “begging the question”, is committed “when a proposition which requires proof is assumed without proof”; in order to charitably entertain the argument, it must be taken as given “in some form of the very proposition to be proved, as a premise from which to deduce it”.Welton (1905), 279. One must take it upon oneself that the goal, taken as given, is essentially the means to that end.

When the fallacy of begging the question is committed in a single step, it is sometimes called a hysteron proteron,Davies (1915), 572.Welton (1905), 280-282. as in the statement “Opium induces sleep because it has a soporific quality”.Welton (1905), 281. Such fallacies may not be immediately obvious due to the use of synonyms or synonymous phrases; one way to beg the question is to make a statement first in concrete terms, then in abstract ones, or vice-versa. Another is to “bring forth a proposition expressed in words of Saxon origin, and give as a reason for it the very same proposition stated in words of Norman origin”,Gibson (1908), 291. as in this example: “To allow every man an unbounded freedom of speech must always be, on the whole, advantageous to the State, for it is highly conducive to the interests of the community that each individual should enjoy a liberty perfectly unlimited of expressing his sentiments”.Richard Whately, Elements of Logic (1826) quoted in Gibson (1908), 291.

When the fallacy of begging the question is committed in more than one step, some authors consider it circulus in probando, or reasoning in a circle. However, there is no fallacy if the missing premise is acknowledged, and if it is not acknowledged, there is no circle.

“Begging the question” can also refer to an argument in which the unstated premise is essential to, but not identical with the conclusion, or is “controversial or questionable for the same reasons that typically might lead someone to question the conclusion”.Kahane and Cavender (2005), 60.

History

The term was translated into English from Latin in the 16th century. The Latin version, petitio principii, can be interpreted in different ways. Petitio (from peto), in the post-classical context in which the phrase arose, means “assuming” or “postulating”, but in the older classical sense means “petition”, “request” or “beseeching”. Principii, genitive of principium, means “beginning”, “basis” or “premise” (of an argument). Literally, petitio principii means “assuming the premise” or “assuming the original point”, or, alternatively, “a request for the beginning or premise”; that is, the premise depends on the truth of the very matter in question.

The Latin phrase comes from the Greek en archei aiteisthai in Aristotle’s Prior Analytics II xvi:

Begging or assuming the point at issue consists (to take the expression in its widest sense) in failing to demonstrate the required proposition. But there are several other ways in which this may happen; for example, if the argument has not taken syllogistic form at all, he may argue from premises which are less known or equally unknown, or he may establish the antecedent by means of its consequents; for demonstration proceeds from what is more certain and is prior. Now begging the question is none of these. If, however, the relation of B to C is such that they are identical, or that they are clearly convertible, or that one applies to the other, then he is begging the point at issue…. begging the question is proving what is not self-evident by means of itself… either because predicates which are identical belong to the same subject, or because the same predicate belongs to subjects which are identical. Thomas Fowler believed that Petitio Principii would be more properly called Petitio Quæsiti, which is literally “begging the question”.Fowler, Thomas (1887). The Elements of Deductive Logic, Ninth Edition (p. 145). Oxford, England: Clarendon Press.

Related fallacies

Circular reasoning is a fallacy in which “the reasoner begins with what he or she is trying to end up with”. The individual components of a circular argument can be logically valid because if the premises are true, the conclusion must be true, and will not lack relevance. However, circular reasoning is not persuasive because, if the conclusion is doubted, the premise which leads to it will also be doubted.

Begging the question is similar to the complex question or fallacy of many questions: questioning that presupposes something that would not be acceptable to everyone involved. For example, “Is Mary wearing a blue or a red dress?” is fallacious because it artificially restricts the possible responses to a blue or red dress. If the person being questioned would not necessarily consent to those constraints, the question is fallacious.

Modern usage

Many English speakers use “begs the question” to mean “raises the question”, “impels the question”, or even “invites the question”, and follow that phrase with the question raised; see definitions at Wiktionary and at The Free Dictionary (accessed 30 May 2011); each source gives both definitions. For example, “this year’s deficit is half a trillion dollars, which begs the question: how are we ever going to balance the budget?” Philosophers and many grammarians deem such usage incorrect.Follett (1966), 228; Kilpatrick (1997); Martin (2002), 71; Safire (1998).Brians, Common Errors in English Usage: Online Edition (full text of book: 2nd Edition, November, 2008, William, James & Company) (accessed 1 July 2011) Academic linguist Mark Liberman recommends avoiding the phrase entirely, noting that because of shifts in usage in both Latin and English over the centuries, the relationship of the literal expression to its intended meaning is unintelligible and therefore it is now “such a confusing way to say it that only a few pedants understand the phrase.”

21
Q

Circular reasoning

A

When the reasoner begins with what he or she is trying to end up with.

Circular reasoning (also known as paradoxical thinking or circular logic), is a logical fallacy in which “the reasoner begins with what he or she is trying to end up with”. The individual components of a circular argument will sometimes be logically valid because if the premises are true, the conclusion must be true, and will not lack relevance. Circular logic cannot prove a conclusion because, if the conclusion is doubted, the premise which leads to it will also be doubted. Begging the question is a form of circular reasoning.

Circular reasoning is often of the form: “a is true because b is true; b is true because a is true.” Circularity can be difficult to detect if it involves a longer chain of propositions.
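Longer circular chains, as noted above, can be hard to spot by eye. A sketch of a mechanical check, treating “a is true because b” as a directed edge (the function and encoding are my own construction, not from the source):

```python
def has_circular_support(justifies, start):
    """Follow the chain of 'p is justified by q' claims from start;
    return True if the chain loops back on itself."""
    seen = set()
    node = start
    while node in justifies:
        if node in seen:
            return True
        seen.add(node)
        node = justifies[node]
    return False

# a is true because b; b because c; c because a  -> circular.
assert has_circular_support({"a": "b", "b": "c", "c": "a"}, "a")

# a is true because b; b rests on independent evidence -> not circular.
assert not has_circular_support({"a": "b"}, "a")
```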

Academic Douglas Walton used the following example of a fallacious circular argument:

Wellington is in New Zealand.
Therefore, Wellington is in New Zealand.
He notes that, although the argument is deductively valid, it cannot prove that Wellington is in New Zealand because it contains no evidence that is distinct from the conclusion. The context – that of an argument – means that the proposition does not meet the requirement of proving the statement, thus it is a fallacy. He proposes that the context of a dialogue determines whether a circular argument is fallacious: if it forms part of an argument, then it is (citing Cederblom and Paulsen 1986:109). Hugh G. Gauch observes that non-logical facts can be difficult to capture formally:

“Whatever is less dense than water will float, because whatever is less dense than water will float” sounds stupid, but “Whatever is less dense than water will float, because such objects won’t sink in water” might pass.

Circular reasoning and the problem of induction

Joel Feinberg and Russ Shafer-Landau note that “using the scientific method to judge the scientific method is circular reasoning”. Scientists attempt to discover the laws of nature and to predict what will happen in the future, based on those laws. However, per David Hume’s problem of induction, science cannot be proven inductively by empirical evidence, and thus science cannot be proven scientifically. An appeal to a principle of the uniformity of nature would be required to deductively necessitate the continued accuracy of predictions based on laws that have only succeeded in generalizing past observations. But as Bertrand Russell observed, “The method of ‘postulating’ what we want has many advantages; they are the same as the advantages of theft over honest toil”.

22
Q

Circular cause and consequence

A

Where the consequence of the phenomenon is claimed to be its root cause.

Correlation does not imply causation (cum hoc propter hoc, Latin for “with this, because of this”) is a phrase used in science and statistics to emphasize that a correlation between two variables does not necessarily imply that one causes the other. Many statistical tests calculate correlation between variables. A few go further and calculate the likelihood of a true causal relationship; examples are the Granger causality test and convergent cross mapping.

The opposite assumption, that correlation proves causation, is one of several questionable cause logical fallacies by which two events that occur together are taken to have a cause-and-effect relationship. This fallacy is also known as cum hoc ergo propter hoc, Latin for “with this, therefore because of this”, and “false cause”. A similar fallacy, that an event that follows another was necessarily a consequence of the first event, is sometimes described as post hoc ergo propter hoc (Latin for “after this, therefore because of this”).

In a widely studied example, numerous epidemiological studies showed that women who were taking combined hormone replacement therapy (HRT) also had a lower-than-average incidence of coronary heart disease (CHD), leading doctors to propose that HRT was protective against CHD. But randomized controlled trials showed that HRT caused a small but statistically significant increase in risk of CHD. Re-analysis of the data from the epidemiological studies showed that women undertaking HRT were more likely to be from higher socio-economic groups (ABC1), with better-than-average diet and exercise regimens. The use of HRT and decreased incidence of coronary heart disease were coincident effects of a common cause (i.e. the benefits associated with a higher socioeconomic status), rather than cause and effect, as had been supposed.

As with any logical fallacy, identifying that the reasoning behind an argument is flawed does not imply that the resulting conclusion is false. In the instance above, if the trials had found that hormone replacement therapy caused a decrease in coronary heart disease, but not to the degree suggested by the epidemiological studies, the assumption of causality would have been correct, although the logic behind the assumption would still have been flawed.

Usage

In logic, the technical use of the word “implies” means “is a sufficient condition for”. This is the meaning intended by statisticians when they say causation is not certain. Indeed, p implies q has the technical meaning of logical implication: if p then q, symbolized as p → q. That is, “if circumstance p is true, then q necessarily follows.” In this sense, it is always correct to say “correlation does not imply causation.”

However, in casual use, the word “imply” loosely means suggests rather than requires. The idea that correlation and causation are connected is certainly true; where there is causation, there is likely to be correlation. Indeed, correlation is used when inferring causation; the important point is that such inferences are made after correlations are confirmed to be real and all causal relationships are systematically explored using large enough data sets.

Edward Tufte, in a criticism of the brevity of “correlation does not imply causation”, deprecates the use of “is” to relate correlation and causation (as in “Correlation is not causation”), criticizing it as inaccurate because incomplete. While it is not the case that correlation is causation, simply stating their nonequivalence omits information about their relationship. Tufte suggests that the shortest true statement that can be made about causality and correlation is one of the following:

“Empirically observed covariation is a necessary but not sufficient condition for causality.”
“Correlation is not causation but it sure is a hint.”
General pattern

For any two correlated events A and B, the following relationships are possible:

A causes B;
B causes A;
A and B are consequences of a common cause, but do not cause each other;
There is no connection between A and B; the correlation is coincidental.
Less clear-cut correlations are also possible. For example, causality is not necessarily one-way; in a predator-prey relationship, predator numbers affect prey, but prey numbers, i.e. food supply, also affect predators.

The cum hoc ergo propter hoc logical fallacy can be expressed as follows:

A occurs in correlation with B.
Therefore, A causes B.
In this type of logical fallacy, one makes a premature conclusion about causality after observing only a correlation between two or more factors. Generally, if one factor (A) is observed to only be correlated with another factor (B), it is sometimes taken for granted that A is causing B, even when no evidence supports it. This is a logical fallacy because there are at least five possibilities:

A may be the cause of B.
B may be the cause of A.
Some unknown third factor C may actually be the cause of both A and B.
There may be a combination of the above three relationships. For example, B may be the cause of A at the same time as A is the cause of B (contradicting that the only relationship between A and B is that A causes B). This describes a self-reinforcing system.
The “relationship” is a coincidence, or so complex or indirect that it is more effectively called a coincidence (i.e. two events occurring at the same time that have no direct relationship to each other besides the fact that they are occurring at the same time). A larger sample size helps to reduce the chance of a coincidence, unless there is a systematic error in the experiment.
In other words, there can be no conclusion made regarding the existence or the direction of a cause and effect relationship only from the fact that A and B are correlated. Determining whether there is an actual cause and effect relationship requires further investigation, even when the relationship between A and B is statistically significant, a large effect size is observed, or a large part of the variance is explained.

Examples of illogically inferring causation from correlation

B causes A (reverse causation)

The more firemen fighting a fire, the bigger the fire is observed to be.
Therefore firemen cause an increase in the size of a fire.
In this example, the correlation between the number of firemen at a scene and the size of the fire does not imply that the firemen cause the fire. Firemen are sent according to the severity of the fire and if there is a large fire, a greater number of firemen are sent; therefore, it is rather that fire causes firemen to arrive at the scene. So the above conclusion is false.

A causes B and B causes A (bidirectional causation)

Increased pressure is associated with increased temperature.
Therefore pressure causes temperature.
The ideal gas law, PV=nRT, describes the direct relationship between pressure and temperature (along with other factors) to show that there is a direct correlation between the two properties. For a fixed volume and mass of gas, an increase in temperature will cause an increase in pressure; likewise, increased pressure will cause an increase in temperature. This demonstrates bidirectional causation. The conclusion that pressure causes temperature is true but is not logically guaranteed by the premise.
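The bidirectional coupling can be illustrated numerically with the ideal gas law (the values below are illustrative, not from the source; this is an arithmetic check, not a statistical test):

```python
# PV = nRT for a fixed amount of gas in a fixed volume.
R = 8.314      # gas constant, J/(mol*K)
n = 1.0        # amount of gas, mol
V = 0.0224     # volume, m^3

def pressure(T):
    """Pressure (Pa) at absolute temperature T (K)."""
    return n * R * T / V

def temperature(P):
    """Absolute temperature (K) at pressure P (Pa)."""
    return P * V / (n * R)

# The two quantities move together in both directions:
assert pressure(300.0) < pressure(400.0)
assert temperature(100_000.0) < temperature(200_000.0)
```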

Third factor C (the common-causal variable) causes both A and B

All these examples deal with a lurking variable, a hidden third variable that affects both of the correlated variables; for example, the fact that it is summer in Example 3. A difficulty often also arises where the third factor, though fundamentally different from A and B, is so closely related to A and/or B as to be confused with them, or very difficult to disentangle from them scientifically (see Example 4).

;Example 1

Sleeping with one’s shoes on is strongly correlated with waking up with a headache.
Therefore, sleeping with one’s shoes on causes headache.
The above example commits the correlation-implies-causation fallacy, as it prematurely concludes that sleeping with one’s shoes on causes headache. A more plausible explanation is that both are caused by a third factor, in this case going to bed drunk, which thereby gives rise to a correlation. So the conclusion is false.

;Example 2

Young children who sleep with the light on are much more likely to develop myopia in later life.
Therefore, sleeping with the light on causes myopia.
This is a scientific example that resulted from a study at the University of Pennsylvania Medical Center. Published in the May 13, 1999 issue of Nature, the study received much coverage at the time in the popular press (CNN, May 13, 1999, “Night-light may lead to nearsightedness”). However, a later study at Ohio State University did not find that infants sleeping with the light on caused the development of myopia. It did find a strong link between parental myopia and the development of child myopia, also noting that myopic parents were more likely to leave a light on in their children’s bedroom (Ohio State University Research News, March 9, 2000, “Night lights don’t lead to nearsightedness, study suggests”). In this case, the cause of both conditions is parental myopia, and the above-stated conclusion is false.

;Example 3

As ice cream sales increase, the rate of drowning deaths increases sharply.
Therefore, ice cream consumption causes drowning.
The aforementioned example fails to recognize the importance of time and temperature in relation to ice cream sales. Ice cream is sold during the hot summer months at a much greater rate than during colder times, and it is during these hot summer months that people are more likely to engage in activities involving water, such as swimming. The increased drowning deaths are simply caused by more exposure to water-based activities, not ice cream. The stated conclusion is false.

;Example 4

A hypothetical study shows a relationship between test anxiety scores and shyness scores, with a statistical r value (strength of correlation) of +.59 (Carducci, Bernard J., The Psychology of Personality: Viewpoints, Research, and Applications, 2nd ed., Wiley-Blackwell, 2009).
Therefore, it may be simply concluded that shyness, in some part, causally influences test anxiety.
However, as in many psychological studies, another variable, a “self-consciousness score,” is discovered which has a sharper correlation (+.73) with shyness. This suggests a possible “third variable” problem; however, when three such closely related measures are found, it further suggests that each may have bidirectional tendencies (see “bidirectional causation,” above), being a cluster of correlated values each influencing the others to some extent. Therefore, the simple conclusion above may be false.

;Example 5

Since the 1950s, both the atmospheric CO2 level and obesity levels have increased sharply.
Hence, atmospheric CO2 causes obesity.
Richer populations tend to eat more food and to consume more energy; economic growth is thus a plausible common cause of both rising obesity and rising CO2 emissions, so the conclusion does not follow.

;Example 6

HDL (“good”) cholesterol is negatively correlated with incidence of heart attack.
Therefore, taking medication to raise HDL will decrease the chance of having a heart attack.
Further research (Ornish, Dean, “Cholesterol: The good, the bad, and the truth”, retrieved 3 June 2011) has called this conclusion into question. Instead, it may be that other underlying factors, like genes, diet and exercise, affect both HDL levels and the likelihood of having a heart attack; it is possible that medicines may affect the directly measurable factor, HDL levels, without affecting the chance of heart attack.

Coincidence

With a decrease in the wearing of hats, there has been an increase in global warming over the same period.
Therefore, global warming is caused by people abandoning the practice of wearing hats.
A similar example is used by the parody religion Pastafarianism to illustrate the logical fallacy of assuming that correlation equals causation.

Relation to the Ecological fallacy

There is a relation between this subject matter and the ecological fallacy, described in a 1950 paper by William S. Robinson. Robinson showed that ecological correlations, where the statistical object is a group of persons (e.g. an ethnic group), do not show the same behaviour as individual correlations, where the objects of inquiry are individuals: “The relation between ecological and individual correlations which is discussed in this paper provides a definite answer as to whether ecological correlations can validly be used as substitutes for individual correlations. They cannot.” (…) “(a)n ecological correlation is almost certainly not equal to its corresponding individual correlation.”

Determining causation

David Hume argued that causality is based on experience, and experience is similarly based on the assumption that the future resembles the past, which in turn can only be based on experience – leading to circular logic. In conclusion, he asserted that causality is not based on actual reasoning: only correlation can actually be perceived (see “David Hume”, Stanford Encyclopedia of Philosophy).

In order for a correlation to be established as causal, the cause and the effect must be connected through some mechanism of influence, in accordance with known laws of nature.

Intuitively, causation seems to require not just a correlation, but a counterfactual dependence. Suppose that a student performed poorly on a test and guesses that the cause was his not studying. To prove this, one thinks of the counterfactual – the same student writing the same test under the same circumstances but having studied the night before. If one could rewind history, and change only one small thing (making the student study for the exam), then causation could be observed (by comparing version 1 to version 2). Because one cannot rewind history and replay events after making small controlled changes, causation can only be inferred, never exactly known. This is referred to as the Fundamental Problem of Causal Inference – it is impossible to directly observe causal effects (Paul W. Holland, “Statistics and Causal Inference”, Journal of the American Statistical Association, Vol. 81, No. 396, Dec. 1986, pp. 945–960).

A major goal of scientific experiments and statistical methods is to approximate as best as possible the counterfactual state of the world (Judea Pearl, Causality: Models, Reasoning, and Inference, Cambridge University Press, 2000). For example, one could run an experiment on identical twins who were known to consistently get the same grades on their tests. One twin is sent to study for six hours while the other is sent to the amusement park. If their test scores suddenly diverged by a large degree, this would be strong evidence that studying (or going to the amusement park) had a causal effect on test scores. In this case, correlation between studying and test scores would almost certainly imply causation.

Well-designed experimental studies replace the equality of individuals, as in the previous example, with the equality of groups. This is achieved by randomizing the subjects into two or more groups. Although not a perfect system, the likelihood of the groups being equal in all aspects rises with the number of subjects placed randomly in the treatment/placebo groups. From the significance of the difference between the effect of the treatment and that of the placebo, one can infer the likelihood of the treatment having a causal effect on the disease. This likelihood can be quantified in statistical terms by the p-value.

When experimental studies are impossible and only pre-existing data are available, as is usually the case in economics, for example, regression analysis can be used. Factors other than the potential causative variable of interest are controlled for by including them as regressors alongside the regressor representing the variable of interest. False inferences of causation due to reverse causation (or wrong estimates of the magnitude of causation due to the presence of bidirectional causation) can be avoided by using explanators (regressors) that are necessarily exogenous: physical explanators like rainfall amount (as a determinant of, say, futures prices), lagged variables whose values were determined before the dependent variable’s value was determined, instrumental variables for the explanators (chosen based on their known exogeneity), and so on. Spurious correlation due to mutual influence from a third, common causative variable is harder to avoid: the model must be specified such that there is a theoretical reason to believe that no such underlying causative variable has been omitted; in particular, underlying time trends of both the dependent variable and the independent (potentially causative) variable must be controlled for by including time as another independent variable.

Use of correlation as scientific evidence

Much scientific evidence is based upon correlations of variables – they tend to occur together. Scientists are careful to point out that correlation does not necessarily mean causation. The assumption that A causes B simply because A correlates with B is a logical fallacy – it is not a legitimate form of argument. However, sometimes people commit the opposite fallacy – dismissing correlation entirely, as if it were no evidence of causation at all. This would dismiss a large swath of important scientific evidence.

In conclusion, correlation is an extremely valuable type of scientific evidence in fields such as medicine, psychology, and sociology. But first correlations must be confirmed as real, and then every possible causational relationship must be systematically explored. In the end correlation can be used as powerful evidence for a cause and effect relationship between a treatment and benefit, a risk factor and a disease, or a social or economic factor and various outcomes. But it is also one of the most abused types of evidence, because it is easy and even tempting to come to premature conclusions based upon the preliminary appearance of a correlation.

23
Q

Continuum fallacy

A

Improperly rejecting a claim for being imprecise.

The continuum fallacy (also called the fallacy of the beard, line-drawing fallacy, bald man fallacy, fallacy of the heap, the fallacy of grey, or the sorites fallacy; see David Roberts, “Reasoning: Other Fallacies”) is an informal fallacy closely related to the sorites paradox, or paradox of the heap. The fallacy causes one to erroneously reject a vague claim simply because it is not as precise as one would like it to be. Vagueness alone does not necessarily imply invalidity.

The fallacy appears to demonstrate that two states or conditions cannot be considered distinct (or do not exist at all) because between them there exists a continuum of states. According to the fallacy, differences in quality cannot result from differences in quantity.

There are clearly reasonable and clearly unreasonable cases in which objects either belong or do not belong to a particular group of objects based on their properties. We are able to take them case by case and designate them as such even in the case of properties which may be vaguely defined. The existence of hard or controversial cases does not preclude our ability to designate members of particular kinds of groups.

Relation with sorites paradox

Narrowly speaking, the sorites paradox refers to situations where there are many discrete states (classically between 1 and 1,000,000 grains of sand, hence 1,000,000 possible states), while the continuum fallacy refers to situations where there is (or appears to be) a continuum of states, such as temperature – is a room hot or cold? Whether any continua exist in the physical world is the classic question of atomism, and while Newtonian physics models the world as continuous, in modern quantum physics, notions of continuous length break down at the Planck length, and thus what appear to be continua may, at base, simply be very many discrete states.

For the purpose of the continuum fallacy, one assumes that there is in fact a continuum, though this is generally a minor distinction: in general, any argument against the sorites paradox can also be used against the continuum fallacy. One argument against the fallacy is based on the simple counterexample: there do exist bald people and people who aren’t bald. Another argument is that for each degree of change in states, the degree of the condition changes slightly, and these “slightly”s build up to shift the state from one category to another. For example, perhaps the addition of a grain of rice causes the total group of rice to be “slightly more” of a heap, and enough “slightly”s will certify the group’s heap status – see fuzzy logic.
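
The fuzzy-logic response above can be made concrete with a toy membership function: instead of a yes/no “heap” predicate, each added grain raises the degree of heapness slightly. The function, its 10,000-grain threshold, and the name heap_degree are arbitrary choices for illustration, not any standard definition.

```python
def heap_degree(grains: int, full_heap: int = 10_000) -> float:
    """Degree to which `grains` grains form a heap, on a [0, 1] scale.

    A crisp predicate would jump from 0 to 1 at some magic count; here
    each added grain contributes a tiny increment instead, so no single
    grain is "the one" that creates the heap.
    """
    return min(1.0, grains / full_heap)

# One extra grain changes the degree only slightly ...
assert heap_degree(501) - heap_degree(500) < 0.001
# ... yet the increments accumulate into a definite heap.
assert heap_degree(1) < 0.01
assert heap_degree(1_000_000) == 1.0
```

On this graded reading, the questioner’s premise “adding one grain never makes a heap” holds only for the crisp predicate; each grain does make the collection slightly more heap-like.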

Examples

Fred can never be called bald

Fred can never be called bald. Fred isn’t bald now. However, if he loses one hair, that won’t make him go from not bald to bald either. If he loses one more hair after that, this loss of a second hair also does not make him go from not bald to bald. Therefore, no matter how much hair he loses, he can never be called bald.

The heap

The fallacy can be described in the form of a conversation:

Q: Does one grain of wheat form a heap?
A: No.
Q: If we add one, do two grains of wheat form a heap?
A: No.
Q: If we add one, do three grains of wheat form a heap?
A: No.
…
Q: If we add one, do one hundred grains of wheat form a heap?
A: No.
Q: Therefore, no matter how many grains of wheat we add, we will never have a heap. Therefore, heaps don’t exist!

24
Q

Correlation proves causation

A

A faulty assumption that correlation between two variables implies that one causes the other.

“Correlation does not imply causation” is a phrase used in science and statistics to emphasize that a correlation between two variables does not necessarily imply that one causes the other. Many statistical tests calculate correlation between variables. A few go further and assess the likelihood of a true causal relationship; examples are the Granger causality test and convergent cross mapping.

The opposite assumption, that correlation proves causation, is one of several questionable cause logical fallacies by which two events that occur together are taken to have a cause-and-effect relationship. This fallacy is also known as cum hoc ergo propter hoc, Latin for “with this, therefore because of this”, and “false cause”. A similar fallacy, that an event that follows another was necessarily a consequence of the first event, is sometimes described as post hoc ergo propter hoc (Latin for “after this, therefore because of this”).

In a widely studied example, numerous epidemiological studies showed that women who were taking combined hormone replacement therapy (HRT) also had a lower-than-average incidence of coronary heart disease (CHD), leading doctors to propose that HRT was protective against CHD. But randomized controlled trials showed that HRT caused a small but statistically significant increase in risk of CHD. Re-analysis of the data from the epidemiological studies showed that women undertaking HRT were more likely to be from higher socio-economic groups (ABC1), with better-than-average diet and exercise regimens. The use of HRT and decreased incidence of coronary heart disease were coincident effects of a common cause (i.e. the benefits associated with a higher socioeconomic status), rather than cause and effect, as had been supposed.

As with any logical fallacy, identifying that the reasoning behind an argument is flawed does not imply that the resulting conclusion is false. In the instance above, if the trials had found that hormone replacement therapy caused a decrease in coronary heart disease, but not to the degree suggested by the epidemiological studies, the assumption of causality would have been correct, although the logic behind the assumption would still have been flawed.

Usage

In logic, the technical use of the word “implies” means “is a sufficient condition for.” This is the meaning intended by statisticians when they say that correlation does not prove causation. Indeed, p implies q has the technical meaning of logical implication: if p then q, symbolized as p → q; that is, “if circumstance p is true, then q necessarily follows.” In this sense, it is always correct to say “correlation does not imply causation.”

However, in casual use, the word “imply” loosely means suggests rather than requires. The idea that correlation and causation are connected is certainly true; where there is causation, there is likely to be correlation. Indeed, correlation is used when inferring causation; the important point is that such inferences are made only after correlations are confirmed to be real and all plausible causal relationships are systematically explored using large enough data sets.

Edward Tufte, in a criticism of the brevity of “correlation does not imply causation,” deprecates the use of “is” to relate correlation and causation (as in “Correlation is not causation”): while not inaccurate, the statement is incomplete. It is true that correlation is not causation, but simply stating their nonequivalence omits information about their relationship. Tufte suggests that the shortest true statement that can be made about causality and correlation is one of the following:

“Empirically observed covariation is a necessary but not sufficient condition for causality.”
“Correlation is not causation but it sure is a hint.”
General pattern

For any two correlated events A and B, the following relationships are possible:

A causes B;
B causes A;
A and B are consequences of a common cause, but do not cause each other;
There is no connection between A and B; the correlation is coincidental.
Less clear-cut correlations are also possible. For example, causality is not necessarily one-way; in a predator-prey relationship, predator numbers affect prey numbers, but prey numbers, i.e. the predators’ food supply, also affect predator numbers.

The cum hoc ergo propter hoc logical fallacy can be expressed as follows:

A occurs in correlation with B.
Therefore, A causes B.
In this type of logical fallacy, one draws a premature conclusion about causality after observing only a correlation between two or more factors. If one factor (A) is observed to be correlated with another factor (B), it is sometimes taken for granted that A is causing B, even when no evidence supports this. This is a logical fallacy because there are at least five possibilities:

A may be the cause of B.
B may be the cause of A.
Some unknown third factor C may actually be the cause of both A and B.
There may be a combination of the above three relationships. For example, B may be the cause of A at the same time as A is the cause of B (contradicting the assumption that the only relationship between A and B is that A causes B). This describes a self-reinforcing system.
The “relationship” is a coincidence, or is so complex or indirect that it is better described as one (i.e. two events occurring at the same time that have no direct relationship to each other). A larger sample size helps to reduce the chance of a coincidence, unless there is a systematic error in the experiment.
In other words, no conclusion regarding the existence or the direction of a cause-and-effect relationship can be drawn merely from the fact that A and B are correlated. Determining whether there is an actual cause-and-effect relationship requires further investigation, even when the relationship between A and B is statistically significant, a large effect size is observed, or a large part of the variance is explained.

Examples of illogically inferring causation from correlation

B causes A (reverse causation)

The more firemen fighting a fire, the bigger the fire is observed to be.
Therefore firemen cause an increase in the size of a fire.
In this example, the correlation between the number of firemen at a scene and the size of the fire does not imply that the firemen cause the fire. Firemen are sent according to the severity of the fire, and if there is a large fire, a greater number of firemen are sent; it is rather the fire that causes the firemen to arrive at the scene. So the above conclusion is false.

A causes B and B causes A (bidirectional causation)

Increased pressure is associated with increased temperature.
Therefore pressure causes temperature.
The ideal gas law, PV = nRT, describes the direct relationship between pressure and temperature (along with volume and amount of gas), so the two properties are directly correlated. For a fixed volume and mass of gas, an increase in temperature will cause an increase in pressure; likewise, increased pressure will cause an increase in temperature. This is bidirectional causation. The conclusion that pressure causes temperature is true, but is not logically guaranteed by the premise.
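
The bidirectional dependence can be checked numerically from the gas law itself: holding n and V fixed and solving PV = nRT for either variable shows that each determines the other. The specific numbers below (1 mol of gas in 25 L at 300 K) are chosen purely for illustration.

```python
R = 8.314  # gas constant, J/(mol*K)

def pressure(n_mol, T_kelvin, V_m3):
    """P = nRT / V for an ideal gas."""
    return n_mol * R * T_kelvin / V_m3

def temperature(n_mol, P_pascal, V_m3):
    """T = PV / nR, the same law solved the other way."""
    return P_pascal * V_m3 / (n_mol * R)

# Fixed n and V: raising T raises P, and inverting recovers T exactly.
P = pressure(n_mol=1.0, T_kelvin=300.0, V_m3=0.025)
assert pressure(1.0, 330.0, 0.025) > P              # hotter => higher pressure
assert abs(temperature(1.0, P, 0.025) - 300.0) < 1e-9  # round trip
```

Because the law is a single equation linking the two quantities, neither variable is privileged as “the cause”; which one drives the other depends on which is manipulated.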

Third factor C (the common-causal variable) causes both A and B

All these examples deal with a lurking variable, a hidden third variable that affects both of the correlated variables; for example, the fact that it is summer in Example 3. A difficulty often also arises where the third factor, though fundamentally different from A and B, is so closely related to A and/or B as to be confused with them, or very difficult to disentangle from them scientifically (see Example 4).

;Example 1

Sleeping with one’s shoes on is strongly correlated with waking up with a headache.
Therefore, sleeping with one’s shoes on causes headache.
The above example commits the correlation-implies-causation fallacy, as it prematurely concludes that sleeping with one’s shoes on causes headache. A more plausible explanation is that both are caused by a third factor, in this case going to bed drunk, which thereby gives rise to a correlation. So the conclusion is false.

;Example 2

Young children who sleep with the light on are much more likely to develop myopia in later life.
Therefore, sleeping with the light on causes myopia.
This is a scientific example that resulted from a study at the University of Pennsylvania Medical Center. Published in the May 13, 1999 issue of Nature, the study received much coverage at the time in the popular press (CNN, May 13, 1999, “Night-light may lead to nearsightedness”). However, a later study at Ohio State University did not find that infants sleeping with the light on caused the development of myopia. It did find a strong link between parental myopia and the development of child myopia, also noting that myopic parents were more likely to leave a light on in their children’s bedroom (Ohio State University Research News, March 9, 2000, “Night lights don’t lead to nearsightedness, study suggests”). In this case, the cause of both conditions is parental myopia, and the above-stated conclusion is false.

;Example 3

As ice cream sales increase, the rate of drowning deaths increases sharply.
Therefore, ice cream consumption causes drowning.
The aforementioned example fails to recognize the importance of time and temperature in relation to ice cream sales. Ice cream is sold during the hot summer months at a much greater rate than during colder times, and it is during these hot summer months that people are more likely to engage in activities involving water, such as swimming. The increased drowning deaths are simply caused by more exposure to water-based activities, not ice cream. The stated conclusion is false.
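
A small simulation makes the lurking-variable mechanism explicit. In the synthetic data below, temperature drives both ice cream sales and drowning risk, while the two outcome series never interact; all coefficients and noise levels are invented purely for illustration.

```python
import random
random.seed(0)

# Temperature is the common cause; sales and drownings are each
# generated from it independently, with no direct link between them.
temps = [random.uniform(0, 35) for _ in range(10_000)]
sales = [2.0 * t + random.gauss(0, 5) for t in temps]
drown = [0.1 * t + random.gauss(0, 1) for t in temps]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A strong correlation appears even though neither causes the other.
assert pearson(sales, drown) > 0.5
```

Conditioning on the common cause (e.g. comparing sales and drownings only among days of similar temperature) would make this spurious correlation largely disappear.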

;Example 4

A hypothetical study shows a relationship between test anxiety scores and shyness scores, with a statistical r value (strength of correlation) of +.59 (Carducci, Bernard J., The Psychology of Personality: Viewpoints, Research, and Applications, 2nd ed., Wiley-Blackwell, 2009).
Therefore, it may be simply concluded that shyness, in some part, causally influences test anxiety.
However, as in many psychological studies, another variable, a “self-consciousness score,” is discovered which has a sharper correlation (+.73) with shyness. This suggests a possible “third variable” problem; however, when three such closely related measures are found, it further suggests that each may have bidirectional tendencies (see “bidirectional causation,” above), being a cluster of correlated values each influencing the others to some extent. Therefore, the simple conclusion above may be false.

;Example 5

Since the 1950s, both the atmospheric CO2 level and obesity levels have increased sharply.
Hence, atmospheric CO2 causes obesity.
Richer populations tend to eat more food and to consume more energy; economic growth is thus a plausible common cause of both rising obesity and rising CO2 emissions, so the conclusion does not follow.

;Example 6

HDL (“good”) cholesterol is negatively correlated with incidence of heart attack.
Therefore, taking medication to raise HDL will decrease the chance of having a heart attack.
Further research (Ornish, Dean, “Cholesterol: The good, the bad, and the truth”, retrieved 3 June 2011) has called this conclusion into question. Instead, it may be that other underlying factors, like genes, diet and exercise, affect both HDL levels and the likelihood of having a heart attack; it is possible that medicines may affect the directly measurable factor, HDL levels, without affecting the chance of heart attack.

Coincidence

With a decrease in the wearing of hats, there has been an increase in global warming over the same period.
Therefore, global warming is caused by people abandoning the practice of wearing hats.
A similar example is used by the parody religion Pastafarianism to illustrate the logical fallacy of assuming that correlation equals causation.

Relation to the Ecological fallacy

There is a relation between this subject matter and the ecological fallacy, described in a 1950 paper by William S. Robinson. Robinson showed that ecological correlations, where the statistical object is a group of persons (e.g. an ethnic group), do not show the same behaviour as individual correlations, where the objects of inquiry are individuals: “The relation between ecological and individual correlations which is discussed in this paper provides a definite answer as to whether ecological correlations can validly be used as substitutes for individual correlations. They cannot.” (…) “(a)n ecological correlation is almost certainly not equal to its corresponding individual correlation.”

Determining causation

David Hume argued that causality is based on experience, and experience is similarly based on the assumption that the future resembles the past, which in turn can only be based on experience – leading to circular logic. In conclusion, he asserted that causality is not based on actual reasoning: only correlation can actually be perceived (see “David Hume”, Stanford Encyclopedia of Philosophy).

In order for a correlation to be established as causal, the cause and the effect must be connected through some mechanism of influence, in accordance with known laws of nature.

Intuitively, causation seems to require not just a correlation, but a counterfactual dependence. Suppose that a student performed poorly on a test and guesses that the cause was his not studying. To prove this, one thinks of the counterfactual – the same student writing the same test under the same circumstances but having studied the night before. If one could rewind history, and change only one small thing (making the student study for the exam), then causation could be observed (by comparing version 1 to version 2). Because one cannot rewind history and replay events after making small controlled changes, causation can only be inferred, never exactly known. This is referred to as the Fundamental Problem of Causal Inference – it is impossible to directly observe causal effects (Paul W. Holland, “Statistics and Causal Inference”, Journal of the American Statistical Association, Vol. 81, No. 396, Dec. 1986, pp. 945–960).

A major goal of scientific experiments and statistical methods is to approximate as best as possible the counterfactual state of the world (Judea Pearl, Causality: Models, Reasoning, and Inference, Cambridge University Press, 2000). For example, one could run an experiment on identical twins who were known to consistently get the same grades on their tests. One twin is sent to study for six hours while the other is sent to the amusement park. If their test scores suddenly diverged by a large degree, this would be strong evidence that studying (or going to the amusement park) had a causal effect on test scores. In this case, correlation between studying and test scores would almost certainly imply causation.

Well-designed experimental studies replace the equality of individuals, as in the previous example, with the equality of groups. This is achieved by randomizing the subjects into two or more groups. Although not a perfect system, the likelihood of the groups being equal in all aspects rises with the number of subjects placed randomly in the treatment/placebo groups. From the significance of the difference between the effect of the treatment and that of the placebo, one can infer the likelihood of the treatment having a causal effect on the disease. This likelihood can be quantified in statistical terms by the p-value.
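
The treatment-versus-placebo comparison can be sketched with a permutation test: if the treatment had no effect, randomly relabelling subjects should produce group differences as large as the observed one fairly often, and the p-value estimates how often that happens. All outcome numbers below are made up for illustration.

```python
import random
random.seed(42)

treatment = [5.1, 6.0, 6.4, 5.8, 6.2, 5.9, 6.5, 6.1]  # synthetic outcomes
placebo   = [4.2, 5.0, 4.8, 5.1, 4.6, 4.9, 5.2, 4.7]

observed = sum(treatment) / len(treatment) - sum(placebo) / len(placebo)

# Under the null hypothesis of no effect, group labels are arbitrary:
# shuffle them many times and see how often a gap this large recurs.
pooled = treatment + placebo
n_extreme, n_perms = 0, 10_000
for _ in range(n_perms):
    random.shuffle(pooled)
    a, b = pooled[:8], pooled[8:]
    if sum(a) / 8 - sum(b) / 8 >= observed:
        n_extreme += 1

p_value = n_extreme / n_perms  # chance of so large a gap by relabelling alone
assert observed > 1.0
assert p_value < 0.01  # the gap is very unlikely to be a labelling artifact
```

A small p-value here licenses only the causal claim within the randomized design; it says nothing about mechanisms, which still require separate investigation.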

When experimental studies are impossible and only pre-existing data are available, as is usually the case in economics, for example, regression analysis can be used. Factors other than the potential causative variable of interest are controlled for by including them as regressors alongside the regressor representing the variable of interest. False inferences of causation due to reverse causation (or wrong estimates of the magnitude of causation due to the presence of bidirectional causation) can be avoided by using explanators (regressors) that are necessarily exogenous: physical explanators like rainfall amount (as a determinant of, say, futures prices), lagged variables whose values were determined before the dependent variable’s value was determined, instrumental variables for the explanators (chosen based on their known exogeneity), and so on. Spurious correlation due to mutual influence from a third, common causative variable is harder to avoid: the model must be specified such that there is a theoretical reason to believe that no such underlying causative variable has been omitted; in particular, underlying time trends of both the dependent variable and the independent (potentially causative) variable must be controlled for by including time as another independent variable.
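
Controlling for a confounder by adding it as a regressor can be sketched with ordinary least squares on synthetic data. Here z causes both x and y; a naive regression of y on x reports a large “effect”, while including z as a control shrinks the x coefficient toward its true value of zero. The data-generating numbers are invented for illustration.

```python
import random
random.seed(1)

n = 5_000
z = [random.gauss(0, 1) for _ in range(n)]         # common cause
x = [zi + random.gauss(0, 0.5) for zi in z]        # x is driven by z
y = [2.0 * zi + random.gauss(0, 0.5) for zi in z]  # y is driven by z, not x

def ols(ys, cols):
    """Least-squares coefficients for ys ~ cols (no intercept; the data
    above are zero-mean), via normal equations and Gauss-Jordan."""
    k = len(cols)
    xtx = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(k)]
           for i in range(k)]
    xty = [sum(a * b for a, b in zip(cols[i], ys)) for i in range(k)]
    for i in range(k):
        piv = xtx[i][i]
        xtx[i] = [v / piv for v in xtx[i]]
        xty[i] /= piv
        for r in range(k):
            if r != i:
                f = xtx[r][i]
                xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[i])]
                xty[r] -= f * xty[i]
    return xty

naive = ols(y, [x])[0]          # y ~ x alone: inherits z's influence
controlled = ols(y, [x, z])[0]  # y ~ x + z: x's own (null) effect

assert naive > 1.0              # spuriously large coefficient
assert abs(controlled) < 0.1    # close to the true value, zero
```

As the surrounding text notes, this correction works only when the confounder is actually measured and included; an omitted common cause leaves the naive estimate biased.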

Use of correlation as scientific evidence

Much scientific evidence is based upon correlations of variables – they tend to occur together. Scientists are careful to point out that correlation does not necessarily mean causation. The assumption that A causes B simply because A correlates with B is a logical fallacy – it is not a legitimate form of argument. However, sometimes people commit the opposite fallacy – dismissing correlation entirely, as if it were no evidence of causation at all. This would dismiss a large swath of important scientific evidence.

In conclusion, correlation is an extremely valuable type of scientific evidence in fields such as medicine, psychology, and sociology. But first correlations must be confirmed as real, and then every possible causational relationship must be systematically explored. In the end correlation can be used as powerful evidence for a cause and effect relationship between a treatment and benefit, a risk factor and a disease, or a social or economic factor and various outcomes. But it is also one of the most abused types of evidence, because it is easy and even tempting to come to premature conclusions based upon the preliminary appearance of a correlation.

25
Q

Equivocation

A

The misleading use of a term with more than one meaning.

Equivocation (“to call by the same name”) is classified as an informal logical fallacy. It is the misleading use of a term with more than one meaning or sense (by glossing over which meaning is intended at a particular time). It generally occurs with polysemic words (words with multiple meanings).

It is often confused with amphibology (amphiboly), the fallacy of ambiguous sentences; however, equivocation is ambiguity arising from the misleading use of a word, while amphiboly is ambiguity arising from misleading punctuation or syntax.

Examples

Fallacious reasoning

Equivocation is the use in a syllogism (a logical chain of reasoning) of a term several times, but giving the term a different meaning each time. For example:

A feather is light.
What is light cannot be dark.
Therefore, a feather cannot be dark.
In this use of equivocation, the word “light” is first used as the opposite of “heavy”, but then used as a synonym of “bright” (the fallacy usually becomes obvious as soon as one tries to translate this argument into another language). Because the “middle term” of this syllogism is not one term, but two separate ones masquerading as one (all feathers are indeed “not heavy”, but it is not true that all feathers are “bright”), this type of equivocation is actually an example of the fallacy of four terms.

Semantic shift

The fallacy of equivocation is often used with words that have a strong emotional content and many meanings. These meanings often coincide within proper context, but the fallacious arguer does a semantic shift, slowly changing the context by treating, as equivalent, distinct meanings of the term.

In the English language, one common equivocation is on the word “man”, which can mean both “member of the species Homo sapiens” and “male member of the species Homo sapiens”. The following sentence is a well-known equivocation:

“Do women need to worry about man-eating sharks?”, in which “man-eating” is construed to mean a shark that devours only male human beings.
Switch-referencing

This occurs where the referent of a word or expression in a second sentence is different from that in the immediately preceding sentence, especially where a change in referent has not been clearly identified.

Metaphor

All jackasses have long ears.
Carl is a jackass.
Therefore, Carl has long ears.
Here the equivocation is the metaphorical use of “jackass” to imply a stupid or obnoxious person instead of a male donkey.

“Better than nothing”

Margarine is better than nothing.
Nothing is better than butter.
Therefore, margarine is better than butter.
Specific types of equivocation fallacies

See main articles: False attribution, Fallacy of quoting out of context, No true Scotsman, Shifting ground fallacy.

26
Q

Ecological fallacy

A

Inferences about the nature of specific individuals are based solely upon aggregate statistics collected for the group to which those individuals belong.
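A small numeric sketch of the fallacy, with invented data: within each group, higher x goes with lower y, yet the aggregate statistics (the per-group averages) show the opposite relationship, so inferring individual-level behaviour from the aggregates gets even the sign wrong.

```python
# Ecological fallacy: the correlation computed from group averages can
# have the opposite sign to the correlation within each group.
# (Data are invented purely for illustration.)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

group_a = ([1, 2, 3], [10, 9, 8])      # within group A: r = -1
group_b = ([6, 7, 8], [15, 14, 13])    # within group B: r = -1

# Aggregate statistics: one (mean x, mean y) point per group.
means_x = [sum(g[0]) / 3 for g in (group_a, group_b)]   # [2.0, 7.0]
means_y = [sum(g[1]) / 3 for g in (group_a, group_b)]   # [9.0, 14.0]

within = pearson(*group_a)             # -1.0 at the individual level
between = pearson(means_x, means_y)    # +1.0 at the group level
```

Concluding from the group-level +1 that individuals with higher x have higher y is exactly the inference the ecological fallacy warns against.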

27
Q

Etymological fallacy

A

Reasoning that the original or historical meaning of a word or phrase is necessarily similar to its actual present-day meaning.

The etymological fallacy is a genetic fallacy that holds, erroneously, that the present-day meaning of a word or phrase should necessarily be similar to its historical meaning. This is a linguistic misconception.Kenneth G. Wilson (1993) “The Columbia Guide to Standard American English”, article “Etymological Fallacy” An argument constitutes an etymological fallacy if it makes a claim about the present meaning of a word based exclusively on its etymology. This does not, however, show that etymology is irrelevant in any way, nor does it attempt to prove such.

A variant of the etymological fallacy involves looking for the “true” meaning of words by delving into their etymologies, or claiming that a word should be used in a particular way because it has a particular etymology. A similar concept is that of false friends.

Prerequisites

An etymological fallacy becomes possible when a word has changed its meaning over time. Such changes can include a shift in scope (narrowing or widening of meanings) or of connotation (amelioration or pejoration). In some cases, meanings can also shift completely, so that the etymological meaning has no evident connection to the current meaning.

For example:

The word hound originally simply meant “dog” in general. This usage is now archaic or poetic only, and hound now almost exclusively refers to dogs bred for hunting in particular.
The meaning of a word may change to connote higher status, as when knight, originally “servant” like German Knecht, came to mean “military knight” and subsequently “someone of high rank”.
Conversely, the word knave originally meant “boy” and only gradually acquired its meaning of “person of low, despicable character”.
The word lady derives from Old English hlæf-dige (“loaf-digger; kneader of bread”), and lord from hlafweard (“loaf-ward; ensurer, provider of bread”). No connection with bread is retained in the current meaning of either word.
Examples

Not every change in meaning provokes an etymological fallacy; but such changes are frequently the basis of inaccurate arguments.

From the fact that logos is Greek for “word”, Stuart Chase concluded in his book The Tyranny of Words that logic was mere manipulation of words.
Some dictionaries of old languages do not distinguish glosses (meanings) from etymologies, as when Old English is defined as “one who sits on the same rowing bench; companion”. Here the only attested meaning is the second one, while the first is simply the word’s etymology.
The word gyp for “cheat” has been described as offensive because it is probably derived from Gypsy.
The word apologize comes from the Greek word ἀπολογία (apologia), which originally meant only “a speech in defence”. It later came to carry the sense of expressing remorse or “saying sorry” for something one regrets, as well as, in some contexts, explaining or defending. The word eventually came to be used mainly as an expression of regret, because words of remorse would often accompany explanations, or at least some defense or justification. But some feel today that a “full apology”, in keeping with the word’s original etymology, should always include explanations, while others feel that an apology should only be an expression of remorse. 2.1 Core Elements of an Apology - A TIME FOR APOLOGIES: THE LEGAL AND ETHICAL IMPLICATIONS OF APOLOGIES IN CIVIL CASES - Leslie H. Macleod & Associates - Retrieved 24 April 2012.
Phrases such as to grow smaller or to climb down have been criticised for being incoherent, based on the “true” meanings of grow and climb.
Pitfalls

While the assumption that a word may still be used etymologically can be fallacious, the conclusion from such reasoning is not necessarily false. Some words can retain their meaning for many centuries, with extreme cases like mouse, which denoted the same animal in the Proto-Indo-European language several thousand years ago.

28
Q

Fallacy of composition

A

Assuming that something true of part of a whole must also be true of the whole.

The fallacy of composition arises when one infers that something is true of the whole from the fact that it is true of some part of the whole (or even of every proper part). For example: “This fragment of metal cannot be fractured with a hammer; therefore the machine of which it is a part cannot be fractured with a hammer.” This is clearly fallacious, because many machines can be broken apart without any single part being fracturable.

This fallacy is often confused with the fallacy of hasty generalization, in which an unwarranted inference is made from a statement about a sample to a statement about the population from which it is drawn.

The fallacy of composition is the converse of the fallacy of division. The fallacy of composition is also known as the “un-ecological fallacy.”

Examples

If someone stands up out of their seat at a baseball game, they can see better. Therefore, if everyone stands up they can all see better.

If a runner runs faster, she can win the race. Therefore if all the runners run faster, they can all win the race.

An important example from economics is the Paradox of thrift: if one household saves money, it can consume more in the future. Therefore if all households save money, they can all consume more in the future.

In voting theory, the Condorcet paradox demonstrates a fallacy of composition: Even if all voters have rational preferences, the collective choice induced by majority rule is not transitive and hence not rational. The fallacy of composition occurs if from the rationality of the individuals one infers that society can be equally rational. The principle generalizes beyond the aggregation via majority rule to any reasonable aggregation rule, demonstrating that the aggregation of individual preferences into a Social welfare function is fraught with severe difficulties (see Arrow’s impossibility theorem and Social choice theory).
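The Condorcet cycle described above can be checked directly; a minimal sketch with three hypothetical voters:

```python
# Condorcet paradox: each voter's ranking is transitive, yet pairwise
# majority voting over the three candidates produces a cycle.

voters = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(r.index(x) < r.index(y) for r in voters)
    return wins > len(voters) / 2

# A beats B, B beats C, and yet C beats A: the collective
# preference induced by majority rule is intransitive.
cycle = (majority_prefers("A", "B") and
         majority_prefers("B", "C") and
         majority_prefers("C", "A"))
```

Each individual ranking is perfectly rational, but the "society" composed of them is not; assuming otherwise is the fallacy of composition.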

Modo hoc fallacy

The modo hoc (or “just this”) fallacy is the informal error of assessing the meaning of an existent based on the constituent properties of its material makeup while omitting the matter’s arrangement. For instance, metaphysical naturalism states that matter and motion are all that comprise man, but it cannot be assumed that the characteristics inherent in the elements and physical reactions that make up man ultimately and solely define man’s meaning. A cow that is alive and well and a cow that has been chopped up into meat are the same matter, but it is obvious that the arrangement of that matter clarifies those different situational meanings.

29
Q

Fallacy of division

A

Assuming that something true of a thing must also be true of all or some of its parts.

A fallacy of division occurs when one reasons that something true of a thing must also be true of all or some of its parts.

An example:

A Boeing 747 can fly unaided across the ocean.
A Boeing 747 has jet engines.
Therefore, one of its jet engines can fly unaided across the ocean.
The converse of this fallacy is called fallacy of composition, which arises when one fallaciously attributes a property of some part of a thing to the thing as a whole. Both fallacies were addressed by Aristotle in Sophistical Refutations.

Another example:

1. Functioning brains think.
2. Functioning brains are nothing but the neurons that they are composed of.
3. If functioning brains think, then the individual neurons in them think.
4. Individual neurons do not think.
5. Functioning brains do not think. (From 3 & 4)
6. Functioning brains think and functioning brains do not think. (From 1 & 5)
Since the premises entail a contradiction (6), at least one of the premises must be false. We may diagnose the problem as located in premise 3, which quite plausibly commits the fallacy of division.

An application: Famously and controversially, in the philosophy of the Greek Anaxagoras (at least as it is discussed by the Roman atomist Lucretius), it was assumed that the atoms constituting a substance must themselves have the salient observed properties of that substance: so atoms of water would be wet, atoms of iron would be hard, atoms of wool would be soft, etc. This doctrine is called homeomeria, and it plainly depends on the fallacy of division.

If a system as a whole has some property that none of its constituents has (or perhaps, it has it but not as a result of some constituent having that property), this is sometimes called a strongly emergent property of the system.

30
Q

False dilemma

A

Two alternative statements are held to be the only possible options when in reality there are more.

A false dilemma (also called the fallacy of the false alternative, false dichotomy, the either-or fallacy, fallacy of false choice, black-and-white thinking, or the fallacy of exhaustive hypotheses) is a type of informal fallacy that involves a situation in which limited alternatives are considered, when in fact there is at least one additional option. The options may be a position that is between two extremes (such as when there are shades of grey) or may be completely different alternatives. The opposite of this fallacy is argument to moderation.

A false dilemma can arise intentionally, when the fallacy is used in an attempt to force a choice (such as, in some contexts, the assertion that “if you are not with us, you are against us”). But it can also arise simply by accidental omission of additional options rather than by deliberate deception.

In the community of philosophers and scholars, many believe that “unless a distinction can be made rigorous and precise it isn’t really a distinction.”Jacques Derrida (1991) Afterword: Toward An Ethic of Discussion, published in the English translation of Limited Inc., pp.123-4, 126 An exception is analytic philosopher John Searle, who called it an incorrect assumption which produces false dichotomies.Searle, John. (1983) The Word Turned Upside Down. The New York Review of Books, Volume 30, Number 16, October 27, 1983. Searle insists that “it is a condition of the adequacy of a precise theory of an indeterminate phenomenon that it should precisely characterize that phenomenon as indeterminate; and a distinction is no less a distinction for allowing for a family of related, marginal, diverging cases.” Similarly, when two options are presented, they are often, though not always, two extreme points on some spectrum of possibilities; this can lend credence to the larger argument by giving the impression that the options are mutually exclusive, even though they need not be. Furthermore, the options in false dichotomies are typically presented as being collectively exhaustive, in which case the fallacy can be overcome, or at least weakened, by considering other possibilities, or perhaps by considering a whole spectrum of possibilities, as in fuzzy logic.

Examples

Morton’s Fork

Morton’s Fork, a choice between two equally unpleasant options, is often a false dilemma. The phrase originates from an argument for taxing English nobles:

“Either the nobles of this country appear wealthy, in which case they can be taxed for good; or they appear poor, in which case they are living frugally and must have immense savings, which can be taxed for good.”Evans, Ivor H. (1989). Brewer’s Dictionary of Phrase & Fable, 14th edition, Harper & Row. ISBN 0-06-016200-7.
This is a false dilemma and a catch-22, because it fails to allow for the possibility that some members of the nobility may in fact lack liquid assets as well as the possibility that those who appear poor also lack liquid assets.

False choice

The presentation of a false choice often reflects a deliberate attempt to eliminate the middle ground on an issue. A common argument against noise pollution laws involves a false choice. It might be argued that in New York City noise should not be regulated, because if it were, the city would drastically change in a negative way. This argument assumes that, for example, a bar must be shut down to prevent it from causing disturbing levels of noise after midnight. It ignores the fact that the bar could simply lower its noise levels, or install soundproofing structural elements to keep the noise from excessively transmitting onto others’ properties; it also ignores the possibility that the noise is emanating from the patrons outside the bar.

Black-and-white thinking

In psychology, a related phenomenon to the false dilemma is black-and-white thinking. Many people routinely engage in black-and-white thinking, an example of which is someone who labels other people as all good or all bad.AJ Giannini. Use of fiction in therapy. Psychiatric Times. 18(7):56-57,2001.

Falsum in uno, falsum in omnibus

The Latin phrase falsum in uno, falsum in omnibus, roughly translated as “false in one thing, false in everything”, is fallacious in so far as someone found to be wrong about one thing is presumed to be wrong about some other thing entirely.Lynch, Jack (2008). Deception and detection in eighteenth-century Britain. Ashgate Publishing, Ltd. p. 73. Arising in Roman courts, the principle meant that if a witness was proved false in some parts of his testimony, any further statements were also regarded as false unless independently corroborated. The principle is thus a fallacy of logic: an initial false statement is no guarantee that subsequent statements are false. (Within a single argument, however, even one false premise suffices to undermine the proof.) This is a special case of the associatory fallacy.

Falsum in uno, falsum in omnibus’s status as a fallacy is independent of whether it is wise or unwise to use it as a legal rule, with witnesses testifying in court being held liable for perjury if parts of their statements are false.

31
Q

If-by-whiskey

A

An argument that supports both sides of an issue by using terms that are selectively emotionally sensitive.

In political discourse, if-by-whiskey is a relativist fallacy where the response to a question is contingent on the questioner’s opinions and use of words with strong positive or negative connotations (e.g., terrorist as negative and freedom fighter as positive). An if-by-whiskey argument implemented through doublespeak appears to affirm both sides of an issue, and agrees with whichever side the listener supports, in effect, taking a position without taking a position. A similar idiom is “all things to all people”, which is often used as a negative term in politics.

Canonical example

The label if-by-whiskey refers to a 1952 speech by Noah S. “Soggy” Sweat, Jr., a young lawmaker from the U.S. state of Mississippi, on the subject of whether Mississippi should continue to prohibit (which it did until 1966) or finally legalize alcoholic beverages:

The American columnist William Safire popularized the term in his column in The New York Times, but wrongly attributed it to Florida Governor Fuller Warren. He corrected this reference in his book Safire’s Political Dictionary, on page 337.

32
Q

Fallacy of many questions (loaded question)

A

Someone asks a question that presupposes something that has not been proven or accepted by all the people involved. This fallacy is often used rhetorically, so that the question limits direct replies to those that serve the questioner’s agenda.

A loaded question is a question which contains a controversial or unjustified assumption (e.g., a presumption of guilt).

Aside from being an informal fallacy depending on usage, such questions may be used as a rhetorical tool: the question attempts to limit direct replies to those that serve the questioner’s agenda. The traditional example is the question “Have you stopped beating your wife?” Whether the respondent answers yes or no, he will admit to having a wife and to having beaten her at some time in the past. Thus, these facts are presupposed by the question, and in this case it is an entrapment, because it narrows the respondent to a single answer, and the fallacy of many questions has been committed.

The fallacy relies upon context for its effect: the fact that a question presupposes something does not in itself make the question fallacious. Only when some of these presuppositions are not necessarily agreed to by the person who is asked the question does the argument containing them become fallacious. Hence the same question may be loaded in one context but not in another. For example, the previous question would not be loaded if it were asked during a trial in which the defendant had already admitted to beating his wife.Douglas N. Walton, Informal logic: a handbook for critical argumentation, Cambridge University Press, 1989, ISBN 0-521-37925-3, Google Print, p.36-37

This fallacy should be distinguished from that of begging the question, Fallacy: Begging the Question The Nizkor Project. Retrieved on: January 22, 2008 which offers a premise the plausibility of which depends on the truth of the proposition asked about, and which is often an implicit restatement of the proposition.

The term “loaded question” is sometimes used to refer to loaded language that is phrased as a question. This type of question does not necessarily contain a fallacious presupposition, but rather this usage refers to the question having an unspoken and often emotive implication. For example, “Are you a murderer?” would be such a loaded question, as “murder” has a very negative connotation. Such a question may be asked merely to harass or upset the respondent with no intention of listening to their reply, or asked with the full expectation that the respondent will predictably deny it.

Defense

A common way out of this argument is not to answer the question (e.g. with a simple ‘yes’ or ‘no’), but to challenge the assumption behind the question. To use an earlier example, a good response to the question “Have you stopped beating your wife?” would be “I have never beaten my wife”. This removes the ambiguity of the expected response, nullifying the tactic. However, askers of such questions have learned to get around this by accusing the answerer of dodging the question. A rhetorical question such as “Then please explain, how could I possibly have beaten a wife that I’ve never had?” can be an effective antidote to this further tactic, placing the burden on the deceptive questioner either to expose the tactic or to drop the line of inquiry. In many cases a short answer is important: “I neither did nor do” answers the question without letting the asker interrupt and misshape the response.

Historical examples

Madeleine Albright (U.S. Ambassador to the U.N.) claims to have answered a loaded question (and later regretted not challenging it instead) on 60 Minutes on 12 May 1996. Lesley Stahl asked, regarding the effects of UN sanctions against Iraq, “We have heard that a half million children have died. I mean, that is more children than died in Hiroshima. And, you know, is the price worth it?”

Madeleine Albright: “I think that is a very hard choice, but the price, we think, the price is worth it.”

She later wrote of this response

I must have been crazy; I should have answered the question by reframing it and pointing out the inherent flaws in the premise behind it. … As soon as I had spoken, I wished for the power to freeze time and take back those words. My reply had been a terrible mistake, hasty, clumsy, and wrong. … I had fallen into a trap and said something that I simply did not mean. That is no one’s fault but my own.

President Bill Clinton, the moderator in a town meeting discussing the topic “Race in America”, in response to a participant’s argument that the issue was not affirmative action but “racial preferences”, asked the participant a loaded question: “Do you favor the United States Army abolishing the affirmative-action program that produced Colin Powell? Yes or no?”

For another example, the New Zealand corporal punishment referendum, 2009 asked:

“Should a smack as part of good parental correction be a criminal offence in New Zealand?”

Murray Edridge, of Barnardos New Zealand, criticized the question as “loaded and ambiguous” and claimed “the question presupposes that smacking is a part of good parental correction”.

On June 13, 2012, National Basketball Association (NBA) commissioner David Stern asked radio personality Jim Rome the loaded question, “Have you stopped beating your wife yet?” in making a point about his feelings about Rome’s interview.

33
Q

Ludic fallacy

A

The belief that the outcomes of non-regulated random occurrences can be encapsulated by a statistic; a failure to take into account unknown unknowns when determining the probability of an event’s taking place.

The ludic fallacy is a term coined by Nassim Nicholas Taleb in his 2007 book The Black Swan. “Ludic” is from the Latin ludus, meaning “play, game, sport, pastime.”D.P. Simpson, “Cassell’s Latin and English Dictionary” (New York: Hungry Minds, 1987) p. 134. It is summarized as “the misuse of games to model real-life situations.” Black Swans, the Ludic Fallacy and Wealth Management, François Sicart. Taleb explains the fallacy as “basing studies of chance on the narrow world of games and dice.”Nassim Taleb, The Black Swan (New York: Random House, 2007) p. 309.

It is a central argument in the book and a rebuttal of the predictive mathematical models used to predict the future – as well as an attack on the idea of applying naïve and simplified statistical models in complex domains. According to Taleb, statistics works only in some domains like casinos in which the odds are visible and defined. Taleb’s argument centers on the idea that predictive models are based on platonified forms, gravitating towards mathematical purity and failing to take some key ideas into account:

It is impossible to be in possession of all the information.
Very small unknown variations in the data could have a huge impact. Taleb does, however, differentiate his idea from mathematical notions in chaos theory, e.g. the butterfly effect.
Theories and models based on empirical data are flawed, as they cannot account for events that have not taken place before and for which no conclusive explanation can be provided.
Examples

Example 1: Suspicious coin

One example given in the book is the following thought experiment. There are two people:

Dr John, who is regarded as a man of science and logical thinking.
Fat Tony, who is regarded as a man who lives by his wits.
A third party asks them, “assume a fair coin is flipped 99 times, and each time it comes up heads. What are the odds that the 100th flip would also come up heads?”

Dr John says that the odds are not affected by the previous outcomes, so the odds must still be 50:50.
Fat Tony says that the odds of the coin coming up heads 99 times in a row are so low (less than 1 in 6.33 × 10²⁹) that the initial assumption that the coin had a 50:50 chance of coming up heads is most likely incorrect.
The ludic fallacy here is to assume that in real life the rules from the purely hypothetical model (where Dr John is correct) apply. Would a reasonable person bet on black on a roulette table that has come up red 99 times in a row (especially as the reward for a correct guess is so low when compared with the probable odds that the game is fixed)?

In classical terms, highly statistically significant (unlikely) events should make one question one’s model assumptions. In Bayesian statistics, this can be modelled by using a prior distribution for one’s assumptions on the fairness of the coin, then Bayesian inference to update this distribution.
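A minimal sketch of that Bayesian update, assuming a uniform Beta(1, 1) prior over the coin's heads-probability (the conjugate-prior shortcut: observing h heads and t tails turns Beta(a, b) into Beta(a + h, b + t)):

```python
# After observing 99 heads in 99 flips: the fair-coin model makes this
# outcome astronomically unlikely, and a Bayesian update shifts belief
# toward a heavily biased (or two-headed) coin.

p_under_fair_model = 0.5 ** 99           # about 1 in 6.3 * 10**29,
                                         # the figure quoted in the text

# Conjugate update: Beta(1, 1) prior + 99 heads, 0 tails
# -> Beta(100, 1) posterior over p = P(heads).
alpha, beta = 1 + 99, 1 + 0
posterior_mean = alpha / (alpha + beta)  # ~0.99, far from 0.5
```

Under these assumptions the posterior expectation of P(heads) is about 0.99, i.e. the update behaves like Fat Tony: it abandons the fairness assumption rather than insisting the next flip is 50:50.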

Example 2: Job interview

A man considers going to a job interview. He recently studied statistics and utility theory in college and performed well in the exams. Considering whether to take the interview, he tries to calculate the probability he will get the job versus the cost of the time spent.

This young job seeker forgets that real life has more variables than the small set he has chosen to estimate. Even with a low probability of success, a really good job may be worth the effort of going to the interview. Will he enjoy the process of the interview? Will his interview technique improve regardless of whether he gets the job or not? Even the statistics of the job business are non-linear. What other jobs could come the man’s way by meeting the interviewer? Might there be a possibility of a very high pay-off in this company that he has not thought of?

Example 3: Stock returns

Any decision theory based on a fixed universe or model of possible outcomes ignores and minimizes the impact of events which are “outside model.” For instance, a simple model of daily stock market returns may include extreme moves such as Black Monday but might not model the market breakdowns following the 2011 Japanese tsunami and its consequences. A fixed model considers the “known unknowns,” but ignores the “unknown unknowns.”

Relation to Platonicity

The ludic fallacy is a specific case of the more general problem of Platonicity defined by Taleb as:

the focus on those pure, well-defined, and easily discernible objects like triangles, or more social notions like friendship or love, at the cost of ignoring those objects of seemingly messier and less tractable structures.

34
Q

Fallacy of the single cause

A

It is assumed that there is one, simple cause of an outcome when in reality it may have been caused by a number of only jointly sufficient causes.

The fallacy of the single cause, also known as complex cause, causal oversimplification, causal reductionism, and reduction fallacy, is a fallacy of questionable cause that occurs when it is assumed that there is a single, simple cause of an outcome when in reality it may have been caused by a number of only jointly sufficient causes.

It can be logically reduced to: X occurred after Y; therefore, Y caused X (even though A, B, C, etc. also contributed to X).

Often after a tragedy it is asked, “What was the cause of this?” Such language implies that there is one cause, when instead there were probably a large number of contributing factors. However, having produced a list of several contributing factors, it may be worthwhile to look for the strongest of the factors, or a single cause underlying several of them. A need for simplification may be perceived in order to make the explanation of the tragedy operational, so that responsible authorities can be seen to have taken action.

For instance, after a school shooting, editorialists debate whether it was caused by the shooter’s parents, violence in media, stress on students, or the accessibility of guns. In fact, many different causes—including some of those—may all have necessarily contributed. Similarly, the music industry might claim that peer-to-peer file sharing is the cause of a loss in profit whereas factors such as a growing videogame market and economic depression are also likely to be major factors.

Causal oversimplification is a specific kind of false dilemma where conjoint possibilities are ignored. In other words, the possible causes are assumed to be “A or B or C” when “A and B and C” or “A and B and not C” (etc.) are not taken into consideration.

A notable scientific example of what can happen when this kind of fallacy is identified and resolved is the development in economics of the Coase theorem.

35
Q

False attribution

A

An advocate appeals to an irrelevant, unqualified, unidentified, biased or fabricated source in support of an argument.

The fallacy of a false attribution occurs when an advocate appeals to an irrelevant, unqualified, unidentified, biased or fabricated source in support of an argument. A contextomy is a type of false attribution.

A more deceptive and difficult to detect version of a false attribution is where a fraudulent advocate goes so far as to fabricate a source, such as creating a fake website, in order to support a claim. For example, the “Levitt Institute” was a fake organisation created in 2009 solely for the purposes of (successfully) fooling the Australian media into reporting that Sydney was Australia’s most naive city. Deception Detection Deficiency, Media Watch.

A particular case of misattribution is the Matthew effect: a quotation is often attributed to someone more famous than the real author. This leads the quotation to be more famous, but the real author to be forgotten (see also: obliteration by incorporation).

36
Q

Argument to moderation

A

Assuming that the compromise between two positions is always correct.

Argument to moderation (Latin: argumentum ad temperantiam; also known as from middle ground, false compromise, gray fallacy and the golden mean fallacy) is an informal fallacy which asserts that the truth can be found as a compromise between two opposite positions. This fallacy’s opposite is the false dilemma.

As Vladimir Bukovsky puts it, the middle ground between the Big Lie of Soviet propaganda and the truth is a lie, and one should not be looking for a middle ground between disinformation and information.Vladimir Bukovsky, The wind returns. Letters by Russian traveler (Russian edition, Буковский В. К. И возвращается ветер. Письма русского путешественника.) Moscow, 1990, ISBN 5-235-01826, page 345. According to him, people from the Western pluralistic civilization are more prone to this fallacy because they are used to resolving problems by making compromises and accepting alternative interpretations, unlike Russians who are looking for the absolute truth.

An individual demonstrating this false compromise fallacy implies that the positions being considered represent extremes of a continuum of opinions, and that such extremes are always wrong, and the middle ground is always correct. Fallacy: Middle Ground, The Nizkor Project (accessed 29 November 2012) This is not always the case. Sometimes only X or Y is acceptable, with no middle ground possible. Additionally, the middle ground fallacy allows any position to be invalidated, even those that have been reached by previous applications of the same method; all one must do is present yet another, radically opposed position, and the middle-ground compromise will be forced closer to that position. In politics, this is part of the basis behind Overton window theory.

It is important to note that this does not mean the middle ground position is a bad strategy, or even incorrect; only that the fact that it is moderate cannot be used as evidence of its truthfulness.

Examples

“Some would say that hydrogen cyanide is a delicious and necessary part of the human diet, but others claim it is a toxic and dangerous substance. The truth must therefore be somewhere in between.”
“Bob says we should buy a computer. Sue says we shouldn’t. Therefore, the best solution is to compromise and buy half a computer.”
“Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.” — Stan Kelly-Bootle
“The choice of 48 bytes as the ATM cell payload size, as a compromise between the 64 bytes proposed by parties from the United States and the 32 bytes proposed by European parties; the compromise was made for entirely political reasons, since it did not technically favor either of the parties.”
“The fact that one is confronted with an individual who strongly argues that slavery is wrong and another who argues equally strongly that slavery is perfectly legitimate in no way suggests that the truth must be somewhere in the middle.”Susan T. Gardner (2009). Thinking Your Way to Freedom: A Guide to Owning Your Own Practical Reasoning. Temple University Press.

37
Q

Gambler’s fallacy

A

The incorrect belief that separate, independent events affect the likelihood of another random event. If a coin flip lands on heads (x) times in a row, the belief that it is “due to land on tails” is incorrect.

The gambler’s fallacy, also known as the Monte Carlo fallacy (because its most famous example happened at the Monte Carlo Casino in 1913), Blog - “Fallacy Files” What happened at Monte Carlo in 1913. and also referred to as the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, future deviations in the opposite direction are then more likely.

An example: coin-tossing

The gambler’s fallacy can be illustrated by considering the repeated toss of a fair coin. With a fair coin, the outcomes in different tosses are statistically independent and the probability of getting heads on a single toss is exactly 1/2 (one in two). It follows that the probability of getting two heads in two tosses is 1/4 (one in four) and the probability of getting three heads in three tosses is 1/8 (one in eight).

Now suppose that we have just tossed four heads in a row, so that if the next coin toss were also to come up heads, it would complete a run of five successive heads. Since the probability of a run of five successive heads is only 1/32 (one in thirty-two), a person subject to the gambler’s fallacy might believe that this next flip was less likely to be heads than to be tails. However, this is not correct, and is a manifestation of the gambler’s fallacy; the event of 5 heads in a row and the event of “first 4 heads, then a tail” are equally likely, each having probability 1/32. Given that the first four tosses turn up heads, the probability that the next toss is a head is in fact 1/2.

While a run of five heads has probability 1/32 = 0.03125, that is its probability only before the coin is first tossed. After the first four tosses, the results are no longer unknown, so their probabilities are 1. Reasoning that it is more likely that the next toss will be a tail than a head due to the past tosses, that a run of luck in the past somehow influences the odds in the future, is the fallacy.
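
A short simulation illustrates the point: conditioning on four heads in a row does not change the probability of heads on the fifth toss. This is an illustrative sketch only; the trial count and seed are arbitrary choices.

```python
import random

random.seed(1)  # fixed seed for reproducibility (arbitrary choice)

# Collect the outcome of the fifth toss whenever the first four are heads.
fifth_after_four_heads = []
for _ in range(1_000_000):
    tosses = [random.random() < 0.5 for _ in range(5)]  # True = heads
    if all(tosses[:4]):  # keep only runs that start H H H H
        fifth_after_four_heads.append(tosses[4])

p = sum(fifth_after_four_heads) / len(fifth_after_four_heads)
print(round(p, 2))  # close to 0.5; the streak does not make tails "due"
```

About 1/16 of the million runs (roughly 62,500) begin with four heads, which is plenty to see that the fifth toss stays near 50/50.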

Explaining why the probability is 1/2 for a fair coin

We can see from the above that, if one flips a fair coin 21 times, then the probability of 21 heads is 1 in 2,097,152. However, the probability of flipping a head after having already flipped 20 heads in a row is simply 1/2. This is an application of Bayes’ theorem.

This can also be seen without knowing that 20 heads have occurred for certain (without applying Bayes’ theorem). Consider the following two probabilities, assuming a fair coin:

probability of 20 heads, then 1 tail = 0.5^20 × 0.5 = 0.5^21
probability of 20 heads, then 1 head = 0.5^20 × 0.5 = 0.5^21

The probability of getting 20 heads then 1 tail, and the probability of getting 20 heads then another head, are both 1 in 2,097,152. Therefore, it is equally likely to flip 21 heads as it is to flip 20 heads and then 1 tail when flipping a fair coin 21 times. Furthermore, these two probabilities are equally as likely as any other 21-flip combination that can be obtained (there are 2,097,152 in total); all 21-flip combinations have probability 0.5^21, or 1 in 2,097,152. From these observations, there is no reason to assume at any point that a change of luck is warranted based on prior trials (flips), because every outcome observed will always have been as likely as the other outcomes that were not observed for that particular trial, given a fair coin. Therefore, just as Bayes’ theorem shows, the result of each trial comes down to the base probability of the fair coin: 1/2.
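
These equal probabilities can be reproduced with exact rational arithmetic; a minimal sketch using Python’s fractions module:

```python
from fractions import Fraction

# Any specific sequence of 21 fair-coin flips has probability (1/2)**21.
p = Fraction(1, 2)

p_20_heads_then_tail = p**20 * p  # H H ... H T
p_20_heads_then_head = p**20 * p  # H H ... H H (21 heads)

# Both specific sequences are equally likely: 1 in 2,097,152.
assert p_20_heads_then_tail == p_20_heads_then_head == Fraction(1, 2**21)
print(2**21)  # 2097152 equally likely 21-flip sequences
```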

Other examples

There is another way to emphasize the fallacy. As already mentioned, the fallacy is built on the notion that previous failures indicate an increased probability of success on subsequent attempts. This is, in fact, the inverse of what actually happens, even on a fair chance of a successful event, given a set number of iterations. Assume a fair 16-sided die, where a win is defined as rolling a 1. Assume a player is given 16 rolls to obtain at least one win (1 − p(rolling no ones)). The low winning odds are just to make the change in probability more noticeable. The probability of having at least one win in the 16 rolls is:

1 − (15/16)^16 ≈ 64.39%
However, assume now that the first roll was a loss (a 93.75% chance of that, i.e. 15/16). The player now only has 15 rolls left and, according to the fallacy, should have a higher chance of winning since one loss has occurred. His chances of having at least one win are now:

1 − (15/16)^15 ≈ 62.02%
Simply by losing one toss the player’s probability of winning dropped by about 2 percentage points. By the time this reaches 5 losses (11 rolls left), his probability of winning on one of the remaining rolls will have dropped to ~50%. The player’s odds for at least one win in those 16 rolls have not increased given a series of losses; his odds have decreased because he has fewer iterations left to win. In other words, the previous losses in no way contribute to the odds of the remaining attempts, but there are fewer remaining attempts to gain a win, which results in a lower probability of obtaining it.

The player becomes more likely to lose in a set number of iterations as he fails to win, and eventually his probability of winning will again equal the probability of winning a single toss, when only one toss is left: 6.25% in this instance.
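
The die example works out as follows; this is a small sketch, and the helper name is ours:

```python
# Chance of at least one win (rolling a 1 on a fair 16-sided die)
# in the remaining rolls: 1 - (15/16)**rolls_left.
def p_at_least_one_win(rolls_left, p_win=1 / 16):
    return 1 - (1 - p_win) ** rolls_left

print(f"{p_at_least_one_win(16):.2%}")  # 64.39% with all 16 rolls ahead
print(f"{p_at_least_one_win(15):.2%}")  # 62.02% after one loss
print(f"{p_at_least_one_win(11):.2%}")  # ~50% after five losses
print(f"{p_at_least_one_win(1):.2%}")   # 6.25% on the final roll
```

Each loss shrinks the remaining number of attempts, so the overall chance of winning falls monotonically toward the single-roll probability.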

Some lottery players will choose the same numbers every time, or intentionally change their numbers, but both are equally likely to win any individual lottery draw. Copying the numbers that won the previous lottery draw gives an equal probability, although a rational gambler might attempt to predict other players’ choices and then deliberately avoid these numbers. Low numbers (below 31 and especially below 12) are popular because people play birthdays as their so-called lucky numbers; hence a win in which these numbers are over-represented is more likely to result in a shared payout.

A joke told among mathematicians demonstrates the nature of the fallacy. When flying on an aircraft, a man decides to always bring a bomb with him. “The chances of an aircraft having a bomb on it are very small,” he reasons, “and certainly the chances of having two are almost none!” A similar example is in the book The World According to Garp when the hero Garp decides to buy a house a moment after a small plane crashes into it, reasoning that the chances of another plane hitting the house have just dropped to zero.

Reverse fallacy

The reversal is also a fallacy (not to be confused with the inverse gambler’s fallacy) in which a gambler may instead decide, after a consistent tendency towards tails, that tails are more likely out of some mystical preconception that fate has thus far allowed for consistent results of tails. Believing the odds to favor tails, the gambler sees no reason to change to heads. Again, the fallacy is the belief that the “universe” somehow carries a memory of past results which tend to favor or disfavor future outcomes.

Caveats

In most illustrations of the gambler’s fallacy and the reversed gambler’s fallacy, the trial (e.g. flipping a coin) is assumed to be fair. In practice, this assumption may not hold.

For example, if one flips a fair coin 21 times, then the probability of 21 heads is 1 in 2,097,152 (above). If the coin is fair, then the probability of the next flip being heads is 1/2. However, because the odds of flipping 21 heads in a row are so slim, it may well be that the coin is somehow biased towards landing on heads, or that it is being controlled by hidden magnets, or similar.Martin Gardner, Entertaining Mathematical Puzzles, Dover Publications, pp. 69–70. In this case, the smart bet is “heads” because the Bayesian inference from the empirical evidence — 21 “heads” in a row — suggests that the coin is likely to be biased toward “heads”, contradicting the general assumption that the coin is fair.

The opening scene of the play Rosencrantz and Guildenstern Are Dead by Tom Stoppard discusses these issues as one man continually flips heads and the other considers various possible explanations.

Childbirth

Instances of the gambler’s fallacy being applied to childbirth can be traced all the way back to 1796, in Pierre-Simon Laplace’s A Philosophical Essay on Probabilities. Laplace wrote of the ways in which men calculated their probability of having sons: “I have seen men, ardently desirous of having a son, who could learn only with anxiety of the births of boys in the month when they expected to become fathers. Imagining that the ratio of these births to those of girls ought to be the same at the end of each month, they judged that the boys already born would render more probable the births next of girls.” In short, the expectant fathers feared that if more sons were born in the surrounding community, then they themselves would be more likely to have a daughter.Barron, G. and Leider, S. (2010). The role of experience in the gambler’s fallacy. Journal of Behavioral Decision Making, 23, 117-129.

Some expectant parents believe that, after having multiple children of the same sex, they are “due” to have a child of the opposite sex. While the Trivers–Willard hypothesis predicts that birth sex is dependent on living conditions (i.e. more male children are born in “good” living conditions, while more female children are born in poorer living conditions), the probability of having a child of either gender is still generally regarded as 50/50.

Monte Carlo Casino

The most famous example of the gambler’s fallacy occurred in a game of roulette at the Monte Carlo Casino on August 18, 1913,”Roulette”, in The Universal Book of Mathematics: From Abracadabra to Zeno’s Paradoxes, by David Darling (John Wiley & Sons, 2004) p. 278 when the ball fell in black 26 times in a row. This was an extremely uncommon occurrence, although neither more nor less common than any of the other 67,108,863 sequences of 26 red or black. Gamblers lost millions of francs betting against black, reasoning incorrectly that the streak was causing an “imbalance” in the randomness of the wheel, and that it had to be followed by a long streak of red.

Non-examples of the fallacy

There are many scenarios where the gambler’s fallacy might superficially seem to apply, when it actually does not. When the probability of different events is not independent, the probability of future events can change based on the outcome of past events (see statistical permutation). Formally, the system is said to have memory. An example of this is cards drawn without replacement. For example, if an ace is drawn from a deck and not reinserted, the next draw is less likely to be an ace and more likely to be of another rank. The odds for drawing another ace, assuming that it was the first card drawn and that there are no jokers, have decreased from 4/52 (7.69%) to 3/51 (5.88%), while the odds for each other rank have increased from 4/52 (7.69%) to 4/51 (7.84%). This type of effect is what allows card counting systems to work (for example in the game of blackjack).
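
The card-drawing arithmetic can be checked directly; a minimal sketch with exact fractions:

```python
from fractions import Fraction

# A 52-card deck with no jokers: 4 aces, 4 cards of each other rank.
p_first_ace = Fraction(4, 52)   # 1/13, about 7.69%
p_second_ace = Fraction(3, 51)  # about 5.88% after one ace is removed
p_other_rank = Fraction(4, 51)  # about 7.84% for each remaining rank

print(f"{float(p_first_ace):.2%} {float(p_second_ace):.2%} {float(p_other_rank):.2%}")
```

Drawing without replacement gives the system memory, which is exactly what independent trials like coin tosses lack.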

The reversed gambler’s fallacy may appear to apply in the story of Joseph Jagger, who hired clerks to record the results of roulette wheels in Monte Carlo. He discovered that one wheel favored nine numbers and won large sums of money until the casino started rebalancing the roulette wheels daily. In this situation, the observation of the wheel’s behavior provided information about the physical properties of the wheel rather than its “probability” in some abstract sense, a concept which is the basis of both the gambler’s fallacy and its reversal. Even a biased wheel’s past results will not affect future results, but the results can provide information about what sort of results the wheel tends to produce. However, if it is known for certain that the wheel is completely fair, then past results provide no information about future ones.

The outcome of future events can be affected if external factors are allowed to change the probability of the events (e.g., changes in the rules of a game affecting a sports team’s performance levels). Additionally, an inexperienced player’s success may decrease after opposing teams discover his weaknesses and exploit them. The player must then attempt to compensate and randomize his strategy. Such analysis is part of game theory.

Non-example: unknown probability of event

When the probability of repeated events is not known, outcomes may not be equally probable. In the case of coin tossing, as a run of heads gets longer and longer, the likelihood that the coin is biased towards heads increases. If one flips a coin 21 times in a row and obtains 21 heads, one might rationally conclude a high probability of bias towards heads, and hence conclude that future flips of this coin are also highly likely to be heads. In fact, Bayesian inference can be used to show that when the long-run proportions of the different outcomes are unknown but exchangeable (meaning that the random process from which they are generated may be biased, but is equally likely to be biased in any direction), previous observations demonstrate the likely direction of the bias: the outcome which has occurred most often in the observed data is the one most likely to occur again.O’Neill, B. and Puza, B.D. (2004) Dice have no memories but I do: A defence of the reverse gambler’s belief. Reprinted in abridged form as O’Neill, B. and Puza, B.D. (2005) In defence of the reverse gambler’s belief. The Mathematical Scientist 30(1), pp. 13–16.
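
As a concrete illustration of this Bayesian reasoning (our own sketch, using a uniform Beta(1, 1) prior as a simplifying assumption rather than the cited paper’s more general exchangeability setup):

```python
# Posterior predictive for a coin of unknown bias, uniform Beta(1, 1) prior.
# After observing h heads and t tails, the posterior is Beta(1 + h, 1 + t),
# and the probability that the next flip is heads is its mean
# (Laplace's rule of succession): (1 + h) / (2 + h + t).
def p_next_heads(h, t):
    return (1 + h) / (2 + h + t)

print(round(p_next_heads(21, 0), 3))  # 0.957: 21 straight heads suggest bias
print(round(p_next_heads(0, 0), 3))   # 0.5: no data, no evidence of bias
```

Under this prior, the more one-sided the observed run, the more the prediction leans toward the outcome already seen, which is the reverse of what the gambler’s fallacy expects.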

Psychology behind the fallacy

Origins

Gambler’s fallacy arises out of a belief in the law of small numbers, or the erroneous belief that small samples must be representative of the larger population. According to the fallacy, “streaks” must eventually even out in order to be representative.Burns, B.D. and Corpus, B. (2004). Randomness and inductions from streaks: “Gambler’s fallacy” versus “hot hand.” Psychonomic Bulletin and Review. 11, 179-184 Amos Tversky and Daniel Kahneman first proposed that the gambler’s fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic, which states that people evaluate the probability of a certain event by assessing how similar it is to events they have experienced before, and how similar the events surrounding those two processes are. According to this view, “after observing a long run of red on the roulette wheel, for example, most people erroneously believe that black will result in a more representative sequence than the occurrence of an additional red”,Tversky & Kahneman, 1974. so people expect that a short run of random outcomes should share properties of a longer run, specifically in that deviations from average should balance out. When people are asked to make up a random-looking sequence of coin tosses, they tend to make sequences where the proportion of heads to tails stays closer to 0.5 in any short segment than would be predicted by chance (insensitivity to sample size); Kahneman and Tversky interpret this to mean that people believe short sequences of random events should be representative of longer ones.Tversky & Kahneman, 1971. The representativeness heuristic is also cited behind the related phenomenon of the clustering illusion, according to which people see streaks of random events as being non-random when such streaks are actually much more likely to occur in small samples than people expect.

The gambler’s fallacy can also be attributed to the mistaken belief that gambling (or even chance itself) is a fair process that can correct itself in the event of streaks, otherwise known as the just-world hypothesis.Rogers, P. (1998). The cognitive psychology of lottery gambling: A theoretical review. Journal of Gambling Studies, 14, 111-134 Other researchers believe that individuals with an internal locus of control - that is, people who believe that the gambling outcomes are the result of their own skill - are more susceptible to the gambler’s fallacy because they reject the idea that chance could overcome skill or talent.Sundali, J. and Croson, R. (2006). Biases in casino betting: The hot hand and the gambler’s fallacy. Judgment and Decision Making, 1, 1-12.

Variations of the gambler’s fallacy

Some researchers believe that there are actually two types of gambler’s fallacy: Type I and Type II. Type I is the “classic” gambler’s fallacy, when individuals believe that a certain outcome is “due” after a long streak of another outcome. Type II gambler’s fallacy, as defined by Gideon Keren and Charles Lewis, occurs when a gambler underestimates how many observations are needed to detect a favorable outcome (such as watching a roulette wheel for a length of time and then betting on the numbers that appear most often). Detecting a bias that will lead to a favorable outcome takes an impractically large amount of time and is very difficult, if not impossible, to do, therefore people fall prey to the Type II gambler’s fallacy.Keren, G. and Lewis, C. (1994). The two fallacies of gamblers: Type I and Type II. Organizational Behavior and Human Decision Processes, 60, 75-89. The two types are different in that Type I wrongly assumes that gambling conditions are fair and perfect, while Type II assumes that the conditions are biased, and that this bias can be detected after a certain amount of time.

Another variety, known as the retrospective gambler’s fallacy, occurs when individuals judge that a seemingly rare event must come from a longer sequence than a more common event does. For example, people believe that an imaginary sequence of die rolls is more than three times as long when a set of three 6’s is observed as opposed to when there are only two 6’s. This effect can be observed in isolated instances, or even sequentially. A real world example is when a teenager becomes pregnant after having unprotected sex, people assume that she has been engaging in unprotected sex for longer than someone who has been engaging in unprotected sex and is not pregnant.Oppenheimer, D.M. and Monin, B. (2009). The retrospective gambler’s fallacy: Unlikely events, constructing the past, and multiple universes. Judgment and Decision Making, 4, 326-334.

Relationship to hot-hand fallacy

Another psychological perspective states that gambler’s fallacy can be seen as the counterpart to basketball’s Hot-hand fallacy. In the hot-hand fallacy, people tend to predict the same outcome of the last event (positive recency) - that a high scorer will continue to score. In gambler’s fallacy, however, people predict the opposite outcome of the last event (negative recency) - that, for example, since the roulette wheel has landed on black the last six times, it is due to land on red the next. Ayton and Fischer have theorized that people display positive recency for the hot-hand fallacy because the fallacy deals with human performance, and that people do not believe that an inanimate object can become “hot.” Human performance is not perceived as “random,” and people are more likely to continue streaks when they believe that the process generating the results is nonrandom. Usually, when a person exhibits the gambler’s fallacy, they are more likely to exhibit the hot-hand fallacy as well, suggesting that one construct is responsible for the two fallacies.

The difference between the two fallacies is also represented in economic decision-making. A study by Huber, Kirchler, and Stockl (2010) examined how the hot hand and the gambler’s fallacy are exhibited in the financial market. The researchers gave their participants a choice: they could either bet on the outcome of a series of coin tosses, use an “expert” opinion to sway their decision, or choose a risk-free alternative instead for a smaller financial reward. Participants turned to the “expert” opinion to make their decision 24% of the time based on their past experience of success, which exemplifies the hot-hand. If the expert was correct, 78% of the participants chose the expert’s opinion again, as opposed to 57% doing so when the expert was wrong. The participants also exhibited the gambler’s fallacy, with their selection of either heads or tails decreasing after noticing a streak of that outcome. This experiment helped bolster Ayton and Fischer’s theory that people put more faith in human performance than they do in seemingly random processes.

Neurophysiology

While the representativeness heuristic and other cognitive biases are the most commonly cited cause of the gambler’s fallacy, research suggests that there may be a neurological component to it as well. Functional magnetic resonance imaging has revealed that, after losing a bet or gamble (“riskloss”), the frontoparietal network of the brain is activated, resulting in more risk-taking behavior. In contrast, there is decreased activity in the amygdala, caudate and ventral striatum after a riskloss. Activation in the amygdala is negatively correlated with gambler’s fallacy - the more activity exhibited in the amygdala, the less likely an individual is to fall prey to the gambler’s fallacy. These results suggest that gambler’s fallacy relies more on the prefrontal cortex (responsible for executive, goal-directed processes) and less on the brain areas that control affective decision-making.

The desire to continue gambling or betting is controlled by the striatum, which supports a choice-outcome contingency learning method. The striatum processes the errors in prediction and the behavior changes accordingly. After a win, the positive behavior is reinforced and after a loss, the behavior is conditioned to be avoided. In individuals exhibiting the gambler’s fallacy, this choice-outcome contingency method is impaired, and they continue to make risks after a series of losses.

Possible solutions

The gambler’s fallacy is a deep-seated cognitive bias and therefore very difficult to eliminate. For the most part, educating individuals about the nature of randomness has not proven effective in reducing or eliminating any manifestation of the gambler’s fallacy. Participants in an early study by Beach and Swensson (1967) were shown a shuffled deck of index cards with shapes on them, and were told to guess which shape would come next in a sequence. The experimental group of participants was informed about the nature and existence of the gambler’s fallacy, and was explicitly instructed not to rely on “run dependency” to make their guesses. The control group was not given this information. Even so, the response styles of the two groups were similar, indicating that the experimental group still based their choices on the length of the run sequence. Clearly, instructing individuals about randomness is not sufficient to lessen the gambler’s fallacy.

It does appear, however, that an individual’s susceptibility to the gambler’s fallacy decreases with age. Fischbein and Schnarch (1997) administered a questionnaire to five groups: students in grades 5, 7, 9, 11, and college students specializing in teaching mathematics. None of the participants had received any prior education regarding probability. The question was, “Ronni flipped a coin three times and in all cases heads came up. Ronni intends to flip the coin again. What is the chance of getting heads the fourth time?” The results indicated that the older the students were, the less likely they were to answer with “smaller than the chance of getting tails,” which would indicate a negative recency effect. 35% of the 5th graders, 35% of the 7th graders, and 20% of the 9th graders exhibited the negative recency effect. Only 10% of the 11th graders answered this way, however, and none of the college students did. Fischbein and Schnarch therefore theorized that an individual’s tendency to rely on the representativeness heuristic and other cognitive biases can be overcome with age.

Another possible solution that could be seen as more proactive comes from Roney and Trick, Gestalt psychologists who suggest that the fallacy may be eliminated as a result of grouping. When a future event (ex: a coin toss) is described as part of a sequence, no matter how arbitrarily, a person will automatically consider the event as it relates to the past events, resulting in the gambler’s fallacy. When a person considers every event as independent, however, the fallacy can be greatly reduced.

In their experiment, Roney and Trick told participants that they were betting on either two blocks of six coin tosses, or on two blocks of seven coin tosses. The fourth, fifth, and sixth tosses all had the same outcome, either three heads or three tails. The seventh toss was grouped with either the end of one block, or the beginning of the next block. Participants exhibited the strongest gambler’s fallacy when the seventh trial was part of the first block, directly after the sequence of three heads or tails. Additionally, the researchers pointed out how insidious the fallacy can be - the participants that did not show the gambler’s fallacy showed less confidence in their bets and bet fewer times than the participants who picked “with” the gambler’s fallacy. However, when the seventh trial was grouped with the second block (and was therefore perceived as not being part of a streak), the gambler’s fallacy did not occur.

Roney and Trick argue that a solution to gambler’s fallacy could be, instead of teaching individuals about the nature of randomness, training people to treat each event as if it is a beginning and not a continuation of previous events. This would prevent people from gambling when they are losing in the vain hope that their chances of winning are due to increase.

38
Q

Historian’s fallacy

A

Occurs when one assumes that decision makers of the past viewed events from the same perspective, and with the same information, as those subsequently analyzing the decision.

The historian’s fallacy is an informal fallacy that occurs when one assumes that decision makers of the past viewed events from the same perspective, and with the same information, as those subsequently analyzing the decision. It is not to be confused with presentism, a mode of historical analysis in which present-day ideas (such as moral standards) are projected into the past.

The fallacy was outlined in 1970 by David Hackett Fischer, who suggested it was analogous to William James’s psychologist’s fallacy. Fischer did not suggest that historians should refrain from retrospective analysis in their work, but he reminded historians that their subjects were not able to see into the future. As an example, he cited the well-known argument that Japan’s surprise attack on Pearl Harbor should have been predictable in the United States because of the many indications that an attack was imminent. What this argument overlooks, says Fischer, citing the work of Roberta Wohlstetter, is that there were innumerable conflicting signs which suggested possibilities other than an attack on Pearl Harbor. Only in retrospect do the warning signs seem obvious; signs which pointed in other directions tend to be forgotten. (See also: hindsight bias.)

In the field of military history, historians sometimes use what is known as the “fog of war technique” in hopes of avoiding the historian’s fallacy. In this approach, the actions and decisions of the historical subject (such as a military commander) are evaluated primarily on the basis of what that person knew at the time, and not on future developments that the person could not have known. According to Fischer, this technique was pioneered by the American historian Douglas Southall Freeman in his influential biographies of Robert E. Lee and George Washington.

39
Q

Homunculus fallacy

A

Where a “middle-man” is used for explanation, this sometimes leads to a regress of middle-men. Such an explanation fails to explain the real nature of a function or a process; instead, it explains the concept in terms of the concept itself, without first defining or explaining the original concept.

The homunculus argument is a fallacy arising most commonly in the theory of vision. One may explain (human) vision by noting that light from the outside world forms an image on the retinas in the eye and something (or someone) in the brain looks at these images as if they are images on a movie screen (this theory of vision is sometimes termed the theory of the Cartesian Theater: it is most associated, nowadays, with the psychologist David Marr). The question arises as to the nature of this internal viewer. The assumption here is that there is a ‘little man’ or ‘homunculus’ inside the brain ‘looking at’ the movie.

The reason why this is a fallacy may be understood by asking how the homunculus ‘sees’ the internal movie. The obvious answer is that there is another homunculus inside the first homunculus’s ‘head’ or ‘brain’ looking at this ‘movie’. But how does this homunculus see the ‘outside world’? In order to answer this, we are forced to posit another homunculus inside this other homunculus’s head and so forth. In other words, we are in a situation of infinite regress. The problem with the homunculus argument is that it tries to account for a phenomenon in terms of the very phenomenon that it is supposed to explain.

Homunculus arguments in terms of rules

Another example is with cognitivist theories that argue that the human brain uses ‘rules’ to carry out operations (these rules often conceptualised as being like the algorithms of a computer program). For example, in his work of the ’50s, ’60s and ’70s Noam Chomsky argued that (in the words of one of his books) human beings use Rules and Representations (or to be more specific, rules acting on representations) in order to cognise (more recently Chomsky has abandoned this view: c.f. the Minimalist Program).

Now, in terms of (say) chess, the players are given ‘rules’ (i.e. the rules of chess) to follow. So: who uses these rules? The answer is self-evident: the players of the game (of chess) use the rules: it’s not the case (obviously) that the rules themselves play chess. The rules themselves are merely inert marks on paper until a human being reads, understands and uses them. But what about the ‘rules’ that are, allegedly, inside our head (brain)? Who reads, understands and uses them? Again, the implicit answer is (and, some would argue, must be) a ‘homunculus’: a little man who reads the rules of the world and then gives orders to the body to act on them. But again we are in a situation of infinite regress, because this implies that the homunculus has cognitive processes that are also rule bound, which presupposes another homunculus inside its head, and so on and so forth. Therefore, so the argument goes, theories of mind that imply or state explicitly that cognition is rule bound cannot be correct unless some way is found to ‘ground’ the regress.

This is important because it is often assumed in cognitive science that rules and algorithms are essentially the same: in other words, the theory that cognition is rule bound is often believed to imply that thought (cognition) is essentially the manipulation of algorithms, and this is one of the key assumptions of some varieties of artificial intelligence.

Homunculus arguments are always fallacious unless some way can be found to ‘ground’ the regress. In psychology and philosophy of mind, ‘homunculus arguments’ (or the ‘homunculus fallacies’) are extremely useful for detecting where theories of mind fail or are incomplete.

The homunculus fallacy is closely related to Ryle’s regress.

Counterarguments

The above cited regress argument does not necessarily invalidate all forms of the homunculus argument. The regress relies on the idea that the homunculus ‘inside’ the brain must itself have a homunculus inside it, but it is not clear that this is a necessary condition. A dualist might argue that the homunculus inside the brain is an immaterial one (such as the Cartesian soul), or a mystic might argue that the homunculus is a recharacterization of the infinite consciousness of God (or true self), which does not itself require a homunculus (or spirit) in order to have sensory experience, ending the regress at this point (or at some later one). A non-dualist may argue that a human life form (or any organism) is coterminous and indivisible from its environment, with this unified field of awareness (i.e., universal consciousness) mistaking itself for a homunculus. Thus, the regress argument is only valid if no other explanation of the homunculus’s cognition can be supplied, and the arguments of a First Cause or a transcendental consciousness (i.e., awareness underlying and beyond form) are rejected.

40
Q

Incomplete comparison

A

Where not enough information is provided to make a complete comparison.

An incomplete comparison is a misleading argument popular in advertising. For example, an advertiser might say “product X is better”. This is an incomplete assertion, so it can’t be refuted. A complete assertion, such as “product X sells for a lower price than product Y” or “the new product X lasts longer than the old product X” could be tested and possibly refuted.

In grammar, an incomplete comparison is a comparison that leaves out one of the items being compared.

Unacceptable:

Learning Chinese is more difficult. (More difficult than what?)

Acceptable:

Learning Chinese is more difficult than learning Spanish.

41
Q

Inconsistent comparison

A

Where different methods of comparison are used, leaving one with a false impression of the whole comparison.

An inconsistent comparison is a misleading argument popular in advertising. For example, an advertisement might say “product X is less expensive than product A, has better quality than product B, and has more features than product C”. This is designed to give the impression that product X is better than products A, B, and C in all respects, but doesn’t actually make that claim. In fact, product A may be the most expensive, product B may be the lowest quality, and product C may have the fewest features of the three. So, the original statement really only means “product X is not the most expensive, lowest quality, fewest feature product on the market”. That would hardly be as effective an advertisement, however.

42
Q

Ignoratio elenchi

A

An argument that may in itself be valid, but does not address the issue in question.

Ignoratio elenchi, also known as irrelevant conclusion,Bishop Whately, cited by John Stuart Mill: A System of Logic. London Colchester 1959 (first: 1843), pp. 542. is the informal fallacy of presenting an argument that may or may not be logically valid, but fails nonetheless to address the issue in question.

Ignoratio elenchi falls into the broad class of relevance fallacies. It is one of the fallacies identified by Aristotle in his Organon. In a broader sense he asserted that all fallacies are a form of ignoratio elenchi.

Ignoratio Elenchi, according to Aristotle, is a fallacy which arises from “ignorance of the nature of refutation.” In order to refute an assertion, Aristotle says we must prove its contradictory; the proof, consequently, of a proposition which stood in any other relation than that to the original, would be an ignoratio elenchi… Since Aristotle, the scope of the fallacy has been extended to include all cases of proving the wrong point… “I am required to prove a certain conclusion; I prove, not that, but one which is likely to be mistaken for it; in that lies the fallacy… For instance, instead of proving that ‘this person has committed an atrocious fraud,’ you prove that ‘this fraud he is accused of is atrocious;’” … The nature of the fallacy, then, consists in substituting for a certain issue another which is more or less closely related to it, and arguing the substituted issue. The fallacy does not take into account whether the arguments do or do not really support the substituted issue, it only calls attention to the fact that they do not constitute a proof of the original one… It is a particularly prevalent and subtle fallacy and it assumes a great variety of forms. But whenever it occurs and whatever form it takes, it is brought about by an assumption that leads the person guilty of it to substitute for a definite subject of inquiry another which is in close relation with it.

The phrase ignoratio elenchi is Latin meaning “ignorance of refutation”. Here elenchi is the genitive singular of the Latin noun elenchus, which is from the Greek ἔλεγχος elenchos, meaning an argument of disproof or refutation. The translation in English of the Latin expression has varied a fair bit. Hamblin proposed “misconception of refutation” or “ignorance of refutation” as a literal translation, Oesterle preferred “ignoring the issue”, Irving Copi, Christopher Tindale and others used “irrelevant conclusion”.

An example might be a situation where A and B are debating whether the law permits A to do something. If A attempts to support his position with an argument that the law ought to allow him to do the thing in question, then he is guilty of ignoratio elenchi.H. W. Fowler, A Dictionary of Modern English Usage. Entry for ignoratio elenchi. Dr Johnson’s unique “refutation” of Bishop Berkeley’s immaterialism, his claim that matter did not actually exist but only seemed to exist, has been described as ignoratio elenchi: during a conversation with Boswell, Johnson powerfully kicked a nearby stone and proclaimed of Berkeley’s theory, “I refute it thus!”Bagnall, Nicholas. Books: Paperbacks, The Sunday Telegraph 3 March 1996

A related concept is that of the red herring, which is a deliberate attempt to divert a process of enquiry by changing the subject. Ignoratio elenchi is sometimes confused with the straw man argument. For example, it has been described as “attacking what the other fellow never said” by Peter Jay in a 1996 article in the New Statesman.Jay, Peter, Counterfeit coin, New Statesman, 23 August 1996

43
Q

Kettle logic

A

Using multiple inconsistent arguments to defend a position.

Kettle Logic (la logique du chaudron in the original French) is a type of informal fallacy wherein one uses multiple arguments to defend a point, but the arguments themselves are inconsistent.

The name derives from an example used by Sigmund Freud in The Interpretation of Dreams (in Standard Edition of the Complete Psychological Works of Sigmund Freud, trans. A. A. Brill, 4:119–20) and in his Jokes and Their Relation to the Unconscious (Standard Edition 13:62 and 206). Freud relates the story of a man who, accused by his neighbour of having returned a kettle in a damaged condition, offered three arguments:

That he had returned the kettle undamaged;
That it was already damaged when he borrowed it;
That he had never borrowed it in the first place.
The three arguments are inconsistent, and Freud notes that it would have been better if he had only used one.

The kettle logic of the dream-work is related to what Freud calls the embarrassment-dream of being naked, in which contradictory opposites are yoked together in the dream.Mills, Jon (2004) Rereading Freud: psychoanalysis through philosophy, p.14 The peculiarities of the logic of the dream-work can be seen taking place almost from the beginning of The Interpretation of Dreams. This “kettle logic,” as Derrida calls it, exemplifies the logic of the dream-work, as does what Freud calls the embarrassment-dream of being naked. Here, then, there is a logic that yokes contradictory opposites together in the dream. Freud said that in a dream, incompatible (contradictory) ideas are simultaneously admitted.Kabbalah and postmodernism: a dialogue, by Sanford L. Drob, p.139 and notes at p.292; Elliot R. Wolfson (2007) Oneiric Imagination and Mystical Annihilation in Habad Hasidism, in ARC, The Journal of the Faculty of Religious Studies, McGill University 35 (2007): 131–157; Sigmund Freud, The Interpretation of Dreams, translated by A. A. Brill, pp.367–373. Quotation: “Contradictory thoughts [Gedanken] do not try to eliminate one another, but continue side by side, and often combine to form condensation-products, as though no contradiction existed. The suppressed psychic material, which in the waking state has been prevented from expression and cut off from internal perception by the mutual neutralization of contradictory attitudes [die gegensätzliche Erledigung der Widersprüche], finds ways and means, under the sway of compromise-formations, of obtruding itself on consciousness during the night. Flectere si nequeo superos, Acheronta movebo.”

At any rate, the interpretation of dreams is the via regia to a knowledge of the unconscious element in our psychic life. (“If I cannot influence the gods, I will stir up Acheron.”) Freud also presented various examples of how a symbol in a dream can bear in itself contradictory sexual meanings.Jane Marie Todd (1990) Autobiographics in Freud and Derrida, p.109. Quotation: “For the flower is one of the examples that Freud chooses to demonstrate the contradictory (sexual) meanings that a single dream symbol can bear. In fact, this example from The Interpretation of Dreams plays a prominent role in Derrida’s article on metaphor, ‘La Mythologie Blanche’ (1971).”

44
Q

Mind projection fallacy

A

When one considers the way he sees the world as the way the world really is.

The mind projection fallacy is a logical fallacy first described by physicist and Bayesian philosopher E.T. Jaynes. It occurs when someone thinks that the way they see the world reflects the way the world really is, going as far as assuming the real existence of imagined objects. That is, someone’s subjective judgements are “projected” to be inherent properties of an object, rather than being related to personal perception. One consequence is that others may be assumed to share the same perception, or that they are irrational or misinformed if they do not.

For example, someone might informally claim that broccoli tastes bad, even though “taste” depends on one’s taste buds and prior experience. Alternatively, we might conclude that intelligent aliens will value the same things we do or be driven by similar motivations, even though their minds may not be constructed in the same way.

A second form of the fallacy, as described by Jaynes, occurs when someone treats their own lack of knowledge about a phenomenon (a fact about their state of mind) as meaning that the phenomenon is not or cannot be understood (a fact about reality). (See also Map and territory.)

Jaynes used this concept to argue against the Copenhagen interpretation of quantum mechanics. He described the fallacy as follows: In studying probability theory, it was vaguely troubling to see reference to “gaussian random variables”, or “stochastic processes”, or “stationary time series”, or “disorder”, as if the property of being gaussian, random, stochastic, stationary, or disorderly is a real property, like the property of possessing mass or length, existing in Nature. Indeed, some seek to develop statistical tests to determine the presence of these properties in their data…

45
Q

Moving the goalposts

A

Argument in which evidence presented in response to a specific claim is dismissed and some other evidence is demanded.

Moving the goalposts (or shifting the goalposts) is a metaphor meaning to change the criterion (goal) of a process or competition while still in progress, in such a way that the new goal offers one side an intentional advantage or disadvantage.

Etymology

This phrase is a straightforward derivation from sports that use goalposts, such as football. The figurative use alludes to the perceived unfairness in changing the goal one is trying to achieve after the process one is engaged in (e.g. a game of football) has already started. The phrase came into wide use in the UK during the 1980s. The first known attested use is in 1987.

Logical fallacy

Moving the goalposts, also known as raising the bar, is an informal logically fallacious argument in which evidence presented in response to a specific claim is dismissed and some other (often greater) evidence is demanded.

In other words, after an attempt has been made to score a goal, the goalposts are moved to exclude the attempt.Clark, Jef et al. (2005). “Moving the goalposts,” Humbug! The Skeptic’s Field Guide to Spotting Fallacies in Thinking, p. 101. The problem with changing the rules of the game is that the meaning of the end result is changed too. It counts for less.Hobbs, Jeremy. “Moving the Goal Posts,” New York Times, November 21, 2011; retrieved 2013-2-19.

Bullying

The tactics of bullying behaviour include moving the goalposts: setting objectives which subtly change in ways that ensure they can never be reached.Royal College of Psychiatrists, “On Bullying and Harassment” retrieved 2012-2-19.

In workplace bullying, shifting the goalposts is a conventional tactic in the process of destabilization.Field, Tim. (1995). Bully in Sight: How to Predict, Resist, Challenge and Combat Workplace Bullying, p. 60.

Feature creep

Moving the goalposts may also refer to feature creep, in which the completion of a product such as software is never acknowledged because an evolving list of required features changes over time, which may require an entire remaking of the program. Thus, the goal of “completing” the product for a client may never be reached.

Other uses

The term is often used in business to imply bad faith on the part of those setting goals for others to meet, by arbitrarily making additional demands just as the initial ones are about to be met. Accusations of this form of abuse tend to occur when there are unstated assumptions that are obvious to one party but not to another.

46
Q

Nirvana fallacy

A

When solutions to problems are rejected because they are not perfect.

The nirvana fallacy is the informal fallacy of comparing actual things with unrealistic, idealized alternatives. It can also refer to the tendency to assume that there is a perfect solution to a particular problem. A closely related concept is the perfect solution fallacy.

By creating a false dichotomy that presents one option which is obviously advantageous—while at the same time being completely implausible—a person using the nirvana fallacy can attack any opposing idea because it is imperfect. The choice is not between real world solutions and utopia; it is, rather, a choice between one realistic possibility and another that is unrealistic but in some way “better”.

History

The nirvana fallacy was given its name by economist Harold Demsetz in 1969, who said:H. Demsetz, “Information and Efficiency: Another Viewpoint,” Journal of Law and Economics 12 (April 1969): 1, quoted in

A related quotation from Voltaire is:

Le mieux est l’ennemi du bien
often translated as:

The perfect is the enemy of the good
though it is better rendered as:

Leave well alone (literally: “The better is the enemy of the well”) or Leave well enough alone
from “La Bégueule” (1772).

Perfect solution fallacy

The perfect solution fallacy is an informal fallacy that occurs when an argument assumes that a perfect solution exists and/or that a solution should be rejected because some part of the problem would still exist if it were implemented. This is a classic example of black and white thinking, in which a person fails to see the complex interplay between multiple component elements of a situation or problem, and as a result, reduces complex problems to a pair of binary extremes.

It is common for arguments which commit this fallacy to omit any specifics about exactly how, or how badly, a proposed solution is claimed to fall short of acceptability, expressing the rejection in vague terms only. Alternatively, it may be combined with the fallacy of misleading vividness, when a specific example of a solution’s failure is described in emotionally powerful detail but base rates are ignored (see availability heuristic).

The fallacy is a type of false dilemma.

Examples

Posit (fallacious): These anti-drunk driving ad campaigns are not going to work. People are still going to drink and drive no matter what.

Rebuttal: Complete eradication of drunk driving is not the expected outcome. The goal is reduction.

Posit (fallacious): Seat belts are a bad idea. People are still going to die in car crashes.

Rebuttal: While seat belts cannot make driving 100% safe, they do reduce one’s likelihood of dying in a car crash.

Buddhist interpretations

Dukkha, a Buddhist notion of unease
Wabi-sabi, a Japanese aesthetic of imperfection

47
Q

Post hoc ergo propter hoc. After this, therefore because of this

A

X happened then Y happened; therefore X caused Y.

Post hoc ergo propter hoc, Latin for “after this, therefore because of this”, is a logical fallacy (of the questionable cause variety) that states “Since that event followed this one, that event must have been caused by this one.” It is often shortened to simply post hoc. It is subtly different from the fallacy cum hoc ergo propter hoc, in which two things or events occur simultaneously or the chronological ordering is insignificant or unknown, also referred to as false cause, coincidental correlation, or correlation not causation.

Post hoc is a particularly tempting error because temporal sequence appears to be integral to causality. The fallacy lies in coming to a conclusion based solely on the order of events, rather than taking into account other factors that might rule out the connection.

Pattern

The form of the post hoc fallacy can be expressed as follows:

  • A occurred, then B occurred.
  • Therefore, A caused B.
    When B is undesirable, this pattern is often extended in reverse: Avoiding A will prevent B.

Examples

From Attacking Faulty Reasoning by T. Edward Damer:

From With Good Reason by S. Morris Engel:

One class of examples is sometimes called the “Rooster Syndrome”: “believing that the rooster’s crowing causes the sun to rise”.

48
Q

Proof by verbosity

A

Submission of others to an argument too complex and verbose to reasonably deal with in all its intimate details.

Proof by intimidation (or argumentum verbosium) is a jocular phrase used mainly in mathematics to refer to a style of presenting a purported mathematical proof by giving an argument loaded with jargon and appeal to obscure results, so that the audience is simply obliged to accept it, lest they have to admit their ignorance and lack of understanding.

The phrase is also used when the author is an authority in his field presenting his proof to people who respect a priori his insistence that the proof is valid or when the author claims that his statement is true because it is trivial or because he simply says so. Usage of this phrase is for the most part in good humour, though it also appears in serious criticism.

More generally, “proof by intimidation” has also been used by critics of junk science to describe cases in which scientific evidence is thrown aside in favour of a litany of tragic individual cases presented to the public by articulate advocates who pose as experts in their field.

Gian-Carlo Rota claimed in a memoir that the expression “proof by intimidation” was coined by Mark Kac to describe a technique used by William Feller in his lectures: “He took umbrage when someone interrupted his lecturing by pointing out some glaring mistake. He became red in the face and raised his voice, often to full shouting range. It was reported that on occasion he had asked the objector to leave the classroom. The expression ‘proof by intimidation’ was coined after Feller’s lectures (by Mark Kac). During a Feller lecture, the hearer was made to feel privy to some wondrous secret, one that often vanished by magic as he walked out of the classroom at the end of the period. Like many great teachers, Feller was a bit of a con man.”

49
Q

Prosecutor’s fallacy

A

A low probability of false matches does not mean a low probability of some false match being found

The prosecutor’s fallacy is a fallacy of statistical reasoning made in law. In this fallacy, the context in which the accused has been brought to court is falsely assumed to be irrelevant to judging how confident a jury can be in evidence against them with a statistical measure of doubt. If the defendant was selected from a large group merely because of the evidence under consideration, then this fact should be included in weighing how incriminating that evidence is. Not doing so is a base rate fallacy. This fallacy usually results in assuming that the prior probability that a piece of evidence would implicate a randomly chosen member of the population is equal to the probability that it would implicate the defendant.

One form of the fallacy results from misunderstanding conditional probability and neglecting the prior odds of a defendant being guilty before that evidence was introduced. When a prosecutor has collected some evidence (for instance a DNA match) and has an expert testify that the probability of finding this evidence if the accused were innocent is tiny, the fallacy occurs if it is concluded that the probability of the accused being innocent must be comparably tiny. If the DNA match is used to confirm guilt which is otherwise suspected then it is indeed strong evidence. However if the DNA evidence is the sole evidence against the accused and the accused was picked out of a large database of DNA profiles, the jurors should consider the very likely possibility that the match is just a coincidence and the accused is innocent. The odds in this scenario do not relate to the odds of being guilty, they relate to the odds of being picked at random.
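The gap between P(evidence | innocent) and P(innocent | evidence) can be made concrete with Bayes’ theorem. The Python sketch below uses purely hypothetical numbers (a 1-in-10,000 chance of an innocent match, a test that always matches the guilty person, and a suspect pool of 1,000 people containing one guilty person):

```python
# Prosecutor's fallacy: P(match | innocent) is not P(innocent | match).
# All numbers are hypothetical, chosen only to illustrate the arithmetic.
p_match_given_innocent = 1 / 10_000   # chance an innocent person matches
p_match_given_guilty = 1.0            # the guilty person always matches
p_guilty_prior = 1 / 1_000            # one guilty person in a pool of 1,000

# Bayes' theorem: the posterior depends on the prior, not just the match rate.
p_match = (p_match_given_guilty * p_guilty_prior
           + p_match_given_innocent * (1 - p_guilty_prior))
p_guilty_given_match = p_match_given_guilty * p_guilty_prior / p_match

# Despite the 1-in-10,000 match rate, the posterior probability of guilt is
# only about 0.91 -- a roughly 9% chance of innocence, not 0.01%.
print(round(p_guilty_given_match, 3))
```

The pool-size prior in the denominator is exactly the context that the fallacy discards.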

The terms “prosecutor’s fallacy” and “defense attorney’s fallacy” were originated by William C. Thompson and Edward Schumann in the 1987 article Interpretation of Statistical Evidence in Criminal Trials, subtitled The Prosecutor’s Fallacy and the Defense Attorney’s Fallacy.

Examples of prosecutor’s fallacies

Conditional probability

Argument from rarity – Consider this case: a lottery winner is accused of cheating, based on the improbability of winning. At the trial, the prosecutor calculates the (very small) probability of winning the lottery without cheating and argues that this is the chance of innocence. The logical flaw is that the prosecutor has failed to account for the large number of people who play the lottery.

Berkson’s paradox - mistaking conditional probability for unconditional - led to several wrongful convictions of British mothers, accused of murdering two of their children in infancy, where the primary evidence against them was the statistical improbability of two children dying accidentally in the same household (under “Meadow’s law”). Though multiple accidental (SIDS) deaths are rare, so are multiple murders; with only the facts of the deaths as evidence, it is the ratio of these (prior) improbabilities that gives the correct “posterior probability” of murder.

Multiple testing

In another scenario, a crime-scene DNA sample is compared against a database of 20,000 men. A match is found, that man is accused and at his trial, it is testified that the probability that two DNA profiles match by chance is only 1 in 10,000. This does not mean the probability that the suspect is innocent is 1 in 10,000. Since 20,000 men were tested, there were 20,000 opportunities to find a match by chance.
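The arithmetic behind this scenario can be checked directly; this sketch uses the figures from the example above (20,000 profiles, a 1-in-10,000 chance of a coincidental match per comparison):

```python
# Multiple testing: a low per-comparison match probability does not mean a
# low probability of finding *some* match when a whole database is searched.
p_chance_match = 1 / 10_000   # probability two profiles match by chance
n_profiles = 20_000           # size of the DNA database searched

# On average, two innocent people in the database will match by chance,
# and the probability of at least one coincidental match is high.
expected_matches = n_profiles * p_chance_match
p_at_least_one_match = 1 - (1 - p_chance_match) ** n_profiles

print(expected_matches)                # about 2 coincidental matches expected
print(round(p_at_least_one_match, 3))  # about 0.865: a chance match is likely
```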

Legal impact

In the courtroom, the prosecutor’s fallacy typically happens by mistake, but deliberate use of the prosecutor’s fallacy is prosecutorial misconduct and can subject the prosecutor to official reprimand, disbarment or criminal punishment.

In the adversarial system, lawyers are usually free to present statistical evidence as best suits their case; retrials are more commonly the result of the prosecutor’s fallacy in expert witness testimony or in the judge’s summation.

Defense attorney’s fallacy

Suppose there is a one-in-a-million chance of a match given that the accused is innocent. The prosecutor says this means there is only a one-in-a-million chance of innocence. But if everyone in a community of 10 million people is tested, one expects 10 matches even if all are innocent. The defense fallacy would be to reason that “10 matches were expected, so the accused is no more likely to be guilty than any of the other matches, thus the evidence suggests a 90% chance that the accused is innocent.” and “As such, this evidence is irrelevant.” The first part of the reasoning would be correct only in the case where there is no further evidence pointing to the defendant. On the second part, Thompson & Schumann wrote that the evidence should still be highly relevant because it “drastically narrows the group of people who are or could have been suspects, while failing to exclude the defendant” (page 171).
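The figures in this example can be worked through in a few lines; as the text notes, the final step holds only when the match is the sole evidence against the accused:

```python
# Defense attorney's fallacy: the match is not irrelevant; it narrows ten
# million possible suspects down to roughly eleven people.
population = 10_000_000
p_match_given_innocent = 1 / 1_000_000

# Expected number of innocent people who match by chance.
expected_innocent_matches = population * p_match_given_innocent

# With no other evidence, the accused is one of the matching people:
# the ~10 expected innocent matches plus the one guilty person.
p_guilty_given_match = 1 / (expected_innocent_matches + 1)

print(expected_innocent_matches)       # about 10 innocent matches expected
print(round(p_guilty_given_match, 3))  # about 0.091 -- far above 1 in 10 million
```

The match raises the probability of guilt from 1 in 10,000,000 to roughly 1 in 11, which is why dismissing it as irrelevant is itself fallacious.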

Possible examples of fallacious defense arguments

Some authors have cited defense arguments in the O. J. Simpson murder trial as an example of this fallacy regarding the context in which the accused had been brought to court: crime scene blood matched Simpson’s with characteristics shared by 1 in 400 people. The defense argued that a football stadium could be filled with Angelenos matching the sample and that the figure of 1 in 400 was useless.Robertson, B., & Vignaux, G. A. (1995). Interpreting evidence: Evaluating forensic evidence in the courtroom. Chichester: John Wiley and Sons.Rossmo, D. Kim (2009). Criminal Investigative Failures. CRC Press Taylor & Francis Group.

Also at the O. J. Simpson murder trial, the prosecution presented evidence that Simpson had been violent toward his wife, while the defense argued that there was only one woman murdered for every 2500 women who were subjected to spousal abuse, and that any history of Simpson being violent toward his wife was irrelevant to the trial. However, some regard the reasoning behind the defense’s calculation as fallacious. According to author Gerd Gigerenzer, the correct probability requires the context—that Simpson’s wife had not only been subjected to domestic violence, but subjected to domestic violence and murdered—to be taken into account. Gigerenzer writes “the chances that a batterer actually murdered his partner, given that she has been killed, is about 8 in 9 or approximately 90%”.Gigerenzer, G., Reckoning with Risk: Learning to Live with Uncertainty, Penguin, (2003)

The Sally Clark case

Sally Clark was a British woman accused in 1998 of having killed her first child at 11 weeks of age and then, after conceiving another child, of killing the second at 8 weeks of age. The prosecution had expert witness Sir Roy Meadow testify that the probability of two children in the same family dying from SIDS is about 1 in 73 million. That was much less frequent than the actual rate measured in historical data: Meadow estimated it from single-SIDS death data and the assumption that the probability of such deaths should be uncorrelated between infants.

The population-wide probability of a SIDS fatality was about 1 in 1,303; Meadow generated his 1-in-73-million estimate from the lesser probability of SIDS death in the Clark household, which had lower risk factors (e.g. non-smoking). In this sub-population he estimated the probability of a single death at 1 in 8,500. Professor Ray Hill questioned even this first step (1/8,500 vs 1/1,300) in two ways: firstly, on the grounds that it was biased, excluding those factors that increased risk (especially that both children were boys), and (more importantly) because reductions in SIDS risk factors proportionately reduce murder risk factors, so that the relative frequencies of Münchausen syndrome by proxy and SIDS remain in the same ratio as in the general population.

Meadow acknowledged that 1-in-73 million is not an impossibility, but argued that such accidents would happen “once every hundred years” and that, in a country of 15 million 2-child families, it is vastly more likely that the double-deaths are due to Münchausen syndrome by proxy than to such a rare accident. However, there is good reason to suppose that the likelihood of a death from SIDS in a family is significantly greater if a previous child has already died in these circumstances: a genetic predisposition to SIDS is likely to invalidate the assumed statistical independence (Gene find casts doubt on double ‘cot death’ murders. The Observer; July 15, 2001) by making some families more susceptible to SIDS, rendering the error an instance of the ecological fallacy. The likelihood of two SIDS deaths in the same family cannot be soundly estimated by squaring the likelihood of a single such death in all otherwise similar families.
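Meadow’s squaring step can be reproduced directly, together with an illustration of how dependence between the two deaths changes the result (the conditional risk figure below is purely hypothetical, used only to show the direction of the error):

```python
# Meadow's 1-in-73-million figure: squaring the single-death probability,
# which silently assumes the two deaths are statistically independent.
p_single_sids = 1 / 8_500   # Meadow's single-death estimate, low-risk family

p_double_if_independent = p_single_sids ** 2
print(round(1 / p_double_if_independent))   # 72,250,000: the "1 in 73 million"

# If a first SIDS death raises the risk of a second (e.g. a shared genetic
# predisposition), the correct figure is p(first) * p(second | first).
# The conditional risk below is purely hypothetical, for illustration only.
p_second_given_first = 1 / 100
p_double_if_dependent = p_single_sids * p_second_given_first
print(round(1 / p_double_if_dependent))     # 850,000: vastly more probable
```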

The 1-in-73 million figure greatly underestimated the chance of two successive accidents but, even if that assessment were accurate, the court seems to have missed the fact that the number meant nothing on its own. As an a priori probability, it should have been weighed against the a priori probabilities of the alternatives. Given that two deaths had occurred, one of the following explanations must be true, and all of them are a priori extremely improbable:

Two successive deaths in the same family, both by SIDS
Double homicide (the prosecution's case)
Other possibilities (including one homicide and one case of SIDS)
It's unclear that an estimate for the second possibility was ever proposed during the trial, or that the comparison of the first two probabilities was understood to be the key estimate to make in the statistical analysis assessing the prosecution's case against the case for innocence.

Mrs. Clark was convicted in 1999, prompting a press release by the Royal Statistical Society that pointed out the mistakes.

In 2002, Ray Hill (a mathematics professor at Salford) attempted to compare the chances of these two possible explanations accurately; he concluded that successive accidents are between 4.5 and 9 times more likely than successive murders, so that the a priori odds of Clark’s guilt were between 4.5 to 1 and 9 to 1 against. The uncertainty in this range is mainly driven by uncertainty in the likelihood of killing a second child, having killed a first.
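
Hill’s comparison amounts to a prior odds calculation. The sketch below simply turns the 4.5-to-9 range quoted above into prior probabilities of guilt; the normalization is illustrative, not an independent estimate:

```python
# Convert Hill's odds range into prior probabilities of guilt.
# Double SIDS is used as the unit of relative likelihood.

relative_sids = 1.0
for factor in (4.5, 9.0):
    # Hill: double SIDS is `factor` times more likely than double murder
    relative_murder = relative_sids / factor
    prior_guilt = relative_murder / (relative_murder + relative_sids)
    print(f"double SIDS {factor}x more likely -> prior P(guilt) = {prior_guilt:.2f}")
```

This yields prior probabilities of guilt of roughly 0.10 to 0.18, before any other evidence is considered, rather than the near-certainty the 1-in-73 million figure suggested to the jury.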

A higher court later quashed Sally Clark’s conviction, on other grounds, on 29 January 2003. However, Sally Clark, a practising solicitor before the conviction, developed a number of serious psychiatric problems, including severe alcohol dependency, and died in 2007 from alcohol poisoning.

50
Q

Psychologists fallacy

A

An observer presupposes the objectivity of his own perspective when analyzing a behavioral event.

The psychologist’s fallacy is a fallacy that occurs when an observer assumes that his/her subjective experience reflects the true nature of an event. The fallacy was named by William James in the 19th century:

The great snare of the psychologist is the confusion of his own standpoint with that of the mental fact about which he is making his report. I shall hereafter call this the ‘psychologist’s fallacy’ par excellence.William James, Principles of Psychology volume I. chapter vii. p. 196, 1890.

A classic example of the psychologist’s fallacy is the experience of the geocentric model of the solar system. An observer on Earth experiences the Sun moving across the sky; this experience, however, does not reveal the underlying reality that the Earth revolves around the Sun.

Alternative statements of the fallacy

Some sources state the Psychologist’s Fallacy as if it were about two people — the observer and the observed — rather than about one observer and a fact. For example,

Psychologist’s fallacy, the fallacy, to which the psychologist is peculiarly liable, of reading into the mind he is examining what is true of his own; especially of reading into lower minds what is true of higher.James Mark Baldwin, Dictionary of Philosophy and Psychology volume II. p. 382/2, 1902.

A danger to be avoided known as the ‘psychologist’s fallacy’. This arises from the fact that the experimenter is apt to suppose that the subject will respond to a stimulus or an order in the same way as he himself would respond in the circumstances.British Journal of Psychology. XXI. p. 243, 1931.

In this alternative form, the fallacy is described as a specific form of the “similar to me” stereotype: what is unknown about another person is assumed, for simplicity, using things the observer knows about himself or herself. Such a bias leads the observer to presuppose knowledge or skills, or lack of such, possessed by another person. For example, “I (or everyone I know or most people I know) don’t know very much about chemistry. Therefore I can assume that this other person knows very little about chemistry.” This assumption may be true in any number of specific cases, making inductive reasoning based on this assumption cogent, but is not applicable in the general case (there are many people who are very knowledgeable in the field of chemistry), and therefore deductive reasoning based on this assumption may be invalid.

These alternative statements, however, do not match what William James characterized when he named the fallacy.

51
Q

Red herring

A

A speaker attempts to distract an audience by deviating from the topic at hand by introducing a separate argument which the speaker believes will be easier to speak to.

Ignoratio elenchi, also known as irrelevant conclusion,Bishop Whately, cited by John Stuart Mill: A System of Logic. London Colchester 1959 (first: 1843), pp. 542. is the informal fallacy of presenting an argument that may or may not be logically valid, but fails nonetheless to address the issue in question.

Ignoratio elenchi falls into the broad class of relevance fallacies. It is one of the fallacies identified by Aristotle in his Organon. In a broader sense he asserted that all fallacies are a form of ignoratio elenchi.

Ignoratio Elenchi, according to Aristotle, is a fallacy which arises from “ignorance of the nature of refutation.” In order to refute an assertion, Aristotle says we must prove its contradictory; the proof, consequently, of a proposition which stood in any other relation than that to the original, would be an ignoratio elenchi… Since Aristotle, the scope of the fallacy has been extended to include all cases of proving the wrong point… “I am required to prove a certain conclusion; I prove, not that, but one which is likely to be mistaken for it; in that lies the fallacy… For instance, instead of proving that ‘this person has committed an atrocious fraud,’ you prove that ‘this fraud he is accused of is atrocious;’” … The nature of the fallacy, then, consists in substituting for a certain issue another which is more or less closely related to it, and arguing the substituted issue. The fallacy does not take into account whether the arguments do or do not really support the substituted issue, it only calls attention to the fact that they do not constitute a proof of the original one… It is a particularly prevalent and subtle fallacy and it assumes a great variety of forms. But whenever it occurs and whatever form it takes, it is brought about by an assumption that leads the person guilty of it to substitute for a definite subject of inquiry another which is in close relation with it.

The phrase ignoratio elenchi is Latin meaning “ignorance of refutation”. Here elenchi is the genitive singular of the Latin noun elenchus, which is from the Greek ἔλεγχος elenchos, meaning an argument of disproof or refutation. The translation in English of the Latin expression has varied a fair bit. Hamblin proposed “misconception of refutation” or “ignorance of refutation” as a literal translation, Oesterle preferred “ignoring the issue”, Irving Copi, Christopher Tindale and others used “irrelevant conclusion”.

An example might be a situation where A and B are debating whether the law permits A to do something. If A attempts to support his position with an argument that the law ought to allow him to do the thing in question, then he is guilty of ignoratio elenchi.H. W. Fowler, A Dictionary of Modern English Usage. Entry for ignoratio elenchi. Dr Johnson’s unique “refutation” of Bishop Berkeley’s immaterialism, his claim that matter did not actually exist but only seemed to exist, has been described as Ignoratio elenchi:

During a conversation with Boswell, Johnson powerfully kicked a nearby stone and proclaimed of Berkeley’s theory, “I refute it thus!”Bagnall, Nicholas. Books: Paperbacks, The Sunday Telegraph, 3 March 1996

A related concept is that of the red herring, which is a deliberate attempt to divert a process of enquiry by changing the subject. Ignoratio elenchi is sometimes confused with the straw man argument; for example, it has been described as “attacking what the other fellow never said” by Peter Jay in a 1996 article in New Statesman.Jay, Peter, Counterfeit coin, New Statesman, 23 August 1996

52
Q

Regression fallacy

A

Ascribes cause where none exists. The flaw is failing to account for natural fluctuations. It is frequently a special kind of post hoc fallacy.

The regression (or regressive) fallacy is an informal fallacy. It ascribes cause where none exists. The flaw is failing to account for natural fluctuations. It is frequently a special kind of the post hoc fallacy.

Explanation

Things like golf scores, the earth’s temperature, and chronic back pain fluctuate naturally and usually regress towards the mean. The logical flaw is to make predictions that expect exceptional results to continue as if they were average (see Representativeness heuristic). People are most likely to take action when variance is at its peak; then, after results become more normal, they believe that their action caused the change when in fact it did not.

This use of the word “regression” was coined by Sir Francis Galton in a study from 1885 called “Regression Toward Mediocrity in Hereditary Stature”. He showed that the height of children from very short or very tall parents would move towards the average. In fact, in any situation where two variables are less than perfectly correlated, an exceptional score on one variable may not be matched by an equally exceptional score on the other variable. The imperfect correlation between parents and children (height is not entirely heritable) means that the distribution of heights of their children will be centered somewhere between the average of the parents and the average of the population as a whole. Thus, any single child can be more extreme than the parents, but the odds are against it.
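
The effect Galton measured can be reproduced with a small simulation; this is a minimal sketch assuming an arbitrary parent–child correlation of 0.5, not real height data:

```python
# Simulate regression toward the mean: when two variables are imperfectly
# correlated, extreme scores on the first tend to pair with less extreme
# scores on the second. r = 0.5 is an arbitrary illustrative correlation.
import random

random.seed(0)
r = 0.5  # assumed parent-child correlation (not a measured value)

pairs = []
for _ in range(100_000):
    parent = random.gauss(0, 1)
    # Standard construction of a correlated pair: child has correlation r
    # with parent and unit variance overall.
    child = r * parent + (1 - r**2) ** 0.5 * random.gauss(0, 1)
    pairs.append((parent, child))

# Among very tall "parents" (more than 2 standard deviations above average),
# the mean "child" score sits well back toward the population mean of 0.
tall = [child for parent, child in pairs if parent > 2]
mean_child = sum(tall) / len(tall)
print(f"mean child score given parent > 2: {mean_child:.2f}")
```

The children of extreme parents are still above average, but much less extreme than their parents, with no causal mechanism pulling them back.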

Examples

When his pain got worse, he went to a doctor, after which the pain subsided a little. Therefore, he benefited from the doctor’s treatment.

The pain subsiding a little after it has gotten worse is more easily explained by regression towards the mean. Assuming the pain relief was caused by the doctor is fallacious.

The student did exceptionally poorly last semester, so I punished him. He did much better this semester. Clearly, punishment is effective in improving students’ grades.

Often exceptional performances are followed by more normal performances, so the change in performance might better be explained by regression towards the mean. Incidentally, some experiments have shown that people may develop a systematic bias for punishment and against reward because of reasoning analogous to this example of the regression fallacy.Schaffner, 1985; Gilovich, 1991 pp. 27–28

The frequency of accidents on a road fell after a speed camera was installed. Therefore, the speed camera has improved road safety.

Speed cameras are often installed after a road incurs an exceptionally high number of accidents, and this value usually falls (regression to mean) immediately afterwards. Many speed camera proponents attribute this fall in accidents to the speed camera, without observing the overall trend.
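
This selection effect can be demonstrated by simulation: if camera sites are chosen purely because of a high accident count in one year, their counts fall the next year even when nothing about the roads changes. The Poisson rate of 10 accidents per year is an arbitrary assumption for illustration:

```python
# Simulate the speed-camera selection effect with identical, unchanging roads.
import random

random.seed(1)

def poisson(lam):
    # Knuth's multiplication method; adequate for small lambda
    L = 2.718281828459045 ** (-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Each site has the SAME true accident rate in both years: no site improves.
sites = [(poisson(10), poisson(10)) for _ in range(10_000)]

# "Install cameras" at the worst ~5% of sites based on year-1 counts alone.
threshold = sorted(y1 for y1, _ in sites)[int(0.95 * len(sites))]
selected = [(y1, y2) for y1, y2 in sites if y1 >= threshold]

avg1 = sum(y1 for y1, _ in selected) / len(selected)
avg2 = sum(y2 for _, y2 in selected) / len(selected)
print(f"'worst' sites: year-1 average {avg1:.1f}, year-2 average {avg2:.1f}")
```

The selected sites' year-2 average falls back to the true rate of about 10 with no intervention at all, which is exactly the drop a camera proponent would be tempted to claim as a safety improvement.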

Some authors have claimed that the alleged “Sports Illustrated Cover Jinx” is a good example of a regression effect: extremely good performances are likely to be followed by less extreme ones, and athletes are chosen to appear on the cover of Sports Illustrated only after extreme performances. Assuming athletic careers are partly based on random factors, attributing this to a “jinx” rather than regression, as some athletes reportedly believed, would be an example of committing the regression fallacy.Gilovich, 1991 pp. 26–27; Plous, 1993 p. 118

Misapplication

On the other hand, dismissing valid explanations can lead to a worse situation. For example:

After the Western Allies invaded Normandy, creating a second major front, German control of Europe waned. Clearly, the combination of the Western Allies and the USSR drove the Germans back.

Fallacious evaluation: “Given that the counterattacks against Germany occurred only after they had conquered the greatest amount of territory under their control, regression to the mean can explain the retreat of German forces from occupied territories as a purely random fluctuation that would have happened without any intervention on the part of the USSR or the Western Allies.” This is clearly not the case. The reason is that political power and occupation of territories is not primarily determined by random events, making the concept of regression to the mean inapplicable (on the large scale).

In essence, misapplication of regression to the mean can reduce all events to a “just so” story, without cause or effect. (Such misapplication takes as a premise that all events are random, as they must be for the concept of regression to the mean to be validly applied.)

53
Q

Reification

A

A fallacy of ambiguity, when an abstraction is treated as if it were a concrete, real event or physical entity. In other words, it is the error of treating as a “real thing” something that is not a real thing, but merely an idea.

Reification (also known as concretism, or the fallacy of misplaced concreteness) is a fallacy of ambiguity, when an abstraction (abstract belief or hypothetical construct) is treated as if it were a concrete, real event, or physical entity.

Logical Fallacies, Formal and Informal
In other words, it is the error of treating as a concrete thing something which is not concrete, but merely an idea. For example: if the phrase “fighting for justice” is taken literally, justice would be reified.

Another common manifestation is the confusion of a model with reality. Mathematical or simulation models may help understand a system or situation but real life may differ from the model.

Note that reification is generally accepted in literature and other forms of discourse where reified abstractions are understood to be intended metaphorically, but the use of reification in logical arguments is usually regarded as a fallacy. In rhetoric, it may sometimes be difficult to determine whether reification was used correctly or incorrectly.

Etymology

From Latin res (“thing”) + facere (“to make”), reification can be loosely translated as “thing-making”: the turning of something abstract into a concrete thing or object.

Theory

Reification often takes place when natural or social processes are misunderstood and/or simplified; for example when human creations are described as “facts of nature, results of cosmic laws, or manifestations of divine will”.David K. Naugle, Worldview: the history of a concept, Wm. B. Eerdmans Publishing, 2002, ISBN 0-8028-4761-7, Google Print, p.178 Reification can also occur when a word with a normal usage is given an invalid usage, with mental constructs or concepts referred to as live beings. When human-like qualities are attributed as well, it is a special case of reification, known as pathetic fallacy (or anthropomorphic fallacy).

Nature provides empathy that we may have insight into the mind of others.
Reification may derive from an inborn tendency to simplify experience by assuming constancy as much as possible.David Galin in B. Alan Wallace, editor, Buddhism & Science: Breaking New Ground. Columbia University Press, 2003, page 132.

Difference between reification and hypostatization

Sometimes a distinction is drawn between reification and hypostatization based on the kinds of abstractions involved. In reification they are usually philosophical or ideological, such as existence, good, and justice.

Fallacy of misplaced concreteness

In the philosophy of Alfred North Whitehead, one commits the fallacy of misplaced concreteness when one mistakes an abstract belief, opinion or concept about the way things are for a physical or “concrete” reality.

There is an error; but it is merely the accidental error of mistaking the abstract for the concrete. It is an example of what I will call the ‘Fallacy of Misplaced Concreteness.’

Whitehead proposed the fallacy in a discussion of the relation of spatial and temporal location of objects. He rejects the notion that a concrete physical object in the universe can be ascribed a simple spatial or temporal extension, that is, without reference to its relations to other spatial or temporal extensions.

…among the primary elements of nature as apprehended in our immediate experience, there is no element whatever which possesses this character of simple location. … I hold that by a process of constructive abstraction we can arrive at abstractions which are the simply located bits of material, and at other abstractions which are the minds included in the scientific scheme. Accordingly, the real error is an example of what I have termed: The Fallacy of Misplaced Concreteness.

The use of constructs in science

The concept of a “construct” has a long history in science; it is used in many, if not most, areas of science. A construct is a hypothetical explanatory variable that is not directly observable. For example, the concepts of motivation in psychology and center of gravity in physics are constructs – they are not directly observable. The degree to which a construct is useful and accepted in the scientific community depends on empirical research that has demonstrated that a scientific construct has construct validity (especially, predictive validity).Kaplan, R. M., & Saccuzzo, D. P. (1997). Psychological Testing. Chapter 5. Pacific Grove: Brooks-Cole. Thus, if properly understood and empirically corroborated, the “reification fallacy” applied to scientific constructs is not a fallacy at all—it is one part of theory creation and evaluation in normal science.

Relation to other fallacies

Pathetic fallacy (also known as anthropomorphic fallacy or anthropomorphization) is a specific type of reification. Just as reification is the attribution of concrete characteristics to an abstract idea, a pathetic fallacy is committed when those characteristics are specifically human characteristics, especially thoughts or feelings. Pathetic fallacy is also related to personification, which is a direct and explicit ascription of life and sentience to the thing in question, whereas the pathetic fallacy is much broader and more allusive.

The animistic fallacy involves attributing personal intention to an event or situation.

Reification fallacy should not be confused with other fallacies of ambiguity:

Accentus, where the ambiguity arises from the emphasis (accent) placed on a word or phrase
Amphiboly, a verbal fallacy arising from ambiguity in the grammatical structure of a sentence
Composition, when one assumes that a whole has a property solely because its various parts have that property
Division, when one assumes that various parts have a property solely because the whole has that same property
Equivocation, the misleading use of a word with more than one meaning
As a rhetorical device

Reification is commonly found in rhetorical devices such as metaphor and personification. In those cases we are usually not dealing with a fallacy but with rhetorical applications of language. The distinction is that the fallacy occurs during an argument that results in false conclusions. This distinction is often difficult to make, particularly when the fallacious use is intentional.

Concrete abstractions in philosophy

Many philosophers considered abstract concepts to be concrete entities. For example, Hegel wrote “Truth is its own self–movement.…” and “Science exists solely in the self–movement of the Notion.…” The Phenomenology of Spirit, Preface, 48 and 71

54
Q

Retrospective determinism

A

The argument that because some event has occurred, its occurrence must have been inevitable beforehand.

Retrospective determinism is the informal fallacy that because something happened, it was therefore bound to happen; the term was coined by the French philosopher Henri Bergson. For example:

When he declared himself dictator of the Roman Republic, Julius Caesar was bound to be assassinated.
Were this an argument, it would give no rational grounds on which to conclude that Caesar’s assassination was the only possible outcome, or even the most likely outcome, under the circumstances. This type of fallacy can precede a hasty generalization: because something happened in given circumstances, it was not only bound to happen, but will in fact always happen given those circumstances. For example:

Caesar was assassinated when he declared himself dictator. Sic semper tyrannis: this goes to show that all dictators will eventually be assassinated.
Not only is this irrational, it is factually false.

55
Q

Shotgun argumentation

A

The arguer offers such a large number of arguments for their position that the opponent cannot possibly respond to all of them.

Duane Tolbert Gish (February 17, 1921 – March 5, 2013) was an American biochemist and a prominent member of the creationist movement. Gish was a vice-president of the Institute for Creation Research (ICR) and the author of numerous publications on the subject of scientific creationism. Gish was called “creationism’s T.H. Huxley” for the way he “relished the confrontations” of formal debates with prominent evolutionary biologists, usually held on university campuses.

Gish, a twin, was born in White City, Kansas, the youngest of nine children. He received a B.S. degree from UCLA in 1949 and a Ph.D. in biochemistry from the University of California, Berkeley in 1953. He worked as an Assistant Research Associate at Berkeley, and Assistant Professor at Cornell University Medical College performing biomedical and biochemical research for eighteen years, joining the Upjohn Company as a Research Associate in 1960.

A Methodist from age ten, and later a fundamentalist Baptist, Gish long held that the Biblical creation story is historical fact. After reading Evolution: Science Falsely So-Called in the late 1950s, Gish became persuaded that science had produced falsifying evidence against biological evolutionary theory, and that various fields of science offered corroborating evidence in support of Biblical creation.”Dr. Duane Gish: Crusader”, Creation Matters, Volume 1, Number 1, January/February 1996 He joined the American Scientific Affiliation (ASA), an association of Christian scientists, mistakenly assuming the group to be aligned with creationism. Through his affiliation with the ASA, Gish met the geneticist and creationist William J. Tinkle, who in 1961 invited Gish to join his newly formed anti-evolution caucus within the ASA.

In 1971 Gish became a member of the faculty at San Diego Christian College working in their research division, before accepting a position at the Institute for Creation Research (independent since 1981). He is the author of several books and articles espousing the tenets of creationism. His best known work, Evolution: The Fossils Say No!, published in 1978, has been widely accepted by antievolutionists as an authoritative reference for creationist concepts.

Gish initially “assigned low priority to the question of the age of the Earth”. Up to the end of his life, Gish held the position of Senior Vice-President Emeritus at the ICR. Gish died on March 5, 2013.

Debates

Gish was characterized as using a rapid-fire approach during a debate, presenting arguments and changing topics very quickly. Eugenie Scott, executive director of the National Center for Science Education, has dubbed this approach the “Gish Gallop,” describing it as “where the creationist is allowed to run on for 45 minutes or an hour, spewing forth torrents of error that the evolutionist hasn’t a prayer of refuting in the format of a debate” and criticized Gish for failing to answer objections raised by his opponents. The phrase has also come to be used as a pejorative to describe similar debate styles employed by proponents of other, usually fringe beliefs, such as homeopathy or the moon landing hoax.

Gish was also criticised for using a standardized presentation during debates. While preparing for a debate with Gish, Michael Shermer noted that across several debates Gish’s opening, assumptions about his opponent, slides and even jokes remained identical. Shermer has written that in the debate itself, although he stated he was not an atheist and was willing to accept the existence of a divine creator, Gish’s rebuttal concerned itself primarily with proving that Shermer was an atheist and therefore immoral. Massimo Pigliucci, who has debated Gish five times, said that he ignores evidence contrary to his religious beliefs. Others have accused Gish of stonewalling arguments with fabricated facts or figures.

56
Q

Special pleading

A

Where a proponent of a position attempts to cite something as an exemption to a generally accepted rule or principle without justifying the exemption.

Special pleading, also known as stacking the deck, ignoring the counterevidence, slanting, and one-sided assessment, is a form of spurious argument in which one side in a dispute introduces favourable details or excludes unfavourable details by alleging a need to apply additional considerations, without proper criticism of these considerations. Essentially, this involves someone attempting to cite something as an exemption to a generally accepted rule, principle, etc. without justifying the exemption.

The lack of criticism may be a simple oversight (e.g., a reference to common sense) or an application of a double standard.

Examples

A more difficult case is when a possible criticism is made relatively immune to investigation. This immunity may take the forms of:

unexplained claims of exemption from principles commonly thought relevant to the subject matter
Example: I’m not relying on faith in small probabilities here. These are slot machines, not roulette wheels. They are different.
claims to data that are inherently unverifiable, perhaps because too remote or impossible to define clearly
Example: Cocaine use should be legal. Like all drugs, it does have some adverse health effects, but cocaine is different from other drugs. Many have benefited from the effects of cocaine.
appeals to “common knowledge” that bypass supporting data
Example: Everyone knows you can catch a common cold from exposure to a chill.
assertion that the opponent lacks the qualifications necessary to comprehend a point of view
Example: I know you think that quantum mechanics does not always make sense. There are things about quantum mechanics that you don’t have the education to understand.
assertion that nobody has the qualifications necessary to comprehend a point of view
Example: I know the idea that ball lightning is caused by ghosts makes no sense to you, but that’s only because you’re human. Humans cannot understand supernatural phenomena.
In the classic distinction among informal, psychological, and formal fallacies, special pleading most likely falls within the category of psychological fallacy, as it would seem to relate to “lip service”, rationalization and diversion (abandonment of discussion). Special pleading also often resembles the “appeal to” logical fallacies.This division is found in introductory texts such as Fallacy: The Counterfeit of Argument, W. Ward Fearnside, Prentice-Hall, Inc., 1959.

In medieval philosophy, it was not assumed that wherever a distinction is claimed, a relevant basis for the distinction should exist and be substantiated. Special pleading subverts an assumption of existential import.

57
Q

Wrong direction

A

Cause and effect are reversed. The cause is said to be the effect and vice versa.

Wrong direction is an informal fallacy of questionable cause where cause and effect are reversed. The cause is said to be the effect and vice versa.

For instance, the statement:

Driving a wheelchair is dangerous, because most people who drive them have had an accident.
reverses cause and effect: people do not have accidents because they use wheelchairs; rather, many use wheelchairs because they have had an accident.

In other cases it may simply be unclear which is the cause and which is the effect. For example:

Children that watch a lot of TV are the most violent. Clearly, TV makes children more violent.
This could easily be the other way round; that is, violent children like watching more TV than less violent ones.

Likewise, a correlation between recreational drug use and psychiatric disorders might be either way around: perhaps the drugs cause the disorders, or perhaps people use drugs to self medicate for preexisting conditions.

Gateway drug theory may argue that marijuana usage leads to usage of harder drugs, but hard drug usage may lead to marijuana usage (see also confusion of the inverse). Indeed, in the social sciences, where controlled experiments often cannot be used to discern the direction of causation, this fallacy can fuel long-standing scientific arguments. One such example can be found in education economics, between the screening/signaling and human capital models: it could either be that having innate ability enables one to complete an education, or that completing an education builds one’s ability.
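
Part of the difficulty is that correlation is symmetric in its two arguments and so carries no information about causal direction. A minimal check, using data simulated so that the true direction (x drives y) is known by construction:

```python
# Correlation cannot distinguish "X causes Y" from "Y causes X":
# corr(x, y) == corr(y, x) by definition. The data below are simulated so
# the true causal direction (x drives y) is known by construction.
import random

random.seed(42)
x = [random.gauss(0, 1) for _ in range(10_000)]
y = [xi + random.gauss(0, 1) for xi in x]  # y is caused by x, plus noise

def corr(a, b):
    # Pearson correlation, computed from scratch for transparency
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

# Identical either way round: the statistic is blind to causal direction.
print(corr(x, y) == corr(y, x))  # True
```

The same correlation would be observed if the simulation were rewritten with y driving x, which is why the direction must be established by other means (experiments, temporal ordering, mechanisms) rather than by the correlation itself.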

A historical example of this is that Europeans in the Middle Ages believed that lice were beneficial to your health, since there would rarely be any lice on sick people. The reasoning was that the people got sick because the lice left. The real reason however is that lice are extremely sensitive to body temperature. A small increase of body temperature, such as in a fever, will make the lice look for another host. The medical thermometer had not yet been invented, so this increase in temperature was rarely noticed. Noticeable symptoms came later, giving the impression that the lice left before the person got sick.

In other cases, two phenomena can each be a partial cause of the other; consider poverty and lack of education, or procrastination and poor self-esteem. One making an argument based on these two phenomena must however be careful to avoid the fallacy of circular cause and consequence. Poverty is a cause of lack of education, but it is not the sole cause, and vice versa.

58
Q

Personal attacks

A

The evasion of the actual topic by directing the attack at your opponent.

An ad hominem (Latin for “to the man” or “to the person”), short for argumentum ad hominem, is an argument made personally against an opponent instead of against their argument. Ad hominem reasoning is normally described as an informal fallacy, more precisely an irrelevance.

Types

Abusive

Abusive ad hominem usually involves attacking the traits of an opponent as a means to invalidate their arguments. Equating someone’s character with the soundness of their argument is a logical fallacy. Mere verbal abuse in the absence of an argument, however, is not ad hominem nor any kind of logical fallacy.

Ad hominem abuse is not to be confused with slander or libel, which employ falsehoods and are not necessarily leveled to undermine otherwise sound stands with character attacks.

Circumstantial

Ad hominem circumstantial points out that someone is in circumstances such that they are disposed to take a particular position. Ad hominem circumstantial constitutes an attack on the bias of a source. This is fallacious because a disposition to make a certain argument does not make the argument false; this overlaps with the genetic fallacy (an argument that a claim is incorrect due to its source).

The circumstantial fallacy applies only where the source taking a position is only making a logical argument from premises that are generally accepted. Where the source seeks to convince an audience of the truth of a premise by a claim of authority or by personal observation, observation of their circumstances may reduce the evidentiary weight of the claims, sometimes to zero.

Examples:

Mandy Rice-Davies’s famous testimony during the Profumo Affair, “Well, he would, wouldn’t he?”, is an example of a valid circumstantial argument. Her point was that a man in a prominent position, accused of an affair with a callgirl, would deny the claim whether it was true or false. His denial, in itself, carries little evidential weight against the claim of an affair. Note, however, that this argument is valid only insofar as it devalues the denial; it does not bolster the original claim. To construe evidentiary invalidation of the denial as evidentiary validation of the original claim is fallacious (on several different bases, including that of argumentum ad hominem); however likely the man in question would be to deny an affair that did in fact happen, he could only be more likely to deny an affair that never happened.

Conflict of Interest: Where a source seeks to convince by a claim of authority or by personal observation, identification of a conflict of interest is not ad hominem – it is generally well accepted that an “authority” needs to be objective and impartial, and that an audience can only evaluate information from a source if they know about conflicts of interest that may affect the objectivity of the source. Identification of a conflict of interest is appropriate, and concealment of a conflict of interest is a problem.

Ad hominem

In Latin, the word homō (of which hominem is the accusative case) has the gender-neutral meaning of “a human being”, “a person” (unlike some of the words in Romance languages it gave rise to, such as French homme and Italian uomo). A translation of ad hominem that preserves this gender-neutrality is “to the person”. Ad hominem is an attack on the person, not the person’s arguments.

Tu quoque

Ad hominem tu quoque (literally: “You also”) refers to a claim that the source making the argument has spoken or acted in a way inconsistent with the argument. In particular, if Source A criticizes the actions of Source B, a tu quoque response is that Source A has acted in the same way. This argument is fallacious because it does not disprove the argument; if the premise is true then Source A may be a hypocrite, but this does not make the statement less credible from a logical perspective. Indeed, Source A may be in a position to provide personal testimony to support the argument.

For example, a father may tell his son not to start smoking as he will regret it when he is older, and the son may point out that his father is or was a smoker. This does not alter the fact that his son may regret smoking when he is older.

Guilt by association

Guilt by association can sometimes also be a type of ad hominem fallacy if the argument attacks a source because of the similarity between the views of someone making an argument and other proponents of the argument.

This form of the argument is as follows:

Source S makes claim C.
Group G, which is currently viewed negatively by the recipient, also makes claim C.
Therefore, source S is viewed by the recipient of the claim as associated to the group G and inherits how negatively viewed it is.
Halo effect

See also: List of cognitive biases

Ad hominem arguments work via the halo effect, a human cognitive bias in which the perception of one trait is influenced by the perception of an unrelated trait, e.g. treating an attractive person as more intelligent or more honest. People tend to see others as all good or all bad. Thus, if you can attribute a bad trait to your opponent, others will tend to doubt the quality of their arguments, even if that trait is irrelevant to the arguments.

Questions about the notion of an ad hominem fallacy

Doug Walton, Canadian academic and author, has argued that ad hominem reasoning is not always fallacious, and that in some instances, questions of personal conduct, character, motives, etc., are legitimate and relevant to the issue, as when it directly involves hypocrisy, or actions contradicting the subject’s words.

The philosopher Charles Taylor has argued that ad hominem reasoning is essential to understanding certain moral issues, and contrasts this sort of reasoning with the apodictic reasoning of philosophical naturalism.

Olavo de Carvalho, a Brazilian philosopher, has argued that ad hominem reasoning has not only rhetorical but also logical value. As an example, he cites Karl Marx’s idea that only the proletariat has an objective view of history. Taken rigorously, an ad hominem argument would effectively render Marx’s general theory incoherent: since Marx was not a proletarian, his own view of history could not be objective.

59
Q

Accident

A

An exception to a generalization is ignored.

The informal fallacy of accident (also called destroying the exception or a dicto simpliciter ad dictum secundum quid) is a deductively valid but unsound argument occurring in statistical syllogisms (an argument based on a generalization) when an exception to a rule of thumb is ignored. It is one of the thirteen fallacies originally identified by Aristotle. The fallacy occurs when one attempts to apply a general rule to an irrelevant situation.

For example:

Cutting people with knives is a crime.
Surgeons cut people with knives.
Therefore, surgeons are criminals.
It is easy to construct fallacious arguments by applying general statements to specific incidents that are obviously exceptions.

Weak generalizations tend to have more exceptions (the exceptions need not be a minority of cases), and strong generalizations fewer.

This fallacy may occur when we confuse particulars (“some”) with categorical statements (“always and everywhere”). It may be encouraged when no qualifying words such as “some”, “many”, or “rarely” are used to mark the generalization.

Related inductive fallacies include: overwhelming exception, hasty generalization. See faulty generalization.

The opposing kind of a dicto simpliciter fallacy is the converse accident.
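
The structure of the fallacy can be sketched in code. This is a minimal illustration with hypothetical predicate names (not from the source text): the naive rule treats the generalization as exceptionless, so the surgeon case from the example above is misclassified.

```python
# Minimal sketch (hypothetical predicates) of the accident fallacy: a rule
# of thumb applied as if it had no exceptions misclassifies the exception.

def naive_is_crime(cuts_people_with_knife: bool) -> bool:
    # "Cutting people with knives is a crime" applied without exceptions.
    return cuts_people_with_knife

def qualified_is_crime(cuts_people_with_knife: bool,
                       is_consented_surgery: bool) -> bool:
    # The rule of thumb holds only outside its recognized exceptions.
    return cuts_people_with_knife and not is_consented_surgery

print(naive_is_crime(True))            # True: the fallacious conclusion about surgeons
print(qualified_is_crime(True, True))  # False: the exception is respected
```

The point carried by the sketch is that the fallacy lives in the rule's encoding, not in the deduction: once the exception is made explicit, the same valid inference no longer convicts the surgeon.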

60
Q

Cherry picking

A

Act of pointing at individual cases or data that seem to confirm a particular position, while ignoring a significant portion of related cases or data that may contradict that position.

Cherry picking, suppressing evidence, or the fallacy of incomplete evidence is the act of pointing to individual cases or data that seem to confirm a particular position, while ignoring a significant portion of related cases or data that may contradict that position. It is a kind of fallacy of selective attention, the most common example of which is the confirmation bias. Cherry picking may be committed unintentionally (Bradley Dowden, “Fallacies”, The Internet Encyclopedia of Philosophy, 2010).

The term is based on the perceived process of harvesting fruit, such as cherries. The picker would be expected to only select the ripest and healthiest fruits. An observer who only sees the selected fruit may thus wrongly conclude that most, or even all, of the fruit is in such good condition.

Cherry picking can be found in many logical fallacies. For example, the “fallacy of anecdotal evidence” tends to overlook large amounts of data in favor of that known personally, “selective use of evidence” rejects material unfavorable to an argument, while a false dichotomy picks only two options when more are available.

In statistics

Cherry picking can refer to the selection of data or data sets so a study or survey will give desired, predictable results which may be misleading or even completely contrary to actuality.

In politics

In his book The Years of Talking Dangerously, linguist Geoffrey Nunberg wrote that, “By cherry-picking his dictionaries and definitions, Attorney General Jay Bybee managed to come up with a definition of torture that ruled out any practice that doesn’t cause lasting impairment or inflict pain that rises to the level of death or organ damage.”

In medicine

In a 2002 study, researchers “reviewed 31 antidepressant efficacy trials to identify the primary exclusion criteria used in determining eligibility for participation. Their findings suggest that patients in current antidepressant trials represent only a minority of patients treated in routine clinical practice for depression. Excluding potential clinical trial subjects with certain profiles means that the ability to generalize the results of antidepressant efficacy trials lacks empirical support, according to the authors.”

61
Q

Argumentum ad baculum

A

An argument made through coercion or threats of force to support a position.

Argumentum ad baculum (Latin for argument to the cudgel or appeal to the stick), also known as appeal to force, is an argument where force, coercion, or the threat of force, is given as a justification. It is a specific case of the negative form of an argument to the consequences.

As a logical argument

A fallacious argument based on argumentum ad baculum generally proceeds as follows:

If x accepts P as true, then Q.
Q is a punishment on x.
Therefore, P is not true.
This form of argument is an informal fallacy, because the punishment Q does not necessarily reveal anything about the truth value of the premise P. This fallacy has been identified since the Middle Ages by many philosophers. It is a special case of argumentum ad consequentiam, or “appeal to consequences”.

Examples

Employee: I do not think the company should invest its money into this project.
Employer: Be quiet or you will be fired.
Student: I do not think it is fair that the deadline for our essay is so soon.
Teacher: Do not argue with me or I will send you to detention.
In both of these examples, the authority figures ended the argument with a threat, but this does not automatically mean they are correct. They did not win the argument because they did not refute the other person’s contention.

The Non-fallacious Ad Baculum

An ad baculum argument is fallacious when the punishment is not meaningfully related to the conclusion being drawn. Many ad baculum arguments, however, are not fallacies (Woods, Irvine, and Walton, Argument: Critical Thinking, Logic and the Fallacies). For example:

If you drive while drunk, you will be put in jail.
You want to avoid going to jail.
Therefore you should not drive while drunk.
This is called a non-fallacious ad baculum. The inference is valid because the existence of the punishment is not being used to draw conclusions about the nature of drunk driving itself, but about people for whom the punishment applies. It would become a fallacy if one proceeded from the first premise to argue, for example, that drunk driving is immoral or bad for society. Specifically, the above argument would become a fallacious ad baculum if the conclusion stated:

Therefore you will not drive while drunk.
Using the same schema, the non-fallacious form is:

If x does P, then Q.
Q is a punishment on x.
Therefore, x should not do P.
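
The distinction between the two conclusions can be sketched as code. This is an illustrative toy encoding with hypothetical names, not a formal treatment: the punishment licenses only a prudential conclusion about what x should do, never a truth claim about P.

```python
# Sketch (toy encoding, hypothetical names) separating the non-fallacious
# prudential conclusion "x should not do P" from the fallacious forms
# "P is false" or "x will not do P".

def ad_baculum(doing_p_triggers_q: bool, x_wants_to_avoid_q: bool) -> str:
    # Non-fallacious: the punishment Q bears on what x should do.
    if doing_p_triggers_q and x_wants_to_avoid_q:
        return "x should not do P"
    return "no prudential conclusion"

print(ad_baculum(True, True))   # "x should not do P"

# A fallacious ad baculum would instead conclude "P is false" or "x will
# not do P"; neither follows, since the punishment says nothing about
# P's truth or about what x will actually do.
```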

62
Q

Argumentum ad populum

A

Where a proposition is claimed to be true or good solely because many people believe it to be so.

“Ad populum” redirects here. For the Catholic liturgical term, see Versus populum.

In argumentation theory, an argumentum ad populum (Latin for “appeal to the people”) is a fallacious argument that concludes a proposition to be true because many or most people believe it. In other words, the basic idea of the argument is: “If many believe so, it is so.”

This type of argument is known by several names (Austin Cline, “Argumentum ad Populum”), including appeal to the masses, appeal to belief, appeal to the majority, appeal to democracy, argument by consensus, consensus fallacy, authority of the many, and bandwagon fallacy, and in Latin as argumentum ad numerum (“appeal to the number”) and consensus gentium (“agreement of the clans”). It is also the basis of a number of social phenomena, including communal reinforcement and the bandwagon effect. The Chinese proverb “three men make a tiger” concerns the same idea.

Examples

This fallacy is sometimes committed while trying to convince a person that a widely popular thought is true.

Nine out of ten of my constituents oppose the bill, therefore it is a bad idea.
Nine out of ten of my fellow congressmen favor the bill, therefore it is a good idea.
Other examples:

Fifty million Elvis fans can’t be wrong.
Everyone’s doing it.
In a court of law, the jury votes by majority; therefore the jury will always reach the correct decision.
Google gives more hits when this spelling is applied, therefore this has to be the correct spelling.
Many people buy extended warranties, therefore it is wise to buy them.
Explanation

The argumentum ad populum is a red herring and genetic fallacy. It appeals in probabilistic terms: given that 75% of a population answers A to a question whose answer is unknown, the argument holds that it is reasonable to assume the answer is indeed A. In cases where the answer can be known but is not known by the questioned entity, the appeal to majority provides a possible answer with a relatively high probability of correctness.

There is the problem of determining just how many are needed to have a majority or consensus. Is merely greater than 50% significant enough and why? Should the percentage be larger, such as 80 or 90 percent, and how does that make a real difference? Is there real consensus if there are one or even two people who have a different claim that is proven to be true?

It is logically fallacious because the mere fact that a belief is widely held does not guarantee that the belief is correct; if the belief of any individual can be wrong, then a belief held by multiple persons can also be wrong. The argument that the answer is A because 75% of people polled think so fails: if opinion determined truth, there would be no way to resolve the discrepancy between the 75% of the sample who believe the answer is A and the 25% who believe it is not. However the remaining percentage is distributed among other answers, this discrepancy by definition disproves any guarantee of the correctness of the majority. This would be true even if the answer given by those polled were unanimous, since the sample size may be insufficient, or some fact may be unknown to those polled that, if known, would result in a different distribution of answers.

This fallacy is similar in structure to certain other fallacies that involve a confusion between the justification of a belief and its widespread acceptance by a given group of people. When an argument uses the appeal to the beliefs of a group of supposed experts, it takes on the form of an appeal to authority; if the appeal is to the beliefs of a group of respected elders or the members of one’s community over a long period of time, then it takes on the form of an appeal to tradition.

One who commits this fallacy may assume that individuals commonly analyze and edit their beliefs and behaviors. This is often not the case (see conformity).

The argumentum ad populum can be a valid argument in inductive logic; for example, a poll of a sizeable population may find that 90% prefer a certain brand of product over another. A cogent (strong) argument can then be made that the next person to be considered will also prefer that brand, and the poll is valid evidence of that claim. However, it is unsuitable as an argument for deductive reasoning as proof, for instance to say that the poll proves that the preferred brand is superior to the competition in its composition or that everyone prefers that brand to the other.
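
The inductive use described above can be sketched in code. The numbers below are made up for illustration: a large poll gives cogent probabilistic support for a claim about the next respondent, but the same data proves nothing deductive about the brand itself.

```python
# Illustrative sketch (made-up numbers): a poll as cogent inductive
# evidence about the next person sampled, versus a deductive overreach.
import random

random.seed(0)
TRUE_PREFERENCE_RATE = 0.9  # assumed underlying rate of preferring brand A

# Simulate polling 1000 people from that population.
poll = [random.random() < TRUE_PREFERENCE_RATE for _ in range(1000)]
estimate = sum(poll) / len(poll)

# Inductive (cogent): "the next person sampled probably prefers brand A".
print(f"estimated preference rate: {estimate:.2f}")

# Deductive (fallacious): the poll does not show that everyone prefers A,
# let alone that brand A is superior in its composition.
print(all(poll))  # almost certainly False even at a 90% preference rate
```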

Evidence

One could claim that smoking is a healthy pastime, since millions of people do it. However, knowing the dangers of smoking, we instead say that smoking is not a healthy pastime despite the fact that millions do it.
One could claim Angelina Jolie is the best-looking woman in the world, because she is regularly voted as such, although the sample she is part of (celebrities) is insufficient, and ideals of beauty are arguably culturally determined and thus arbitrary to a significant degree. For instance, overweight bodies have been considered more beautiful in some cultures (such as Mauritania) because only the wealthy could afford to eat enough to become overweight or obese. By contrast, contemporary high fashion generally involves women who have been criticized for not eating enough.
At a time in history when most people believed the world was flat, one could have claimed the world is flat because most believed it.
Advocates of heliocentrism, such as Galileo Galilei, were strongly suppressed, despite scientific evidence, now recognized as factual, that supported heliocentrism at the expense of geocentrism.
Exceptions

Appeal to belief is valid only when the question is whether the belief exists.

Appeal to popularity is therefore valid only when the questions are whether the belief is widespread and to what degree. I.e., ad populum only proves that a belief is popular, not that it is true. In some domains, however, it is popularity rather than other strengths that makes a choice the preferred one, for reasons related to network effects.

Social convention

Matters of social convention, such as etiquette or polite manners, depend upon the wide acceptance of the convention. As such, argumentum ad populum is not fallacious when referring to the popular belief about what is polite or proper:

“Most people in Russia think that it is polite for men to kiss each other in greeting. Therefore, it is polite for men to kiss each other in greeting in Russia.”
Social conventions can change, however, and sometimes very quickly. Thus, the fact that everyone in Russia this year thinks that it is polite to kiss cannot be used as evidence that everyone always believed that, or that they should always believe it.

The philosophical question of moral relativism asks whether such arguments apply to statements of morals.

Safety

Whether to follow a tenet decided by popularity rather than logical design may be a matter of safety or convenience:

“Nearly all Americans think that you should drive on the right side of the road. Therefore, you should drive on the right side of the road in the United States.”
In this case, the choice of which side to drive on is basically arbitrary. However, to avoid head-on collisions, everyone on the road must agree on it. In many cases, what is safe to do depends on what others expect one will do, and thus on the “popularity” of that choice.

Language

Linguistic descriptivists argue that correct grammar, spelling, and expressions are defined by the language’s speakers, especially in languages which do not have a central governing body. According to this viewpoint, if an incorrect expression is commonly used, it becomes correct. In contrast, linguistic prescriptivists believe that incorrect expressions are incorrect regardless of how many people use them.

Reversals

In some circumstances, a person may argue that the fact that most people believe X implies that X is false. This line of thought is closely related to the ad hominem, appeal to emotion, poisoning the well, and guilt by association fallacies, given that it invokes a person’s contempt for the general populace, or for something about the general populace, in order to persuade them that most are wrong about X. The ad populum reversal commits the same logical flaw as the original fallacy, given that the idea “X is true” is inherently separate from the idea “most people believe X”.

For example, consider the arguments:

“Are you going to be a mindless conformist drone drinking milk and water like everyone else, or will you wake up and drink my product?” (See: “MTN DEW is a non-conformist brand that’s all about taking life to the next level.”, PowerPoint presentation.)
“Everyone likes The Beatles and that probably means that they didn’t have nearly as much talent as , which didn’t sell out.” These ideas are paraphrased from a presentation by authors Andrew Potter and Joseph Heath, in which they state:
For example, everybody would love to listen to fabulous underground bands that nobody has ever heard of before, but virtually none of us can do this. Once too many people find out about this great band, then they are no longer underground. And so we say that it’s sold out or ‘mainstream’ or even ‘co-opted by the system’. What has really happened is simply that too many people have started buying their albums, so that listening to them no longer serves as a source of distinction. The real rebels therefore have to go off and find some new band to listen to that nobody else knows about in order to preserve this distinction and their sense of superiority over others.
“The German people today consists of the Auschwitz generation, with every person in power being guilty in some way. How on earth can we buy the generally held propaganda that the Soviet Union is imperialistic and totalitarian? Clearly, it must not be.” These ideas are paraphrased from the ‘Baader Meinhof Gang’ article at the True Crime Library, which states:
Though Gudrun Ensslin may have been wrong about many or most things, she was not speaking foolishly when she spoke of the middle-aged folk of her era as “the Auschwitz generation.” Not all of them had been Nazis, of course, but a great many had supported Hitler. Many had been in the Hitler Youth and served in the armed forces, fighting Nazi wars of conquest. A minority had ineffectively resisted Nazism but, as a whole, it was a generation coping with an extraordinary burden of guilt and shame… many of the people who joined what would come to be known as the Baader-Meinhof Gang were motivated by an unconscious desire to prove to themselves that they would have risked their lives to defeat Nazism… West Germans well knew. Many of them had relatives in East Germany and were well aware that life under communism was regimented and puritanical at best and often monstrously oppressive.
“Most people still either hate gays or just barely tolerate their existence. How can you still buy their other line that claims that pederasty is wrong?” These ideas are paraphrased from “Feminism, Homosexuality and Pedophilia” by Harris Mirkin. See also Pro-pedophile activism#Strategies for promoting acceptance.
“Everyone loves . must be nowhere near as talented as the devoted and serious method actors that aren’t so popular like .”
In general, the reversal usually goes: Most people believe A and B are both true. B is false. Thus, A is false. The similar fallacy of chronological snobbery is not to be confused with the ad populum reversal. Chronological snobbery is the claim that if belief in both X and Y was popularly held in the past and if Y was recently proved to be untrue then X must also be untrue. That line of argument is based on a belief in historical progress and not—like the ad populum reversal is—on whether or not X and/or Y is currently popular.

63
Q

Association fallacy

A

Arguing that because two things share a property they are the same.

An association fallacy is an inductive informal fallacy of the type hasty generalization or red herring which asserts that qualities of one thing are inherently qualities of another, merely by an irrelevant association. The two types are sometimes referred to as guilt by association and honor by association. Association fallacies are a special case of red herring, and can be based on an appeal to emotion.

Form

In notation of first-order logic, this type of fallacy can be expressed as (∃x ∈ S : φ(x)) → (∀x ∈ S : φ(x)), meaning “if there exists any x in the set S so that a property φ is true for x, then for all x in S the property φ must be true.”

Premise A is a B
Premise A is also a C
Conclusion Therefore, all Bs are Cs
The fallacy in the argument can be illustrated through the use of an Euler diagram: “A” satisfies the requirement that it is part of both sets “B” and “C”, but if one represents this as an Euler diagram, it can clearly be seen that it is possible that a part of set “B” is not part of set “C”, refuting the conclusion that “all Bs are Cs”.
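
The Euler-diagram refutation can also be sketched as a direct counterexample check. The data below is a toy instantiation (hypothetical names) of the schema (exists x in S : phi(x)) -> (forall x in S : phi(x)), using the black-haired con artist example from the list above.

```python
# Counterexample check (toy data, hypothetical names) for the fallacious
# schema: from "some member of B satisfies phi", infer "all members of B
# satisfy phi".

black_haired = {"John", "Mary"}  # set B
con_artists = {"John"}           # phi(x): x is a con artist (set C)

# Premise: some member of B satisfies phi (John, the shared element A).
exists_phi = any(x in con_artists for x in black_haired)

# Fallacious conclusion: every member of B satisfies phi.
forall_phi = all(x in con_artists for x in black_haired)

print(exists_phi)  # True
print(forall_phi)  # False: Mary is the part of B that lies outside C
```

A single member of B outside C is enough to refute the universal conclusion, which is exactly what the Euler diagram shows geometrically.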

Guilt by association

Examples

Some syllogistic examples of guilt by association:

John is a con artist. John has black hair. Therefore, all people with black hair are con artists.
Jane is good at mathematics. Jane is dyslexic. Therefore, all dyslexic people are good at mathematics.
Simon, Karl, Jared, and Brett are all friends of Josh, and they are all petty criminals. Jill is a friend of Josh; therefore, Jill is a petty criminal.
All dogs have four legs; my cat has four legs. Therefore, my cat is a dog. (This argument is made by the wordplay-prone Sir Humphrey Appleby in the BBC sitcom Yes, Prime Minister).
Guilt by association as an ad hominem fallacy

Guilt by association can sometimes also be a type of ad hominem fallacy, if the argument attacks a person because of the similarity between the views of someone making an argument and other proponents of the argument.

This form of the argument is as follows:

Source S makes claim C.
Group G, which is currently viewed negatively by the recipient, also makes claim C.
Therefore, source S is viewed by the recipient of the claim as associated to the group G and inherits how negatively viewed it is.
An example of this fallacy would be “My opponent for office just received an endorsement from the Puppy Haters Association. Is that the sort of person you would want to vote for?”

Honor by association

The logical inverse of “guilt by association” is honor by association, where one claims that someone or something must be reputable because of the people or organizations that are related to it or otherwise support it. For example:

Examples

Citizens of Country X won more Nobel Prizes, gold medals, and literary awards than citizens of Country Y. Therefore, a citizen of Country X is superior to a citizen of Country Y.
In many advertisements, businesses heavily use the principle of honor by association. For example, an attractive woman will say that a specific product is good. Her attractiveness gives the product good associations.

64
Q

Appeal to authority

A

Where an assertion is deemed true because of the position or authority of the person asserting it.

Argument from authority (argumentum ad verecundiam), also authoritative argument and appeal to authority, is an inductive-reasoning argument that often takes the form of a statistical syllogism. Although certain classes of argument from authority can constitute strong inductive arguments, the appeal to authority is often applied fallaciously: either the authority is not a subject-matter expert, or there is no consensus among experts in the subject matter, or both.

Forms

The argument from authority (argumentum ad verecundiam) can take several forms. As a statistical syllogism, the argument has the following basic structure:

Most of what authority A has to say on subject matter S is correct.
A says P about subject matter S.
Therefore, P is correct.
The strength of this authoritative argument depends upon two factors:

The authority is a legitimate expert on the subject.
There exists consensus among legitimate experts in the subject matter under discussion.
The two factors, legitimate expertise and expert consensus, can be incorporated into the structure of the statistical syllogism, in which case the argument from authority can be structured thus:

X holds that A is true.
X is a legitimate expert on the subject matter.
The consensus of subject-matter experts agrees with X.
Therefore, there exists a presumption that A is true.
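
The structure above can be sketched numerically. The function name and weights below are illustrative assumptions, not part of the source: the point is that the conclusion is a presumption whose strength collapses when either structural condition fails, and is a probability rather than a certainty even when both hold.

```python
# Hedged sketch (illustrative numbers, hypothetical function name): the
# argument from authority yields at most a probabilistic presumption.

def presumption_strength(track_record: float,
                         legitimate_expert: bool,
                         expert_consensus: bool) -> float:
    # Both structural conditions from the text must hold for the
    # statistical syllogism to carry any weight.
    if not (legitimate_expert and expert_consensus):
        return 0.0
    # Even then the support is inductive: a probability, never certainty.
    return track_record

print(presumption_strength(0.9, True, True))   # 0.9: strong presumption that P is correct
print(presumption_strength(0.9, False, True))  # 0.0: inexpert authority, fallacious appeal
```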
Fallacious appeal to authority

Fallacious arguments from authority often are the result of failing to meet at least one of the required two conditions (legitimate expertise and expert consensus) structurally required in the forms of a statistical syllogism. First, when the inference fails to meet the first condition (inexpert authority), it is an appeal to inappropriate authority, which occurs when an inference relies upon a person or a group without relevant expertise or knowledge of the subject matter under discussion.

Second, because the argument from authority is an inductive-reasoning argument — wherein it is implied that the truth of the conclusion cannot be guaranteed by the truth of the premises — it also is fallacious to assert that the conclusion must be true. Such a determinative assertion is a logical non sequitur, because, although the inductive argument might have merit — either probabilistic or statistical — the conclusion does not follow unconditionally, in the sense of being logically necessary (Marguerite H. Foster and Michael L. Martin, eds., Probability, Confirmation, and Simplicity: Readings in the Philosophy of Inductive Logic, Odyssey Press, 1966; C. S. Peirce et al., Studies in Logic by Members of the Johns Hopkins University, 1883).

65
Q

Appeal to consequences

A

The conclusion is supported by a premise that asserts positive or negative consequences from some course of action, in an attempt to distract from the initial discussion.

Appeal to consequences, also known as argumentum ad consequentiam (Latin for “argument to the consequences”), is an argument that concludes a hypothesis (typically a belief) to be either true or false based on whether the premise leads to desirable or undesirable consequences. This is based on an appeal to emotion and is a type of informal fallacy, since the desirability of a consequence does not make it true. Moreover, in categorizing consequences as either desirable or undesirable, such arguments inherently contain subjective points of view.

In logic, appeal to consequences refers only to arguments that assert a conclusion’s truth value (true or false) without regard to the formal preservation of the truth from the premises; appeal to consequences does not refer to arguments that address a premise’s desirability (good or bad, or right or wrong) instead of its truth value. Therefore, an argument based on appeal to consequences is valid in ethics, and in fact such arguments are the cornerstones of many moral theories, particularly related to consequentialism.

General form

An argument based on appeal to consequences generally has one of two forms (FallacyFiles.org, “Appeal to Consequences”):

Positive form

If P, then Q will occur.
Q is desirable.
Therefore, P is true.
It is closely related to wishful thinking in its construction.

Examples

“Pi is probably a rational number: being rational would make it more elegant.”
“Real estate markets will continue to rise this year: home owners enjoy the capital gains.”
“Humans will travel faster than light: faster-than-light travel would be beneficial for space travel.”
“An afterlife must exist; I want to exist forever”.
Negative form

If P, then Q will occur.
Q is undesirable.
Therefore, P is false.
Appeal to force (argumentum ad baculum) is a special instance of this form.

This form somewhat resembles modus tollens but is both different and fallacious, since “Q is undesirable” is not equivalent to “Q is false”.
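
The contrast with modus tollens can be verified by brute-force enumeration of truth assignments. This is a toy propositional encoding (an assumption for illustration): desirability is modeled as a third variable independent of Q's truth, which is exactly why the substitution fails.

```python
# Toy model contrasting valid modus tollens with the negative appeal to
# consequences, where "Q is undesirable" stands in for "Q is false".
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Modus tollens: from (P -> Q) and not Q, infer not P.
# Valid: no assignment makes the premises true and the conclusion false.
mt_counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if implies(p, q) and not q and p  # premises hold, "not P" fails
]
print(mt_counterexamples)  # [] -> the inference is valid

# Appeal to consequences: desirability is independent of truth, so there
# is an assignment where the premises hold yet "P is false" fails.
ac_counterexamples = [
    (p, q, desirable) for p, q, desirable in product([True, False], repeat=3)
    if implies(p, q) and not desirable and p  # premises hold, "not P" fails
]
print(len(ac_counterexamples) > 0)  # True -> the inference is invalid
```

The counterexample found in the second search is the case where P and Q are both true but Q is undesirable: unwelcome consequences coexisting with a true premise.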

Examples

“The axiom of choice must be wrong because it implies the Banach-Tarski paradox, meaning that geometry contradicts common sense.”
“Free will must exist: if it didn’t, we would all be machines.” (This is also a false dilemma.)
“If the six men win, it will mean that the police are guilty of perjury, that they are guilty of violence and threats, that the confessions were invented and improperly admitted in evidence and the convictions were erroneous… This is such an appalling vista that every sensible person in the land would say that it cannot be right that these actions should go any further.” Lord Denning in his judgment on the Birmingham Six.
“Objective morality must exist; if it didn’t, then it could be considered acceptable to commit atrocities.”
In law

In law, an argument from inconvenience or argumentum ab inconvenienti, is a valid type of appeal to consequences. Such an argument would seek to show that a proposed action would have unreasonably inconvenient consequences, as for example a law that would require a person wishing to lend money against a security to first ascertain the borrower’s title to the property by inquiring in every single courthouse in the country.

66
Q

Appeal to emotion

A

Where an argument is made due to the manipulation of emotions, rather than use of valid reasoning.

Appeal to emotion or argumentum ad passiones is a logical fallacy that uses the manipulation of the recipient’s emotions, rather than valid logic, to win an argument. The appeal to emotion fallacy uses emotions as the basis of an argument’s position, without factual evidence that logically supports the major ideas endorsed by the maker of the argument. The same kind of thinking is evident in anyone who lets emotions or other subjective considerations influence the reasoning process. This kind of appeal to emotion is a type of red herring and encompasses several logical fallacies, including:

Appeal to consequences
Appeal to fear
Appeal to flattery
Appeal to pity
Appeal to ridicule
Appeal to spite
Wishful thinking
Analytical assumptions

Instead of facts, persuasive language is used to develop the foundation of an appeal-to-emotion argument; the validity of the premises that establish such an argument therefore cannot be verified.

Unjustifiable

In short, the appeal to emotion fallacy presents a perspective as if it were superior to reason. Appeals to emotion are intended to draw visceral feelings from the recipient of the information, who is in turn meant to be convinced that the statements presented in the fallacious argument are true solely because they induce emotional stimulation such as fear, pity, or joy. Though these emotions may be provoked, no substantial proof of the argument is offered, and its premises remain unsupported.Kimball, Robert H. “A Plea for Pity.” Philosophy and Rhetoric. Vol. 37, Issue 4. (2004): 301–316. Print. Wheater, Isabella. “Philosophy.” Vol. 79, Issue 308. (2004): 215–245. Print. Moore, Brooke N., and Kenneth Bruder. Philosophy: The Power of Ideas. New York: McGraw-Hill, 2008. Print.

Examples

“For the children”
Reductio ad Hitlerum

67
Q

Appeal to fear

A

When an argument is made by appealing to emotion, specifically fear.

An appeal to fear (also called argumentum ad metum or argumentum in terrorem) is a fallacy in which a person attempts to create support for an idea by using deception and propaganda in attempts to increase fear and prejudice toward a competitor. The appeal to fear is common in marketing and politics. Fallacies: alphabetic list (full list)

Logic

This fallacy has the following argument form:

Either P or Q is true.
Q is frightening.
Therefore, P is true.
The argument is invalid: the appeal to emotion exploits existing fears to create support for the speaker’s proposal, namely P. Often the false dilemma fallacy is also involved, suggesting that Q is the sole alternative to the proposed idea.
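The invalidity of this form can also be verified by enumeration. In the illustrative Python sketch below (added here, not part of the source), “Q is frightening” is modelled as a flag independent of Q’s truth value; the premises can all hold while P is false, because the feared alternative Q may simply be true.

```python
from itertools import product

# Appeal-to-fear form: premises (P or Q) and "Q is frightening";
# conclusion P.  "Frightening" is an emotional property, not a truth
# value, so it is modelled as a separate boolean flag.
counterexamples = [
    (p, q, q_frightening)
    for p, q, q_frightening in product([True, False], repeat=3)
    if (p or q) and q_frightening and not p
]

# Exactly one counterexample: P false, Q true, Q frightening.
# Both premises hold, yet the conclusion P fails.
print(counterexamples)
```

The frightening disjunct being true is precisely the case the arguer hopes the audience will not consider, which is why the form so often travels with a false dilemma.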

Examples

“If you continue to drink, you will die early as your father did.”
“If you cannot graduate from high school, you will live in poverty for the rest of your life.”
“Voting for him is the same as voting for the terrorists.”
“If you tell a lie, then no one will ever believe what you say again.”Changing Minds: Appeal to Fear
“If you hold your breath for a long time, you will die”
Fear, uncertainty and doubt

Fear, uncertainty and doubt (FUD) is the appeal to fear in sales or marketing; in which a company disseminates negative (and vague) information on a competitor’s product. The term originated to describe misinformation tactics in the computer hardware industry and has since been used more broadly. FUD is “implicit coercion” by “any kind of disinformation used as a competitive weapon.” FUD creates a situation in which buyers are encouraged to purchase by brand, regardless of the relative technical merits. Opponents of certain large computer corporations state that the spreading of fear, uncertainty, and doubt is an unethical marketing technique that these corporations consciously employ.

As persuasion

Fear appeals are often used in marketing and social policy, as a method of persuasion. Fear is an effective tool to change attitudes, which are moderated by the motivation and ability to process the fear message. Examples of fear appeal include reference to social exclusion, and getting laid-off from one’s job,Solomon. Zaichkowsky, Polegato. Consumer Behaviour Pearson, Toronto. 2005 getting cancer from smoking or involvement in car accidents and driving.

Fear appeals are nonmonotonic, meaning that the level of persuasion does not increase in proportion to the amount of fear that is used. A study of public service messages on AIDS found that if the messages were too aggressive or fearful, they were rejected by the subject; a moderate amount of fear is the most effective attitude changer.

Others argue that it is not the level of fear that is decisive in changing attitudes via the persuasion process. Rather, as long as a scare-tactics message includes a recommendation for coping with the fear, it can work.

68
Q

Appeal to flattery

A

When an argument is made under the guise of flattery.

Appeal to flattery (also apple polishing or wheel greasing) is a fallacy in which a person uses flattery, excessive compliments, in an attempt to win support for their side.

Flattery is often used to hide the true intent of an idea or proposal. Praise offers a momentary personal distraction that can often weaken judgment. Moreover, it is usually a cunning form of appeal to consequences, since the audience is liable to be flattered only as long as they comply with the flatterer.

Example:

“Surely a man as smart as you can see this is a brilliant proposal.” (failing to accept the proposal is a tacit admission of stupidity)
Appeal to flattery is a specific kind of appeal to emotion.

69
Q

Appeal to pity

A

An argument made using pity as a tool.

An appeal to pity (also called argumentum ad misericordiam or the Galileo argument) is a fallacy in which someone tries to win support for an argument or idea by exploiting his or her opponent’s feelings of pity or guilt. It is a specific kind of appeal to emotion.

Examples

“You must have graded my exam incorrectly. I studied very hard for weeks specifically because I knew my career depended on getting a good grade. If you give me a failing grade I’m ruined!”
“Ladies and gentlemen of the jury, look at this miserable man, in a wheelchair, unable to use his legs. Could such a man really be guilty of embezzlement?”
Analysis

Recognizing an argument as an appeal to pity does not necessarily invalidate the conclusion or the factual assertions. There may be other reasons to accept the invited conclusion, but an appeal to pity is not one of them (see also argument from fallacy).

70
Q

Appeal to Ridicule

A

Where an opponent’s argument is dismissed by presenting it as ridiculous.

Appeal to ridicule (also called appeal to mockery or the horse laughMoore and Parker, Critical Thinking) is an informal fallacy that presents an opponent’s argument as absurd, ridiculous, or in some way humorous, to the specific end of a foregone conclusion that the argument lacks any substance that would merit consideration.

Appeal to ridicule is often found in the form of comparing a nuanced circumstance or argument to a laughably commonplace occurrence or to some other irrelevancy, on the basis of comedic timing, wordplay, or making an opponent and their argument the object of a joke. For example, following criticism during the 2008 United States general election that Barack Obama’s policies were “socialist”, Obama responded, “Next they’ll be calling me a communist because I shared my toys in kindergarten”, pushing the “socialist” label to its extreme and presenting a flippant response to the argument, rejecting it as unworthy of serious consideration.

This is a rhetorical tactic that mocks an opponent’s argument or standpoint, attempting to inspire an emotional reaction (making it a type of appeal to emotion) in the audience and to highlight any counter-intuitive aspects of that argument, making it appear foolish and contrary to common sense. This is typically done by making a mockery of the argument’s foundation that represents it in an uncharitable and overly simplified way.

71
Q

Appeal to spite

A

Attempting to win favor in an argument by exploiting the bitterness or spite of the opposing party.

An appeal to spite (also called argumentum ad odium) is a fallacy in which someone attempts to win favor for an argument by exploiting existing feelings of bitterness, spite, or schadenfreude in the opposing party. The speaker tries to appeal to hatred, spite, and other negative emotions in the listener. It is an attempt to sway the audience emotionally by associating a hate-figure with opposition to the speaker’s argument.

Fallacious ad hominem arguments, which attack villains holding the opposing view, are often confused with appeals to spite. The ad hominem can be a similar appeal to a negative emotion, but differs in directly criticizing the villain; that criticism is unnecessary in an appeal to spite, where hatred of the villain is assumed.

Examples

“If you vote for this tax cut, it will mean that the fat cats will get even more money to spend on their expensive luxury yachts, while you and I keep struggling to pay the bills.”
“Stop recycling! Aren’t you tired of Hollywood celebrities preaching to everyone about saving the Earth?”
“Why should student benefits be reinstated, when I got nothing from the state and had to sacrifice to pay for my studies?”

72
Q

Appeal to wishful thinking

A

When you appeal to what might be pleasing to imagine instead of appealing to evidence.

Wishful thinking is the formation of beliefs and making decisions according to what might be pleasing to imagine instead of by appealing to evidence, rationality, or reality. Studies have consistently shown that holding all else equal, subjects will predict positive outcomes to be more likely than negative outcomes (see valence effect).

On the other hand, some psychologists believe that positive thinking can positively influence behavior and so bring about better results. They call this the “Pygmalion effect”.

Christopher Booker described wishful thinking in terms of

“the fantasy cycle” … a pattern that recurs in personal lives, in politics, in history – and in storytelling. When we embark on a course of action which is unconsciously driven by wishful thinking, all may seem to go well for a time, in what may be called the “dream stage”. But because this make-believe can never be reconciled with reality, it leads to a “frustration stage” as things start to go wrong, prompting a more determined effort to keep the fantasy in being. As reality presses in, it leads to a “nightmare stage” as everything goes wrong, culminating in an “explosion into reality”, when the fantasy finally falls apart. The Telegraph, 23 April 2011
Notable examples

Prominent examples of wishful thinking include:

Economist Irving Fisher said that “stock prices have reached what looks like a permanently high plateau” a few weeks before the Stock Market Crash of 1929, which was followed by the Great Depression.
The Kennedy administration maintained that, if overpowered by Cuban forces, the CIA-backed rebels could “escape destruction by melting into the countryside” in the Bay of Pigs Invasion.https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/csi-studies/studies/winter98_99/art08.html
As a fallacy

In addition to being a cognitive bias and a poor way of making decisions, wishful thinking is commonly held to be a specific informal fallacy in an argument when it is assumed that because we wish something to be true or false, it is actually true or false. This fallacy has the form “I wish that P were true/false; therefore P is true/false.” The Fallacy Files: Wishful Thinking As a fallacy, wishful thinking relies upon appeals to emotion and is also a red herring.

Wishful thinking may cause blindness to unintended consequences.

Reverse wishful thinking

Reverse wishful thinking is where someone assumes that because something is bad it is likely to happen. This may be to fulfill a prediction made by the speaker or because they are generally pessimistic.

73
Q

Appeal to motive

A

Where a premise is dismissed by calling into question the motives of its proposer.

Appeal to motive is a pattern of argument which consists in challenging a thesis by calling into question the motives of its proposer. It can be considered as a special case of the ad hominem circumstantial argument. As such, this type of argument may be an informal fallacy.

A common feature of appeals to motive is that only the possibility of a motive (however small) is shown, without showing the motive actually existed or, if the motive did exist, that the motive played a role in forming the argument and its conclusion. Indeed, it is often assumed that the mere possibility of motive is evidence enough.

Examples

“That website recommended ACME’s widget over Megacorp’s widget. But the website also displays ACME advertising on their site, so they were probably biased in their review.” The thesis in this case is the website’s evaluation of the relative merits of the two products.
“The referee comes from the same place as (a sports team), so his refereeing was obviously biased towards them.” In this case, the thesis consists of the referee’s rulings.
“My opponent argues on and on in favor of allowing that mall to be built in the center of town. What he won’t tell you is that his daughter and her friends plan to shop there once it’s open.”

74
Q

Appeal to novelty

A

Where a proposal is claimed to be superior or better solely because it is new or modern.

The appeal to novelty (also called argumentum ad novitatem) is a fallacy in which someone prematurely claims that an idea or proposal is correct or superior exclusively because it is new and modern. In a controversy between the status quo and a new invention, an appeal to novelty is not in itself a valid argument. The fallacy may take two forms: overestimating the new and modern, prematurely and without investigation assuming it to be best-case; or underestimating the status quo, prematurely and without investigation assuming it to be worst-case.

Investigation may prove these claims to be true, but it is a fallacy to prematurely conclude this only from the general claim that all novelty is good.

The opposite of an appeal to novelty is an appeal to tradition, in which one argues that the “old ways” are always superior to new ideas.

Appeals to novelty are often successful in a modern world where everyone is eager to be on the “cutting edge” of technology. The so-called “dot-com bust” of the early 2000s could easily be interpreted as a sign of the dangers of naïvely embracing new ideas without first viewing them with a critical eye. Advertisers also frequently extol the newness of their products as a reason to buy. Conversely, skeptics satirise this as “bleeding edge” technology (which may itself be an example of the appeal to tradition fallacy).

The appeal to novelty is based on the reasoning that in general people will tend to try to improve the outputs resulting from their efforts. Thus, for example, a company producing a product might be assumed to know about existing flaws and to be seeking to correct them in a future revision. This line of reasoning is obviously flawed for many reasons, most notably that: it ignores motive (a new product may be released that is functionally identical to previous products but which is cheaper to produce); it ignores cyclicality (the fashion industry continually rediscovers old styles and markets them as the next new thing); and it ignores population dynamics (the previous product may have been created by an expert who has since been replaced by a neophyte).

Examples

“If you want to lose weight, your best bet is to follow the latest diet.”
“The department will become more profitable because it has been reorganized.”
“Upgrading all your software to the most recent versions will make your system more reliable.”
“Things are bad with party A in charge, thus party B will bring an improvement if they’re elected.”
Appeal to novelty fallacy: Designation pitfalls

In some cases, there may exist one or more unnamed, but still widely acknowledged, correlations between novelty and positive traits. For example, newer technology tends to be more complex and advanced than older technology; a correlation may exist between the newness of a virus definition file and the security of a computer, or between the newness of a computer and its speed and performance. In such cases, something is more likely to be superior when it is new and modern, though not exclusively because it is new and modern. Thus, what may seem like an appeal to novelty is not a fallacy in every case: it is only a fallacy if the correlation is disputed or if no such correlation has been examined.

In aesthetics, for example in some arts and music, novelty (though not all forms of novelty) is used as a criterion for acclaim. This may look like the fallacy, but in some circles there may be an unstated consensus that people eventually grow tired of what they are used to. In these cases the criterion and its justification are not based exclusively on appeal to novelty, and thus no fallacy is committed.

75
Q

Appeal to poverty

A

Supporting a conclusion because the arguer is poor.

Argumentum ad lazarum or appeal to poverty is the informal fallacy of thinking a conclusion is correct because the speaker is poor, or it’s incorrect because the speaker is rich. It is named after Lazarus, a beggar in a New Testament parable who receives his reward in the afterlife.

This is popularly exploited as the statement, “Poor, but honest.”

Examples

Family farms are struggling to get by so when they say we need to protect them, they must be on to something.

The homeless tell us it’s hard to find housing. Thus it must be.

The monks have forsworn all material possessions. They must have achieved enlightenment.

All you need to know about the civil war in that country is that the rebels live in mud huts, while the general who sends troops against them sits in a luxurious, air-conditioned office.

The opposite is the argumentum ad crumenam.

76
Q

Appeal to tradition

A

A conclusion supported solely because it has long been held to be true.

Appeal to tradition (also known as argumentum ad antiquitatem, appeal to antiquity, or appeal to common practice) is a common fallacy in which a thesis is deemed correct on the basis that it correlates with some past or present tradition. The appeal takes the form of “this is right because we’ve always done it this way.”Trufant 1917.

An appeal to tradition essentially makes two assumptions that are not necessarily true:

The old way of thinking was proven correct when introduced, i.e. since the old way of thinking was prevalent, it was necessarily correct.
In actuality this may be false—the tradition might be entirely based on incorrect grounds.
The past justifications for the tradition are still valid at present.
In actuality, the circumstances may have changed; this assumption may also therefore be untrue.
The opposite of an appeal to tradition is an appeal to novelty, claiming something is good because it is new.

77
Q

Appeal to nature

A

Wherein judgement is based solely on whether the subject of judgment is ‘natural’ or ‘unnatural’.

An appeal to nature is an argument or rhetorical tactic in which it is proposed that “a thing is good because it is ‘natural’, or bad because it is ‘unnatural’“.Moore, George E.: Principia Ethica, Barnes and Noble Publishing, Inc (1903, 2005) p. 47

Forms

General form of this type of argument:

N is natural.
Therefore, N is good or right.

M is unnatural.
Therefore, M is bad or wrong.

Julian Baggini explains that “Even if we can agree that some things are natural and some are not, what follows from this? The answer is: nothing. There is no factual reason to suppose that what is natural is good (or at least better) and what is unnatural is bad (or at least worse).”

In some contexts, the use of the terms of “nature” and “natural” can be vague, leading to unintended associations with other concepts. The word “natural” can also be a loaded term — much like the word “normal”, in some contexts, it can carry an implicit value judgement. An appeal to nature would thus beg the question, because the conclusion is entailed by the premise.

Opinions differ regarding appeal to nature in rational argument. Sometimes it can be taken as a rule of thumb that admits some exceptions but nonetheless proves useful in one or more specific topics, or in general. As a rule of thumb, natural or unnatural facts provide presumptively reliable good or bad values, barring evidence to the contrary. Failure to consider such evidence commits a fallacy of accident under this view.

History

The meaning and importance of various understandings and concepts of “nature” has been a persistent topic of discussion historically in both science and philosophy. In Ancient Greece, “the laws of nature were regarded not as generalized descriptions of what actually happens in the natural world… but rather as norms that people ought to follow… Thus the appeal to nature tended to mean an appeal to the nature of man treated as a source for norms of conduct. To Greeks this… represented a conscious probing and exploration into an area wherein, according to their whole tradition of thought, lay the true source for norms of conduct.”

In modern times, philosophers have challenged the notion that human beings’ status as natural beings should determine or dictate their normative being. For example, Rousseau famously suggested that “We do not know what our nature permits us to be.”Jean-Jacques Rousseau, Emile: or, On Education, USA: Basic Books, 1979, p. 62. More recently, Nikolas Kompridis has applied Rousseau’s axiom to debates about genetic intervention (or other kinds of intervention) into the biological basis of human life, writing: “[T]here is a domain of human freedom not dictated by our biological nature, but [it] is somewhat unnerving because it leaves uncomfortably open what kind of beings human beings could become… Put another way: What are we prepared to permit our nature to be? And on what basis should we give our permission?”

Kompridis writes that the naturalistic view of living things, articulated by one scientist as that of “machines whose components are biochemicals”,“The current scientific view of living things is that they are machines whose components are biochemicals.” Rodney Brooks, “The relationship between matter and life”, Nature 409 (2001), p. 410. threatens to make a single normative understanding of human being the only possible understanding. He writes, “When we regard ourselves as ‘machines whose components are biochemicals,’ we not only presume to know what our nature permits us to be, but also that this knowledge permits us to answer the question of what is to become of us… This is not a question we were meant to answer, but, rather, a question to which we must remain answerable.” Nikolas Kompridis, “Technology’s Challenge to Democracy: What of the Human?”, Parrhesia Number 8 (2009), pp. 23–31.

Philosophers such as Jacques Derrida, Bruno Latour and others have also questioned inherited understandings of nature in their work.

Examples

Some popular examples of the appeal to nature can be found on labels and advertisements for food, clothing, and alternative herbal remedies. Labels may use the phrase “all-natural”, to imply that products are environmentally friendly and/or safe. However, many toxic substances are found in nature, including in common plant sources and herbs such as hemlock, nightshade, belladonna, and poisonous mushrooms, and these may have serious side effects.

It has therefore been suggested that whether or not a product is “natural” is irrelevant, in itself, in determining its safety or effectiveness.

78
Q

Appeal to wealth

A

Supporting a conclusion because the arguer is wealthy.

An argumentum ad crumenam argument, also known as an argument to the purse, is the informal fallacy of concluding that a statement is correct because the speaker is rich (or that a statement is incorrect because the speaker is poor).

The opposite is the argumentum ad lazarum.

Usage

If you’re so smart, why aren’t you rich?
This new law is a good idea. Most of the people against it are riff-raff who make less than $20,000 a year.
Warren Buffett is hosting a seminar. This seminar is better than others, because Warren Buffett is richer than most people.
From Tristram Shandy:Laurence Sterne. The Life and Opinions of Tristram Shandy, Gentleman. Everyman’s library: New York, 1991. “Then, added my father, making use of the argument Ad Crumenam, ‘I will lay twenty guineas to a single crown-piece, (which will serve to give away to Obadiah when he gets back) that this same Stevinus was some engineer or other, or has wrote something or other, either directly or indirectly, upon the science of fortification.’”

79
Q

Argument from silence

A

A conclusion based on silence or lack of contrary evidence.

An argument from silence (also called argumentum a silentio in Latin) is generally a conclusion drawn based on the absence of statements in historical documents.”argumentum e silentio noun phrase” The Oxford Essential Dictionary of Foreign Terms in English. Ed. Jennifer Speake. Berkley Books, 1999.John Lange, The Argument from Silence, History and Theory, Vol. 5, No. 3 (1966), pp. 288-301 In the field of classical studies, it often refers to the deduction from the lack of references to a subject in the available writings of an author to the conclusion that he was ignorant of it.”silence, the argument from”. The Concise Oxford Dictionary of the Christian Church. Ed. E. A. Livingstone. Oxford University Press, 2006.

Thus in historical analysis with an argument from silence, the absence of a reference to an event or a document is used to cast doubt on the event not mentioned. While most historical approaches rely on what an author’s works contain, an argument from silence relies on what the book or document does not contain. This approach thus uses what an author “should have said” rather than what is available in the author’s extant writings.Historical evidence and argument by David P. Henige 2005 ISBN 978-0-299-21410-4 page 176.Seven Pillories of Wisdom by David R. Hall 1991 ISBN 0-86554-369-0 pages 55-56.

Historical analysis

An argument from silence can be convincing when mentioning a fact can be seen as so natural that its omission is a good reason to assume ignorance. For example, while the editors of Yerushalmi and Bavli mention the other community, most scholars believe these documents were written independently. Louis Jacobs writes, “If the editors of either had had access to an actual text of the other, it is inconceivable that they would not have mentioned this. Here the argument from silence is very convincing.”“Talmud”. A Concise Companion to the Jewish Religion. Louis Jacobs. Oxford University Press, 1999.

Errietta Bissa, professor of Classics at the University of Wales, flatly states that arguments from silence are not valid.Governmental intervention in foreign trade in archaïc and classical Greece by Errietta M. A. Bissa ISBN 90-04-17504-0 page 21: “This is a fundamental methodological issue on the validity of arguments from silence, where I wish to make my position clear: arguments from silence are not valid.” David Henige states that, although risky, such arguments can at times shed light on historical events. Yifa has pointed out the perils of arguments from silence: although no reference to the “Rules of purity” codes of monastic conduct of 1103 appears in the Transmission of the Lamp or in any of the Pure Land documents, a copy of the code in which the author identifies himself exists.The origins of Buddhist monastic codes in China by Yifa, Zongze 2002 ISBN 0-8248-2494-6 page 32: “an argumentum ex silencio is hardly conclusive”

Yifa points out that arguments from silence are often less than conclusive; for example, the lack of references to a compilation of a set of monastic codes by contemporaries, or even by disciples, does not mean that it never existed. This is well illustrated by the case of Changlu Zongze’s “Rules of purity”, which he wrote for the Chan monastery in 1103.

One of his contemporaries who wrote a preface to a collection of his writings neglected to mention his code, and neither his biographies, nor the documents of the Transmission of the Lamp, nor the Pure Land documents (which exalt him) refer to Zongze’s monastic code. However, a copy of the code in which the author identifies himself exists.The origins of Buddhist monastic codes in China by Yifa, Zongze 2002 ISBN 0-8248-2494-6 page 32.

Frances Wood based her controversial book Did Marco Polo Go to China? on arguments from silence. Wood argued that Marco Polo never went to China and fabricated his accounts because he failed to mention elements from the visual landscape such as tea, did not record the Great Wall, and neglected to record practices such as foot-binding. She argued that no outsider could spend 15 years in China and not observe and record these elements. Most historians disagree with Wood’s reasoning.Historical evidence and argument by David P. Henige 2005 ISBN 978-0-299-21410-4 page 176.

Legal aspects

Jed Rubenfeld, professor of law at Yale Law School, has shown an example of the difficulty of applying arguments from silence in constitutional law, stating that although arguments from silence can be used to draw conclusions about the intent of the Framers of the US Constitution, their application can lead to two different conclusions and hence they cannot be used to settle the issues.Jed Rubenfeld, Rights of Passage: Majority Rule in Congress, Duke Law Journal, 1996, Section B: Arguments from silence: “From this silence one can draw clear plausible inferences about the Framers’ intent. The only difficulty is that one can draw two different inferences…. The truth is that the argument from silence is not dispositive.”

In the context of Morocco’s Truth Commission of 1999 regarding torture and secret detentions, Wu and Livescu state that the fact that someone remained silent is no proof of their ignorance about a specific piece of information. They point out that the absence of records about the torture of prisoners under the secret detention program is no proof that such detentions did not involve torture, or that some detentions did not take place.Human Rights, Suffering, and Aesthetics in Political Prison Literature by Yenna Wu, Simona Livescu 2011 ISBN 0-7391-6741-3 pages 86-90.

80
Q

Bulverism

A

Inferring why an argument is being used, associating it with some psychological reason, then assuming it is invalid as a result. It is wrong to assume that if the origin of an idea comes from a biased mind, then the idea itself must also be false.

Bulverism is a logical fallacy in which, rather than proving that an argument in favour of an opinion is wrong, a person instead assumes that the opinion is wrong, and then goes on to explain why the other person held it. It is essentially a circumstantial ad hominem argument. The term “Bulverism” was coined by C. S. Lewis.Lewis C.S. Undeceptions: Essays on Theology and Ethics ed. Hooper, Walter. London: Geoffrey Bles (1971) p.225 It is very similar to Antony Flew’s “Subject/Motive Shift”.

Source of the concept

Lewis wrote about this in a 1941 essayThe title was “Notes on the Way” in Time and Tide, vol XXII (29th March, 1941) — Undeceptions p.xv. which was later expanded and published in The Socratic Digest under the title “Bulverism”.No. 2 (June 1944) pp.16-20— Undeceptions p.xv This was reprinted both in Undeceptions and the more recent anthology God in the Dock. He explains the origin of this term:Lewis C.S. Undeceptions: Essays on Theology and Ethics ed. Hooper, Walter. London: Geoffrey Bles (1971) p.223

You must show that a man is wrong before you start explaining why he is wrong. The modern method is to assume without discussion that he is wrong and then distract his attention from this (the only real issue) by busily explaining how he became so silly.

In the course of the last fifteen years I have found this vice so common that I have had to invent a name for it. I call it “Bulverism”. Some day I am going to write the biography of its imaginary inventor, Ezekiel Bulver, whose destiny was determined at the age of five when he heard his mother say to his father — who had been maintaining that two sides of a triangle were together greater than a third — “Oh you say that because you are a man.” “At that moment”, E. Bulver assures us, “there flashed across my opening mind the great truth that refutation is no necessary part of argument. Assume that your opponent is wrong, and explain his error, and the world will be at your feet. Attempt to prove that he is wrong or (worse still) try to find out whether he is wrong or right, and the national dynamism of our age will thrust you to the wall.” That is how Bulver became one of the makers of the Twentieth Century.

Pattern

The form of the Bulverism fallacy can be expressed as follows:

You claim that A is true.
Because of B, you personally desire that A should be true.
Therefore, A is false.
or

You claim that A is false.
Because of B, you personally desire that A should be false.
Therefore, A is true.
Examples

From “Bulverism” by C.S. Lewis:Lewis C.S. Undeceptions: Essays on Theology and Ethics ed. Hooper, Walter. London: Geoffrey Bles (1971) p.224,5

Suppose I think, after doing my accounts, that I have a large balance at the bank. And suppose you want to find out whether this belief of mine is “wishful thinking.” You can never come to any conclusion by examining my psychological condition. Your only chance of finding out is to sit down and work through the sum yourself. When you have checked my figures, then, and then only, will you know whether I have that balance or not. If you find my arithmetic correct, then no amount of vapouring about my psychological condition can be anything but a waste of time. If you find my arithmetic wrong, then it may be relevant to explain psychologically how I came to be so bad at my arithmetic, and the doctrine of the concealed wish will become relevant — but only after you have yourself done the sum and discovered me to be wrong on purely arithmetical grounds. It is the same with all thinking and all systems of thought. If you try to find out which are tainted by speculating about the wishes of the thinkers, you are merely making a fool of yourself. You must first find out on purely logical grounds which of them do, in fact, break down as arguments. Afterwards, if you like, go on and discover the psychological causes of the error.

From A Reply to Professor Haldane by C. S. Lewis:

The Professor has his own explanation … he thinks that I am unconsciously motivated by the fact that I ‘stand to lose by social change’. And indeed it would be hard for me to welcome a change which might well consign me to a concentration camp. I might add that it would likewise be easy for the Professor to welcome a change which might place him in the highest rank of an omnicompetent oligarchy. That is why the motive game is so uninteresting. Each side can go on playing ad nauseam, but when all the mud has been flung every man’s views still remain to be considered on their merits. I decline the motive game and resume the discussion.

81
Q

Chronological snobbery

A

Where a thesis is deemed incorrect because it was commonly held when something else, clearly false, was also commonly held.

Chronological snobbery, a term coined by friends C. S. Lewis and Owen Barfield, is the erroneous argument (usually considered an outright fallacy when so termed) that the thinking, art, or science of an earlier time is inherently inferior to that of the present, simply by virtue of its temporal priority. As Barfield explains it, it is the belief that “intellectually, humanity languished for countless generations in the most childish errors on all sorts of crucial subjects, until it was redeemed by some simple scientific dictum of the last century.”History in English Words p. 164 The subject came up between them when Barfield had converted to Anthroposophy and was trying to persuade Lewis (an atheist at that time) to join him. One of Lewis’s objections was that religion was simply outdated, and in Surprised by Joy (chapter 13, pp. 207–208) he describes how this was fallacious:

Barfield never made me an Anthroposophist, but his counterattacks destroyed forever two elements in my own thought. In the first place he made short work of what I have called my “chronological snobbery,” the uncritical acceptance of the intellectual climate common to our own age and the assumption that whatever has gone out of date is on that account discredited. You must find why it went out of date. Was it ever refuted (and if so by whom, where, and how conclusively) or did it merely die away as fashions do? If the latter, this tells us nothing about its truth or falsehood. From seeing this, one passes to the realization that our own age is also “a period,” and certainly has, like all periods, its own characteristic illusions. They are likeliest to lurk in those widespread assumptions which are so ingrained in the age that no one dares to attack or feels it necessary to defend them.

Pattern

The form of the chronological snobbery fallacy can be expressed as follows:

It is argued that A.
A is an old argument, dating back to the times when people also believed B.
B is clearly false.
Therefore, A is false.
Examples

C. S. Lewis in Surprised by Joy (Chapter 13, p. 206) recounts his story:

The usage in general of the word “medieval” to mean “backwards” is also an example – as is the use of the term “backwards” to mean “unsophisticated.”

G. B. Tennyson in his book Owen Barfield: Man and Meaning offers the following firsthand account:

82
Q

Genetic fallacy

A

Where a conclusion is suggested based solely on something or someone’s origin rather than its current meaning or context.

The genetic fallacy, also known as fallacy of origins, fallacy of virtue, is a fallacy of irrelevance where a conclusion is suggested based solely on something or someone’s origin rather than its current meaning or context. This overlooks any difference to be found in the present situation, typically transferring the positive or negative esteem from the earlier context.

The fallacy therefore fails to assess the claim on its merit. The first criterion of a good argument is that the premises must have bearing on the truth or falsity of the claim in question.Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments (Third Edition) by T. Edward Damer, chapter II, subsection “The Relevance Criterion” (pg. 12) Genetic accounts of an issue may be true, and they may help illuminate the reasons why the issue has assumed its present form, but they are irrelevant to its merits.With Good Reason: An Introduction to Informal Fallacies (Fifth Edition) by S. Morris Engel, chapter V, subsection 1 (pg. 198)

According to the Oxford Companion to Philosophy (1995), the term originated in Morris Raphael Cohen and Ernest Nagel’s book Logic and Scientific Method (1934).

Argument from age (“Wisdom of the Ancients”)

This is a common version of the genetic fallacy in which something is held to be better simply because it is very new or very old. Examples include products advertised as “New!” or as “Old Fashioned Hamburgers”.

“Not invented here”

Another variation is to dismiss outside ideas because they did not originate here: “This is the way we’ve always done it.” The converse is equally fallacious: just because something is foreign-made does not necessarily mean it is better.

Examples

From Attacking Faulty Reasoning by T. Edward Damer, Third Edition p. 36:

There are numerous motives explaining why people choose to wear wedding rings, but it would be a logical fallacy to presume those who continue the tradition are doing so with the intent of promoting sexism.

From With Good Reason: An Introduction to Informal Fallacies by S. Morris Engel, Fifth Edition, pg. 196:

A commonly occurring example of this style of reasoning can be called the “etymological fallacy”. This presents arguments based on the supposed real meaning of certain words, where that “real” meaning is in fact what the word meant centuries ago, or what its root word (in Latin, Greek etc.) meant. A popular tactic, it is easily shown to be fallacious and misleading. Thus:

This is not merely a non sequitur. It reflects the fact that the first speaker simply accepts the contemporary meaning of “arrive”, whereas the second recalls the Latin origin: ripa meaning “shore” (compare also the words “river” and “Riviera”), whereby the English word “arrive” contains within it the idea of disembarkation.

83
Q

Judgmental language

A

Insulting or pejorative language used to influence the recipient’s judgment.

Judgmental language is a subset of red herring fallacies. It employs insulting, compromising or pejorative language to influence the recipient’s judgment.

Examples

The Surgeon general says that smoking is harmful to your health. Nowhere in the Bible is it said that you shouldn’t smoke. So who are you gonna listen to, some quack or the Lord God Almighty?
This argument combines judgmental language with non sequitur and appeal to authority.

Conscription is the only working way to have a reliable and efficient army. We are far safer when we are defended by our very own sons than by some mercenaries, who will just fight for pay.
Here the judgmental phrases are “our very own sons” (suppose you are childless or have only daughters?) and “mercenaries”, which suggests not merely professional soldiers but soldiers of fortune. This argument is also a false dilemma: nothing implies that coercion and fear of punishment produce better soldiers than voluntary service, or that a professional army could not be assembled from the nation’s own citizens.

84
Q

Naturalistic fallacy 1

A

Claims what ought to be on the basis of statements about what is.

In philosophical ethics, the naturalistic fallacy was introduced by British philosopher G. E. Moore in his 1903 book Principia Ethica. Moore argues it would be fallacious to explain that which is good reductively, in terms of natural properties such as “pleasant” or “desirable”.

The naturalistic fallacy is close to but not identical with the fallacious appeal to nature, the claim that what is natural is inherently good or right, and that what is unnatural is inherently bad or wrong. The fallacious appeal to nature would be the reverse of a moralistic fallacy: that what is good or right is thus natural.

Furthermore, Moore’s naturalistic fallacy is very close to (and even confused with) the is–ought problem, which comes from Hume’s Treatise. However, unlike Hume’s view of the is–ought problem, Moore (and other proponents of ethical non-naturalism) did not consider the naturalistic fallacy to be at odds with moral realism.

Different common uses

The is–ought problem

The term “naturalistic fallacy” is sometimes used to describe the deduction of an “ought” from an “is” (the is–ought problem).

In using his categorical imperative, Kant deduced that experience was necessary for its application. But experience on its own, or the imperative on its own, could not possibly identify an act as being moral or immoral. We can have no certain knowledge of morality from them, being incapable of deducing how things ought to be from the fact that they happen to be arranged in a particular manner in experience.

Bentham, in discussing the relations of law and morality, found that when people discuss problems and issues they talk about how they wish it would be as opposed to how it actually is. This can be seen in discussions of natural law and positive law. Bentham criticized natural law theory because in his view it was a naturalistic fallacy, claiming that it described how things ought to be instead of how things are.

Moore’s discussion

According to G. E. Moore’s Principia Ethica, when philosophers try to define “good” reductively in terms of natural properties like “pleasant” or “desirable”, they are committing the naturalistic fallacy.

In defense of ethical non-naturalism, Moore’s argument is concerned with the semantic and metaphysical underpinnings of ethics. In general, opponents of ethical naturalism reject ethical conclusions drawn from natural facts.

Moore argues that good, in the sense of intrinsic value, is simply ineffable: it cannot be defined because it is not a natural property, being “one of those innumerable objects of thought which are themselves incapable of definition, because they are the ultimate terms by reference to which whatever is capable of definition must be defined”.Moore, G.E. Principia Ethica § 10 ¶ 1 On the other hand, ethical naturalists eschew such principles in favor of a more empirically accessible analysis of what it means to be good: for example, in terms of pleasure in the context of hedonism.

In §7, Moore argues that a property is either a complex of simple properties, or else it is irreducibly simple. Complex properties can be defined in terms of their constituent parts but a simple property has no parts. In addition to “good” and “pleasure”, Moore suggests that colour qualia are undefinable: if one wants to understand yellow, one must see examples of it. It will do no good to read the dictionary and learn that “yellow” names the colour of egg yolks and ripe lemons, or that “yellow” names the primary colour between green and orange on the spectrum, or that the perception of yellow is stimulated by electromagnetic radiation with a wavelength of between 570 and 590 nanometers, because yellow is all that and more, by the open question argument.

Bernard Williams called Moore’s use of the term ‘naturalistic fallacy’ a “spectacular misnomer”, the question being metaphysical, as opposed to rational.

Appeal to nature

Some people use the phrase “naturalistic fallacy” or “appeal to nature” to characterize inferences of the form “This behaviour is natural; therefore, this behaviour is morally acceptable” or “This property is unnatural; therefore, this property is undesirable.” Such inferences are common in discussions of homosexuality, environmentalism and veganism.

Criticism

Some philosophers reject the naturalistic fallacy and/or suggest solutions for the proposed is-ought problem.

Sam Harris argues that it is possible to derive “ought” from “is”, and even that it has already been done to some extent. He sees morality as a budding science. This view is critical of Moore’s “simple indefinable terms” (which amount to qualia), arguing instead that such terms actually can be broken down into constituents.

Ralph McInerny suggests that “ought” is already bound up in “is”, in so far as the very nature of things have ends/goals within them. For example, a clock is a device used to keep time. When one understands the function of a clock, then a standard of evaluation is implicit in the very description of the clock, i.e., because it IS a clock, it OUGHT to keep the time. Thus, if one can’t pick a good clock from a bad clock, then one does not really know what a clock is. In like manner, if one can’t determine good human action from bad, then one does not really know what the human person is.


85
Q

Reductio ad Hitlerum

A

Comparing an opponent or their argument to Hitler or Nazism in an attempt to associate a position with one that is universally reviled.

Godwin’s law (also known as Godwin’s Rule of Nazi Analogies or Godwin’s Law of Nazi Analogies) is an observation made by Mike Godwin in 1990 that has become an Internet adage. It states: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.” In other words, Godwin observed that, given enough time, in any online discussion—regardless of topic or scope—someone inevitably makes a comparison to Hitler or the Nazis.

Although in one of its early forms Godwin’s law referred specifically to Usenet newsgroup discussions, the law is now often applied to any threaded online discussion, such as forum, chat rooms and blog comment threads, and has been invoked for the inappropriate use of Nazi analogies in articles or speeches.

In 2012, “Godwin’s Law” became an entry in the third edition of the Oxford English Dictionary.

Corollaries and usage

There are many corollaries to Godwin’s law, some considered more canonical (by being adopted by Godwin himself) than others. For example, there is a tradition in many newsgroups and other Internet discussion forums that once such a comparison is made, the thread is finished and whoever mentioned the Nazis has automatically lost whatever debate was in progress. “Internet rules and laws: the top 10, from Godwin to Poe” The Daily Telegraph, 23 October 2009 This principle is itself frequently referred to as Godwin’s law. It is considered poor form to raise such a comparison arbitrarily with the motive of ending the thread. There is a widely recognized corollary that any such ulterior-motive invocation of Godwin’s law will be unsuccessful.

Godwin’s law applies especially to inappropriate, inordinate, or hyperbolic comparisons of other situations (or one’s opponent) with Nazis. The law and its corollaries would not apply to discussions covering known mainstays of Nazi Germany such as genocide, eugenics or racial superiority, nor, more debatably, to a discussion of other totalitarian regimes or ideologies, if that was the explicit topic of conversation, since a Nazi comparison in those circumstances may be appropriate; invoking the law to dismiss it would in effect commit the fallacist’s fallacy. Whether it applies to humorous use or references to oneself is open to interpretation, since this would not be a fallacious attack against a debate opponent.

While falling foul of Godwin’s law tends to cause the individual making the comparison to lose their argument or credibility, Godwin’s law itself can be abused as a distraction, diversion or even as censorship, fallaciously miscasting an opponent’s argument as hyperbole when the comparisons made by the argument are actually appropriate.David Weigel, “Hands Off Hitler! It’s time to repeal Godwin’s Law” Reason Magazine, July 14, 2005 Similar criticisms of the “law” (or “at least the distorted version which purports to prohibit all comparisons to German crimes”) have been made by Glenn Greenwald.Greenwald, Glenn (2010-07-01) The odiousness of the distorted Godwin’s Law, Salon.com

History

Godwin has stated that he introduced Godwin’s law in 1990 as an experiment in memetics.

Godwin’s law does not claim to articulate a fallacy; it is instead framed as a memetic tool to reduce the incidence of inappropriate hyperbolic comparisons. “Although deliberately framed as if it were a law of nature or of mathematics, its purpose has always been rhetorical and pedagogical: I wanted folks who glibly compared someone else to Hitler or to Nazis to think a bit harder about the Holocaust”, Godwin has written.

86
Q

Straw man

A

An argument based on misrepresentation of an opponent’s position.

A straw man or straw person, also known in the UK as an Aunt Sally, is a type of argument and is an informal fallacy based on misrepresentation of an opponent’s position. To “attack a straw man” is to create the illusion of having refuted a proposition by replacing it with a superficially similar yet unequivalent proposition (the “straw man”), and to refute it, without ever having actually refuted the original position. This technique has been used throughout history in polemical debate, particularly in arguments about highly charged, emotional issues.

Origin

As a fallacy, the identification and name of straw man arguments are of relatively recent date, although Aristotle makes remarks that suggest a similar concern; Douglas Walton identified “the first inclusion of it we can find in a textbook as an informal fallacy” in Stuart Chase’s Guides to Straight Thinking from 1956 (p. 40).Douglas Walton, “The straw man fallacy”. In Logic and Argumentation, ed. Johan van Benthem, Frans H. van Eemeren, Rob Grootendorst and Frank Veltman. Amsterdam, Royal Netherlands Academy of Arts and Sciences, North-Holland, 1996. pp. 115-128 Oddly enough, Hamblin’s classic text Fallacies (1970) mentions it neither as a distinct type nor as a historical term.

The origins of the term are unclear. The usage of the term in rhetoric suggests a human figure made of straw which is easily knocked down or destroyed, such as a military training dummy, scarecrow, or effigy. The rhetorical technique is sometimes called an Aunt Sally in the UK, with reference to a traditional fairground game in which objects are thrown at a fixed target. One common folk etymology is that it refers to men who stood outside courthouses with a straw in their shoe in order to indicate their willingness to be a false witness.

Structure

The straw man fallacy occurs in the following pattern of argument:

Person 1 has position X.
Person 2 disregards certain key points of X and instead presents the superficially similar position Y. The position Y is a distorted version of X and can be set up in several ways, including:
Presenting a misrepresentation of the opponent’s position.
Quoting an opponent’s words out of context—i.e., choosing quotations that misrepresent the opponent’s actual intentions (see fallacy of quoting out of context).
Presenting someone who defends a position poorly as the defender, then refuting that person’s arguments—thus giving the appearance that every upholder of that position (and thus the position itself) has been defeated.
Inventing a fictitious persona with actions or beliefs which are then criticized, implying that the person represents a group of whom the speaker is critical.
Oversimplifying an opponent’s argument, then attacking this oversimplified version.
Person 2 attacks position Y, concluding that X is false/incorrect/flawed.
This reasoning is fallacious because attacking a distorted version of a position does not address the actual position. The ostensible argument that Person 2 makes has the form:

“Don’t support X, because X has an unacceptable (or absurd or contradictory or terrible) consequence.”
However, the actual form of the argument is:

“Don’t support X, because Y has an unacceptable (or absurd or contradictory or terrible) consequence.”
This argument doesn’t make sense; it is a non sequitur. Person 2 relies on the audience not noticing this.

Examples

A: Sunny days are good.
B: If all days were sunny, we’d never have rain, and without rain, we’d have famine and death.
In this case, B falsely frames A’s claim to imply that A believes only sunny days are good, and B argues against that assertion. A actually asserts that sunny days are good and, in fact, says nothing about rainy days.

C: We should give children ice cream after every school day.
D: That would be rather bad for their health.
C: Do you want our children to starve?
Person C says that children should be given ice cream after every school day. D replies to that statement assuming that children would be getting this in addition to their regular meals, and states that this would be unhealthy. Person C replies with the unreasonable suggestion that if children were not given ice cream, they would starve. Person C does this because it is harder for Person D to argue that children should starve than to argue that children should not be unhealthy.

Christopher Tindale presents, as an example, the following passage from a draft of a bill (HCR 74) considered by the Louisiana State Legislature in 2001:

Tindale comments that “the portrait painted of Darwinian ideology is a caricature, one not borne out by any objective survey of the works cited. That similar misrepresentations of Darwinian thinking have been used to justify and approve racist practices is beside the point: the position that the legislation is attacking and dismissing is a Straw Man. In subsequent debate this error was recognized, and the eventual bill omitted all mention of Darwin and Darwinist ideology.”

87
Q

Texas sharpshooter fallacy

A

Improperly asserting a cause to explain a cluster of data.

The Texas sharpshooter fallacy is an informal fallacy in which pieces of information that have no relationship to one another are called out for their similarities, and that similarity is used for claiming the existence of a pattern. This fallacy is the philosophical/rhetorical application of the multiple comparisons problem (in statistics) and apophenia (in cognitive psychology). It is related to the clustering illusion, which refers to the tendency in human cognition to interpret patterns in randomness where none actually exist.

Etymology

The name comes from a joke about a Texan who fires some shots at the side of a barn, then paints a target centered on the biggest cluster of hits and claims to be a sharpshooter.

Structure

The Texas sharpshooter fallacy often arises when a person has a large amount of data at their disposal but focuses only on a small subset of that data. Random chance may give all the elements in that subset some kind of common property (or pair of common properties, when arguing for correlation). If the person fails to account for the likelihood of finding some subset in the large data with some common property strictly by chance alone, that person is likely committing a Texas sharpshooter fallacy.

To illustrate, if we pay attention to a cluster of cancer cases in a certain sub-population and then draw our “circle” around the smallest area that includes this cluster, this sample will appear to be suffering an unusually high rate of cancer, but if we included the rest of the population, the incidence would regress to the average.

The fallacy is characterized by a lack of specific hypothesis prior to the gathering of data, or the formulation of a hypothesis only after data has already been gathered and examined. Thus, it typically does not apply if one had an ex ante, or prior, expectation of the particular relationship in question before examining the data. For example one might, prior to examining the information, have in mind a specific physical mechanism implying the particular relationship. One could then use the information to give support or cast doubt on the presence of that mechanism. Alternatively, if additional information can be generated using the same process as the original information, one can use the original information to construct a hypothesis, and then test the hypothesis on the new data. See hypothesis testing. What one cannot do is use the same information to construct and test the same hypothesis (see hypotheses suggested by the data) — to do so would be to commit the Texas sharpshooter fallacy.
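The pitfall of constructing and testing a hypothesis on the same data can be sketched with a small simulation (all numbers here are invented for illustration: the district count, population size, and cancer rate are assumptions, not data from any real study). Every district shares the same true rate, yet the district picked post hoc for having the most cases looks strikingly elevated; only a fresh sample exposes the illusion:

```python
import random

random.seed(1)

# 500 districts, 1,000 residents each, all with the SAME true cancer rate.
TRUE_RATE = 0.01
districts = [sum(random.random() < TRUE_RATE for _ in range(1000))
             for _ in range(500)]

overall_rate = sum(districts) / (500 * 1000)

# "Painting the target": select the worst-looking district only after
# seeing the data, then treat its elevated rate as a finding.
worst = max(districts)
print(worst / 1000, overall_rate)  # the post-hoc cluster sits well above average

# Testing the hypothesis on NEW data from an identical district:
# the apparent excess disappears (regression to the mean).
fresh = sum(random.random() < TRUE_RATE for _ in range(1000))
print(fresh / 1000)
```

With 500 looks at pure noise, the maximum is almost guaranteed to sit far above the shared rate; the out-of-sample check is what reveals that no district is genuinely unusual.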

Examples

A Swedish study in 1992 tried to determine whether power lines caused adverse health effects. The researchers surveyed everyone living within 300 meters of high-voltage power lines over a 25-year period and looked for statistically significant increases in rates of over 800 ailments. The study found that the incidence of childhood leukemia was four times higher among those who lived closest to the power lines, and it spurred calls to action by the Swedish government. The problem with the conclusion, however, was that the number of potential ailments, i.e. over 800, was so large that it created a high probability that at least one ailment would exhibit a statistically significant difference by chance alone. Subsequent studies failed to show any link between power lines and childhood leukemia, in either causation or correlation.
Attempts to find cryptograms in the works of William Shakespeare, which tended to report results only for those passages of Shakespeare for which the proposed decoding algorithm produced an intelligible result. This could be explained as an example of the fallacy because passages which do not match the algorithm have not been accounted for.
Attempts to find cryptograms in the Bible, and the Quran Code.
This fallacy is often found in modern-day interpretations of the quatrains of Nostradamus. Nostradamus’ quatrains are often liberally translated from the original (archaic) French, stripped of their historical context, and then applied to support the conclusion that Nostradamus predicted a given modern-day event, after the event actually occurred. The Nostradamus quatrains that supposedly predicted 9/11 were not even original to Nostradamus, but were either cobbled together from parts of separate quatrains or created from whole cloth.
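The multiple-comparisons arithmetic behind the power-line example can be shown in a few lines (assuming, for illustration, that the 800 tests are independent and use the conventional 5% significance level):

```python
# 800 independent ailments tested at a 5% significance threshold:
# chance alone all but guarantees some "significant" results.
n_tests, alpha = 800, 0.05

expected_false_positives = n_tests * alpha      # tests expected to fire by chance
print(expected_false_positives)                 # 40.0

prob_at_least_one = 1 - (1 - alpha) ** n_tests  # P(at least one false positive)
print(round(prob_at_least_one, 6))              # 1.0
```

Correcting for this, e.g. with a Bonferroni-style threshold of alpha / n_tests, is the standard remedy when many hypotheses are screened at once.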
Related fallacies

Cum hoc ergo propter hoc
Post hoc ergo propter hoc
Correlative based fallacies
Moving the goalpost, a related fallacy used to obtain the opposite conclusion.

88
Q

Tu quoque

A

The argument states that a certain position is false or wrong and/or should be disregarded because its proponent fails to act consistently in accordance with that position.

Tu quoqueOED (Latin for “you, too” or “you, also”), or the appeal to hypocrisy, is a logical fallacy that attempts to discredit the opponent’s position by asserting the opponent’s failure to act consistently in accordance with that position; it attempts to show that a criticism or objection applies equally to the person making it. This dismisses someone’s point of view based on criticism of the person’s inconsistency rather than the position presented, whereas a person’s inconsistency should not discredit their position.Logical Fallacy: Tu Quoque To clarify, although the person being attacked might indeed be acting inconsistently or hypocritically, this does not invalidate their argument.

Fallacious use

In many cases tu quoque arguments are used in a logically fallacious way, to draw a conclusion which is not supported by the premises of the argument.

This form of the argument, familiar from everyday disagreements, is as follows:

A makes criticism P.
A is also guilty of P.
Therefore, P is dismissed.
Examples:

“He cannot accuse me of libel because he was just successfully sued for libel.”
Person 1: It should be illegal to make clothing out of animals.
Person 2: But, you are wearing a leather jacket.
Person 1: People shouldn’t drink. It’s a very damaging habit.
Person 2: But you’re drunk.
Legal aspects

Common law

In common law, a legal maxim exists stating a person cannot approach the courts of equity with unclean hands. If there is a nexus between the applicant’s wrongful act and the rights the applicant wishes to enforce, the court may not grant the applicant’s request.

International law

This argument has been unsuccessfully used before the International Criminal Tribunal for the former Yugoslavia in several cases when the accused tried to justify their crimes by insisting that the opposing side had also committed such crimes. However, the argument tu quoque, from the basis of international humanitarian law is completely irrelevant, as the ICTY has stated in these cases.Judgment of the Trial Chamber in Case Kupreškić et al.. (January 2000), para. 765.Judgment of the Trial Chamber in Case Kunarac et al.. (February 2001), para. 580.Judgment of the Appeals Chamber in Case Kunarac et al.. (January 2002), para. 87.Judgment of the Trial Chamber in Case Limaj et al. (November 2005), para. 193.

Historically, however, at the Nuremberg trial of Karl Dönitz, tu quoque was accepted not as a defense to the crime itself, or to the prosecution proceedings, but only as a mitigation of punishment.Yee, Sienho (2004), “The Tu Quoque Argument as a defence to International Crimes, Prosecution, or Punishment”, Chinese Journal of International Law, 3, pp. 87–133. At the Dachau trials, Otto Skorzeny and officers of Panzer Brigade 150 successfully used tu quoque evidence to be acquitted of violating the laws of war by using American uniforms to infiltrate Allied lines in the false flag Operation Greif in the Battle of the Bulge. Evidence was introduced that the Allies themselves had on at least one occasion worn German uniforms, demonstrating that the prosecution was not clean with regard to this particular crime.

In these cases of international law, the argument is generally used to challenge the validity of the proceedings rather than as a defense of the acts in question or challenging the law as a whole. This is on the basis that if all sides committed such acts then, if the court is fair and seeking justice rather than vengeance (a common criticism against earlier international courts), all sides must be indicted and punished for their actions, and if this does not happen then the court is unjust, and thus illegitimate.

In the case of the Nuremberg trials, charges were leveled only at the former Nazi regime, and as such the tu quoque argument was felt in some cases to be relevant. Both at the time and in the subsequent decades, many jurists and scholars have argued that the whole proceedings should be seen as illegitimate because Allied crimes were not indicted or even acknowledged, even when they overlapped with Nazi crimes. Most obviously, many German leaders were charged with conspiracy to wage a war of aggression on the basis of the Nazi–Soviet pact; if such a criminal conspiracy existed, the Soviets must have been party to it, yet no charges were brought against them.

In the case of the ICTY, charges were leveled against all sides and all persons that were believed to have been complicit in crimes against humanity, and since the crimes of another cannot be held to permit reciprocal crimes, the tu quoque argument was seen as being irrelevant.

89
Q

Two wrongs make a right

A

Occurs when it is assumed that if one wrong is committed, another wrong will cancel it out.

In rhetoric and ethics, two wrongs make a right and two wrongs don’t make a right are phrases that denote philosophical norms. “Two wrongs make a right” is a fallacy of relevance, in which an allegation of wrongdoing is countered with a similar allegation. Its antithesis, “two wrongs don’t make a right”, is a proverb used to rebuke or renounce wrongful conduct as a response to another’s transgression.

Two wrongs make a right

This is an informal fallacy that occurs when assuming that, if one wrong is committed, then another wrong will cancel it out.

Speaker A: You shouldn’t embezzle from your employer. It’s against the law.
Speaker B: My employer cheats on their taxes. That’s against the law, too!
The unstated premise is that breaking the law (or the wrong) is justified, as long as the other party also does so. It is often used as a red herring, or an attempt to change or distract from the issue. For example:

Speaker A: President Williams lied in his testimony to Congress. He should not do that.
Speaker B: But you are ignoring the fact that President Roberts lied in his Congressional testimony!
Even if President Roberts lied in his Congressional testimony, this does not establish a precedent that makes it acceptable for President Williams to do so as well. (At best, it means Williams is no worse than Roberts.) By invoking the fallacy, the contested issue of “lying” is ignored.

The tu quoque fallacy is a specific type of “two wrongs make a right”. Accusing another of not practicing what they preach, while appropriate in some situations, does not in itself invalidate an action or statement that is perceived as contradictory.

Two wrongs do not make one right

Two wrongs don’t make a right is a proverb that contradicts this fallacy – a wrongful action is not a morally appropriate way to correct or cancel a previous wrongful action.

Criticism

Common use of the term, in the realm of business ethics, has been criticized by scholar Gregory S. Kavka writing in the Journal of Business Ethics. Kavka refers back to philosophical concepts of retribution by Thomas Hobbes. He states that if a supposed moral standard or common social rule is violated enough in society, then an individual or group within society can break that standard or rule as well, since this keeps them from being unfairly disadvantaged. Likewise, in specific circumstances violations of social rules can be defensible if done as direct responses to other violations. For example, Kavka states that it is wrong to deprive someone of their property, but it is right to take property back from a criminal who took another’s property in the first place. He also cautions that one should not use this ambiguity as an excuse to violate ethical rules recklessly.

Conservative journalist Victor Lasky wrote in his book It Didn’t Start With Watergate that while two wrongs don’t make a right, a set of immoral acts left unprosecuted creates a de facto precedent: people who commit the same wrongs in the future can rationally expect to get away with them as well. Lasky draws an analogy between John F. Kennedy’s wiretapping of Martin Luther King, Jr. (which led to nothing) and Richard Nixon’s actions in Watergate (which Nixon thought would also lead to nothing).It Didn’t Start With Watergate. Victor Lasky.

History

The phrase “two wrongs infer one right” appears in a poem dated to 1734, published in The London Magazine.

90
Q

Broken window fallacy

A

An argument which disregards lost opportunity costs associated with destroying property of others, or other ways of externalizing cost onto others.

The parable of the broken window was introduced by Frédéric Bastiat in his 1850 essay (That Which Is Seen and That Which Is Unseen) to illustrate why destruction, and the money spent to recover from destruction, is actually not a net-benefit to society. The parable, also known as the broken window fallacy or glazier’s fallacy, demonstrates how opportunity costs, as well as the law of unintended consequences, affect economic activity in ways that are “unseen” or ignored.

The parable

Bastiat’s original parable of the broken window from Ce qu’on voit et ce qu’on ne voit pas (1850):

Differing interpretations

The implications of the fallacy can also be extended to the glazier. The onlookers assume that this needed window will have a positive effect for the glazier, but in order for this to be true, the glazier must currently have time and supplies available for use. If the glazier has other jobs which demand his time and supplies, this additional job now represents a negative constraint for the glazier in that he may not be able to complete his other jobs on time.

In this case, the broken window (and the boy who broke it) did not provide any net benefit to the town, but rather made the town poorer in future benefits by (at least) the value of one window.
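The accounting behind "poorer by the value of one window" can be made explicit. A minimal sketch, with hypothetical franc amounts (not Bastiat's own figures):

```python
# A minimal accounting sketch of the parable. The franc amounts are
# hypothetical, not Bastiat's own figures.
cash = 12          # the shopkeeper's money
WINDOW_COST = 6    # price the glazier charges
SHOES_COST = 6     # price of a pair of shoes (the forgone purchase)

# Scenario A: the window is broken. Six francs go to the glazier,
# and the shopkeeper ends up with only the restored window.
cash_broken = cash - WINDOW_COST
goods_broken = ["window"]

# Scenario B: the window stays intact. The same six francs buy
# shoes, so the shopkeeper ends up with the window AND the shoes.
cash_intact = cash - SHOES_COST
goods_intact = ["window", "shoes"]

# "What is seen": cash circulates equally in both scenarios.
assert cash_broken == cash_intact

# "What is not seen": in the broken-window scenario the town holds
# one fewer good -- the shoes that were never made or bought.
print(len(goods_intact) - len(goods_broken))  # → 1
```

The same spending flows either way; only the stock of goods differs, which is why the loss is "unseen".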

Bastiat’s argument

Austrian theorists, and Bastiat himself, apply the parable of the broken window in a different way. Suppose it was discovered that the little boy was actually hired by the glazier, and paid a franc for every window he broke. Suddenly the same act would be regarded as theft: the glazier was breaking windows in order to force people to hire his services. Yet the facts observed by the onlookers remain true: the glazier benefits from the business at the expense of the baker, the tailor, and so on.

Bastiat argues that society endorses activities which are morally equivalent to the glazier hiring a boy to break windows for him:

Bastiat is not addressing production – he is addressing the stock of wealth. In other words, Bastiat does not merely look at the immediate but at the longer effects of breaking the window. Moreover, Bastiat does not only take into account the consequences of breaking the window for one group but for all groups, for society as a whole.Chapter two of ‘Economics in one lesson’ by Henry Hazlitt

Theorists of the Austrian School frequently cite this fallacy and claim that it is a common element of popular thinking (e.g., the “Cash for Clunkers” program, “Cash for Clunkers Is Just a Broken Windshield”, Caroline Baum, Bloomberg.com, August 4, 2009 etc.). The 20th century American economist Henry Hazlitt devoted a chapter to it in his book Economics in One Lesson.Henry Hazlitt. “Preface”, Economics in One Lesson, online version, referenced 2009-05-15.

The opportunity cost of war

The argument is sometimes made that war benefits the economy, since historically it has often focused the use of resources, triggered advances in technology and other areas, and reduced unemployment. The increased production and employment associated with war often lead some to claim that “war is good for the economy.” However, this belief is often given as an example of the broken window fallacy. The money spent on the war effort, for example, is money that cannot be spent on food, clothing, health care, consumer electronics, or other areas. The stimulus felt in one sector of the economy comes at a direct – but hidden – cost to other sectors.

Bastiat himself argued against the claim that hiring men to be soldiers was inherently beneficial to the economy in the second chapter of That Which is Seen, and That Which is Not Seen, “The Disbanding of Troops”:

In addition, war destroys property and lives. The economic stimulus to one nation’s defense sector is offset not only by immediate opportunity costs, but also by the costs of the damage and devastation of war to the country it attacks. This forms the basis of a second application of the broken window fallacy: rebuilding what war destroys stimulates the economy, particularly the construction sector. However, immense resources are spent merely to restore pre-war conditions. After a war, there is only a rebuilt city. Without a war, there are opportunities for the same resources to be applied to more fruitful purposes. Instead of rebuilding a destroyed city, the resources could have been used to improve and enlarge the city or build a second one.

One example of the costs of war sometimes given is the many projects postponed or not started until after the end of World War II in the United States. The pent-up demand for roads, bridges, houses, cars, and even radios led to massive inflation in the late 1940s. The war delayed the commercial introduction of television, among other things, and the resources sent overseas to rebuild the rest of the world after the war were not available to the American people for their direct benefit; neither did the war enrich any of these other nations.

According to Hazlitt:

The cost of special interests and government

Bastiat, Hazlitt, and others equated the glazier with special interests, and the little boy with government. Special interests request money from the government (in the form of subsidies, grants, etc.), and the government then forces the taxpayer to provide the funds. The recipients certainly do benefit, so the government action is often regarded by the people as benefiting everyone. But the people are failing to consider the hidden costs: the taxpayers are now poorer by exactly that much money. The food, clothing or other items they might have purchased with that money will now not be purchased – but since there is no way to count “non-purchases,” this is a hidden cost, sometimes called opportunity cost. Bastiat referred to this in his essay as “what is not seen”. Because the costs are hidden, there is an illusion that the benefits cost nothing. Hazlitt summarized the principle by saying, “Everything we get, outside the free gifts of nature, must in some way be paid for.” Robert A. Heinlein popularized a summarization/acronym of the concept called “TANSTAAFL” (There Ain’t No Such Thing As A Free Lunch).

Common examples of special interest groups practicing the broken window fallacy might be:

Arguments for public works projects as a way to reduce unemployment. The hidden cost here is of course the taxpayer’s money, and the special interests are the jobs created by the public works. This is analogous to the fallacy only if the taxation in fact induces opportunity costs and a net social loss; public works are not inherently destructive and need not involve demolition for the sake of superfluous rebuilding (i.e., intentionally breaking windows). Public works are often genuinely new and additive to shared societal value, independent of any benefit of maintaining a pool of skilled glaziers or other workers.
Cash for Clunkers which was a program in which the U.S. government paid consumers to trade in antiquated low gas mileage cars for newer, more efficient higher gas mileage cars, and the trade-ins were subsequently destroyed. Cash for Clunkers Is Just a Broken Windshield, Caroline Baum, Bloomberg News, August 4, 2009 A similar program allows for trade-ins of old energy inefficient appliances for more energy efficient appliances. Such programs may be viewed as imposing a net cost only if failing to account for the benefits of expending fewer units of fuel per mile, and the potential added value of decreased pollution. However, the pollution from, and the costs of, the yearly manufacturing of a new fleet of vehicles may outweigh any perceived gains from the improvements in gas mileage.
Depreciation as a means to increase exports by making goods less expensive to foreigners, and to decrease the demand for imports – which are made more expensive – in order to stimulate the domestic economy. What is not seen is the fact that domestic workers must do more work for less pay, and can purchase less with the proceeds. Essentially, the entire nation takes a pay cut.
Inflation as a means to stimulate economic activity by making savings worth less and goods worth more, creating an imperative to purchase sooner rather than waiting. What is not seen is the increased risk of no cushion of savings, the stimulation of large amounts of consumer debt, and the loss of purchasing power due to salaries lagging inflation.
Planned obsolescence, an item designed to break or become undesirable after some period of time.
Advertisements promoting the replacement of old items with new items; for example, replacing last year’s fashions with this year’s.
With all of these examples, care must be taken that every factor is taken into account, just as in the parable of the broken window: does one know all the costs and benefits? Hiring people to do nothing, or to break things and then repair them, is probably a bad idea; this is the case in the military spending and government employment examples. But if the hired people or the spent money result in useful work (strictly, it is not the work itself that is useful but the product or result the work produces), things are less clear: whether the spending was a good idea depends on the amount of useful work delivered relative to the money spent. This is the case in the cash for clunkers, public works, and renewable energy examples. Even then, the analysis only matters when the accomplished result has a future benefit beyond the simple fact of delivering work. As Bastiat shows, the mere accomplishment of useful work can never make such projects a net positive; the glazier also performed useful work. But if a project improves the efficiency of future work, there can be a net benefit. For example, a public works program to build roads creates no wealth simply by virtue of building the road; in the future, however, that road may increase trade by improving the efficiency of moving goods and reducing costs. In such a case, the road may be a net benefit – an investment, rather than destruction followed by redistribution. The point of the broken window parable is that one cannot ignore the hidden costs of taking wealth to build the road when totalling up any such “net benefit.”

Criticisms

The interpretations assume that the “window” has positive value and that replacing it is not a good investment. In the broader scope, offsetting factors can reduce or even negate the cost of destruction. For example, new technologies developed during a war and forced modernization during postwar reconstruction can cause old technologies to become valueless. Also, if two shopkeepers keep their “window” beyond the point where it would maximize their profit, the shopkeeper whose window is broken is forced to make a good investment – increasing his comparative profit, or rather, reducing his comparative loss. Regardless, while wanton destruction of real value may not be a net loss, it is of course still a misfortune, not a blessing. Others argue that the broken window may not lead to reduction in spending by the victim, but rather, a reduction in excessive savings. “The logic of limited resources only applies when the economy is using most of those limited resources. If there are slack resources, we need merely mobilize some of the slack resources.” The reductio ad absurdum of breaking 100 windows, then, only applies once underutilised resources have run out, and the tailor is forced to divert resources from more productive means.

Another problem with this being a “fallacy” is that the positive and negative results depend on the financial condition and disposition of the victim. At the low end, the victim simply can’t afford to replace the window, or refuses to do so, resulting in almost no economic effect. In the medium range, it is a fallacy and the standard argument above applies. However, at the top end it is an economic benefit, especially if too much capital is trapped in high end savings and investment, because the funds used to replace the window will not change the victim’s spending habits, but will simply be a minor reduction of his long term savings or investments, or that of his insurer.

It has been argued that the ‘parable’, while intuitive, does not correspond to actual evidence. For instance, researchers have found that natural disasters can often lead to improved growth in both the short and long term.

Defenses

Bastiat and Austrian theorists hold to a subjective theory of value, which holds that the value of a product is determined by its consumer or owner. Therefore, if the window had a negative value, it is because the owner already wished it broken. If the proverbial window were old, and the newer window had a greater value (determined by the shopkeeper’s own valuation), then the net increase in value in the economy is the difference between the values of the old and new windows. Bastiat and Austrian economists also hold that depreciation and other losses in the value of goods reduce the net value in the economy by the amount of the reduction.

The very concept of excess savings is meaningless without the presumption of an authority that can define the proper amount of savings. Money saved represents demand for money, which is a commodity like windows or suits. Spending saved money lowers the demand for money relative to suits or glass and therefore contributes to general price inflation (or loss of purchasing power). Such an authority would suggest that by controlling prices (forcing the exchange of savings for consumer goods) the economy could be made better off. This concept, when enforced by governments, is known as price controls, and it results in shortages when prices are artificially reduced below market value. A shortage of savings results in a reduction of capital investment, as more resources are diverted from future needs to present needs. The shop owner was saving so that he could grow his business, but an outside authority has determined that his savings were ‘excess’ and that by breaking his window the economy will be better off. Without free exchange there would be no prices; without prices there can be no economic calculation; and without economic calculation there is no means to determine whether a particular action will be ‘profitable’. Therefore any authority that claims there might be an ‘excess of savings’ presumes the ability to make economic calculations without prices established by free exchange. Ludwig von Mises argued that this was impossible.Economic Calculation In The Socialist Commonwealth by Ludwig von Mises

If there were more profit to be made by destroying buildings or windows, or by leveling an entire community as a natural disaster does, than by pursuing other courses of action, then participants in the market would proactively and voluntarily destroy property. Far more profit can clearly be earned by recycling, selling, or repurposing an old window pane than by its random destruction by vandal or nature. Therefore, arguments that destruction ‘might’ be good for the economy have to presume that actors in the economy were missing an opportunity to profit in excess of the value of the goods destroyed. The probability of random destruction being good for the economy is equal to the probability of random destruction being bad for the economy, as both are equal to the probability that free exchange has not achieved the best allocation of resources.

Note, however, that non-owners (individually and collectively) also place a value, positive or negative, on the presence of such a good. Presupposing rational behavior and a free market, one would expect the community and the good’s owner to agree on some transaction that improves the overall welfare of both parties. Problems might arise, however, depending on the nature of the good, in the sense that a free market economy might not be able to produce or adjust the level of a good to its Pareto optimal quantity. A prominent example is public goods, which are prone to the free rider problem and hence to over- or under-production.

91
Q

Definist fallacy

A

Involves the confusion between two notions by defining one in terms of the other.

The definist fallacy can refer to three logical fallacies related to how terms are defined in an argument. The first, coined by William Frankena in 1939, involves the definition of one property in terms of another. The second refers to insistence on the use of a persuasive definition in an argument. Finally, it can also refer to the Socratic fallacy, in which terms are required to be defined before use. This article focuses on the first of these fallacies.

The philosopher William Frankena first used the term definist fallacy in a paper published in the British analytic philosophy journal Mind in 1939. In this article he generalized and critiqued G. E. Moore’s naturalistic fallacy, which argued that good cannot be defined by natural properties, as a broader confusion caused by attempting to define a term using non-synonymous properties. Frankena argued that the naturalistic fallacy is a complete misnomer because it is neither limited to naturalistic properties nor necessarily a fallacy. On the first word (naturalistic), he noted that Moore rejected defining good in non-natural as well as natural terms.

On the second word (fallacy), Frankena rejected the idea that it represented an error in reasoning – a fallacy as usually understood – rather than an error in semantics. In Moore’s Open Question Argument, because questions such as “Is that which is pleasurable good?” have no definitive answer, pleasurable is not synonymous with good. Frankena rejected this argument: that the question is always open merely reflects the fact that it makes sense to ask whether two things that may be identical in fact are identical. Thus, even if good is identical to pleasurable, it makes sense to ask whether it is; the answer may be “yes”, but the question was legitimate. This seems to contradict Moore’s view, which accepts that alternative answers can sometimes be dismissed without argument; Frankena objects, however, that this would commit the fallacy of begging the question.

92
Q

Naturalistic fallacy 2

A

Attempts to prove a claim about ethics by appealing to a definition of the term “good” in terms of one or more claims about natural properties or God’s will.

In philosophical ethics, the naturalistic fallacy was introduced by British philosopher G. E. Moore in his 1903 book Principia Ethica. Moore argues it would be fallacious to explain that which is good reductively, in terms of natural properties such as “pleasant” or “desirable”.

The naturalistic fallacy is close to but not identical with the fallacious appeal to nature, the claim that what is natural is inherently good or right, and that what is unnatural is inherently bad or wrong. The fallacious appeal to nature would be the reverse of a moralistic fallacy: that what is good or right is thus natural.

Furthermore, Moore’s naturalistic fallacy is very close to (and even confused with) the is–ought problem, which comes from Hume’s Treatise. However, unlike Hume’s view of the is–ought problem, Moore (and other proponents of ethical non-naturalism) did not consider the naturalistic fallacy to be at odds with moral realism.

Different common uses

The is–ought problem

The term “naturalistic fallacy” is sometimes used to describe the deduction of an “ought” from an “is” (the is–ought problem).

In using his categorical imperative, Kant deduced that experience was necessary for its application. But experience on its own, or the imperative on its own, could not possibly identify an act as moral or immoral. We can have no certain knowledge of morality from them, being incapable of deducing how things ought to be from the fact that they happen to be arranged in a particular manner in experience.

Bentham, in discussing the relations of law and morality, found that when people discuss problems and issues they talk about how they wish it would be as opposed to how it actually is. This can be seen in discussions of natural law and positive law. Bentham criticized natural law theory because in his view it was a naturalistic fallacy, claiming that it described how things ought to be instead of how things are.

Moore’s discussion

According to G. E. Moore’s Principia Ethica, when philosophers try to define “good” reductively in terms of natural properties like “pleasant” or “desirable”, they are committing the naturalistic fallacy.

In defense of ethical non-naturalism, Moore’s argument is concerned with the semantic and metaphysical underpinnings of ethics. In general, opponents of ethical naturalism reject ethical conclusions drawn from natural facts.

Moore argues that good, in the sense of intrinsic value, is simply ineffable: it cannot be defined because it is not a natural property, being “one of those innumerable objects of thought which are themselves incapable of definition, because they are the ultimate terms by reference to which whatever is capable of definition must be defined”.Moore, G.E. Principia Ethica § 10 ¶ 1 On the other hand, ethical naturalists eschew such principles in favor of a more empirically accessible analysis of what it means to be good: for example, in terms of pleasure in the context of hedonism.

In §7, Moore argues that a property is either a complex of simple properties, or else it is irreducibly simple. Complex properties can be defined in terms of their constituent parts, but a simple property has no parts. In addition to “good” and “pleasure”, Moore suggests that colour qualia are undefinable: if one wants to understand yellow, one must see examples of it. It will do no good to read the dictionary and learn that “yellow” names the colour of egg yolks and ripe lemons, or that “yellow” names the primary colour between green and orange on the spectrum, or that the perception of yellow is stimulated by electromagnetic radiation with a wavelength of between 570 and 590 nanometers, because yellow is all that and more, by the open question argument.

Bernard Williams called Moore’s use of the term ‘naturalistic fallacy’ a “spectacular misnomer”, the question being metaphysical, as opposed to rational.

Appeal to nature

Some people use the phrase “naturalistic fallacy” or “appeal to nature” to characterize inferences of the form “This behaviour is natural; therefore, this behaviour is morally acceptable” or “This property is unnatural; therefore, this property is undesirable.” Such inferences are common in discussions of homosexuality, environmentalism and veganism.

Criticism

Some philosophers reject the naturalistic fallacy and/or suggest solutions for the proposed is-ought problem.

Sam Harris argues that it is possible to derive “ought” from “is”, and even that it has already been done to some extent. He sees morality as a budding science. This view is critical of Moore’s “simple indefinable terms” (which amount to qualia), arguing instead that such terms actually can be broken down into constituents.

Ralph McInerny suggests that “ought” is already bound up in “is”, in so far as the very nature of things has ends/goals within it. For example, a clock is a device used to keep time. When one understands the function of a clock, a standard of evaluation is implicit in the very description of the clock, i.e., because it IS a clock, it OUGHT to keep the time. Thus, if one cannot pick a good clock from a bad clock, then one does not really know what a clock is. In like manner, if one cannot determine good human action from bad, then one does not really know what the human person is.

93
Q

Slippery slope

A

Asserting that a relatively small first step inevitably leads to a chain of related events culminating in some significant impact or event that should not happen, and thus that the first step should not be taken. While this fallacy is a popular one, it is in essence an appeal to probability. E.g., if person X does Y, then Z will occur, leading to Q, leading to W, leading to E.

In logic and critical thinking, a slippery slope is an informal fallacy. A slippery slope argument states that a relatively small first step leads to a chain of related events culminating in some significant effect, much like an object given a small push over the edge of a slope sliding all the way to the bottom. The strength of such an argument depends on the warrant, i.e. whether or not one can demonstrate a process which leads to the significant effect. The fallacious sense of “slippery slope” is often used synonymously with continuum fallacy, in that it ignores the possibility of middle ground and assumes a discrete transition from category A to category B. Modern usage avoids the fallacy by acknowledging the possibility of this middle ground.

Description

The argument takes on one of various semantic forms:

In the classical form, the arguer suggests that making a move in a particular direction starts something on a path down a “slippery slope”. Having started down the metaphorical slope, it will continue to slide in the same direction (the arguer usually sees the direction as a negative direction, hence the “sliding downwards” metaphor).
Modern usage includes a logically valid form, in which a minor action causes a significant impact through a long chain of logical relationships. Note that establishing this chain of logical implication (or quantifying the relevant probabilities) makes this form logically valid. The slippery slope argument remains a fallacy if such a chain is not established.
Consequentialism and Unintended Consequences

The core of the slippery slope argument is that a specific rule or course of action is likely to result in unintended consequences, and that these unintended consequences are undesirable (and typically worse than either inaction or another course of remediation). This is a consequentialist criticism, concerned with the consequences or outcomes of a course of action. It does not impugn the character or intentions of those offering the “slippery slope” argument, the concerns underlying it, or the legitimacy of arguing against any specific rule or course of action.

Examples

Eugene Volokh’s Mechanisms of the Slippery Slope analyzes various types of such slippage. Volokh uses the example “gun registration may lead to gun confiscation” to describe six types of slippage:

Cost-lowering: Once all gun owners have registered their firearms, the government will know exactly from whom to confiscate firearms. Gun-control opponents argue against limits on the sale of “assault weapons” because the confiscation of sportsmen’s shotguns will soon follow. Meanwhile, government officials defend their inflexible enforcement of a regulation, even in circumstances that some see as unfair, because allowing an exception would open the floodgates.
Legal rule combination: Previously the government might need to search every house to confiscate guns, and such a search would violate the Fourth Amendment to the United States Constitution. Registration would eliminate that problem.
Attitude altering: People may begin to think of gun ownership as a privilege rather than a right, and thus regard gun confiscation less seriously.
Small change tolerance, colloquially referred to as the “boiling frog”: People may ignore gun registration because it constitutes just a small change, but when combined with other small changes, it could lead to the equivalent of confiscation.
Political power: The hassle of registration may reduce the number of gun owners, and thus the political power of the gun-ownership bloc.
Political momentum: Once the government has passed this gun law it becomes easier to pass other gun laws, including laws like confiscation.
Slippery slope can also be used as a retort to the establishment of arbitrary boundaries or limitations. For example, someone who is unfamiliar with the possible negative consequences of price ceilings might argue that rent prices must be kept to $1,000 or less a month to be affordable to tenants in an area of a city. A retort invoking the slippery slope could go in two different directions:

Once such price ceilings become accepted, they could be slowly lowered, eventually driving out the landlords and worsening the problem.
If a $1,000 monthly rent is affordable, why isn’t $1,025 or $1,050? By lumping the tenants into one abstract entity, the argument renders itself vulnerable to a slippery slope argument. A more careful argument in favor of price ceilings would statistically characterize the number of tenants who can afford housing at various levels based on income and choose a ceiling that achieves a specific goal, such as housing 80% of the working families in the area.
Sometimes a single action does indeed induce similar later action. For example, judiciary decisions may set legal precedents.

Fallacy

The heart of the slippery slope fallacy lies in abusing the intuitively appreciable transitivity of implication, claiming that A leads to B, B leads to C, C leads to D and so on, until one finally claims that A leads to Z. While this is formally valid when the premises are taken as a given, each of those contingencies needs to be factually established before the relevant conclusion can be drawn. Slippery slope fallacies occur when this is not done—an argument that supports the relevant premises is not fallacious and thus isn’t a slippery slope fallacy.

Often proponents of a “slippery slope” contention propose a long series of intermediate events as the mechanism of connection leading from A to B. The “camel’s nose” provides one example of this: once a camel has managed to place its nose within a tent, the rest of the camel will inevitably follow. In this sense the slippery slope resembles the genetic fallacy, but in reverse.

As an example of how an appealing slippery slope argument can be unsound, suppose that whenever a tree falls down, it has a 95% chance of knocking over another tree. We might conclude that soon, a great number of trees would fall; however, this is not the case. There is a 5% chance that no more trees will fall, a 4.75% chance that exactly one more tree will fall (and thus a 9.75% chance of one or fewer additional trees falling), and so on. There is a 92.3% chance that fewer than 50 additional trees will fall, and the expected total number of fallen trees is 20. In the absence of some momentum factor that makes later trees more likely to fall than earlier ones, the probability of an unbounded “domino effect” is zero.
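The figures above can be checked with a short calculation. This sketch assumes each fallen tree knocks over at most one more, so the number of additional trees follows a geometric distribution; the helper names are invented for illustration:

```python
# Falling-trees example: each fallen tree has probability p = 0.95 of
# knocking over exactly one more, so the number of additional trees X
# is geometric: P(X = k) = p**k * (1 - p).

p = 0.95

def prob_exactly(k, p=p):
    """Probability that exactly k additional trees fall."""
    return p**k * (1 - p)

def prob_at_most(k, p=p):
    """Probability that k or fewer additional trees fall: 1 - p**(k+1)."""
    return 1 - p**(k + 1)

print(round(prob_exactly(0), 4))   # 0.05   -> 5% chance no more trees fall
print(round(prob_exactly(1), 4))   # 0.0475 -> exactly one more tree falls
print(round(prob_at_most(1), 4))   # 0.0975 -> one or fewer additional trees
print(round(prob_at_most(49), 3))  # 0.923  -> fewer than 50 additional trees

# Expected total: the first tree plus E[X] = p / (1 - p) additional trees.
expected_total = 1 + p / (1 - p)
print(round(expected_total, 6))    # 20.0
```

The key point is that even with a 95% per-step chance, the chain almost surely terminates: the probability that it runs forever is the limit of p**n, which is zero.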

This form of argument often provides evaluative judgments on social change: once an exception is made to some rule, nothing will hold back further, more egregious exceptions to that rule.

Note that these arguments may indeed have validity, but they require some independent justification of the connection between their terms: otherwise the argument (as a logical tool) remains fallacious.

The “slippery slope” approach may also relate to the conjunction fallacy: with a long string of steps leading to an undesirable conclusion, the chance of all the steps actually occurring in sequence is less than the chance of any one of the individual steps occurring alone.
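This point admits a simple numeric illustration; the figures below (ten independent steps, each 90% likely) are invented for the sketch and are not from the text:

```python
import math

# Conjunction point: even when each step in a slippery slope chain is
# individually likely, the probability of the whole chain (assuming the
# steps are independent) is the product of the step probabilities, which
# is always less than any single step's probability.

step_probs = [0.9] * 10               # ten steps, each 90% likely alone
chain_prob = math.prod(step_probs)

print(round(chain_prob, 3))           # 0.349: the full chain is only ~35% likely
assert chain_prob < min(step_probs)   # never exceeds any single step's chance
```

Of course, real slippery slope chains need not be independent; the sketch only shows why a long conjunction of merely probable steps is much less probable than any of its parts.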

Supporting analogies

Several common analogies support slippery slope arguments. Among these are analogies to physical momentum, to frictional forces and to mathematical induction.

Momentum or frictional

In the momentum analogy, the occurrence of event A will initiate a process which will lead inevitably to occurrence of event B. The process may involve causal relationships between intermediate events, but in any case the slippery slope schema depends for its soundness on the validity of some analogue for the physical principle of momentum. This may take the form of a domino theory or contagion formulation. The domino theory principle may indeed explain why a chain of dominoes collapses, but an independent argument is necessary to explain why a similar principle would hold in other circumstances.

An analogy similar to the momentum analogy is based on friction. In physics, the static coefficient of friction is typically greater than the kinetic coefficient, meaning that it takes more force to make an object start sliding than to keep it sliding. Arguments that use this analogy assume that people’s habits or inhibitions act in the same way. If a particular rule A is considered inviolable, some force akin to static friction is regarded as maintaining the status quo, preventing movement in the direction of abrogating A. If, on the other hand, an exception is made to A, the countervailing resistive force is akin to the weaker kinetic frictional force. Validity of this analogy requires an argument showing that the initial changes actually make further change in the direction of abrogating A easier.

Induction

Another analogy resembles yet misinterprets mathematical induction. Consider the context of evaluating each one of a class of events A1, A2, A3,…, An (for example, is the occurrence of the event harmful or not?). We assume that for each k, the event Ak is similar to Ak+1, so that Ak has the same evaluation as Ak+1.

Therefore every An has the same evaluation as A1.

For example, the following arguments fit the slippery slope scheme with the inductive interpretation:

If we grant a building permit to build a religious structure in our community, then there will be no bound on the number of building permits we will have to grant for religious structures and the nature of this city will change. This argument instantiates the slippery slope scheme as follows: Ak is the situation in which k building permits are issued. One first argues that the situation of k permits is not significantly different from the one with k + 1 permits. Moreover, issuing permits to build 1000 religious structures in a city of 300,000 will clearly change the nature of the community.
In most real-world applications such as the one above, the naïve inductive analogy is flawed because each building permit will not be evaluated the same way (for example, the more religious structures in a community, the less likely a permit will be granted for another).
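The flaw in the inductive analogy can be sketched numerically. The harm scores, epsilon, and threshold below are invented for illustration; the point is that “similar” is not transitive, so pairwise similarity does not force the endpoints to share an evaluation:

```python
# Sketch of why the inductive analogy fails: give each event A_k an
# invented numeric "harm" score, with adjacent events differing by only
# a small epsilon. Every adjacent pair is "similar", yet small
# differences accumulate until the endpoints are evaluated oppositely.

epsilon = 0.01          # adjacent events are nearly indistinguishable
threshold = 0.5         # scores above this count as "harmful"
n = 100

scores = [k * epsilon for k in range(n)]   # A_1 ... A_n

# Every adjacent pair differs by at most epsilon (tiny float slack):
assert all(abs(scores[k + 1] - scores[k]) <= epsilon + 1e-9
           for k in range(n - 1))

# Yet the endpoints receive opposite evaluations:
print(scores[0] > threshold)    # False: A_1 is harmless
print(scores[-1] > threshold)   # True:  A_n is harmful
```

This mirrors the building-permit example: each additional permit changes the situation only slightly, but a thousand slight changes can cross whatever threshold the evaluation depends on.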