PHIL 250 Flashcards

1
Q

Necessary and Sufficient Conditions

A

If x then y: in this case, x is sufficient for y and y is necessary for x. Logically, y is guaranteed whenever x occurs, but if x does not occur, y could still occur without x. And x cannot occur without y also occurring, so if y is not present, then x must also not be present.
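A quick way to see this is to check every truth-value assignment. The following minimal Python sketch (not part of the course notes, just an illustration) confirms that whenever "if x then y" holds, x's being true guarantees y, and y's being false guarantees that x is false.

```python
# Brute-force check over all truth-value assignments:
# wherever "if x then y" holds, x is sufficient for y and y is necessary for x.
from itertools import product

for x, y in product([True, False], repeat=2):
    implies = (not x) or y          # truth-functional "if x then y"
    if implies and x:
        assert y                    # x is sufficient for y
    if implies and not y:
        assert not x                # y is necessary for x

print("No counterexamples across all truth-value assignments.")
```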

2
Q

Validity vs Soundness

A

Validity is about the truth-preserving structure of an argument: if the premises were true, the conclusion would have to be true. Soundness additionally requires that the premises actually are true (a sound argument is a valid argument with true premises). E.g., "All cats are reptiles; Tom is a cat; therefore Tom is a reptile" is valid but unsound.

3
Q

Ampliative vs Non-Ampliative

A

Ampliative arguments (also known as inductive/abductive) expand the base of knowledge, but at the cost of a guaranteed conclusion: they have a lesser degree of logical strength. Non-ampliative (deductive) arguments guarantee their conclusion but do not expand the base of knowledge, i.e., they make explicit what is already implicit in the premises.

4
Q

Commitments of Descartes

A
  • Substance - the world is composed of substances with unique properties and one attribute that makes the substance what it is
  • There are two kinds of substances (dualism): mind possesses the attribute of thought, and body possesses the attribute of extension.
  • Interactionism - Mind and body interact with each other
5
Q

Princess Elisabeth of Bohemia and Descartes' response

A

I beseech you tell me how the soul of man (since it is but a thinking substance) can determine the spirits of the body to produce voluntary actions. For it seems every determination of movement happens from an impulsion of the thing moved, according to the manner in which it is pushed by that which moves it, or else depends on the qualification and figure of the superficies of the latter. Contact is required for the first two conditions, and extension for the third. You entirely exclude extension from your notion of the soul, and contact seems to me incompatible with an immaterial thing.

Descartes' response:
Thoughts are discrete and singular. If the mind/body conduit were non-singular (e.g., a pair of brain hemispheres), then thoughts would not be singular. So, the mind/body conduit is not non-singular (i.e., it is singular). The pineal gland is the only singular component of the body/nervous system. (C) So, the pineal gland is the mind/body conduit.

Objections:
Empirical: the pineal gland performs no such function.
Conceptual: no matter what part of the brain or body Descartes picks, he provides no answer to the question of how an immaterial substance can causally interact with a material substance.

6
Q

Descartes' argument from qualitative distinctness

A

Bodies vs minds:
* Bodies are spatial; minds are non-spatial.
* Bodies have mathematically quantifiable properties; minds have non-quantifiable 'qualitative' properties (e.g., feel).
* Bodies are epistemically public (/objective); minds are epistemically private (/subjective).
* Bodies are purposeless/normatively 'inert'; minds are purposeful and normatively evaluable.

If x and y have different properties, then x and y are not identical (from Leibniz's law). My mind and my body differ in their properties (see the contrasts above). Therefore, my mind and my body are distinct.

7
Q

Descartes' argument from conceivability

A

I can conceive, even under ideal conditions (i.e., without any restrictions on time, intelligence, attention, background knowledge, etc.), of a situation where my mind exists just as it actually does, but my body does not exist. Whatever I can conceive of under ideal conditions is a possible way the world might have been (in principle, even if not in practice). If my mind could (in principle) have existed just as it actually does without my body, then my mind is not the same thing as my body. So, my mind is not the same thing as my body.

8
Q

Descartes' points on animal mindedness

A

Animal behavior can be explained entirely mechanistically. By the principle of parsimony (/Occam's Razor), we have rational grounds for adopting the Mechanistic Alternative and rejecting the Non-mechanistic Alternative: the two accounts are matched in their explanatory and predictive power, but the Non-mechanistic Alternative is less parsimonious.

9
Q

La Mettrie's response to Descartes' parsimony argument

A

La Mettrie noted that, by the same reasoning, we should be able to infer that humans lack minds. But that conclusion seems obviously false in our case: we know that we have minds. Since we have more confidence in the claim that we, ourselves, have minds than we do in any premise in Descartes' argument, we must reject the reasoning that led us to this obviously false conclusion.

10
Q

The language response + slippery slope

A

The language response:
If something has a mind, then it has the potential to acquire language. No nonhuman animal has the potential to acquire language. Therefore, no nonhuman animal has a mind.

The slippery slope: "They would have an immortal soul like us. This is unlikely, because there is no reason to believe it of some animals without believing it of all, and many of them such as oysters and sponges are too imperfect for this to be credible."

11
Q

The Turing Test (Maximal vs minimal)

A

* Minimum Turing Test: fool a human being, at least once, into thinking you're human, after a brief and relaxed text conversation.
* Maximum Turing Test: reliably (around 70% of the time) trick multiple judges into thinking you're human, no matter what searching or tricky questions they ask.

12
Q

The Turing Machine and the toy example

A

Turing machines are abstract computational devices, used to formally analyze what can be mechanically computed. The "state machine" includes a set of instructions that determine how, given any input, the machine changes its internal states and outputs. This instruction program is called a machine table.

Toy example: suppose a pop machine accepts only nickels (N) and dimes (D) as input, and the pop costs 15 cents. The internal states consist only of (0), (5), and (10). The machine outputs a pop when it is in state (5) and receives (D) as input, or when it is in state (10) and receives either (D) or (N) as input; otherwise, the machine waits.
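The machine table for this toy example can be written out directly as a lookup from (current state, input coin) to (next state, output). Below is a minimal Python sketch of that table; it is illustrative only, and the choice to reset to state (0) after dispensing a pop (the card does not say whether change is returned for a dime received in state (10)) is an assumption.

```python
# Machine table for the toy pop machine: (state, coin) -> (next state, output).
# Assumption: after dispensing a pop the machine returns to state 0; change
# handling for a dime received in state 10 is not specified in the card.
MACHINE_TABLE = {
    (0, "N"): (5, "wait"),
    (0, "D"): (10, "wait"),
    (5, "N"): (10, "wait"),
    (5, "D"): (0, "pop"),
    (10, "N"): (0, "pop"),
    (10, "D"): (0, "pop"),
}

def run(coins):
    """Feed a sequence of coins to the machine and collect its outputs."""
    state, outputs = 0, []
    for coin in coins:
        state, out = MACHINE_TABLE[(state, coin)]
        outputs.append(out)
    return outputs

print(run(["N", "N", "N"]))  # ['wait', 'wait', 'pop']
print(run(["N", "D"]))       # ['wait', 'pop']
```

The point of the toy example is that the machine's behaviour is exhausted by this table of state-transition and input-output relations, which is the picture computational functionalism (card 21) generalizes to mental states.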

13
Q

Gilbert Ryle’s commitments

A

Consider the property of being fragile. This (perfectly real) property characterizes how an object is disposed to behave under various nonactual conditions (e.g., it will tend to shatter if dropped onto a hard surface). Likewise, to say that S believes it's raining is to say that S is behaviourally disposed to (among other things) bring an umbrella if they're going out, to reschedule their plans to go sit on the beach with friends, to say "it's raining!" when asked what it's like out, etc. Logical behaviourists claim that we can analytically define each mental state concept exhaustively in terms of actual and possible behaviours.

14
Q

Dogma of the ghost in the machine

A

Ryle argues that this entire project is premised on a fundamental error, namely the assumption that 'mind' refers to a certain kind of entity, when actually it belongs to a different logical category. Ryle calls this a category mistake.

15
Q

Putnam's response to behaviourism

A

Super Actors are a counterexample to the sufficiency of a given behavioural
disposition for being in pain.
Super Spartans are a counterexample to the necessity of a given behavioural
disposition for being in pain.

16
Q

Chisholm's response to behaviourism

A

An early objection to behaviourism concerned an apparent vicious circularity in its account of dispositional mental states (Chisholm 1957). Take the mental state of believing that it's raining. Earlier, we suggested that we could analyze this as the disposition to (among other things) take an umbrella with one when leaving the house, to cancel one's beach plans, etc. But someone who believes that it's raining outside is disposed to behave in these ways only if they also want to stay dry and believe various other things to be true: e.g., that taking the umbrella with them provides a way to stay dry. More generally, which behavioural dispositions a given mental state is associated with crucially depends on what other mental states the subject possesses. If so, then it seems we cannot identify how a type of mental state will manifest behaviourally without presupposing countless other mental states. We will have analyzed mental properties in terms of … mental properties!

17
Q

Commitments of Smart

A

Identity theory (brain state theory): For any mental state type, M, there is a brain state type, B, such that M = B. Identity theory asserts a relation of identity between types. In the case of pain, it asserts that the mental type, PAIN, just is a certain type of physical state: say, C-fiber activation.

18
Q

Smart's argument from the unification of science

A

* Behavioral observation plus knowledge of physiology: at least to date, any creature to whom we attribute mental life has a complex neural structure: a brain.
* Scientifically observed mind-brain correlations: we are learning more and more every day about the nervous system and discovering more and more systematic correlations between certain types of mental state and certain types of brain state.

The water analogy: Observation: the observational phenomenon we call 'water' perfectly correlates with a certain chemical compound, H2O. Inference: water and H2O aren't merely correlated; rather, water just is the chemical compound, H2O.

As Smart sees it, we must choose between two rival hypotheses: either the observed correlations between types of brain state and types of mental state reflect a causal relationship, or the correlated types of brain state and types of mental state are not merely correlated types but the same type. On the latter hypothesis, what initially appeared to be two (perfectly correlated) phenomena turn out to be one and the same phenomenon.

19
Q

Smart’s argument from parsimony

A

Smart says he "wishes to resist" the dualist hypothesis because of "Occam's razor", i.e., because the physicalist hypothesis offers the "simpler", more parsimonious explanation, and (other things being equal) we should prefer theories that do that.

20
Q

Multiple realisability

A

* Conceptual worry: agents with the same types of mental states but differing brains are easily conceived.
* Empirical worry: it seems "overwhelmingly likely" that we will discover creatures with the same mental state types as us but distinct neural structures. So identity theory looks doomed as a science of the mind.

21
Q

Computational functionalism

A

The mind is the 'software' running on the 'hardware' of the brain. Mental states are nothing more than certain causal-functional roles within a system, i.e., they are defined by the input-output relations specified by a machine table.

22
Q

Challenges Comp Func avoids

A

Avoids human chauvinism.
* Challenge: behavioural equivalence doesn't entail mental equivalence (super actor/super spartan thought experiments). Functionalists: we agree! Behavioural equivalence ≠ functional equivalence. E.g., the super actor's internal state is functionally distinct from that of an ordinary person experiencing pain.
* Challenge: mental states explain behaviour holistically, such that no mental state in isolation has an associated behavioural disposition (Chisholm). Functionalists: since functional-causal roles are inter-defined, we can allow that mental states explain behaviour holistically.

23
Q

Extension of Comp Func

A

Don’t chauvinistically privilege the physical states inside
an agent’s skull over physical states outside the agent’s
skull if the states are functionally equivalent.
working through one’s thoughts by
getting one’s thoughts down on paper
* ‘thinking out loud’ with a friend
* gesturing while speaking
* physically re-arranging one’s scrabble
pieces (rather than re-arranging them
using one’s imagination) to more
easily see what words one can form

24
Q

Otto and Inga

A

Otto and Inga are both going to the MoMA: Inga remembers the address in the normal way (internally); Otto, an Alzheimer's patient, has the address written down in his notebook (which he automatically consults in the course of going about his daily activities).

25
Weak AI and Strong AI
'Weak' AI includes:
* the development of useful technologies (e.g., self-driving cars)
* methods of simulating specific aspects or dimensions of human intelligence (analogous to a computer simulation of a weather pattern)
'Strong' AI (a.k.a. 'artificial general intelligence'): designing and constructing systems that instantiate intelligence. (Not a mere simulation of intelligence, but its duplication.)
26
The Chinese room Argument
1. Computer programs are purely formal or 'syntactic': their behaviour is sensitive only to the 'shapes' of the symbols they process.
2. Genuine understanding is sensitive to the meaning ('semantic content') of symbols.
3. Form (or syntax) can never constitute, or be sufficient for, meaning (or semantics).
4. Therefore, running a computer program can never be sufficient for understanding.
27
The Systems reply
While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part.
28
The Robot Reply
Suppose we wrote a different kind of program from Schank's program. Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking - anything you like. The robot would, for example, have a television camera attached to it that enabled it to 'see,’ it would have arms and legs that enabled it to 'act,' and all of this would be controlled by its computer 'brain.' Such a robot would, unlike Schank's computer, have genuine understanding and other mental states.
29
The problem of intentionality
Brentano is widely interpreted as claiming that intentionality is the “mark of the mental”: that all and only mental states are ‘intentionally directed’ toward – are about – objects, i.e., represent (/present) an object to the thinker. In the case of any natural, physical relation (e.g., the relation of being taller than, of being rained on, of being subject to a certain gravitational influence), each of the relata must at very least exist. By contrast, it seems as though you can represent (e.g., think about) objects and events that do not exist (e.g., that are wholly fictional), and that you have never and will never come into causal contact with.
30
Original vs derivative intentionality
We might rephrase Brentano’s thesis as the claim that all and only mental states bear original or nonderivative intentionality. Public words, images, physical gestures, etc. are only derivatively intentional: they get their ‘aboutness’ from the aboutness of our thoughts.
31
naturalizing intentionality
Purely syntactic properties of a symbol will never by themselves account for semantic properties. But by examining how the symbol-manipulating system is embedded within a larger environment, we can arrive at a satisfying and thoroughly mechanistic theory of semantic content.
32
Representation as resemblance: problems with sufficiency
X represents Y if and only if X resembles Y. Sufficiency: strictly speaking, everything resembles everything else in some respect. But, presumably, it's false that everything represents everything else. Also, resemblance is symmetric, but representation isn't: if a painting of Napoleon resembles Napoleon, then (equally) Napoleon resembles the painting. But it is the painting of Napoleon that represents Napoleon, not Napoleon who represents the painting!
33
Representation as resemblance: problems with the necessity claim
Necessity: images represent their subjects without resembling them very much: think of a stick drawing, a cartoon caricature, or a Cubist painting. There may be some resemblance in these cases, but not very much.
34
Tracking/teleosemantic theories
A popular strategy for contemporary representational theorists has been to try to explain the semantic content of mental states precisely in terms of the causal relations that a system's representational states bear to the worldly entities that they represent. Such causal connections form the naturalistic materials for original (underived) intentionality.
35
reliable indication
Applied to mental states, the idea would be that a mental state represents Y if and only if there is a reliable causal correlation between this type of mental state and Y.
36
The misrepresentation problem
The misrepresentation problem arises from the claim that reliable indication is necessary for representation. If reliable indication is necessary for representation, then misrepresentation (e.g., representing a sheep when a sheep is not present) is logically ruled out. But misrepresentation clearly occurs, so reliable indication cannot be necessary for representation.
37
The disjunction problem
The disjunction problem arises from the claim that reliable indication is sufficient for representation. If reliable indication is sufficient for representation, then whatever is reliably indicated by a type of representation, including a goat on a dark, foggy night, is thereby part of the content of the representation. Hence, intuitively, too many situations are logically ruled in.
38
Dretske's proper function
The selective process could be explicit design (in which case it will be an artefact), or natural (e.g., the product of evolution through natural selection). In either case, we appeal to this selective history in order to explain why those items exist, and in order to distinguish the item’s proper function from its other properties.
39
The functional indeterminacy problem
A frog is visually sensitive to the presence of flies: when it sees one, it snaps its tongue out and grabs it. On an indicator teleosemantic account, the frog possesses a visual representation whose proper function is to reliably detect ...
* a fly?
* something small, dark, and moving?
* a certain pattern of retinal stimulation?
* food?
* something untranslatable into human language?
40
Swampman
A being molecule-for-molecule identical to you, but who came into existence by complete accident (and so has no evolutionary history)
41
The subjective character of experience
To be 'subjective' in Nagel's sense is to be understandable only by those who share a certain point of view, or one relevantly similar to it. To understand a human being's experience, you need to be human, or at least human-like. (Contrast this with physics: it's neither necessary nor helpful for a physicist to make themselves more like a neutron star or an electromagnet in order to understand one.)
42
Bats?
Nagel: We can learn all the scientific and objective facts about bat brains and behaviour but never know what it is like to actually be a bat. There is a subjective character to the perspective of the bat which could not be understood objectively.
43
The objective character of scientific explanation
Scientific explanation is couched in objective terms, in an attempt to remove subjectivity and explain phenomena in a way that does not depend on the being carrying out the science. This seems to show that a scientific explanation of consciousness would mean explaining something that is inherently subjective in objective terms, and that seems problematic, if not impossible.
44
Hard and easy problems of consciousness
The 'easy problems' of consciousness: how to explain the complex functions performed by conscious states. The 'hard problem' of consciousness: how to explain why these functions are associated with a particular subjective character.
45
the explanatory argument
Premise 1: Physical accounts of phenomena explain at most structure and function.
Premise 2: Explaining structure and function does not suffice to explain consciousness.
Conclusion 1: No physical account can explain consciousness.
Premise 3: What cannot be physically explained is nonphysical.
Conclusion 2: Consciousness is nonphysical.
46
The zombie argument
Premise 1: It is conceivable that there be zombies (creatures physically identical to you, but for whom there is nothing it is like to be).
Premise 2: If it is conceivable that there be zombies, it is metaphysically possible that there be zombies.
Premise 3: If it is metaphysically possible that there be zombies, then consciousness is nonphysical.
Conclusion: Consciousness is nonphysical.
47
The knowledge argument
Premise 1: Mary knows all the physical facts.
Premise 2: Mary does not know all the facts.
Conclusion 1: The physical facts do not exhaust all the facts.
48
Type A, B, and C physicalism
Type A: Deny any epistemic gap between the phenomenal and the physical; consciousness can be exhaustively explained through physical facts.
Type B: There is an epistemic gap but no ontological gap; we cannot grasp that consciousness is physical, but it nevertheless is. Claims like "water is H2O" are conceivably false but nonetheless true.
Type C: An epistemic gap is present, but further scientific progress will close this gap.