Machine Consciousness Flashcards

1
Q

What is the difference between analysis and synthesis?

A

Analysis is basic science, whereas synthesis is engineering

2
Q

Why attempt to build conscious machines?

A

The key intention is to clarify, through synthesis, the notion of what it is to be conscious

3
Q

What are the advantages of focusing on consciousness rather than intelligence?

A

Performance payoffs: better autonomy and freedom from pre-programming; an ability to represent the machine’s own role in its environment; and an improved capacity for action based on inner contemplative activity, rather than reactive action based largely on table-lookup of pre-stored contingency–action couplings, as in GOFAI, which does not allow for flexibility (Aleksander, 2007)

4
Q

What does GOFAI stand for?

A

Good Old-Fashioned AI

5
Q

What is GOFAI?

A

The modelling or creation of intelligent machines centred on traditional computers, i.e. systems that process information, or symbols, according to explicitly coded rules. These symbols have no meaning (not grounded in real world) except as interpreted by the human programmer
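The point that GOFAI symbols are manipulated purely by explicitly coded rules, with no intrinsic meaning, can be made concrete with a sketch. This is an illustrative toy (a minimal forward-chaining rule system, not from the source): the tokens "rain" and "wet" mean nothing to the program, which only matches and rewrites them by form; any interpretation is supplied by the human reader.

```python
# Toy GOFAI-style rule system: symbols are rewritten by explicit rules.
# The strings carry no meaning for the program; they are matched by form only.

# Each rule: if all antecedent symbols are present, derive the consequent symbol.
RULES = [
    ({"rain", "outside"}, "wet"),
    ({"wet", "cold"}, "uncomfortable"),
]

def forward_chain(facts):
    """Apply the rules repeatedly until no new symbols can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in RULES:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

print(forward_chain({"rain", "outside", "cold"}))
# derives "wet", then "uncomfortable"
```

Swapping every symbol for an arbitrary token (e.g. "x17") leaves the computation unchanged, which is exactly the grounding worry: the system functions identically whether or not the symbols refer to anything.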

6
Q

What are some generally agreed GOFAI requirements for machine consciousness?

A

The brain is a control mechanism that ensures the organism deals appropriately with its external and internal environments, but not all control systems are conscious (e.g. thermostats). Also required: the ability to model the world and the self, and to integrate the two

7
Q

What are subsymbolic approaches?

A

Bottom-up, embodied, situated, behaviour-based AI. These approaches rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive. Higher intelligence and ‘mind’ can be created only by interacting in real time with the real world, so aspects of a body are required. The world provides constraints, information storage and feedback that make perception, learning and intelligence possible. There are no internal representations or models
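The "no internal models" idea can be illustrated with a sketch of a purely reactive controller, loosely in the Braitenberg-vehicle style (an illustrative assumption, not an example from the source): motor output is a direct function of the current sensor readings, and nothing is stored between calls, so the environment itself does the "remembering".

```python
# Toy behaviour-based controller: direct sensor-to-motor coupling,
# no internal state or world model (Braitenberg-vehicle style).

def reactive_controller(left_light, right_light):
    """Crossed excitatory wiring steers the vehicle toward a light source.

    Each wheel's speed depends only on the instantaneous reading of the
    opposite light sensor; there is no memory between calls.
    """
    left_wheel = right_light   # brighter light on the right speeds the left wheel...
    right_wheel = left_light   # ...so the vehicle turns toward the source
    return left_wheel, right_wheel

# Light ahead-right: the left wheel spins faster, turning the vehicle right.
print(reactive_controller(0.2, 0.8))  # (0.8, 0.2)
```

Note how the design also exposes the flashcards' later objection: with no state, such a controller has no obvious way to support experiences decoupled from ongoing input, such as imagining or dreaming.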

8
Q

What are hybrid intelligent systems?

A

Combine symbolic and subsymbolic approaches

9
Q

What are some problems for GOFAI attempts to create conscious machines?

A

Why are some internal representations or models conscious and some not? How could a representation or model be an experience/conscious?

10
Q

What are some problems for subsymbolic attempts to create conscious machines?

A

They need to explain why and how conscious experience arises out of continuous interaction with the outside world, and how to create or account for conscious experiences that are not driven by such interaction, e.g. imagining, dreaming, reasoning

11
Q

Do conscious machines need to replicate brains?

A

There is a spectrum of positions on this question, ranging from functionalists to physicalists who believe we need to get closer to physically replicating the human brain

12
Q

What is strong AI according to Searle 1980?

A

The appropriately programmed computer is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states

13
Q

What is strong AI according to Searle 1999?

A

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds

14
Q

What is computer functionalism (computationalism)?

A

Mental states are computational states. Computational states are implementation-independent (it is the software that determines the computational state, not the hardware). Since the implementation is unimportant, the only empirical data that matters is how the system functions

15
Q

What is the Turing Test?

A

A test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of an actual human

16
Q

What were two early computer programs that attempted to pass the Turing test?

A

ELIZA, an attempt to simulate a Rogerian psychotherapist (Weizenbaum, 1966), and PARRY, an attempt to simulate a paranoid schizophrenic (Colby)

17
Q

What is the correct simulation according to strong and weak AI?

A

Strong AI: the correct simulation really is a mind (an appropriately programmed computer literally has mental/cognitive states). Weak AI: the correct simulation is a model of the mind (the principal value of the computer in the study of the mind is that it gives us a very powerful tool, e.g. to simulate mental processes)

18
Q

What is Searle’s Chinese Room?

A

A thought experiment that exploits the fact that computer programs can be ‘multiply realised’: they can be implemented on a diverse range of hardware

19
Q

What is the narrow argument of the Chinese Room?

A

If strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese. But I could run a program for Chinese without thereby coming to understand Chinese. Therefore strong AI is false
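The narrow argument has the shape of a modus tollens, which a brief formalisation makes explicit (the symbols below are my own shorthand, not from the source):

```latex
% Let S = "strong AI is true", P(x) = "x runs the Chinese program",
% U(x) = "x understands Chinese", and m = the man in the room.
\begin{align*}
&\text{P1: } S \rightarrow \forall x\,\bigl(P(x) \rightarrow U(x)\bigr) \\
&\text{P2: } P(m) \land \neg U(m)
  \quad\text{(the man runs the program without understanding)} \\
&\text{C: } \neg S
  \quad\text{(modus tollens: P2 falsifies the consequent of P1)}
\end{align*}
```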

20
Q

What are 3 classes of reply/objection to the Chinese room?

A

The systems reply, the robot/brain simulator reply, and the other minds reply

21
Q

What is the systems reply?

A

Although the man doesn’t understand Chinese, there may be understanding by a larger or different entity

22
Q

What is the problem with the systems reply?

A

It isn’t a reply; it’s the thesis that Searle is supposed to be refuting. Let the individual internalise all of the elements of the system: he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him

23
Q

What is the robot or brain simulator reply?

A

A variation on the computer system, eg a computer in a robot body, or a system that simulates the entire brain, could understand

24
Q

What is the response to the robot or brain simulator reply?

A

Inside a room in the robot’s skull, I shuffle symbols. As long as all I have is a formal computer program, I have no way of attaching any meaning to any of the symbols, and the fact that the robot is engaged in causal interaction with the outside world won’t help me

25
Q

What is the other minds reply?

A

Either the man does understand Chinese, or the scenario is impossible. We attribute consciousness to humans on the basis of behaviour, so why not the behaviour of the man in the Chinese room or the whole system?

26
Q

What are the Lessons learnt from the Chinese Room?

A

Searle’s official argument against strong AI fails, but he does have a point: merely implementing a program is arguably insufficient for the system to have a mind. Something else is needed, perhaps certain kinds of causal connections between the system and its environment

27
Q

What two things does Dehaene et al 2017 discuss?

A

Global availability (C1) and self-monitoring (C2)

28
Q

What is global availability?

A

Information that’s selected for further processing, eg we can recall it, act upon it and speak about it. Similar to Block’s access consciousness

29
Q

What is self-monitoring?

A

Ability to monitor own processing and obtain information about oneself (introspection or meta-cognition)

30
Q

What is Dehaene et al’s argument?

A

We contend that a machine endowed with C1 and C2 would behave as though it were conscious (though Carter et al. 2018 ask ‘what about phenomenal consciousness?’)