W2 Flashcards

1
Q

What event is often considered the official founding of the field of Artificial Intelligence?

a) The invention of the Analytical Engine by Charles Babbage
b) Alan Turing’s work on the Turing Machine
c) The Dartmouth Conference in 1956
d) The development of the first neural network by Frank Rosenblatt

A

The Dartmouth Conference in 1956

2
Q

In the Chinese Room argument, John Searle suggests that AI systems lack:
a) Cognitive processing capabilities
b) Understanding of symbolic logic
c) True understanding or consciousness
d) The ability to pass the Turing Test

A

True understanding or consciousness

3
Q

Which of the following best describes the Turing Test?
a) A measure of a machine’s ability to understand natural language
b) A test to determine if a machine can perform mathematical calculations
c) An experiment to see if a computer can mimic human behavior undetectably
d) A method to train neural networks using supervised learning

A

An experiment to see if a computer can mimic human behavior undetectably

4
Q

What was one of the main reasons for the decline of AI research funding during the AI Winter?

a) Lack of computing power
b) Excessive reliance on neural networks
c) The failure of promised AI advancements to materialize
d) Insufficient government interest in AI

A

The failure of promised AI advancements to materialize

5
Q

What is a key difference between deep learning and symbolic AI?
a) Deep learning relies on explicit rules written by humans, while symbolic AI learns from large datasets.
b) Symbolic AI mimics brain architecture, while deep learning focuses on logical rules.
c) Deep learning uses layers of neural networks to learn from data, while symbolic AI is rule-based.
d) Deep learning was developed before symbolic AI.

A

Deep learning uses layers of neural networks to learn from data, while symbolic AI is rule-based.

6
Q

True or False: Douglas Hofstadter believes that AI reaching human-level intelligence is an immediate and inevitable outcome.

A

False

7
Q

True or False: The Chinese Room argument suggests that passing the Turing Test would confirm true AI consciousness.

A

False

8
Q

True or False: The Uncanny Valley describes how humans become more comfortable with robots as they look more human-like, up to a certain point.

A

True

9
Q

True or False: Deep Blue, IBM’s chess-playing computer, achieved general intelligence by beating the world chess champion Garry Kasparov.

A

False

10
Q

True or False: In symbolic AI, all knowledge is encoded in human-readable symbols and rules.

A

True

11
Q

How did the rapid advancements in AI at companies like Google influence Douglas Hofstadter’s views on AI?

A

Hofstadter became concerned that AI was advancing so fast it could trivialize human creativity and consciousness, reducing deep human qualities to mere algorithms.

12
Q

Describe John Searle’s ‘Chinese Room’ argument and explain its implications for the idea of AI consciousness.

A

Searle argues that simply following programmed rules doesn’t equate to understanding, suggesting AI might mimic human responses without true comprehension.

13
Q

How did the concept of ‘symbolic AI’ differ from ‘subsymbolic AI,’ and what was a key limitation of each approach?

A

Symbolic AI relies on predefined rules, making it inflexible with new data, while subsymbolic AI learns from data but lacks transparency in its processes.

14
Q

Explain the Uncanny Valley concept and provide an example of how it might affect human interactions with robots.

A

The Uncanny Valley describes how robots that appear almost human can feel unsettling. For example, a lifelike robot may cause discomfort if it’s close to human appearance but not quite natural.

15
Q

What are neural networks (a.k.a. PDP, Parallel Distributed Processing, or connectionism)?

A

*Based on an abstract view of the neuron
*Artificial neurons are connected to form large networks
*The connections determine the function of the network
*Connections can often be formed by learning and do not need to be ‘programmed’

16
Q

Is the brain a computer in the Turing sense?

A
  • The brain can compute like a computer, but it doesn’t work like a computer
17
Q

What did McCulloch and Pitts (1943) say about the neuron?

A
  1. The activity of the neuron is an “all-or-none” process
  2. A certain fixed number of synapses must be excited within the period of latent addition in order to excite a neuron at any time, and this number is independent of previous activity and position of the neuron
  3. The only significant delay within the nervous system is synaptic delay
  4. The activity of any inhibitory synapse absolutely prevents excitation of the neuron at that time
  5. The structure of the net does not change with time
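
A minimal Python sketch of a neuron under these assumptions (the function name and the threshold of 2 are illustrative, not from the source):

    def mcculloch_pitts(excitatory, inhibitory, threshold=2):
        # Rule 4: one active inhibitory synapse absolutely prevents firing.
        if any(inhibitory):
            return 0
        # Rules 1-2: all-or-none output; fire only if enough excitatory
        # synapses are active at the same time.
        return 1 if sum(excitatory) >= threshold else 0

    mcculloch_pitts([1, 1, 0], [0])  # -> 1: two active excitatory synapses reach the threshold
    mcculloch_pitts([1, 1, 1], [1])  # -> 0: inhibition vetoes all excitation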
18
Q

How do neural networks abstract strongly from the details of real neurons?

A

*Conductivity delays are neglected
*An output signal is either discrete (e.g., 0 or 1) or it is a real-valued number (e.g., between 0 and 1)
*Net input is calculated as the weighted sum of the input signals
*Net input is transformed into an output signal via a simple function (e.g., a threshold function)

19
Q

What is the threshold function?

A

Weighted input activations are summed, and if this ‘net input’ to the neuron exceeds 0, the output activation becomes 1.
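
A hedged sketch of that threshold unit in Python (names and values are illustrative):

    def threshold_output(inputs, weights):
        # Net input: the weighted sum of the input activations.
        net_input = sum(w * x for w, x in zip(weights, inputs))
        # The output activation becomes 1 only if the net input exceeds 0.
        return 1 if net_input > 0 else 0

    threshold_output([0, 1], [0.5, -0.1])  # net input = -0.1 -> output 0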

20
Q

What are perceptrons?

A

Definition: The perceptron is a type of artificial neuron or neural network model created by Frank Rosenblatt in the 1950s. It’s the simplest form of a neural network, consisting of an input layer connected directly to an output layer.
Structure: Perceptrons have 2 layers (with only one layer of connections) where each neuron uses a simple binary threshold function to produce an output. If the weighted sum of the inputs meets a certain threshold, the neuron “fires” (outputs 1); otherwise, it outputs 0.
Learning Process: Perceptrons learn by adjusting weights based on errors in the output, but this error correction is limited to linearly separable tasks (problems that can be solved by drawing a straight line to separate classes).
Limitations: Perceptrons cannot solve non-linearly separable problems, such as the XOR problem, and are restricted to single-layer networks with binary outputs. This limitation led to the development of more complex models with hidden layers.

*Two layers
*Binary nodes (McCulloch-Pitts nodes) that take values 0 or 1
*Continuous weights, initially chosen randomly
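
A sketch of that structure in Python, assuming two input nodes and one output node (the initialization range is an assumption):

    import random

    # One layer of connections: inputs wired directly to the output node.
    weights = [random.uniform(-1, 1) for _ in range(2)]  # continuous, random

    def perceptron(inputs):
        net = sum(w * x for w, x in zip(weights, inputs))
        return 1 if net > 0 else 0  # binary McCulloch-Pitts output node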

21
Q

What is backpropagation?

A

Definition: Backpropagation (“backward propagation of errors”) is a learning algorithm used for training multi-layer neural networks (i.e., networks with hidden layers). It was developed to overcome the limitations of perceptrons and enable learning in complex, multi-layer networks.
Purpose: Backpropagation enables the adjustment of weights across multiple layers by calculating the error at the output layer and propagating it backward through the network. This allows the network to learn non-linear relationships and handle more complex data patterns.
How It Works: In backpropagation, the network calculates the gradient of the error with respect to each weight, adjusting weights in each layer according to their contribution to the error. This process is repeated over multiple iterations (epochs) until the network reaches an optimal set of weights that minimize the error.
Advantage: Backpropagation allows multi-layer networks to handle non-linear problems like XOR, which single-layer perceptrons cannot solve.

22
Q

Learning problem to be solved:
*Suppose we have an input pattern (0 1)
*We have a single target output pattern (1)
*We have a net input of -0.1, which gives an output pattern of (0)
*How could we adjust the weights so that this situation is remedied and the spontaneous output matches our target output pattern of (1)?

A

*Increase the weights, so that the net input exceeds 0.0
*E.g., add 0.2 to all weights
*Observation: Weight from input node with activation 0 does not have any effect on the net input
*So we will leave it alone
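
Spelled out in Python (the starting weights are invented so the net input comes out to -0.1):

    weights = [0.3, -0.1]   # hypothetical weights
    inputs = [0, 1]         # the given input pattern
    net = sum(w * x for w, x in zip(weights, inputs))   # -0.1 -> output 0

    weights[1] += 0.2       # adjust only the weight from the active input
    net = sum(w * x for w, x in zip(weights, inputs))   # 0.1 > 0 -> output 1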

23
Q

Perceptron algorithm

A

*weight change = some small constant × error × input activation
*error = target activation - spontaneous output activation
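
A runnable sketch of the rule (the learning rate, epoch count, and task - AND, which is linearly separable - are illustrative; the bias term is an addition the card itself does not mention):

    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND
    weights = [0.0, 0.0]
    bias = 0.0   # assumption: lets the threshold shift during learning
    lr = 0.1     # the "small constant"

    for _ in range(20):
        for inputs, target in data:
            output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = target - output  # target activation - spontaneous output
            for i, x in enumerate(inputs):
                weights[i] += lr * error * x  # weight change = lr * error * input
            bias += lr * error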

24
Q

What are two limitations of perceptrons, and why are they an issue?

A
  • Only binary input-output values - no continuous values
  • Only two layers - cannot represent certain logical functions (XOR); an extra layer is necessary to represent XOR
25
Q

How was the limitation of binary input/output values solved?

A
  • The delta rule, which extends perceptron learning to continuous activation values
26
Q

What is the backprop trick?

A

*To find the error value for a given node h in a hidden layer, simply take the weighted sum of the errors of all nodes connected from node h
*i.e., of all nodes that have an incoming connection from node h
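
In Python, assuming the downstream errors have already been computed (names and numbers are illustrative):

    outgoing_weights = {"j1": 0.4, "j2": -0.6}   # connections leaving node h
    downstream_error = {"j1": 0.25, "j2": 0.10}  # errors of those nodes

    error_h = sum(outgoing_weights[j] * downstream_error[j]
                  for j in outgoing_weights)     # 0.4*0.25 + (-0.6)*0.10 = 0.04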

27
Q

What are some characteristics of backpropagation?

A

*Any number of layers
*Only feedforward, no cycles (though a more general version does allow this)
*Use continuous nodes
*Initial weights are random
*Total error never increases (gradient descent in error space)

28
Q

Why is the logistic function important in backpropagation?

A

The logistic function, f(net) = 1 / (1 + e^(-net)), gives the network a continuous, differentiable activation function, which gradient descent requires. Its derivative has the simple form f(net) × (1 - f(net)), i.e., activation × (1 - activation), which is exactly the factor that appears in the backpropagation error formulas.
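
A minimal sketch of the function and its derivative in Python:

    import math

    def logistic(net):
        return 1 / (1 + math.exp(-net))

    def logistic_derivative(net):
        out = logistic(net)
        return out * (1 - out)  # the activation * (1 - activation) factor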
29
Q

What are some issues with gradient descent?

A

*It does not guarantee high performance
*It does not prevent local minima (i.e., there is no backpropagation convergence theorem)
*The learning rule is more complicated and tends to slow down learning unnecessarily when the logistic function is used

30
Q

Backpropagation algorithm in rules

A

*weight change = some small constant × error × input activation
*For an output node: error = (target activation - output activation) × output activation × (1 - output activation)
*For a hidden node: error = weighted sum of to-node errors × hidden activation × (1 - hidden activation)
*The learning rule is often augmented with a momentum term, which adds a fraction of the old weight change:
weight change = some small constant × error × input activation + momentum constant × old weight change
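
A minimal sketch of one output-node weight update with momentum, following the rules above (all numeric values are illustrative):

    lr, momentum = 0.5, 0.9        # small constant and momentum constant

    target, output = 1.0, 0.6      # activations at the output node
    input_activation = 0.8         # activation feeding this weight

    # Output-node error: (target - output) * output * (1 - output)
    error = (target - output) * output * (1 - output)   # 0.4 * 0.6 * 0.4 = 0.096

    old_change = 0.02
    change = lr * error * input_activation + momentum * old_change
    # 0.5 * 0.096 * 0.8 + 0.9 * 0.02 = 0.0384 + 0.018 = 0.0564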

31
Q

Explain the significance of the NetTalk application for neural networks.

A

NetTalk demonstrated backpropagation’s potential by learning text-to-speech pronunciation autonomously from data rather than explicit programming. It showed that multi-layer networks with backpropagation could handle complex real-world tasks, but it also highlighted limitations like slow learning and difficulty retaining information without reinforcement.

32
Q

What problem did backpropagation solve for neural networks?

A

Backpropagation addressed the inability of single-layer perceptrons to learn non-linear functions, allowing training across multiple layers and solving complex problems like XOR.

33
Q

Describe the Perceptron Convergence Theorem and its limitations.

A

The Perceptron Convergence Theorem states that if a linearly separable solution exists, a two-layer perceptron can find it. However, it only applies to simple tasks, limiting its use for complex functions.

34
Q

What is the main purpose of the gradient descent method in backpropagation?
a) Increase network complexity
b) Minimize error by adjusting weights
c) Ensure binary outputs
d) Reduce computational load

A

Answer: b) Minimize error by adjusting weights

35
Q

Which problem could the perceptron not solve, leading to the development of multi-layer networks?
a) Linear separation
b) XOR function
c) AND function
d) Boolean logic

A

Answer: b) XOR function