M2 Flashcards

1
Q

Artificial Neural Networks (ANNs) are also known as (3)

A

Neural networks
Neural computing (or neuro-computing) systems
Connectionist models

2
Q

___ simulate the biological brain for problem solving

A

ANNs

3
Q

The biological brain is a massively ___ system of interconnected processing elements

A

parallel

4
Q

ANNs simulate a similar network of simple processing elements at a greatly ___ scale

A

reduced

5
Q

ANNs were first developed in the 19__s and 19__s

A

1950s and 1960s

6
Q

There has been a great upsurge of interest in Artificial Neural Networks since the mid-___s

A

1980s

7
Q

Both ANNs and expert systems are ___ tools for problem solving

A

non-algorithmic

8
Q

ES rely on the solution being expressed as a set of heuristics by an expert; ANNs learn solely from ___

A

data

9
Q

There are an estimated ___ neurons in the human brain, each connected to up to 10 thousand others

A

1000 billion

10
Q

Electrical impulses produced by a neuron travel along the ___

A

axon

11
Q

The axon connects to ___ through synaptic junctions

A

dendrites

12
Q

A neuron adds its inputs and “fires” (produces an output) when the ___ of its inputs exceeds a certain threshold

A

sum

13
Q

The strengths of a neuron’s inputs are modified (enhanced or inhibited) by the ___

A

synaptic junctions

14
Q

Learning in our brains occurs through a continuous process of new interconnections forming between neurons, and ___ at the synaptic junctions

A

adjustments

15
Q

A simple model of the ___ was first proposed in 1943 by McCulloch and Pitts

A

biological neuron

16
Q

A Synthetic Neuron consists of a summing function with an internal threshold, and “___” inputs

A

weighted

17
Q

In a Synthetic Neuron receiving n inputs, each input xi (i ranging from 1 to n) is weighted by multiplying it by a ___

A

weight wi

18
Q

In a Synthetic Neuron, the sum of the wi xi products gives the ___ of the neuron

A

net activation

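
The synthetic neuron described in the cards above can be sketched in a few lines of Python. This is a minimal illustration, not code from the cards; the function names are assumed:

```python
# A minimal sketch of a synthetic (McCulloch-Pitts style) neuron.
# Function names (net_activation, step_output) are illustrative.

def net_activation(inputs, weights):
    # The net activation is the sum of the wi * xi products.
    return sum(x * w for x, w in zip(inputs, weights))

def step_output(activation, threshold=0.0):
    # Step transfer function: the neuron "fires" (outputs 1) only
    # when its net activation reaches the threshold T.
    return 1 if activation >= threshold else 0

# A neuron with two inputs:
a = net_activation([1, 0], [0.7, 0.4])   # 0.7 * 1 + 0.4 * 0 = 0.7
out = step_output(a, threshold=0.5)      # 0.7 >= 0.5, so the neuron fires
```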
19
Q

In a Synthetic Neuron, the activation value is subjected to a ___ to produce the neuron’s output.

A

transfer function

20
Q

In a Synthetic Neuron, the weight value of the connection or link carrying signals from neuron i to neuron j is termed ___

A

wij

21
Q

A ___ computes the output of a node from its net activation

A

transfer function

22
Q

Step function
Signum function
Sigmoid function
Hyperbolic tangent function

these are the popular ___

A

transfer functions

23
Q

In the ___, the neuron produces an output only when its net activation reaches a minimum value – known as the threshold

A

step function

24
Q

When the threshold T is 0, the step function is called ___.

A

signum

25
Q

The ___ transfer function produces a continuous value in the range 0 to 1

A

sigmoid

26
Q

___ is a variant of the sigmoid transfer function: its shape is similar to the sigmoid (like an S), but the output value ranges between –1 and 1

A

Hyperbolic Tangent

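
The sigmoid and hyperbolic tangent transfer functions from the two cards above can be written directly. A minimal sketch using Python's math module; the function names are assumptions:

```python
import math

def sigmoid(activation):
    # Produces a continuous value in the range 0 to 1.
    return 1.0 / (1.0 + math.exp(-activation))

def hyperbolic_tangent(activation):
    # Same S shape as the sigmoid, but the output
    # ranges between -1 and 1.
    return math.tanh(activation)

# At zero activation the sigmoid sits at the midpoint of its range:
sigmoid(0.0)             # 0.5
hyperbolic_tangent(0.0)  # 0.0
```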
27
Q

The building block of an ANN is the ___

A

artificial neuron

28
Q

The most common architecture of an ANN consists of two or more layers of artificial neurons or nodes, with each node in a layer connected to every ___ in the following layer

A

node

29
Q

In ANN, the signal usually flows from the ___ layer, which is directly subjected to an input pattern, across one or more hidden layers towards the output layer

A

input

30
Q

The most popular ANN architecture is known as the ___

A

multilayer perceptron

31
Q

In some models of the ANN, such as the ___ or Kohonen net, nodes in the same layer may have interconnections among them

A

self-organising map (SOM)

32
Q

The input stimuli of an ANN are data values grouped together to form a ___

A

pattern

33
Q

The output value(s) of nodes in the output layer represent the ___ of the input pattern

A

class

34
Q

Any ___-layer ANN can (at least in theory) represent the functional relationship between an input pattern and its class

A

three

35
Q

The process by which an ANN arrives at the values of these weights is known as ___

A

learning or training

36
Q

Learning in ANNs takes place through an iterative training process during which node interconnection ___ values are adjusted

A

weight

37
Q

___ weights, usually small random values, are assigned to the interconnections between the ANN nodes

A

Initial

38
Q

___ in ANNs can be the most time-consuming phase of their development

A

Training

39
Q

In supervised learning of ANN, the weight adjustments during each iteration aim to reduce the “___” (difference between the ANN’s actual output and the expected correct output)

A

error

40
Q

In ANN supervised training, pairs of sample input value and corresponding output value are used to train the ___ repeatedly until the output becomes satisfactorily accurate

A

net

41
Q

In ANN unsupervised training, the net adapts itself to align its weight values with training patterns. This results in groups of nodes responding strongly to specific ___ of similar input patterns

A

clusters

42
Q

A neural network can be in one of two states: (2)

A

training mode or operation mode

43
Q

Most ANNs learn ___ and do not change their weights once training is finished and they are in operation

A

off-line

44
Q

In an ANN capable of ___ learning, training and operation continue together

A

on-line

45
Q

What are the three most well-known models of ANN?

A
  1. The multilayer perceptron
  2. The Kohonen network (the self-organising map)
  3. The Hopfield net
46
Q

Type of ANN where nodes are arranged into an input layer, an output layer and one or more hidden layers

A

Multilayer Perceptron

47
Q

The multilayer perceptron is also known as the ___ network because error values from the output layer are used in the layers before it to calculate weight adjustments during training

A

backpropagation

48
Q

Another name for Multilayer Perceptron is ___

A

Feedforward network

49
Q

The learning rule for the multilayer perceptron is known as “___” or the “backpropagation rule”

A

the generalized delta rule

50
Q

The generalized delta rule repeatedly calculates an error value for each input, which is a function of the ___ difference between the expected correct output and the actual output

A

squared

51
Q

___ = Old weight + change calculated from square of error

A

New Weight

52
Q

__ is the difference between desired output and actual output

A

Error

53
Q

Training stops when the error becomes acceptable, or after a predetermined number of ___

A

iterations
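
The weight-update and stopping behaviour in the cards above can be sketched for a single weight. This is the simple delta rule for one neuron, not full backpropagation, and the learning-rate value is an assumption:

```python
def delta_rule_update(weight, x, desired, actual, learning_rate=0.1):
    # Error: the difference between the desired and the actual output.
    error = desired - actual
    # The change comes from the gradient of the squared error,
    # which is proportional to the error itself and to the input x.
    return weight + learning_rate * error * x

# One update step: the weight moves so as to reduce the error.
w = delta_rule_update(weight=0.5, x=1.0, desired=1.0, actual=0.2)
# With zero error the weight is left unchanged, which is why training
# can stop once the error becomes acceptably small.
```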

54
Q

In the MLP, for a given pattern p, the error Ep can be plotted against the weights to give the so-called ___

A

error surface

55
Q

The error surface is a landscape of hills and valleys, with points of minimum error corresponding to ___ and maximum error found on ___

A

wells, peaks

56
Q

MLP follows the method of ___ where the changes are made in the steepest downward direction

A

gradient descent

57
Q

In MLP, all possible solutions are depressions in the error surface, known as ___

A

basins of attraction

58
Q

The MLP may fail to settle into the global minimum of the error surface and instead find itself in one of the ___

A

local minima

59
Q

A number of alternative approaches can be taken to perturb the ANN out of local minima in the MLP, such as lowering the gain term progressively, adding more nodes for better representation of patterns, adding random noise, and introducing a ___

A

momentum term

60
Q

Lowering the ___ progressively is used to influence the rate at which weight changes are made during training. Its value is 1 by default, but it may be gradually reduced to slow the rate of change as training progresses

A

gain term

61
Q

Introduction of a ___ determines the effect of past weight changes on the current direction of movement in weight space; it is a small numerical value in the range 0 to 1

A

momentum term
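
The cards above describe gradient descent with a gain (learning-rate) term and a momentum term; here is a minimal sketch of one such weight update, with all names and default values assumed:

```python
def update_with_momentum(weight, gradient, prev_change,
                         gain=1.0, momentum=0.9):
    # Steepest-descent step scaled by the gain term, plus a fraction
    # (the momentum term, between 0 and 1) of the previous change, so
    # past weight changes influence the current direction of movement.
    change = -gain * gradient + momentum * prev_change
    return weight + change, change

# Repeated steps in the same direction accumulate "speed", which can
# carry the net through a shallow local minimum in the error surface.
w, c = update_with_momentum(weight=1.0, gradient=0.2, prev_change=-0.1)
```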

62
Q

Addition of random ___ to perturb the ANN out of local minima is usually done by adding small random values to the weights, taking the net to a different point in the error space

A

noise

63
Q

___ is modelled on biological systems that display both supervised and unsupervised learning behavior

A

The Kohonen Network (the self-organizing map)

64
Q

During training, the Kohonen net changes its weights to learn appropriate ___, without any right answers being provided

A

associations

65
Q

The Kohonen net consists of an input layer, which distributes the inputs to each node in a second layer, known as the ___

A

competitive layer

66
Q

In Kohonen Network, neurons in the competitive layer have ___ (positively weighted) connections to immediate neighbors and ___ (negatively weighted) connections to more distant neurons

A

excitatory, inhibitory

67
Q

As an input pattern is presented, some of the neurons in the competitive layer are sufficiently activated to produce outputs, which are fed back to other neurons in their ___

A

neighborhoods

68
Q

In Kohonen Network, the node with the set of input weights closest to the input pattern component values produces the largest output. This node is termed the ___

A

winning node
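
Selecting the winning node of a Kohonen net can be sketched as picking the competitive-layer node whose weight vector lies closest (here, in Euclidean distance) to the input pattern. The names are illustrative:

```python
import math

def winning_node(input_pattern, node_weights):
    # node_weights holds one weight vector per competitive-layer node.
    # The winner is the node whose input weights are closest to the
    # input pattern's component values.
    def distance(weights):
        return math.sqrt(sum((w - x) ** 2
                             for w, x in zip(weights, input_pattern)))
    return min(range(len(node_weights)),
               key=lambda i: distance(node_weights[i]))

nodes = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]
winning_node([0.25, 0.75], nodes)  # node 1's weights are the closest
```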

69
Q

The ___ is the most widely known of all the auto-associative ANNs

A

Hopfield net

70
Q

In ___, a noisy or partially incomplete input pattern causes the network to stabilize to a state corresponding to the original pattern

A

auto-association

71
Q

The Hopfield model is also useful for ___ tasks

A

optimization

72
Q

The Hopfield net is a ___ ANN in which the output produced by each neuron is fed back as input to all other neurons

A

recurrent

73
Q

The Hopfield net has no iterative learning algorithm as such. ___ (or facts) are simply stored by setting weights to lower the network energy

A

Patterns

74
Q

During operation of the Hopfield Net, an input pattern is applied to all neurons ___ and the network is left to stabilize

A

simultaneously
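
The Hopfield behaviour in the cards above can be sketched with bipolar (+1/-1) neurons: weights are set directly from the stored patterns (no iterative learning), then a noisy input is applied to all neurons simultaneously and the net is left to stabilize. All names here are illustrative:

```python
def store_patterns(patterns):
    # Set the weights directly from the stored patterns (facts);
    # each stored pattern lowers the network energy. No iteration.
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, max_steps=10):
    # Every neuron's output is fed back as input to all the others;
    # update all neurons simultaneously until the state stops changing.
    n = len(state)
    for _ in range(max_steps):
        new = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
        if new == state:
            break
        state = new
    return state

stored = [1, -1, 1, -1]
w = store_patterns([stored])
recall(w, [1, -1, -1, -1])  # noisy input stabilizes to the stored pattern
```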

75
Q
A

Parallelism