keywords connections Flashcards

1
Q

creativity - search

A

They were training robots in a simulated environment to adapt their walking if their legs had been damaged. When they tried to find the least contact with the ground needed for the robot to walk, they found an impossible value: the computer had calculated that the robot could walk with 0% contact of the feet with the ground. How could a robot possibly walk without making contact with the ground? When they watched the video, they found something amazing. The computer had come up with a gait in which the robot inverted itself and walked on its elbow joints, thus reducing the contact of the feet to 0%. Parallel search made this much faster than it would have been with serial search. Anyhow, the question is: is this creativity?

2
Q

creativity - optimization

A

Optimization of creative ideas using AI and machine learning. Ad creative has long been established as a crucial factor in ad success; research has shown that it is the “single-most important factor” in driving up sales. Yet its optimization was left on the back burner for years, seen as subjective, time-consuming and expensive.

3
Q

creativity - deep learning

A

The virtual assistants of online service providers use deep learning to help understand your speech and the language humans use when they interact with them. These systems always search for the optimal solution and might show degrees of creativity. Advertisers are only beginning to realize why creative is the magic touch their digital ads need, and how the process of optimization has been made easier by AI.

4
Q

emotion - pattern associator

A

Emotion detection – Face -> Our Emotion AI unobtrusively measures unfiltered and unbiased facial expressions of emotion, using any optical sensor or just a standard webcam. Our technology first identifies a human face in real time or in an image or video. Computer vision algorithms identify key landmarks on the face – for example, the corners of your eyebrows, the tip of your nose, the corners of your mouth. We could train this network by using a pattern associator.
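A minimal sketch of training such a pattern associator with Hebbian learning, assuming toy vectors standing in for landmark-based features and emotion codes (the data, dimensions and labels are illustrative assumptions, not part of the card):

```python
import numpy as np

def train_pattern_associator(inputs, targets, lr=1.0):
    """Hebbian pattern associator: accumulate outer products of target and input patterns."""
    n_out, n_in = targets.shape[1], inputs.shape[1]
    W = np.zeros((n_out, n_in))
    for x, t in zip(inputs, targets):
        W += lr * np.outer(t, x)        # strengthen weights between co-active units
    return W

# Toy usage: two "landmark feature" patterns paired with two "emotion" codes
X = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
T = np.array([[1.0, 0.0],               # e.g. "joy"
              [0.0, 1.0]])              # e.g. "surprise"
W = train_pattern_associator(X, T)
print(W @ X[0])                          # recall is proportional to the first emotion code
```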

5
Q

emotion - deep learning

A

Emotion detection – Face -> Our Emotion AI unobtrusively measures unfiltered and unbiased facial expressions of emotion, using any optical sensor or just a standard webcam. Our technology first identifies a human face in real time or in an image or video. Computer vision algorithms identify key landmarks on the face – for example, the corners of your eyebrows, the tip of your nose, the corners of your mouth. Deep learning algorithms then analyze pixels in those regions to classify facial expressions. Combinations of these facial expressions are then mapped to emotions.

6
Q

automation - social robotics

A

A social robot is an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following social behaviors and rules attached to its role. Designing an autonomous social robot is particularly challenging, as the robot needs to correctly interpret people's actions and respond appropriately, which is not yet fully possible.

7
Q

simulated annealing - search

A

Simulated annealing is a metaheuristic to approximate global optimization in a large search space for an optimization problem. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent.
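A minimal sketch of the idea (the cooling schedule, neighbour generator and toy energy function are illustrative assumptions, not taken from the card):

```python
import math
import random

def simulated_annealing(energy, neighbour, state, t_start=10.0, t_end=1e-3, cooling=0.95):
    """Approximate a global minimum of `energy` by occasionally accepting worse moves."""
    t = t_start
    best = state
    while t > t_end:
        candidate = neighbour(state)
        delta = energy(candidate) - energy(state)
        # Always accept improvements; accept worse moves with probability exp(-delta / t)
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = candidate
        if energy(state) < energy(best):
            best = state
        t *= cooling  # gradually lower the temperature
    return best

# Toy usage: minimise a bumpy 1-D function with many local minima
f = lambda x: x**2 + 3 * math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(f, step, state=4.0))
```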

8
Q

simulated annealing - hill climbing

A

Simple heuristics like hill climbing, which move from better neighbour to better neighbour and stop when they reach a solution with no better neighbours, cannot guarantee to find the best solution – their outcome may easily be just a local optimum, while the actual best solution would be a global optimum that could lie elsewhere. Metaheuristics such as simulated annealing use the neighbours of a solution as a way to explore the solution space, and although they prefer better neighbours, they also accept worse neighbours in order to avoid getting stuck in local optima; they can find the global optimum if run long enough.
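For contrast with simulated annealing above, a minimal hill-climbing sketch (the neighbour generator and toy objective are illustrative assumptions); it stops at the first solution with no better neighbour, which may only be a local optimum:

```python
import random

def hill_climb(objective, neighbours, state):
    """Greedy local search: move to a better neighbour until none exists."""
    while True:
        better = [n for n in neighbours(state) if objective(n) > objective(state)]
        if not better:
            return state  # no better neighbour: a local (not necessarily global) optimum
        state = max(better, key=objective)

# Toy usage: maximise a function with two peaks, on a coarse step grid
f = lambda x: -(x - 2) ** 2 * (x + 2) ** 2 + x
nbrs = lambda x: [x - 0.1, x + 0.1]
print(hill_climb(f, nbrs, state=random.uniform(-3, 3)))  # lands on whichever peak is nearest
```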

9
Q

simulated annealing - local/global minimum/maximum

A

Simple heuristics like hill climbing, which move from better neighbour to better neighbour and stop when they reach a solution with no better neighbours, cannot guarantee to find the best solution – their outcome may easily be just a local optimum, while the actual best solution would be a global optimum that could lie elsewhere. Metaheuristics such as simulated annealing use the neighbours of a solution as a way to explore the solution space, and although they prefer better neighbours, they also accept worse neighbours in order to avoid getting stuck in local optima; they can find the global optimum if run long enough.

10
Q

local/global minimum/maximum - hill climbing/heuristics

A

Simple heuristics like hill climbing, which move from better neighbour to better neighbour and stop when they reach a solution with no better neighbours, cannot guarantee to find the best solution – their outcome may easily be just a local optimum, while the actual best solution would be a global optimum that could lie elsewhere. Metaheuristics such as simulated annealing use the neighbours of a solution as a way to explore the solution space, and although they prefer better neighbours, they also accept worse neighbours in order to avoid getting stuck in local optima; they can find the global optimum if run long enough.

11
Q

local/global minimum/maximum -optimization

A

Finding global maxima and minima is the goal of mathematical optimization. If a function is continuous on a closed interval, then by the extreme value theorem global maxima and minima exist. Furthermore, a global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain, or must lie on the boundary of the domain. So a method of finding a global maximum (or minimum) is to look at all the local maxima (or minima) in the interior, and also look at the maxima (or minima) of the points on the boundary, and take the largest (or smallest) one.
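A small illustration of that recipe, using the purely illustrative function f(x) = x³ − 3x on the closed interval [−2, 3] (the function and interval are assumptions for the example):

```python
# f(x) = x**3 - 3*x on [-2, 3]; f'(x) = 3*x**2 - 3 = 0 gives interior critical points x = -1, 1
f = lambda x: x**3 - 3*x
candidates = [-1.0, 1.0]           # interior local maximum and minimum
candidates += [-2.0, 3.0]          # boundary points of the domain
values = {x: f(x) for x in candidates}
print(values)                                        # {-1.0: 2.0, 1.0: -2.0, -2.0: -2.0, 3.0: 18.0}
print("global max at", max(values, key=values.get))  # x = 3.0, on the boundary
print("global min at", min(values, key=values.get))  # x = 1.0 (x = -2.0 ties, both give -2)
```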

12
Q

Butterfly effect/Chaos theory- fault tolerance

A

Self-organization prevents fault tolerance, even though there is a well-known effect in chaos theory, the butterfly effect: the sensitive dependence on initial conditions, in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.

13
Q

Butterfly effect/chaos theory - cusp catastrophe

A

Dynamic Systems theory derives directly from Chaos theory, which itself is from the same family as Catastrophe models. All approaches have some common attributes. In particular, dual attractor states are integral to each approach. In summary, catastrophe models and dynamic systems have much in common and provide useful information but the more interesting questions belong to future researchers who attempt to unearth the mechanisms that underpin these models.

14
Q

cusp catastrophe - dynamic systems

A

Dynamic Systems theory derives directly from Chaos theory, which itself is from the same family as Catastrophe models. All approaches have some common attributes. In particular, dual attractor states are integral to each approach. In summary, catastrophe models and dynamic systems have much in common and provide useful information but the more interesting questions belong to future researchers who attempt to unearth the mechanisms that underpin these models.

15
Q

tri level hypothesis - connectionism

A

Dawson (1998) has argued that the tri-level hypothesis is a system that can unify apparently incompatible views in cognitive science, such as classical versus connectionist views of cognition.

16
Q

dynamical systems - connectionism/sub-symbolic approach

A

Connectionist and dynamical systems approaches explain human thought, language and behavior in terms of the emergent consequences of a large number of simple noncognitive processes. We view the entities that serve as the basis for structured probabilistic approaches as abstractions that are occasionally useful but often misleading: they have no real basis in the actual processes that give rise to linguistic and cognitive abilities or to the development of these abilities. Although structured probabilistic approaches can be useful in determining what would be optimal under certain assumptions, we propose that connectionist, dynamical systems, and related approaches, which focus on explaining the mechanisms that give rise to cognition, will be essential in achieving a full understanding of cognition and development.

17
Q

goal stack - connectionism

A

The ACT-R model is constituted by a goal stack that changes due to current goals, which are regulated by procedural and declarative memory. The latter is composed of chunks, which encode objects in the environment and have different activations: the more activation a chunk has, the more quickly it can be retrieved from memory. This could be related to the activation of units in connectionist networks, where the more a unit is activated, the more it is related to the given input (a weak connection).
Or
A common misunderstanding suggests that ACT-R may not be a symbolic system because it attempts to characterize brain function. This is incorrect on two counts: First, all approaches to computational modeling of cognition, symbolic or otherwise, must in some respect characterize brain function, because the mind is brain function. And second, all such approaches, including connectionist approaches, attempt to characterize the mind at a cognitive level of description and not at the neural level, because it is only at the cognitive level at which important generalizations can be retained.

18
Q

goal stack - creativity

A

In the ACT-R model, new production rules are created. Part of the model is constituted by the procedural memory, which resolves conflicts between current goals by creating sub-goals. This, according to the view that creativity is the ideation of novel ideas, could correspond to machine creativity.

19
Q

gradient descent - optimization

A

Gradient descent is an optimization algorithm used to minimize some function by iteratively moving in the direction of steepest descent as defined by the negative of the gradient. In machine learning, we use gradient descent to update the parameters of our model.
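A minimal sketch of the update rule (the quadratic loss and learning rate are illustrative assumptions):

```python
def gradient_descent(grad, w, lr=0.1, steps=100):
    """Repeatedly step in the direction of the negative gradient."""
    for _ in range(steps):
        w = w - lr * grad(w)   # w_new = w_old - learning_rate * dLoss/dw
    return w

# Toy usage: minimise L(w) = (w - 3)**2, whose gradient is 2*(w - 3)
print(gradient_descent(grad=lambda w: 2 * (w - 3), w=0.0))  # converges towards 3.0
```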

20
Q

crum - connectionism

A

CRUM assumes that the mind has mental representations (data structures) and computational procedures (algorithms). Connectionism proposed a new idea of how these are realized: data structures are instead represented by neural connections, and algorithms by neurons firing and spreading activation. Both use serial search.

21
Q

backpropagation - delta rule

A

Backpropagation requires the derivatives of activation functions to be known at network design time.
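A minimal sketch of the delta rule for a single unit, assuming a sigmoid activation (the weights, input and target are illustrative assumptions); note that the derivative g'(h) has to be written into the code, which is the sense in which it must be known at design time:

```python
import math

g = lambda h: 1.0 / (1.0 + math.exp(-h))        # sigmoid activation
g_prime = lambda h: g(h) * (1.0 - g(h))         # its derivative, known at design time

def delta_rule_update(w, x, target, lr=0.5):
    """Single-unit delta rule: dw_i = lr * (target - y) * g'(h) * x_i."""
    h = sum(wi * xi for wi, xi in zip(w, x))    # net input
    y = g(h)                                    # unit output
    return [wi + lr * (target - y) * g_prime(h) * xi for wi, xi in zip(w, x)]

# Toy usage: nudge weights so input (1, 0) maps closer to target 1
print(delta_rule_update(w=[0.1, -0.2], x=[1.0, 0.0], target=1.0))
```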

22
Q

gradient descent - backpropagation

A

Gradient descent with backpropagation is not guaranteed to find the global minimum of the error function, but only a local minimum; it also has trouble crossing plateaus in the error function landscape. This issue, caused by the non-convexity of error functions in neural networks, was long thought to be a major drawback, but Yann LeCun et al. argue that in many practical problems it is not. Automatic differentiation is a technique that can automatically and analytically provide the derivatives to the training algorithm. In the context of learning, backpropagation is commonly used by the gradient descent optimization algorithm to adjust the weights of neurons by calculating the gradient of the loss function; backpropagation computes the gradient(s), whereas (stochastic) gradient descent uses the gradients for training the model (via optimization).
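A minimal sketch of that division of labour on a single sigmoid unit with squared error (the tiny "network", data and learning rate are illustrative assumptions): backpropagation supplies the gradient, gradient descent consumes it.

```python
import math

sigmoid = lambda h: 1.0 / (1.0 + math.exp(-h))

def backprop_gradient(w, x, target):
    """Backward pass: chain rule gives dE/dw for E = 0.5*(y - target)**2, y = sigmoid(w*x)."""
    y = sigmoid(w * x)                   # forward pass
    dE_dy = y - target                   # derivative of the loss w.r.t. the output
    dy_dh = y * (1.0 - y)                # derivative of the sigmoid
    dh_dw = x                            # derivative of the net input w.r.t. the weight
    return dE_dy * dy_dh * dh_dw         # chain rule

def gradient_descent_step(w, grad, lr=0.5):
    """Optimization step: use the gradient to move the weight downhill."""
    return w - lr * grad

w = 0.0
for _ in range(200):
    w = gradient_descent_step(w, backprop_gradient(w, x=1.0, target=0.9))
print(w, sigmoid(w))                     # sigmoid(w) approaches the target 0.9
```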

23
Q

activation functions - backpropagation

A

Backpropagation requires the derivatives of activation functions to be known at network design time.

24
Q

optimization - backpropagation

A

Learning can be framed as an optimization problem solved with backpropagation: the problem of mapping inputs to outputs is reduced to the optimization problem of finding a function that produces the minimal error.

25
Q

recurrent ANN - backpropagation

A

Backpropagation algorithms are a family of methods used to efficiently train artificial neural networks (ANNs).