Lecture 5 - Evolutionary Robotics Flashcards

1
Q

What is the goal of evolutionary robotics?

A

Evolutionary robotics uses artificial evolution to automatically generate control systems and morphologies for autonomous robots.

2
Q

What are the 5 steps of an evolutionary algorithm?

A
  1. Generate a population of genomes, each of which defines a neural network.
  2. Upload each neural network to the robot and evaluate the robot’s fitness.
  3. Selection - select the genomes of the best robots.
  4. Apply crossover and mutation.
  5. Replace the genomes of the previous generation; repeat until a solution is found.
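
A minimal Python sketch of this loop, with a placeholder fitness function standing in for the robot evaluation (all names and constants here are illustrative, not from the lecture):

```python
import random

GENOME_LEN = 10       # number of network weights encoded in each genome
POP_SIZE = 20
MUTATION_STD = 0.1

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def evaluate(genome):
    # Placeholder: in evolutionary robotics this would build the neural network
    # from the genome, upload it to the (real or simulated) robot and measure
    # task performance.
    return -sum(g * g for g in genome)

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    return [g + random.gauss(0, MUTATION_STD) for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]        # 1. generate genomes
for generation in range(100):
    ranked = sorted(population, key=evaluate, reverse=True)    # 2. evaluate fitness
    parents = ranked[:POP_SIZE // 4]                           # 3. select the best
    population = [mutate(crossover(random.choice(parents),     # 4. crossover + mutation
                                   random.choice(parents)))
                  for _ in range(POP_SIZE)]                    # 5. replace the old generation
```
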
3
Q

What happens if you put an evolutionary robot in a complex environment with a simple fitness function? (Give an example.)

A

Sustainability can be evolved.

A robot with a battery lasting 20 seconds was put in an area with a charger in the corner.
Even though the fitness function did not include the battery, the robot evolved to use the charger 2 seconds before running out of charge.

4
Q

Can you evolve vision? If so, how would you do it?

A

Yes, you can.
One example was based on linear (1-D) vision: 64 photoreceptors, of which only the 16 at the centre were used. A Laplacian filter was applied to detect areas of contrast, and its output was scaled between 0 and 1.
This was fed into a neural network that could take a time series of data.
The network learnt to avoid areas of high contrast.
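
A rough sketch of that preprocessing step, assuming a simple 1-D Laplacian kernel [-1, 2, -1] and min-max scaling (the exact filter and receptor indices used in the lecture may differ):

```python
import numpy as np

def preprocess(photoreceptors):
    """photoreceptors: 64 raw intensity values from the linear camera."""
    centre = photoreceptors[24:40]                    # keep only the 16 central receptors
    # a 1-D Laplacian highlights local contrast (edges) along the strip
    laplacian = np.convolve(centre, [-1.0, 2.0, -1.0], mode="same")
    # scale to [0, 1] before feeding the frame into the time-series neural network
    lo, hi = laplacian.min(), laplacian.max()
    return (laplacian - lo) / (hi - lo) if hi > lo else np.zeros_like(laplacian)

activations = preprocess(np.random.rand(64))          # one frame of (dummy) camera input
```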

5
Q

How would you evolve learning?

A

Instead of evolving hard-coded weights, you can evolve learning rules that govern how the synaptic weights change.

6
Q

What are the 4 learning rules in the lecture? Briefly describe them.

A
  1. Hebb - the synapse is strengthened when the neurons on both sides of it fire together.
  2. Post-synaptic - as Hebb, but the synapse is also weakened when only the post-synaptic neuron fires.
  3. Pre-synaptic - as Hebb, but the synapse is also weakened when only the pre-synaptic neuron fires.
  4. Covariance - a combination of the two: strengthened when the activities are correlated, weakened otherwise.
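
Illustrative Python versions of these update rules (the exact formulations in the lecture may differ; here `pre` and `post` are unit activations in [0, 1] and `eta` is a learning rate):

```python
def hebb(w, pre, post, eta=0.1):
    # strengthened when the units on both sides of the synapse are active together
    return w + eta * pre * post

def postsynaptic(w, pre, post, eta=0.1):
    # Hebbian strengthening, weakening when the post-synaptic unit fires alone
    return w + eta * post * (2 * pre - 1)

def presynaptic(w, pre, post, eta=0.1):
    # Hebbian strengthening, weakening when the pre-synaptic unit fires alone
    return w + eta * pre * (2 * post - 1)

def covariance(w, pre, post, eta=0.1):
    # strengthened when the two activities are correlated, weakened when they are not
    return w + eta * (2 * pre - 1) * (2 * post - 1)
```
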
7
Q

What are the important aspects of evolving learning?

A
  1. Neural networks can use different learning rules in different parts of the system.
  2. There is no need for a teacher or a reinforcement signal, and no gradient descent, so no getting stuck in local minima.
  3. The system can learn and adapt in new environments - it is more robust.
  4. Evolved adaptive individuals can transfer smoothly from simulation to the real world.
8
Q

What are Framsticks?

A

Artificially evolved creatures made up of sticks. Neurons and sensors are hosted on the sticks.

9
Q

What was the Golem project? What was significant about it?

A

Similar to the Framsticks simulation, but the robots were 3D printed. The neural nets evolved in simulation were downloaded onto a PIC controller on the robot and tested.

The evolution took place in simulation, but the result was actually tested on real robots.

10
Q

What is it called when evolved robots don’t work as well as expected in real life?

A

Reality gap.

11
Q

What is a disadvantage when trying to make evolved robots in real life?

A

Often the evolved systems are very complicated and therefore difficult and expensive to build.

12
Q

What is exploration-exploitation? What problem does it solve?

A

The co-evolution of the simulator and the robot controller.

It addresses the reality gap.

13
Q

Talk me through the algorithm for exploration-exploitation (co-evolution).

A
  1. Evolve the robots in an initial simulator, then test the best one on a real robot while recording its sensory signals.
  2. Evolve the simulator so that the sensory signals of the evolved simulated robots match those recorded on the real robot.
  3. Repeat until a good match is found.
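
A toy, self-contained sketch of that loop (every function here is a stand-in with made-up names; a real implementation would run full evolutionary searches and real hardware trials):

```python
import random

def evolve_controllers(sim_params):
    # stand-in for an evolutionary run inside the simulator; returns the champion
    return [random.uniform(-1, 1) for _ in range(4)]

def run(world_params, controller):
    # stand-in for running a controller and recording its sensory signals
    return [sum(p * c for p, c in zip(world_params, controller))]

def evolve_simulator(sim_params, controller, real_signals):
    # stand-in for evolving simulator parameters towards the recorded real signals
    return [p + random.gauss(0, 0.05) for p in sim_params]

def signal_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

real_world = [0.3, -0.2, 0.8, 0.1]                       # unknown to the simulator
sim_params = [random.uniform(-1, 1) for _ in range(4)]

for iteration in range(20):
    controller = evolve_controllers(sim_params)          # 1. evolve in the simulator
    real_signals = run(real_world, controller)           #    test on the real robot
    sim_signals = run(sim_params, controller)
    if signal_distance(sim_signals, real_signals) < 0.05:
        break                                            # 3. good match found
    sim_params = evolve_simulator(sim_params, controller, real_signals)  # 2. evolve the simulator
```
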
14
Q

How many iterations of co-evolution did it take to make a successful star-shaped robot in 2007?

A
  1. Proof that it is a good mechanism to cross the reality gap.
15
Q

If a robot is broken, how can it recover?

A

A map of different solutions (walking gaits), each evaluated by a fitness function, is stored in the robot’s controller. The robot navigates this map, trying out different walking gaits until it finds one that works despite its injury - typically in under 40 seconds when tested on a six-legged robot.

This is not so much evolution as a search of the feature space.
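
A toy sketch of that search, assuming the map is simply a dictionary from gait parameters to the performance predicted before the damage (the real system is more sophisticated about choosing which gait to try next):

```python
import random

# hypothetical pre-computed map: gait parameters -> performance predicted in simulation
gait_map = {(random.random(), random.random()): random.random() for _ in range(1000)}

def try_on_robot(gait):
    # stand-in for running the gait on the damaged robot and measuring how well it walks
    return random.random()

GOOD_ENOUGH = 0.9
best_gait, best_score = None, -1.0

# walk the map from most to least promising gait until one works despite the damage
for gait, predicted in sorted(gait_map.items(), key=lambda kv: kv[1], reverse=True):
    score = try_on_robot(gait)
    if score > best_score:
        best_gait, best_score = gait, score
    if score >= GOOD_ENOUGH:
        break
```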

16
Q

What is co-evolution? (not talking about body and mind, or simulations and physical)

A

When you use two competing species e.g. predator-prey. The fitness function of one depends on the other.

17
Q

What are the benefits of using co-evolution? (competing species) (5)

A
  1. May increase adaptivity by producing an evolutionary arms race.
  2. More complex solutions may emerge incrementally as each population tries to outperform the other.
  3. May be a solution to the bootstrap problem (every individual scoring zero fitness at the start).
  4. The human-designed fitness function plays a less important role.
  5. The continuously changing fitness landscape may help to prevent stagnation in local minima.
18
Q

What is a real-world example of competing species (used in artificial evolution)?

A

A computer program that sorts. The sorting program was co-evolved with a testing program, each with its own fitness function.

It produced a more efficient program than single-population evolution or hand design could have produced.

19
Q

What is a problem with competing species (co-evolution)?

A

Recycling - the same set of solutions may be discovered over and over again in a cycle. All each program cares about is beating the other.

An old solution might work well against a new solution.

20
Q

What is the Hall of fame? What does it prevent?

A

When testing a new solution, also evaluate it against a random sample from a ‘hall of fame’ of old champion solutions. This prevents the recycling dynamic, because solutions that are beaten by old solutions will never be selected.
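
A minimal sketch of that evaluation (names are hypothetical; `beats(a, b)` is assumed to return True when solution `a` defeats solution `b`):

```python
import random

def fitness_vs_hall_of_fame(candidate, hall_of_fame, beats, sample_size=5):
    """Score a candidate by the fraction of sampled old champions it beats."""
    if not hall_of_fame:
        return 1.0
    opponents = random.sample(hall_of_fame, min(sample_size, len(hall_of_fame)))
    return sum(beats(candidate, old) for old in opponents) / len(opponents)
```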

21
Q

Why does it make sense for cooperation to evolve? Under which conditions will this happen?

A

If there is an advantage to helping someone and it doesn’t cost anything, it makes sense for cooperation to evolve.

22
Q

What is Hamilton’s law with regards to cooperation?

A

You are more likely to help genetically similar individuals (e.g. siblings) because doing so increases the chance of genes similar to yours being passed on.
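
The usual formal statement (not spelled out on the card) is Hamilton’s rule, which says helping is favoured when the relatedness-weighted benefit exceeds the cost:

```latex
% Hamilton's rule: cooperation/altruism is favoured by selection when
\[
  r B > C
\]
% r : genetic relatedness between helper and recipient (e.g. 0.5 for full siblings)
% B : reproductive benefit to the recipient
% C : reproductive cost to the helper
```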

23
Q

To favour cooperation, how should the genomes of robot teams or swarms be distributed?

A

Homogeneously (all members of the team share the same genome).