Nature-Inspired Computation Flashcards

1
Q

What is “Nature-Inspired Computation”?

A

Nature-Inspired Computation (NIC) applies principles and mechanisms found in natural systems to solve computational problems. It involves:

“Nature”: Studying elements such as evolution, human-like activity, and collective behaviors (e.g., ant colonies).

“Inspired”: Using these natural systems as models because they excel at solving certain problems.

“Computation”: Tackling tasks organisms are naturally good at, such as pattern recognition, shortest pathfinding, and search operations.

2
Q

Why are natural systems used as inspiration for computing?

A

Natural systems provide models for algorithms because they:

Produce highly complex and efficient solutions (e.g., evolution creating humans).

Solve complex problems (e.g., human cognition and reasoning).

Demonstrate emergent intelligence in simple systems (e.g., swarms solving problems collectively).

3
Q

What advantages do Nature-Inspired techniques have over classical methods?

A

Nature-Inspired techniques offer:

Good performance on a variety of real-world problems.

Efficiency on modest hardware (e.g., evolutionary algorithms for system optimization).

Better outcomes for pattern recognition and anomaly detection compared to classical approaches (e.g., neural networks outperforming traditional pattern recognition methods).

4
Q

What are some real-world examples of Nature-Inspired Computing?

A

Examples include:

Ant Colony Optimization for scheduling and timetabling.

Particle Swarm Optimization for solving complex systems.

Neural Networks for data pattern recognition.

Artificial Immune Systems for threat detection.

5
Q

What are the “magic ingredients” of evolutionary algorithms?

A

Key components include:

A population of competing organisms.

Selection with a bias towards fitter individuals while allowing diversity.

Mutation (random changes) and optional recombination (combining traits of parents).

6
Q

How does “evolution” act as a problem-solving method in computing?

A

Evolution uses trial and error to solve problems by:

Reproducing in changing environments.

Generating diversity in offspring.

Selecting for survival, leading to better solutions over generations. Example: Geckos’ climbing ability inspires material design.

7
Q

What is the basic cycle of an evolutionary algorithm?

A

The cycle includes:

Generating an initial population.

Evaluating fitness.

Selecting parents based on fitness.

Applying crossover and mutation to generate offspring.

Replacing the old population with the new one.

Repeating until a stopping criterion is met (e.g., a good enough solution is found or a generation limit is reached).
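The cycle above can be sketched in code. This is a minimal illustrative sketch, not from the lectures: it assumes a toy bit-string problem where fitness simply counts the 1s (OneMax), and all function names and parameter values are invented for the example.

```python
import random

def one_max(bits):
    # Toy fitness function: the number of 1s (higher is better).
    return sum(bits)

def evolve(pop_size=20, length=16, generations=50, mutation_rate=0.05):
    # 1. Generate an initial random population.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            # 2-3. Evaluate fitness and select parents (binary tournament).
            p1 = max(random.sample(pop, 2), key=one_max)
            p2 = max(random.sample(pop, 2), key=one_max)
            # 4. Apply crossover (one point) and mutation to create a child.
            cut = random.randrange(1, length)
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            children.append(child)
        # 5. Replace the old population with the new one.
        pop = children
    # 6. Stop after a fixed number of generations; return the best found.
    return max(pop, key=one_max)

best = evolve()
```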

8
Q

How does the “Infinite Monkey Theorem” relate to evolution?

A

It highlights the role of randomness in generating solutions. While infinite randomness might theoretically solve problems, evolution combines randomness with selection and reproduction to achieve practical and efficient problem-solving.

9
Q

How are evolutionary algorithms applied in real-world scenarios?

A

Applications include:

Planning: Routing, scheduling, packing.

Design: Electronic circuits, neural networks, structural designs.

Simulation: Modeling economic systems.

Identification: Predicting medical trends.

Control: Robotics and engine design.

Classification: Diagnosing diseases, detecting spam.

10
Q

What is the role of mutation and recombination in evolutionary algorithms?

A

Mutation: Introduces diversity by making small random changes to solutions, essential for exploring new possibilities.

Recombination: Combines traits from multiple parents, creating potentially superior offspring. It’s optional but often beneficial.

11
Q

What is “Survival of the Fittest” in the context of evolutionary algorithms?

A

It ensures that solutions with higher fitness are more likely to reproduce. This mechanism gradually improves the population by favoring advantageous traits while maintaining some diversity for exploration.

12
Q

How did NASA use evolutionary algorithms for antenna design?

A

NASA used evolutionary algorithms to design antennas for the ST5 spacecraft. These algorithms outperformed human designs, creating efficient structures suited for the mission’s constraints.

13
Q

What are some limitations of random problem-solving methods?

A

Purely random methods, like the “Infinite Monkey Theorem,” are impractical due to the vast time required. Evolutionary algorithms improve on this by integrating selection and iterative refinement to achieve faster, targeted solutions.

14
Q

How do artificial neural networks differ from classical methods?

A

Artificial Neural Networks (ANNs):

Learn patterns from data without explicit programming.

Adapt dynamically to new inputs.

Excel at tasks like image recognition and natural language processing, outperforming rigid, rule-based systems.

15
Q

How do ant colonies inspire computational algorithms?

A

Ant Colony Optimization mimics the behavior of ants finding shortest paths to food. It uses pheromone trails as a memory mechanism, enabling efficient solutions for routing and scheduling problems.

16
Q

Lecture 2

A
17
Q

What are the basic varieties of an Evolutionary Algorithm (EA)?

A

Evolutionary Algorithms consist of three main components:

Selection: Determines which individuals from the population are chosen to reproduce. Methods include selecting the top percentage of the population, probability proportional to fitness, or exponentially decreasing probability with rank.

Variation: Alters the chosen individuals to create new ones, typically using genetic operators such as mutation and recombination. This depends on the encoding of the solutions.

Population Update: Decides how to form the next generation, such as replacing the entire population or merging new solutions with the old ones and selecting the best subset.

18
Q

What is the purpose of an Evolutionary Algorithm?

A

Evolutionary Algorithms are used for optimization problems, aiming to find the best possible solution to a problem where evaluating the quality of a solution can be quantified using a fitness function. They are approximate algorithms particularly suited for solving hard problems where exact methods are impractical.

19
Q

What is the process of a generic EA?

A

The process includes:

Generate an initial population of random solutions.

Evaluate the fitness of each solution.

Perform the cycle of:

Selection: Choose parents based on fitness.

Variation: Apply genetic operators (e.g., mutation, recombination).

Population Update: Replace some or all of the old population with new solutions.

Repeat until a stopping criterion is met.

20
Q

Why are EAs considered approximate algorithms?

A

EAs are approximate because:

They aim to find near-optimal solutions in reasonable time rather than guaranteeing the optimal solution.

They are suited for hard problems where exact solutions are computationally infeasible.

They balance exploration (searching new areas) and exploitation (refining known good solutions).

21
Q

What are the key research topics in EAs?

A

Key research areas include:

Selection strategies: Balancing exploration and exploitation.

Encoding methods: Representation of solutions impacts performance.

Variation operators: E.g., small-step mutations and recombination for solution evolution.

Parameter tuning: Adjusting parameters such as population size, mutation rate, and selection pressure over time.

22
Q

What are some real-world applications of EAs?

A

EAs can solve optimization problems in diverse fields such as:

Scheduling (e.g., university timetables).

Network design (e.g., pipe networks, communication networks).

Engineering (e.g., antenna design, car aerodynamics).

Machine learning (e.g., neural network optimization).

Health care (e.g., radiotherapy treatment planning).

23
Q

Define optimization and the role of the fitness function in EAs.

A

Optimization involves finding the best solution to a problem, often by maximizing or minimizing a fitness function that evaluates solution quality. For example:

Timetabling: Fitness = number of clashes (lower is better).

Engineering design: Fitness = closeness to design specifications.

24
Q

What is exhaustive search, and why is it impractical for large problems?

A

Exhaustive search generates every possible solution, evaluates fitness, and selects the best. It’s impractical for large problems because the search space (S) grows exponentially, making it computationally infeasible to evaluate all possibilities.

25
Q

What is the difference between hard and easy problems in optimization?

A

Easy (tractable) problems can be solved with polynomial-time algorithms, e.g., sorting. Hard (intractable) problems require exponential time for exact solutions, making them impractical for large inputs, e.g., protein structure search.

26
Q

What are exact algorithms, and when are they used?

A

Exact algorithms guarantee finding the optimal solution. They are used for small or tractable problems where computational resources are sufficient to evaluate all possibilities or utilize efficient methods (e.g., Prim’s algorithm for minimum spanning trees).

27
Q

How is problem complexity classified?

A

Problem complexity is based on the growth of required computation with input size (n):

Polynomial complexity (e.g., O(n log n), O(n^2)): Tractable and solvable efficiently.

Exponential complexity (e.g., O(2^n)): Intractable for large inputs.

28
Q

Explain the concept of polynomial vs. exponential complexity with examples.

A

Polynomial: Sorting n numbers in O(n log n).

Exponential: Finding best alignment for n sequences in O(2^n).
Exponential problems grow so rapidly that they become computationally infeasible for large n.

29
Q

What is a Minimum Spanning Tree (MST), and how is it solved?

A

An MST connects all nodes in a graph with the minimal total edge cost and no cycles. Solved efficiently using polynomial-time algorithms like Prim’s algorithm.
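As a concrete illustration of a polynomial-time exact method, here is a short sketch of Prim's algorithm using a priority queue. The adjacency-list graph format and the four-node example are assumptions made for this illustration.

```python
import heapq

def prim_mst(graph, start=0):
    """Return the total cost and edge list of a minimum spanning tree.

    graph: dict mapping node -> list of (neighbour, cost) pairs.
    Uses a priority queue of frontier edges, so no cycles are created.
    """
    visited = {start}
    frontier = [(cost, start, nbr) for nbr, cost in graph[start]]
    heapq.heapify(frontier)
    mst, total = [], 0
    while frontier and len(visited) < len(graph):
        cost, u, v = heapq.heappop(frontier)
        if v in visited:
            continue                      # would create a cycle; skip
        visited.add(v)
        mst.append((u, v, cost))
        total += cost
        for nbr, c in graph[v]:
            if nbr not in visited:
                heapq.heappush(frontier, (c, v, nbr))
    return total, mst

# Hypothetical graph: the cheapest way to connect all four nodes costs 1 + 2 + 3 = 6.
g = {0: [(1, 1), (2, 4)], 1: [(0, 1), (2, 2), (3, 5)],
     2: [(0, 4), (1, 2), (3, 3)], 3: [(1, 5), (2, 3)]}
total, tree = prim_mst(g)
```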

30
Q

What makes real-world MST problems hard?

A

Real-world MST problems often have constraints (e.g., degree limits, bandwidth requirements) that make them computationally hard and unsuitable for exact polynomial-time solutions.

31
Q

Why are approximate algorithms essential for real-world problems?

A

Approximate algorithms are essential because:

Most real-world problems are hard and infeasible for exact methods.

They deliver good solutions in reasonable time and often approach optimality without guarantees.

32
Q

What is the typical performance trade-off in approximate algorithms?

A

Approximate algorithms provide quick good solutions early but require more time for higher-quality results. EAs exemplify this with a trade-off curve of quality vs. time.

33
Q

Lecture 3

A
34
Q

What are the basic principles of evolutionary algorithms?

A

Evolutionary algorithms (EAs) mimic natural selection to solve difficult problems. They use:

Objective (Fitness) Function: Measures how good a solution is.

Selection: Chooses the best solutions.

Mutation: Introduces variations by altering solutions.

Recombination (Crossover): Combines solutions to produce new ones.

Replacement: Replaces weaker solutions with better ones.

35
Q

What is the difference between generational and steady-state evolutionary algorithms?

A

Generational EA: Replaces the entire population each generation.

Steady-State EA: Replaces a few individuals each generation, often just the weakest.

36
Q

What is the Travelling Salesperson Problem (TSP)?

A

TSP is a combinatorial optimization problem where you find the shortest tour through all cities, visiting each once and returning to the start. It’s used to illustrate optimization challenges.

37
Q

Describe the hillclimbing algorithm.

A

Generate a random solution and evaluate its fitness.

Mutate the solution and evaluate the mutant.

If the mutant’s fitness is better or equal, replace the current solution.

Repeat until the termination condition is met.
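The four steps above translate almost directly into code. A minimal generic sketch, assuming a user-supplied fitness function and mutation operator; the 1-D example at the bottom is invented for illustration.

```python
import random

def hillclimb(fitness, mutate, initial, iterations=1000):
    """Generic hillclimber: keep the mutant when it is at least as fit."""
    current = initial
    best_f = fitness(current)
    for _ in range(iterations):
        mutant = mutate(current)
        f = fitness(mutant)
        if f >= best_f:          # accept improvements and sideways moves
            current, best_f = mutant, f
    return current, best_f

# Example: maximise a 1-D function (peak at x = 3) by small random nudges.
f = lambda x: -(x - 3) ** 2
step = lambda x: x + random.uniform(-0.1, 0.1)
x, fx = hillclimb(f, step, initial=0.0)
```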

38
Q

Why is hillclimbing named so?

A

Hillclimbing is named for its approach of “climbing” toward better fitness values in a fitness landscape, similar to ascending a hill.

39
Q

What are the limitations of hillclimbing?

A

It gets stuck at local optima, failing to explore other potentially better solutions in the search space.

40
Q

How can the Travelling Salesperson Problem (TSP) be solved using hillclimbing?

A

TSP solutions are encoded as permutations. The algorithm mutates by swapping adjacent nodes and accepts mutations that improve or maintain fitness.
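A sketch of that approach, where the quantity being minimised is the closed-tour length and mutation swaps two adjacent positions in the permutation. The circle of six cities is a hypothetical example chosen so the optimal tour length (6, the hexagon perimeter) is easy to verify.

```python
import math, random

def tour_length(cities, tour):
    # Total length of the closed tour (returns to the starting city).
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def adjacent_swap(tour):
    # Mutation: swap two neighbouring positions in the permutation.
    t = tour[:]
    i = random.randrange(len(t) - 1)
    t[i], t[i + 1] = t[i + 1], t[i]
    return t

def tsp_hillclimb(cities, iterations=2000):
    tour = list(range(len(cities)))
    random.shuffle(tour)
    best = tour_length(cities, tour)
    for _ in range(iterations):
        mutant = adjacent_swap(tour)
        d = tour_length(cities, mutant)
        if d <= best:                      # shorter (or equal) tours win
            tour, best = mutant, d
    return tour, best

# Six cities evenly spaced on a unit circle; optimum visits them in order.
cities = [(math.cos(2 * math.pi * i / 6), math.sin(2 * math.pi * i / 6))
          for i in range(6)]
tour, dist = tsp_hillclimb(cities)
```

Note that adjacent swaps give a very local neighbourhood, so this climber can still get stuck at a local optimum, as the next card discusses.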

41
Q

Define a fitness landscape.

A

A fitness landscape is a graph where:

Solutions are plotted on the x-axis.

Their fitness values are plotted on the y-axis.
It shows the relationship between solutions and their fitness.

42
Q

What role do mutation operators play in search landscapes?

A

Mutation operators determine the “neighbourhood” of a solution by creating slight variations. This impacts the exploration of smooth or rugged fitness landscapes.

43
Q

What is the significance of neighbourhoods in search landscapes?

A

A neighbourhood is the set of all possible mutants of a solution. It defines local search scope based on the mutation operator.

44
Q

Describe the topology of typical landscapes.

A

Landscapes are locally smooth (small changes in solutions lead to small changes in fitness).

Realistic problems often have rugged landscapes with many local optima and deceptive features.

45
Q

What are the main enhancements beyond hillclimbing?

A

Allowing downhill moves: Helps escape local optima (e.g., local search).

Using populations: Enables exploration of multiple regions in the search space (e.g., evolutionary algorithms).

46
Q

How does local search differ from hillclimbing?

A

Local search evaluates a neighbourhood and accepts solutions based on defined policies (e.g., accepting worse solutions probabilistically, as in Monte Carlo search).

47
Q

What is the purpose of population-based search?

A

Population-based search maintains multiple solutions, reducing the risk of getting stuck in local optima and allowing for crossover and recombination to enhance exploration.

48
Q

Compare traditional optimization algorithms to evolutionary algorithms regarding landscapes.

A

Traditional algorithms (e.g., gradient search): Get stuck in local optima.

Evolutionary algorithms: Use populations and genetic operators to explore landscapes more effectively, avoiding local optima.

49
Q

Explain the process of running a steady-state EA on the TSP.

A

Encode solutions as permutations and evaluate fitness.

Mutate a selected parent solution.

Replace the weakest individual if the mutant has better fitness.

Repeat for generations, leading to convergence.

50
Q

List different mutation techniques in evolutionary algorithms.

A

Single-Gene Mutation: Changes one gene.

Multi-Gene Mutation: Alters multiple genes.

Customizable based on problem encoding.

51
Q

What are some selection methods used in EAs?

A

Rank-Based Selection: Chooses solutions based on rank.

Roulette Wheel Selection: Probability proportional to fitness.

Tournament Selection: Selects the best from a random subset.

52
Q

Why are population-based methods crucial for evolutionary algorithms?

A

They allow exploration of diverse regions in the search space, making it easier to avoid local optima and fostering innovation through recombination.

53
Q

Lecture 4

A
54
Q

What are the two main types of genetic algorithms?

A

The two main types are:

Generational: Genetic operators are applied repeatedly to create a new population in each generation. Elitist variants copy the top individuals directly to the next generation.

Steady State: Genetic operators are applied to produce a few (e.g., 1 or 2) new solutions per iteration, replacing weaker solutions in the population.

55
Q

What is a generational elitist genetic algorithm?

A

It is a type of generational genetic algorithm where the best individuals are copied unchanged into the next generation, ensuring that high-quality solutions persist.

56
Q

What are replacement strategies in steady-state algorithms?

A

Replacement strategies include:

Replace Weakest: Always replace the weakest solution.

Replace First Weakest: Replace the first solution weaker than the new candidate.

57
Q

Why is selection pressure important in evolutionary algorithms?

A

Low pressure: Leads to little evolutionary progress as selection becomes almost random.

High pressure: Risks convergence to local optima by overemphasizing the best solutions.

Moderate pressure: Balances exploration and exploitation, supporting better long-term performance.

58
Q

What is Fitness Proportionate Selection (Roulette Wheel)?

A

A selection method where the probability of selecting an individual is proportional to its fitness.
Limitations:

Struggles with minimization problems or negative fitness values.

High disparity in fitness can overly favor the best solutions, leading to premature convergence.
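A minimal sketch of the roulette wheel, assuming all fitness values are positive (which is exactly the limitation noted above: minimisation problems or negative fitnesses need rescaling first). The population and fitness values are invented for the example.

```python
import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    r = random.uniform(0, total)          # a random point on the wheel
    cumulative = 0.0
    for individual, f in zip(population, fitnesses):
        cumulative += f
        if r <= cumulative:
            return individual
    return population[-1]                 # guard against floating-point round-off

pop = ["a", "b", "c"]
fit = [1.0, 1.0, 8.0]                     # "c" should be chosen ~80% of the time
picks = [roulette_select(pop, fit) for _ in range(1000)]
```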

59
Q

What is Rank-Based Selection?

A

Selection probabilities are proportional to the rank of individuals, not their fitness values. This mitigates the problems of fitness proportionate selection by ensuring a fairer selection process.

60
Q

What is Tournament Selection, and how does the tournament size k affect it?

A

In tournament selection:

Randomly select k individuals.

Choose the fittest among them as the parent.
Effect of k:

Small k: Low selection pressure.

Large k: High selection pressure.
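A sketch of tournament selection with an explicit tournament-size parameter (called k here, the usual convention); the toy population, where each individual's fitness is its own value, is invented for the example.

```python
import random

def tournament_select(population, fitness, k=3):
    """Return the fittest of k randomly chosen individuals.

    Larger k means higher selection pressure; k = 1 is purely random choice.
    """
    contestants = random.sample(population, k)
    return max(contestants, key=fitness)

pop = list(range(10))                 # individuals 0..9; fitness = the value itself
# With k equal to the population size, the tournament always contains
# the best individual, so it must win.
winner = tournament_select(pop, fitness=lambda x: x, k=10)
```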

61
Q

What is the role of genetic operators in evolutionary algorithms?

A

Genetic operators create new candidate solutions:

Exploitation: Small, beneficial changes to existing solutions.

Exploration: Broader searches of the solution space to avoid local optima.

62
Q

What are common representations (encodings) in evolutionary algorithms?

A

Integer Vectors

Binary Strings

Real Values

Permutations

Trees

Each encoding requires specific genetic operators.

63
Q

How does k-ary encoding work?

A

In k-ary encoding, each candidate solution is a list of numbers, each ranging from 0 to k−1. Example: for a 5-ary encoding (k = 5) of length 10: [4, 2, 1, 3, 0, 1, 4, 3, 2, 1].

64
Q

Describe single-gene and multi-gene mutation in k-ary encodings.

A

Single-Gene Mutation: Randomly change one gene to a new value.

Multi-Gene Mutation: Apply single-gene mutation multiple times.
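Both operators are easy to sketch. This toy implementation forces the mutated gene to take a genuinely new value (a design choice, not something the cards specify); the parent is the 5-ary list from the encoding card above.

```python
import random

def single_gene_mutation(chromosome, k):
    """Replace one randomly chosen gene with a different value in 0..k-1."""
    c = chromosome[:]
    i = random.randrange(len(c))
    c[i] = random.choice([v for v in range(k) if v != c[i]])
    return c

def multi_gene_mutation(chromosome, k, genes=3):
    """Apply single-gene mutation several times."""
    for _ in range(genes):
        chromosome = single_gene_mutation(chromosome, k)
    return chromosome

parent = [4, 2, 1, 3, 0, 1, 4, 3, 2, 1]
child = single_gene_mutation(parent, k=5)
```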

65
Q

What is swap mutation, and when is it used?

A

Swap mutation involves exchanging the positions of two genes. It is commonly used in permutation-based encodings.

66
Q

How does mutation work in real-valued encodings?

A

Single-Gene Mutation: Add a small random deviation (e.g., Gaussian noise) to a single gene.

Vector Mutation: Add a small random vector to the entire solution.
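A sketch of both real-valued variants, assuming Gaussian noise with an invented standard deviation (sigma) as the "small random deviation":

```python
import random

def single_gene_mutation(vector, sigma=0.1):
    """Add Gaussian noise to one randomly chosen gene."""
    v = vector[:]
    i = random.randrange(len(v))
    v[i] += random.gauss(0.0, sigma)
    return v

def vector_mutation(vector, sigma=0.1):
    """Add a small random deviation to every gene at once."""
    return [x + random.gauss(0.0, sigma) for x in vector]

parent = [1.0, 2.0, 3.0]
child = single_gene_mutation(parent)
```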

67
Q

What are standard recombination operators for k-ary encodings?

A

1-Point Crossover: Split two parents at one point and exchange segments.

2-Point Crossover: Split at two points and exchange middle segments.

Uniform Crossover: Use a random binary mask to determine which genes to swap.
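Two of these operators sketched in code (2-point crossover is the same idea as 1-point with two cut positions, so it is omitted for brevity); the all-zeros and all-ones parents are a toy example that makes the gene exchange easy to see.

```python
import random

def one_point_crossover(p1, p2):
    """Split both parents at one random point and exchange the tails."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def uniform_crossover(p1, p2):
    """A random binary mask decides which parent supplies each gene."""
    mask = [random.randint(0, 1) for _ in p1]
    c1 = [a if m else b for a, b, m in zip(p1, p2, mask)]
    c2 = [b if m else a for a, b, m in zip(p1, p2, mask)]
    return c1, c2

# Every gene present in the parents survives in one child or the other.
a, b = one_point_crossover([0] * 6, [1] * 6)
```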

68
Q

Why is representation/encoding crucial in evolutionary computation?

A

The representation impacts:

The shape of the search landscape.

The choice of effective genetic operators.
A poorly chosen encoding can hinder algorithm performance.

69
Q

Describe the steady-state, mutation-only evolutionary algorithm with replace-worst strategy.

A

Steps:

Initialize a random population.

Select a parent using tournament selection.

Mutate the parent to create a child.

Replace the weakest solution if the child is fitter.

Repeat until termination.
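The steps above can be sketched as a short generic routine. This is an illustrative sketch: the fitness/mutation/initialisation functions are supplied by the caller, and the OneMax toy run at the bottom (with invented parameters) just exercises it.

```python
import random

def steady_state_ea(fitness, mutate, random_solution,
                    pop_size=20, iterations=500, k=3):
    """Steady-state, mutation-only EA with a replace-worst strategy."""
    # 1. Initialize a random population.
    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(iterations):
        # 2. Select a parent by tournament selection (size k).
        parent = max(random.sample(pop, k), key=fitness)
        # 3. Mutate the parent to create a child.
        child = mutate(parent)
        # 4. Replace the weakest individual if the child is fitter.
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return max(pop, key=fitness)

def flip_one(bits):
    # Mutation for bit strings: flip one randomly chosen bit.
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

best = steady_state_ea(
    fitness=sum, mutate=flip_one,
    random_solution=lambda: [random.randint(0, 1) for _ in range(12)])
```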

70
Q

What are the steps in a generational, elitist evolutionary algorithm with rank-based selection?

A

Initialize a population and evaluate fitness.

Select parents using rank-based probabilities.

Apply crossover and mutation to create children.

Retain the best individual and replace the rest with children.

Repeat until termination.

71
Q

How does the crossover rate influence evolutionary algorithms?

A

The crossover rate determines how often crossover is applied. A high rate encourages exploration, while a low rate emphasizes exploitation.

72
Q

What is the role of mutation rate in evolutionary algorithms?

A

The mutation rate controls the frequency of random changes.

Low rate: Promotes stability but risks premature convergence.

High rate: Enhances exploration but can disrupt good solutions.

73
Q

Summarize the key principles of evolutionary algorithms.

A

Selection favors fitter solutions.

Genetic operators must suit the encoding.

Balancing exploration and exploitation is essential.

Encoding and operator design must align with the problem.

74
Q

Lecture 5 (note to self: rewatch this one, it went too fast)

A
75
Q

What is encoding in the context of Evolutionary Computation (EC)?

A

Encoding is the process of representing candidate solutions in a way that can be manipulated by evolutionary algorithms. Different encodings affect the optimization landscape and strategy.

76
Q

Why is encoding important in evolutionary algorithms?

A

Encoding determines how solutions are represented, impacting the fitness landscape’s complexity, the effectiveness of operators like mutation and crossover, and the overall difficulty of the optimization problem.

77
Q

What problem arises when using standard mutation for the Traveling Salesman Problem (TSP)?

A

Standard mutation may produce invalid solutions by repeating nodes or omitting nodes entirely. For example, a mutation could result in visiting the same city twice or skipping another entirely.

78
Q

How can the swap operator help in TSP mutation?

A

The swap operator exchanges two cities in the tour, ensuring that each city is visited exactly once and maintaining a valid solution.

79
Q

What are two strategies to handle invalid solutions resulting from crossover in TSP?

A

1) Use a better crossover operator that inherently avoids invalid solutions.
2) Introduce a repair mechanism to fix invalid solutions after crossover.

80
Q

What is a k-ary encoding in the context of water distribution network optimization?

A

A k-ary encoding uses chromosomes of length l, where each gene represents a pipe’s diameter chosen from k possible values, often looked up from a predefined table.

81
Q

What is the Generalised assignment problem?

A

A problem where you have n workers and m jobs: each job must be assigned to a worker, each assignment has a cost, and the goal is to find the minimum-cost assignment subject to the workers’ capacity constraints.

82
Q

Describe an encoding approach for the Generalized Assignment Problem.

A

Encoding can be done with:

An array of n integers (workers), each representing a job (range 1 to m).
Advantages: Every worker has at least one job.
Disadvantages: Jobs may not be assigned, and some jobs may be assigned to multiple workers.

An array of m integers (jobs), each representing a worker (range 1 to n).
Advantages: Every job is assigned to exactly one worker.
Disadvantages: Some workers may receive no jobs while others are overloaded.

83
Q

What are the benefits of using direct encoding?

A

Direct encoding offers:

Straightforward genotype-to-phenotype mapping.

Easier mutation effect estimation.

Faster chromosome interpretation, speeding up fitness evaluation.

84
Q

How does indirect encoding differ from direct encoding?

A

Indirect encoding uses a heuristic or constructive approach, enforcing problem constraints and potentially reducing the search space size but at the cost of slower interpretation and more rugged fitness landscapes.

85
Q

What is the role of mutation in an indirect encoding for a timetable problem?

A

Mutation in indirect encoding involves selecting a clash-free slot for an exam, ensuring valid solutions that respect constraints like non-overlapping exams.

86
Q

What are the characteristics of real encoding in evolutionary algorithms?

A

Real encoding represents solutions as real numbers. It is suitable for continuous optimization problems and can directly represent variables like pipe diameters or antenna coordinates.

87
Q

Explain the concept of “innovation” in evolutionary algorithms.

A

Innovation refers to the ability of evolutionary algorithms to generate entirely new designs or solutions beyond known optima, often leading to novel and improved designs.

88
Q

How can evolutionary algorithms optimize antenna design?

A

By encoding the x, y, z coordinates of antenna elements as chromosomes, evolutionary algorithms can evolve designs that meet performance requirements like uniform radiation patterns.

89
Q

What is the fitness function for antenna design optimization?

A

The fitness function measures the sum of squared differences between the desired and actual gain patterns of the antenna over various angles.

90
Q

What are common types of encodings used in evolutionary algorithms?

A

Common encodings include:

Integer vectors

Real vectors

Bit and symbol vectors

Tree representations
Each can be either direct or indirect.

91
Q

How does encoding affect crossover and mutation operators?

A

Encoding determines how crossover and mutation are applied and whether resulting solutions remain valid. It also influences the fitness landscape’s ruggedness and search efficiency.

92
Q

What was Ingo Rechenberg’s contribution to evolutionary computation?

A

Ingo Rechenberg pioneered the application of evolutionary algorithms to engineering design, such as optimizing the shape of a jet nozzle to maximize thrust.

93
Q

Why might indirect encoding lead to a smaller search space?

A

Indirect encoding can incorporate domain-specific knowledge and enforce constraints directly, eliminating large portions of the search space that would contain invalid solutions.

94
Q

Lecture 6

A
95
Q

What is the core idea of Genetic Programming (GP)?

A

GP is a type of evolutionary algorithm that evolves computer programs to solve problems. It starts with random programs, evaluates their performance, selectively modifies them using crossover and mutation, and repeats this process until an optimal or satisfactory program is found.

96
Q

Who is credited with popularizing Genetic Programming?

A

John R. Koza popularized GP in his 1992 book Genetic Programming: On the Programming of Computers by Means of Natural Selection.

97
Q

How does GP differ from conventional programming?

A

In conventional programming, humans write the program to produce the desired output. In GP, programs are automatically evolved to meet the required behavior without human intervention.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
98
Q

What are some example applications of Genetic Programming?

A

GP can be applied to tasks such as:

Navigation code for mobile robots

Curve fitting

Antenna design

Circuit design

Prediction tasks

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
99
Q

How is fitness evaluation performed in GP compared to standard evolutionary algorithms?

A

In standard EAs, a chromosome represents a design or schedule, and its fitness is evaluated by giving it a score. In GP, the chromosome is a program, so fitness evaluation involves running the program and testing its behavior over multiple input conditions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
100
Q

What are the main steps in the Genetic Programming algorithm?

A

The main steps are:

Generate random programs.

Evaluate programs using training data.

Modify the population using crossover and mutation.

Repeat until a good program is found.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
101
Q

What are the five major preparatory steps for running a GP algorithm?

A

The five steps are:

Determining the set of terminals.

Determining the set of functions.

Determining the fitness measure.

Determining the parameters for the run.

Determining the criterion for terminating the run.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
102
Q

What is an example of fitness evaluation in GP?

A

Fitness evaluation involves computing the error between the program’s output and the expected output across multiple test conditions, then summing these errors to determine overall fitness.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
103
Q

How can a program be represented in Genetic Programming?

A

Programs can be represented as:

Trees with functions as internal nodes and terminals as leaves.

Procedural code, where operations are carried out in sequence.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
104
Q

What are a function set and a terminal set in GP?

A

Function set (F): Includes functions like addition, subtraction, multiplication, division, and conditional operations (e.g., IF statements).

Terminal set (T): Includes variables (e.g., X, Y) and constants (e.g., real numbers).

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
105
Q

What rules are followed when generating random programs?

A

Rules include:

Selecting a random function from the function set for the root.

Ensuring that each function node has the correct number of children.

Limiting depth to a predefined maximum.
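These rules can be sketched as a recursive "grow" procedure; the function and terminal sets below are illustrative assumptions, not from the lecture:

```python
import random

# Illustrative (assumed) primitive sets: function name -> arity.
FUNCTIONS = {"+": 2, "-": 2, "*": 2}
TERMINALS = ["x", "y", 1.0]

def random_tree(max_depth):
    # At the depth limit (or by chance) emit a terminal leaf;
    # otherwise pick a function for the node and give it exactly
    # as many children as its arity requires.
    if max_depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    fn = random.choice(list(FUNCTIONS))
    return (fn, *(random_tree(max_depth - 1) for _ in range(FUNCTIONS[fn])))
```

Here a tree such as `('+', 'x', 1.0)` encodes the expression x + 1.0.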

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
106
Q

How does mutation work in Genetic Programming?

A

Mutation involves selecting a random subtree in a program, removing it, and generating a new subtree at that location while following syntax rules and depth limits.
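A minimal sketch of subtree mutation on tuple-encoded trees; the encoding, primitive sets, and mutation rate below are assumptions for illustration:

```python
import random

# Assumed primitive sets: function name -> arity.
FUNCTIONS = {"+": 2, "-": 2, "*": 2}
TERMINALS = ["x", "y", 1.0]

def random_tree(max_depth):
    # Grow a random tree respecting arity and a depth limit.
    if max_depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    fn = random.choice(list(FUNCTIONS))
    return (fn, *(random_tree(max_depth - 1) for _ in range(FUNCTIONS[fn])))

def mutate(tree, max_depth=3):
    # At a leaf (or with some probability at any node) replace this
    # node with a freshly grown subtree; otherwise descend into a
    # randomly chosen child, so every node can be the mutation point.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(max_depth)
    fn, *children = tree
    i = random.randrange(len(children))
    children[i] = mutate(children[i], max(max_depth - 1, 0))
    return (fn, *children)
```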

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
107
Q

How does crossover work in Genetic Programming?

A

Crossover involves selecting subtrees from two parent programs and swapping them to create offspring programs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
108
Q

What is symbolic regression in the context of Genetic Programming?

A

Symbolic regression is the task of finding a mathematical expression, represented as a program, that best fits a given set of data points.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
109
Q

How is fitness measured in a symbolic regression task?

A

Fitness is measured by calculating the sum of the absolute differences between the output of the candidate program and the given data over multiple values of the independent variable.
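As a sketch, with an assumed target function and candidate program:

```python
# Symbolic-regression fitness: sum of absolute errors over sample
# points. Lower is better; zero means a perfect fit on the samples.
def fitness(program, xs, target):
    return sum(abs(program(x) - target(x)) for x in xs)

target = lambda x: x**2 + x + 1      # assumed data-generating function
candidate = lambda x: x**2 + 2*x     # an imperfect candidate program
xs = [i / 10 for i in range(-10, 11)]
err = fitness(candidate, xs, target)
```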

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
110
Q

What are the preparatory steps for a symbolic regression example?

A

Steps include:

Terminal set: {X, Random-Constants}

Function set: {+, -, *, %}

Fitness measure: Sum of absolute errors.

Parameters: Population size (e.g., 4 individuals).

Termination: Error threshold (e.g., less than 0.1).

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
111
Q

How can GP be used in search-based software engineering?

A

GP can evolve parts of software, such as optimizing specific functions or algorithms within a larger system. This approach can involve seeding the initial population with existing human-written code and constraining evolution to certain modules.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
112
Q

What are different types of function and terminal sets that can be used in GP?

A

Different sets include:

Arithmetic operations (+, -, *, /)

Logical operations (AND, OR, NOT)

Relational operations (<, >, =)

These sets are useful for various tasks such as symbolic regression, data mining, and decision-making problems.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
113
Q

Lecture 7

A
114
Q

What is swarm intelligence?

A

Swarm intelligence refers to the collective behavior of decentralized, self-organized systems, typically composed of simple agents interacting with their environment and each other. Examples include ant colonies, bird flocks, and fish schools.

115
Q

What are the key properties of swarm behavior?

A

Swarm behavior exhibits emergent properties, meaning the collective behavior is more sophisticated than what individual agents can achieve alone. Local interactions among agents lead to the emergence of complex global behavior.

116
Q

How do individual elements in a swarm operate?

A

Each element follows simple behavioral rules and interacts locally with other elements and the environment. There is no central control; instead, the overall behavior emerges from these local interactions.

117
Q

What optimization algorithms are inspired by swarm intelligence?

A

Two main optimization algorithms inspired by swarm intelligence are:

Ant Colony Optimization (ACO)

Particle Swarm Optimization (PSO)

118
Q

What real-world behaviors of ants inspired Ant Colony Optimization (ACO)?

A

Observed behaviors in ants that inspired ACO include:

Finding the shortest route to food sources

Building bridges

Cooperatively carrying large objects

Sorting brood and food items

Regulating nest temperature

119
Q

What are the two types of stigmergy?

A

Sematectonic stigmergy: The agent’s change to the environment is a direct part of task execution (e.g., moving material while building a nest), and that change influences the behavior of others.

Sign-based stigmergy: The agent deposits a dedicated signal (e.g., a pheromone) that does not itself contribute to the task but influences the subsequent behavior of others.

120
Q

How do ants use pheromones in their environment?

A

Ants lay pheromone trails while moving between the nest and food sources. The intensity of the pheromone trail increases with more ants using the path. Over time, trails evaporate, but those with higher usage remain prominent, guiding more ants.

121
Q

What is the basic operation of Ant Colony Optimization algorithms?

A

In ACO algorithms:

Ants represent agents moving along edges of a graph.

They choose paths based on pheromone strength and other criteria.

Each completed path represents a candidate solution.

Pheromone is laid on paths proportional to solution quality.

The process iterates, encouraging convergence towards optimal solutions.

122
Q

How does the pheromone updating mechanism work in ACO?

A

After an ant completes a path:

Pheromone evaporates on all paths to avoid indefinite accumulation.

Pheromone is increased on the traversed paths based on the quality of the solution (e.g., shorter paths receive more pheromone).
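A minimal sketch of this evaporate-then-deposit rule; rho (evaporation rate) and Q (deposit constant) are the usual parameter names, and the dictionary-based edge map is an assumption:

```python
# tau maps directed edges (a, b) to pheromone levels.
def update_pheromone(tau, tour, tour_length, rho=0.1, Q=1.0):
    # Evaporate on every edge to avoid indefinite accumulation...
    for edge in tau:
        tau[edge] *= (1 - rho)
    # ...then deposit on the edges the ant traversed, with shorter
    # tours depositing proportionally more pheromone.
    for a, b in zip(tour, tour[1:]):
        tau[(a, b)] += Q / tour_length
    return tau
```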

123
Q

How does an ant choose its next node in the ACO algorithm?

A

An ant chooses the next node based on a probability calculated from the pheromone level and the distance to the next node. Higher pheromone levels and shorter distances increase the likelihood of selection.

124
Q

Why is pheromone evaporation necessary in ACO?

A

Pheromone evaporation prevents paths from becoming overly dominant and allows the algorithm to explore new and potentially better solutions, avoiding local optima.

125
Q

How does ACO handle the exploration-exploitation trade-off?

A

ACO balances exploration and exploitation by:

Encouraging exploration through initial random choices and pheromone evaporation.

Promoting exploitation of good solutions by increasing pheromone levels on shorter or higher-quality paths.

126
Q

What is the main advantage of using ACO for optimization problems?

A

The main advantage of ACO is its ability to find near-optimal solutions for complex combinatorial optimization problems by mimicking natural behaviors and leveraging collective intelligence.

127
Q

What types of problems can ACO be applied to?

A

ACO is applicable to various discrete optimization problems, such as:

Travelling Salesman Problem (TSP)

Vehicle Routing Problem (VRP)

Job Scheduling

Network Routing

128
Q

What is meant by an autocatalytic process in ACO?

A

An autocatalytic process in ACO refers to a positive feedback loop where good solutions reinforce themselves by attracting more ants, leading to the emergence of better collective solutions over time.

129
Q

What is the significance of the initial pheromone levels in ACO?

A

Initial pheromone levels provide a starting point for exploration. They are typically set uniformly to encourage a broad search of the solution space before pheromone accumulation guides ants towards promising paths.

130
Q

What are some challenges in designing ACO algorithms?

A

Challenges include:

Avoiding premature convergence to suboptimal solutions.

Balancing exploration and exploitation.

Properly tuning parameters such as pheromone evaporation rate and initial pheromone levels.

131
Q

What role do heuristic factors play in ACO?

A

Heuristic factors provide additional guidance to ants by incorporating problem-specific knowledge, such as the inverse of distance in TSP, improving the decision-making process during path selection.

132
Q

Lecture 8

A
133
Q

How do ants in the ACO algorithm select paths during optimization?

A

Ants select paths based on a transition rule, which combines the amount of pheromone on a path and a heuristic score. Paths with higher pheromone levels and better heuristic values are more likely to be chosen.

134
Q

What is the purpose of the pheromone update rule in ACO?

A

The pheromone update rule reinforces successful paths by increasing pheromone levels on paths that lead to better solutions. This helps guide future ants toward optimal or near-optimal solutions.

135
Q

Describe the transition rule used in the ACO algorithm.

A

The transition rule determines the probability of an ant moving from one node to another. It depends on the pheromone level and a heuristic factor, with the probability being proportional to the product of these two factors.
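This is commonly implemented as the random-proportional rule, with selection probability proportional to tau^alpha * eta^beta, where eta is a heuristic score such as inverse distance; a sketch with assumed parameter values:

```python
import random

def choose_next(candidates, tau, eta, alpha=1.0, beta=2.0):
    # Weight each candidate node by pheromone and heuristic score,
    # then sample one via roulette-wheel selection.
    weights = [tau[j] ** alpha * eta[j] ** beta for j in candidates]
    r = random.random() * sum(weights)
    for j, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return j
    return candidates[-1]   # guard against floating-point rounding
```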

136
Q

What are some common variants of the ACO algorithm?

A

Common variants include:

Max-Min Ant System (MMAS), which limits the pheromone levels to avoid premature convergence.

Elitist Rank Ant System, where only the best-performing ants update the pheromone trails.

137
Q

How is ACO applied to scheduling problems?

A

In scheduling problems, jobs are represented as nodes in a network. Ants build a schedule by moving through the network, selecting jobs based on pheromone levels and heuristic information, such as job deadlines or processing times.

138
Q

What is a construction graph in the context of ACO?

A

A construction graph represents the problem to be solved, where nodes correspond to components of a solution, and edges represent possible transitions. Ants build solutions by traversing this graph.

139
Q

How does ACO handle bin packing problems?

A

In bin packing problems, ants select bins for items based on pheromone levels and a heuristic that aims to minimize weight differences among bins. Ants can only move forward one layer (item) at a time.

140
Q

Why is it useful to have a “Start” node in ACO when solving scheduling problems?

A

The “Start” node ensures that ants begin the scheduling process consistently, and the first task chosen defines the starting point of the schedule. This helps maintain uniformity in solution construction.

141
Q

How does ACO compare with evolutionary algorithms?

A

ACO is often competitive with evolutionary algorithms for discrete optimization problems. Both methods are population-based, but ACO uses indirect communication through pheromone trails, whereas evolutionary algorithms rely on direct manipulation of solution candidates.

142
Q

How does the Max-Min Ant System (MMAS) differ from the Basic Ant System?

A

In MMAS, pheromone levels are bounded by minimum and maximum limits, preventing excessive accumulation on any one path. This helps avoid premature convergence to suboptimal solutions.

143
Q

Why is it important to prevent ants from revisiting nodes in ACO?

A

Preventing ants from revisiting nodes ensures that they construct valid solutions. In problems like TSP or scheduling, revisiting nodes would create loops or redundant tasks, leading to invalid or suboptimal solutions.

144
Q

What factors influence the performance of the ACO algorithm?

A

Factors influencing ACO performance include:

Initial pheromone levels.

Heuristic information.

Parameters for pheromone evaporation and deposition.

The number of ants and iterations.

145
Q

How do pheromone evaporation and deposition work in ACO?

A

Pheromone evaporation reduces pheromone levels over time, preventing convergence to poor solutions. Pheromone deposition increases levels on paths taken by successful ants, reinforcing good solutions.

146
Q

Lecture 11

A
147
Q

What is swarm intelligence and how does it emerge?

A

Swarm intelligence refers to emergent “intelligent” behavior in groups of simple agents. It arises as a consequence of interactions between individual agents, where no single agent directs the entire group. An example is slime mold, which behaves as a multicellular organism when food is scarce.

148
Q

What are examples of collective movement in swarm systems?

A

Examples of collective movement include flocking, shoaling, swarming, and formation traveling. These involve coordinated movements, typically occurring in 2D or 3D environments (natural) or 1D (artificial environments like traffic).

149
Q

How does flocking differ from other types of multi-agent movements?

A

Flocking involves coordinated movement without strict control or a designated leader. Movements that resemble flocking but aren’t include agents following fixed paths, moving toward a fixed point, or strict leader-following behaviors.

150
Q

What are the main characteristics of flocking behavior?

A

Rapid, directed movement.

Reactivity to predators and obstacles.

No collisions between members.

Flock coalescing and splitting.

Tolerance of member loss or gain.

No dedicated leader.
Different species exhibit unique flocking characteristics.

151
Q

In which animal societies is flocking observed?

A

Flocking is seen across various media (air, water, land) and animal families (insects, fish, birds, mammals), ranging from small groups (two geese) to massive groups (herring shoals up to 17 miles long). It may occur only in specific circumstances, such as during migration.

152
Q

What mechanisms enable flocking in animals?

A

Flocking mechanisms involve a balance between attraction (aggregation) and repulsion (segregation). Coordination is self-organized based on neighbor mimetism and environmental guidance, such as magnetic or odor fields.

153
Q

What are the benefits of flocking for animals?

A

Benefits include:

V-formation in birds: Extends range by 70%, as each bird benefits from the wingtip vortex of the one in front.

Turbulence reduction in fish: Swimming behind others reduces energy use due to lower turbulence.

Predator defense: Techniques like flash expansion, fountain effect, and schooling confuse predators.

154
Q

How does group migration reduce directional error?

A

In group migration, directional error decreases with group size. For example, in a flock of 1 million individuals, the combined directional error is only about 0.1% of a single individual’s error.

155
Q

Who is Craig Reynolds, and what is his contribution to flocking research?

A

Craig Reynolds, a computer animator, developed a model in 1987 for animating realistic 3D flocking. His model, called “Boids,” uses a local sensory system and relative range-bearing detection to simulate flocking behaviors computationally.

156
Q

What are the three primary rules in Reynolds’ flocking model?

A

Separation: Avoid crowding local flockmates.

Alignment/velocity matching: Steer towards the average heading of nearby flockmates.

Cohesion: Steer to move towards the average position of local flockmates.
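A minimal 2-D sketch of the three steering contributions for one boid; the rule weights and the pre-filtered neighbour list are simplifying assumptions:

```python
# neighbours: list of (position, velocity) pairs within sensing range.
def steer(pos, vel, neighbours):
    n = len(neighbours)
    centre = [sum(p[i] for p, _ in neighbours) / n for i in (0, 1)]
    avg_vel = [sum(v[i] for _, v in neighbours) / n for i in (0, 1)]
    cohesion = [centre[i] - pos[i] for i in (0, 1)]       # towards centroid
    alignment = [avg_vel[i] - vel[i] for i in (0, 1)]     # match heading
    separation = [sum(pos[i] - p[i] for p, _ in neighbours)
                  for i in (0, 1)]                        # away from crowding
    # Assumed weights: equal cohesion/alignment, half-weight separation.
    return [cohesion[i] + alignment[i] + 0.5 * separation[i] for i in (0, 1)]
```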

157
Q

What are some key characteristics of flocks in Reynolds’ model?

A

Key characteristics include spontaneous polarization, synchronized direction changes, merging of flocks upon meeting, and the formation of smaller groups (flockettes) that later aggregate.

158
Q

Lecture 12

A
159
Q

What is an agent in the context of multi-agent systems?

A

An agent is an entity that perceives its environment through sensors and acts upon it using actuators. It operates based on perceptual inputs (percepts) and chooses actions accordingly. The agent’s behavior can depend on its percept sequence but not on anything it hasn’t previously perceived.

160
Q

What is a percept and a percept sequence in agent terminology?

A

A percept is the input received by an agent at a specific instant. A percept sequence is the complete history of everything the agent has perceived up to the current moment.

161
Q

How does the environment influence an agent’s task?

A

The environment is the space where an agent operates. The more restrictive the environment, the easier it is for the agent to perform its task but at the cost of realism. Characteristics such as observability, determinism, dynamism, and episodic vs. sequential nature of the environment impact the agent’s behavior.

162
Q

What are the four types of agent programs?

A
  1. Simple reflex agents: Act based on current percepts, ignoring history.
  2. Model-based reflex agents: Maintain internal state to track unseen aspects.
  3. Goal-based agents: Act to achieve specific goals.
  4. Utility-based agents: Act to maximize a utility function, balancing multiple competing goals.
163
Q

What are the components of a learning agent?

A

A learning agent has four key components:

Learning element: Improves the agent’s performance.

Performance element: Selects external actions.

Critic: Provides feedback on the agent’s actions.

Problem generator: Suggests exploratory actions for new experiences.

164
Q

What is the significance of multi-agent systems?

A

Multi-agent systems involve multiple agents working together to solve complex problems. They offer flexibility (agents can collaborate or work independently), scalability (reduce communication overhead), and have wide applications in fields like autonomous vehicles, robotics, trading, and gaming.

165
Q

What is the difference between facilitators and mediators in middle-agent systems?

A

Facilitators act as intermediaries between agents, handling requests and responses. They can become bottlenecks, so multiple facilitators are used collaboratively. Mediators, on the other hand, establish connections between agents, who then communicate directly.

166
Q

How are agents organized in multi-agent systems?

A

Agents can be organized in different structures:

Flat: All agents are equal.

Hierarchical: Agents are structured in levels.

Team-based: Agents are grouped into teams.

Coalition-based: Temporary collaborations between agents.

167
Q

How is task allocation handled in multi-agent systems?

A

Task allocation considers:

Agent talent: Assign tasks based on the resources available to the agent.

Agent position: Reduce communication overhead by assigning tasks to agents close to required resources.

168
Q

What are the main methods of communication in multi-agent systems?

A

Communication methods include:

Speech act: Actions perceived as utterances that update the environment.

Message passing: Direct messages sent between agents using agreed protocols.

Blackboard: A shared data space where agents post and read information.

169
Q

Why is fault detection important in multi-agent systems?

A

Fault detection is crucial to prevent faulty agents from affecting others. Centralized fault detection can introduce a single point of failure, so distributed monitoring and classification based on expected communication patterns are often used.

170
Q

What are the security requirements for multi-agent systems?

A

Security requirements include:

Authentication: Verifying agent identity.

Authorization: Ensuring agents have permission to access resources.

Integrity: Ensuring data is not tampered with.

Availability: Ensuring services are accessible to legitimate agents.

Confidentiality: Ensuring only authorized agents can access certain data.

171
Q

What are some real-world applications of multi-agent systems?

A

Real-world applications include autonomous vehicles, smart factories with multi-robot systems, automated financial trading, and commercial video games.

172
Q

What challenges do multi-agent systems face compared to single-agent systems?

A

Challenges include increased complexity in coordination, fault detection, task allocation, communication overhead, and ensuring security in a decentralized and dynamic environment.

173
Q

What are nature-inspired approaches in multi-agent systems?

A

Nature-inspired approaches involve using principles from natural systems, such as swarms or colonies, to guide agent collaboration and problem-solving strategies. These approaches emphasize decentralized control, adaptability, and scalability.

174
Q

Lecture 13

A
175
Q

Who introduced Particle Swarm Optimization (PSO), and what is its purpose?

A

PSO was introduced by Kennedy and Eberhart in 1995. Its purpose is to discover near-optimal solutions using a population-based approach inspired by the social behavior of swarming and flocking animals.

176
Q

What is a particle in PSO?

A

In PSO, a particle represents a candidate solution with no mass or volume but has velocity and position. Particles interact and adjust based on their own experience and that of the swarm.

177
Q

What does the term “swarm” refer to in PSO?

A

Swarm refers to a collection of particles working collaboratively to explore the search space, though velocity matching (as seen in natural swarms) is not included in the PSO model.

178
Q

What is the “Cornfield Vector” or Rooster Effect in PSO?

A

It is an analogy to describe how particles converge towards optimal solutions, similar to how birds quickly gather at a food source. The “fitness function” represents the quality of solutions and guides the particles’ movement.

179
Q

How does swarming behavior contribute to optimization in PSO?

A

Swarming incorporates the knowledge of the group into local behaviors, enabling particles to collectively and efficiently locate optimal solutions.

180
Q

What types of problems are best suited for PSO?

A

PSO is ideal for continuous optimization problems but can also be applied to binary and permutation problems. Examples include designing antennas, aircraft wings, amplifiers, controllers, and circuits.

181
Q

What is the difference between discrete and continuous optimization problems?

A

Discrete optimization problems involve finite, distinct variables (e.g., binary values for feature selection), while continuous optimization involves real-number variables.

182
Q

What are the main steps in the PSO algorithm?

A

Initialize a swarm of N particles randomly in the search space.

For each timestep, update each particle’s position and velocity.

Evaluate particle fitness to guide future updates.
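The per-particle update at each timestep can be sketched with the standard inertia-weight formulation for a 1-D particle; the parameter values are assumptions:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # Velocity blends inertia, attraction to the particle's own best
    # position (pbest), and attraction to the swarm's best (gbest).
    r1, r2 = random.random(), random.random()
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```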

183
Q

How is a particle’s new position determined?

A

A particle’s new position is its current position plus its updated velocity:

v(t+1) = w·v(t) + c1·r1·(pbest − x(t)) + c2·r2·(gbest − x(t))
x(t+1) = x(t) + v(t+1)

where w is the inertia weight, c1 and c2 are acceleration coefficients, and r1, r2 are random numbers drawn from [0, 1].

184
Q

Why are random weights used in velocity updates?

A

Random weights prevent premature convergence by introducing stochasticity, helping particles explore diverse areas of the search space.

185
Q

What are key parameters in PSO?

A

Key parameters include:

Swarm size: Number of particles.

Neighborhood size: Defines how local best is calculated.

Number of iterations: Controls stopping criteria.

Acceleration coefficients c1 and c2: Balance between personal (pbest) and social (gbest) learning.

186
Q

What are common termination criteria for PSO?

A

Maximum number of iterations.

Acceptable solution is found.

No improvement over iterations.

Normalized swarm radius approaches zero (indicating convergence).

187
Q

How is PSO adapted for binary spaces?

A

Binary PSO can use either:

Traditional BPSO: Interprets velocity (via a sigmoid) as the probability that a bit takes the value 1.

Geometric BPSO: Updates positions by moving particles towards pbest and gbest proportionally in Hamming space.
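The traditional variant is commonly implemented by squashing the velocity with a sigmoid and treating the result as the probability of a 1-bit; a minimal per-bit sketch:

```python
import math
import random

def bpso_bit(v):
    # Sigmoid maps any real velocity into (0, 1), interpreted as
    # the probability that this bit takes the value 1.
    p = 1.0 / (1.0 + math.exp(-v))
    return 1 if random.random() < p else 0
```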

188
Q

What is the difference between traditional and geometric BPSO?

A

Traditional BPSO maintains continuous velocities, while geometric BPSO operates directly in discrete spaces without explicit velocities, using a multi-string recombination approach in Hamming space.

189
Q

What are the strengths of PSO?

A

Simple formula for velocity and position updates.

Intuitive parameter tuning.

Flexibility to handle both continuous and discrete spaces.

Can adapt to dynamic optimization problems.

190
Q

How does PSO differ from other swarm algorithms?

A

Unlike ACO (which uses stigmergy, i.e., indirect communication through pheromone trails), PSO relies on directly sharing pbest and gbest information between particles. It is inspired by swarming behavior and focuses on continuous and dynamic problem-solving.

191
Q

What is the role of collectives in swarm intelligence?

A

Collectives of simple individuals generate intelligent behaviors, enabling the swarm to solve complex problems by leveraging distributed knowledge and collaboration.

192
Q

Lecture 14&15

A
193
Q

What is the main difference between single-objective and multi-objective optimization problems?

A

Single-objective optimization focuses on optimizing one objective (e.g., efficiency, cost, performance), while multi-objective optimization involves optimizing multiple, often conflicting objectives (e.g., minimizing cost while maximizing efficiency).

194
Q

What is the mathematical definition of a multi-objective optimization problem?

A

It is defined as finding a solution x whose quality is determined by an objective vector f(x) = (f1(x), ..., fM(x)), subject to feasibility constraints, where M is the number of objectives.

195
Q

What is Pareto dominance in multi-objective optimization?

A

A solution x dominates another solution y if x is at least as good as y on all objectives and strictly better than y in at least one objective.
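A minimal sketch of the dominance test, assuming all objectives are minimised:

```python
def dominates(a, b):
    # a dominates b if a is no worse on every objective
    # and strictly better on at least one.
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))
```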

196
Q

What is the Pareto front?

A

The Pareto front is the set of non-dominated solutions that represents the best trade-offs between objectives. These solutions cannot be improved in one objective without degrading another.

197
Q

What are the desirable characteristics of the Pareto front?

A

The Pareto front should:

Contain evenly spaced solutions

Cover the largest possible area of the front, representing a diverse set of optimal trade-offs.

198
Q

How is non-dominated sorting used in multi-objective optimization?

A

Non-dominated sorting ranks solutions by:

Identifying all non-dominated solutions (Rank 0) and removing them from the population.

Repeating the process to find the next non-dominated set (Rank 1), then Rank 2, and so on.
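This ranking can be sketched as repeated "peeling" of non-dominated fronts (minimisation assumed; this naive version is O(n^2) per front, unlike NSGA-II's fast sort):

```python
def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(points):
    remaining = list(points)
    fronts = []
    while remaining:
        # Rank k = everything not dominated by any remaining point.
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```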

199
Q

What modifications are needed in genetic algorithms (GAs) for multi-objective optimization?

A

Modifications include:

Multi-objective fitness functions

Using Pareto dominance for selection

Visualizing solutions with Pareto front approximations

200
Q

What is Pareto dominance tournament selection?

A

It involves selecting two random solutions. If one dominates the other, it is selected. If they are mutually non-dominating, a random choice or further criteria (e.g., crowding distance) is used.

201
Q

How is solution diversity maintained in Pareto dominance-based selection?

A

Diversity is maintained using techniques like:

Niching, where solutions are separated into less crowded regions (based on a niche radius)

Crowding distance, which measures how isolated a solution is compared to others.

202
Q

What is NSGA-II, and why is it significant?

A

NSGA-II (Non-dominated Sorting Genetic Algorithm II) is an elitist multi-objective genetic algorithm that:

Ranks solutions using a fast non-dominated sort

Uses crowding distance as a tie-breaker

Is computationally efficient and widely cited.

203
Q

What is crowding distance, and how is it calculated?

A

Crowding distance measures the density of solutions around a given point. It is calculated by summing the normalized distances between a solution’s neighbors for each objective.
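A sketch of this calculation for a front of mutually non-dominated objective vectors, with boundary solutions assigned infinite distance so they are always retained:

```python
def crowding_distance(front):
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        # Sort solutions by objective k; extremes get infinity.
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        # Interior solutions accumulate the normalised gap
        # between their two neighbours along this objective.
        for j in range(1, n - 1):
            dist[order[j]] += ((front[order[j + 1]][k]
                                - front[order[j - 1]][k]) / (hi - lo))
    return dist
```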

204
Q

Why are elitist algorithms preferred over non-elitist ones in multi-objective optimization?

A

Elitist algorithms (e.g., NSGA-II):

Retain the best solutions across generations

Require fewer parameters

Execute and converge faster compared to non-elitist algorithms.

205
Q

Can single-objective evolutionary algorithms be adapted for multi-objective problems?

A

Yes, by assigning weights to objectives and optimizing for weighted sums. However, this approach may:

Miss parts of the Pareto front

Require separate runs for different weight combinations.

206
Q

What challenges arise in many-objective optimization (M ≥ 4)?

A

Challenges include:

Difficulty in visualizing high-dimensional Pareto fronts

Reduced search capability of algorithms

Exponential growth in solutions needed to approximate the Pareto front.

207
Q

What is an example of a many-objective optimization problem?

A

Optimizing offshore wind farms with six objectives:

Minimize turbine count, area, levelized cost of energy

Minimize annual energy prediction error

Maximize annual energy production and efficiency

208
Q

What are the primary visual tools used for multi-objective and many-objective optimization?

A

Tools include:

Pareto front approximations (for 2-3 objectives)

Parallel coordinate plots (for many-objective problems).

209
Q

What are the steps in NSGA-II execution?

A

Steps include:

Initialize a population

Select, crossover, and mutate to generate offspring

Combine parent and offspring populations

Perform a fast non-dominated sort

Use crowding distance to select the next generation.

210
Q

How does crowding distance ensure a diverse Pareto front?

A

Solutions with larger crowding distances are less densely populated and are prioritized, ensuring better coverage of the Pareto front.

211
Q

Summarize multi-objective evolutionary algorithms.

A

Multi-objective evolutionary algorithms adapt existing evolutionary algorithms to:

Handle multiple conflicting objectives

Use mechanisms like Pareto dominance, non-dominated sorting, and diversity preservation

Optimize for both solution quality and distribution.

212
Q

Lecture 16

A
213
Q

What is the basic idea of Particle Swarm Optimisation (PSO)?

A

Particle Swarm Optimisation (PSO) is a nature-inspired optimization technique where a swarm of particles explores the search space. Each particle adjusts its position by considering its own best-known position and the best-known position of the swarm, with an update rule for velocity and position

214
Q

How is PSO adapted for multi-objective problems?

A

In multi-objective PSO, leaders are identified based on multiple objectives. A particle’s personal best (pbest) is updated using dominance criteria:

Update if the new solution dominates the old pbest.

Retain the old pbest if it dominates the new solution.

If neither dominates, the decision depends on the preference (e.g., retaining the oldest or newest solution).

215
Q

What is an archive in MOPSO, and what is its purpose?

A

The archive is a repository of non-dominated solutions representing the Pareto front approximation at a given time. Key characteristics:

All members are mutually non-dominated.

New solutions are compared for dominance before being added.

Dominated solutions are removed.

The archive size is often limited to maintain computational efficiency and aid decision-making.

216
Q

How is the archive updated with new solutions?

A

When a new solution x is evaluated:

Compare x against each solution a in the archive:

If x dominates a, remove a.

If a dominates x, mark x as dominated and skip further comparisons.

If x is not dominated by any archive member, add it to the archive.

If the archive size exceeds its limit, apply a diversity-preserving mechanism to prune solutions.
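This update procedure can be sketched in Python (a minimal illustration assuming minimisation; the optional `crowding_fn` stands in for whatever diversity-preserving pruning mechanism is used):

```python
def update_archive(archive, new, max_size, crowding_fn=None):
    """Insert `new` into a non-dominated archive (minimisation).

    Archive members dominated by `new` are removed; `new` is discarded if
    any member dominates it; over-full archives are pruned via crowding_fn.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    kept = []
    for member in archive:
        if dominates(member, new):
            return archive          # new solution is dominated: no change
        if not dominates(new, member):
            kept.append(member)     # member survives the comparison
    kept.append(new)
    if len(kept) > max_size and crowding_fn is not None:
        # prune the most crowded member to preserve diversity
        kept.remove(min(kept, key=crowding_fn))
    return kept
```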

217
Q

What mechanisms are used to maintain diversity in the archive?

A

Two common diversity-preserving mechanisms are:

Clustering:

Hierarchical clustering (Zitzler, 1999) merges clusters until the desired number is reached.

Retains solutions closest to cluster centers.

Crowding Distance:

Used in NSGA-II to score solutions based on their proximity to neighbors.

Solutions in crowded areas receive lower scores, while isolated ones get higher scores.

Extreme points are assigned infinite scores to ensure retention.

218
Q

How are leaders selected in MOPSO?

A

Leaders in MOPSO are selected from the archive, as they represent non-dominated solutions. Methods include:

Random Selection: Randomly choose a member of the archive.

Diversity-based Selection: Prefer solutions in sparser areas of the archive.

Dominated Solution Count: Rank solutions by the number of population members they dominate; solutions dominating fewer members are more likely to be selected.

Hypervolume Contribution: Select the solution contributing the most to the hypervolume dominated by the Pareto front.

219
Q

What is hypervolume in the context of MOPSO?

A

Hypervolume measures the volume in the objective space dominated by the non-dominated solutions and bounded by a reference point. Key points:

A larger hypervolume indicates better coverage of the Pareto front.

Leaders are chosen based on their contribution to the hypervolume.

If no reference point exists, one is chosen at random.
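For two objectives the hypervolume reduces to a sum of rectangular slices, which can be sketched as follows (assuming minimisation and that every front point dominates the reference point):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimisation front w.r.t. a reference
    point: sort by the first objective and sum the rectangular slices
    between consecutive points."""
    pts = sorted(front)          # ascending f1 implies descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

A larger value means the front covers more of the dominated region, which is why leaders contributing most to the hypervolume are attractive choices.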

220
Q

What challenges arise in many-objective optimisation problems?

A

Many-objective optimisation faces three key challenges:

Difficulty visualising high-dimensional Pareto fronts.

Reduced search efficiency of multi-objective algorithms.

Exponential growth in the number of solutions needed to approximate the Pareto front.

221
Q

How is ranking performed in many-objective PSO?

A

Two ranking methods are commonly used:

Average Rank:
Rank each solution m times, once for each of the m objectives.

Compute the average of these m ranks; lower average ranks indicate better solutions.

Distance Ranking:
Compute the absolute distance between solutions for all objectives.
Favor solutions in less crowded regions.
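The average-rank method can be sketched in Python (a minimal illustration assuming minimisation, with rank 0 as the best on each objective):

```python
def average_rank(solutions):
    """Average rank for many-objective selection: rank every solution on
    each objective separately, then average across objectives."""
    n, m = len(solutions), len(solutions[0])
    ranks = [[0] * m for _ in range(n)]
    for obj in range(m):
        order = sorted(range(n), key=lambda i: solutions[i][obj])
        for r, i in enumerate(order):
            ranks[i][obj] = r       # position in the sorted order
    return [sum(r) / m for r in ranks]
```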

222
Q

How does MOPSO differ from multi-objective genetic algorithms (MOGAs)?

A

While both MOPSO and MOGAs aim to solve multi-objective problems, key differences include:

MOPSO uses particles and velocity updates; MOGAs rely on genetic operators like crossover and mutation.

MOPSO identifies leaders from an archive; MOGAs often use Pareto ranking and crowding distance.

MOPSO is more influenced by swarm behavior, while MOGAs focus on population diversity.

223
Q

What are examples of tasks that humans excel at compared to computers?

A

Humans excel at perception, reasoning, and learning, such as catching a ball, recognizing faces in images, and recalling specific personal events. Computers struggle with these but excel in tasks like complex calculations and memory-related operations.

224
Q

How do computers differ from humans in terms of capabilities?

A

Computers excel in sequential processing, fast calculations, and memory storage but lack perception, reasoning, and learning capabilities that humans possess.

225
Q

What is the primary difference in architecture between computers and the human brain?

A

Computers are fast serial machines, whereas the brain is slower but massively parallel.

226
Q

What are the key differences between computers and humans in terms of memory and learning?

A

Computers have separate memory and processing units, store information indefinitely, and rarely make mistakes.

Humans have distributed memory and processing, can generalize and learn, but forget unimportant information over time.

227
Q

What is the structure and connectivity of the human brain?

A

The brain contains approximately 100 billion neurons, each connected to around 10,000 others, communicating through synapses, which are configurable chemical junctions.

228
Q

What are the three main components of a neuron and their functions?

A

Dendritic Tree: Receives signals.

Cell Body: Processes signals.

Axon: Transmits signals to other neurons.

229
Q

What is connectionism in neural networks?

A

Connectionism is a computational approach inspired by neuroscience, focusing on how neurons process, store, and communicate information.

230
Q

How does symbolic AI differ from connectionism?

A

Symbolic AI: Relies on explicit reasoning (e.g., IF-THEN rules) and interpretable systems but struggles with generalization.

Connectionism: Uses neural networks with implicit reasoning, learning through weighted graphs, enabling generalization but often functions as a “black box.”

231
Q

How do neurons process and transmit signals?

A

Neurons receive electrical signals via dendrites. If the signal strength exceeds a threshold, the axon produces a pulse passed to other neurons through synapses.

232
Q

What role do synapses play in learning?

A

Synapses adjust their chemical properties (e.g., neurotransmitter release) to either excite or inhibit signals, facilitating learning.

233
Q

Who introduced the first artificial neuron, and what was its purpose?

A

McCulloch and Pitts (1943) introduced the artificial neuron to process simple logical expressions, marking the beginning of neural networks.

234
Q

How do artificial neurons perform logical functions like AND and OR?

A

By setting thresholds:

OR Functionality: Produces output 1 if at least one input is 1.

AND Functionality: Produces output 1 only if all inputs are 1.
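A McCulloch-Pitts-style threshold unit makes this concrete; the thresholds below follow directly from the card:

```python
def mcp_neuron(inputs, threshold):
    """McCulloch-Pitts unit: fire (output 1) iff the input sum
    meets or exceeds the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# OR:  threshold 1 -- fires if at least one input is 1.
# AND: threshold equal to the number of inputs -- fires only if all are 1.
```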

235
Q

What problem does a multi-layer perceptron solve, and how?

A

MLPs solve problems like XOR by adding a hidden layer of neurons, enabling them to model non-linear relationships.

236
Q

What are the key properties of neural networks?

A

Ability to learn input-output relationships (e.g., predicting fuel consumption).

Graceful degradation: Partial network failure reduces performance without complete failure.

Fault tolerance, mimicking the brain’s robustness.

237
Q

Why is generalization an important property of neural networks?

A

Generalization allows neural networks to:

Recognize patterns and classes (e.g., characters like A or B).

Tolerate noise in data.

Infer properties of new objects or events, even with varying orientations.

238
Q

Why is rainfall runoff modeling important, and how can neural networks help?

A

Rainfall runoff modeling predicts urban flooding risks. Neural networks can model runoff based on environmental conditions like rainfall and air temperature, complementing traditional hydraulic models.

239
Q

What data is used for ANN-based rainfall runoff modeling?

A

Rainfall and air temperature data from January 2010 to October 2012, covering 15 rainfall events, were used to predict sewer flow.

240
Q

How are ANNs structured for rainfall runoff modeling?

A

The model includes:

A single hidden layer with 10 neurons.

One output neuron predicting flow.

Cross-validation to ensure generalization, splitting data by rainfall events.

241
Q

How are evolutionary algorithms (EAs) used in training ANNs?

A

EAs optimize ANN weights using mutation and crossover, with fitness evaluated by model accuracy.

242
Q

How is optimization of ANN weights performed?

A

Represent weights as decision variables.

Use mean absolute error (MAE) as a fitness metric.

Apply additive Gaussian mutation to adjust weights.
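These three steps can be sketched in Python; the mutation step size `sigma` is an assumed illustrative value, not specified in the lecture:

```python
import random

def mae(predictions, targets):
    """Mean absolute error, used here as the fitness metric."""
    return sum(abs(y - t) for y, t in zip(predictions, targets)) / len(targets)

def mutate_weights(weights, sigma=0.1):
    """Additive Gaussian mutation over the flat weight vector
    (the decision variables of the evolutionary algorithm)."""
    return [w + random.gauss(0.0, sigma) for w in weights]
```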

243
Q

What is the formula for mean absolute error (MAE)?

A

MAE = (1/n) * Σ |yᵢ − tᵢ|

where yᵢ is the model output, tᵢ the target value, and n the number of samples.

244
Q

What is the Nash-Sutcliffe Efficiency Coefficient, and what does it indicate?

A

The Nash-Sutcliffe Efficiency Coefficient evaluates model accuracy. A value of 1 indicates a perfect model, while negative values imply poor performance.
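The standard definition is NSE = 1 − Σ(yᵢ − tᵢ)² / Σ(tᵢ − t̄)², which can be sketched as:

```python
def nash_sutcliffe(predictions, observations):
    """Nash-Sutcliffe Efficiency: 1 minus the sum of squared errors over
    the variance of the observations. 1 is a perfect model; 0 matches
    simply predicting the observed mean; negative values are worse."""
    mean_obs = sum(observations) / len(observations)
    sse = sum((p, o) and (p - o) ** 2 for p, o in zip(predictions, observations))
    var = sum((o - mean_obs) ** 2 for o in observations)
    return 1.0 - sse / var
```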

245
Q

Lecture 18

A
246
Q

What is neuromorphic computing, and how did the term originate?

A

Neuromorphic computing refers to mixed analogue-digital implementations of brain-inspired techniques. The term originated in the 1980s and has since expanded to include a broad range of brain-inspired techniques.

247
Q

How does neuromorphic architecture differ from von Neumann architecture?

A

Von Neumann Architecture: Separate CPUs for processing and memory for storage; uses programs to instruct the computer and encodes data in binary.

Neuromorphic Architecture: Synapses handle both data storage and processing; instructions are determined by the structure of synapses, with data encoded via spikes—timing and magnitude conveying information.

248
Q

What are the key characteristics and advantages of neuromorphic computing?

A

Parallel Operation: All neurons can operate simultaneously, enabling simpler operations compared to parallelized von Neumann machines.

Collocated Processing and Memory: Faster and lower energy consumption as data doesn’t need to be retrieved from separate memory.

Scalability: Adding more chips with neurons enhances performance.

Event-Driven: Processing occurs only when spikes arrive, improving efficiency.

249
Q

How is neuromorphic computing applied to graph algorithms?

A

Neuromorphic computing excels in tasks like communication routing, network analysis, and graph structure analysis due to its architecture’s advantages over conventional systems.

250
Q

What optimization problems and signal processing tasks benefit from neuromorphic computing?

A

Optimization: Effective for integer optimization (e.g., constraint satisfaction, QUBO) and continuous problems (e.g., LASSO, Bayesian optimization).

Signal Processing: Useful in autonomous vehicle control and detecting brain oscillations to predict seizures.

251
Q

What is the SpiNNaker system, and what are its capabilities?

A

SpiNNaker is a neuromorphic system whose chips each contain 18 ARM cores sharing 128MB of memory, supporting 16,000 neurons and 8 million synapses in real time while consuming 1W of energy.

It explores the potential of spiking neural networks (SNNs); SpiNNaker 2, announced in 2019, is roughly 10x larger than its predecessor.

252
Q

What are the features of IBM’s TrueNorth and NorthPole systems?

A

TrueNorth: Contains 1 million neurons and 256 million synapses; uses the Corelet language and focuses on energy-efficient computations.

NorthPole: Successor to TrueNorth, optimized for modern AI pipelines, real-time processing, and improved energy efficiency.

253
Q

What are Spiking Neural Networks (SNNs), and why are they significant?

A

SNNs mimic the brain’s activity by transmitting discrete spikes over time, making them biologically plausible and efficient for dynamic, temporal pattern recognition tasks. Applications include robotics, speech recognition, and sensory processing.

254
Q

How do neurons propagate spikes, and what role does charge accumulation play?

A

Neurons accumulate charge over time via input signals. When the charge surpasses a threshold, the neuron fires and emits a signal along its connections. Unused charge dissipates over time (leakage), and signal transmission can include delays for temporal encoding.
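This charge-accumulation behaviour is commonly modelled as a leaky integrate-and-fire neuron; the threshold and leak factor below are illustrative assumptions:

```python
def lif_simulate(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire sketch: each step the charge leaks by a
    constant factor, new input is integrated, and the neuron fires and
    resets when the charge crosses the threshold."""
    charge, spikes = 0.0, []
    for i in inputs:
        charge = charge * leak + i    # leakage, then integration
        if charge >= threshold:
            spikes.append(1)
            charge = 0.0              # reset after firing
        else:
            spikes.append(0)
    return spikes
```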

255
Q

What is spike-timing-dependent plasticity, and how does it support learning?

A

STDP adapts synaptic strength based on neuron activity.

Potentiation: Connection strengthens when a pre-synaptic neuron fires before a post-synaptic neuron.

Depression: Connection weakens when the post-synaptic neuron fires first.
This mechanism supports unsupervised learning.
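A common way to model STDP is with an exponential learning window; the learning rate `lr` and time constant `tau` below are illustrative assumptions, not lecture values:

```python
import math

def stdp_update(weight, dt, lr=0.1, tau=20.0):
    """STDP sketch with an exponential window. dt = t_post - t_pre (ms):
    positive dt (pre fires first) potentiates the synapse, negative dt
    (post fires first) depresses it, and the effect decays with |dt|."""
    if dt > 0:
        return weight + lr * math.exp(-dt / tau)   # potentiation
    return weight - lr * math.exp(dt / tau)        # depression
```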

256
Q

What challenges arise in training SNNs using backpropagation?

A

Backpropagation is challenging for SNNs due to non-differentiable thresholds and timing complexities. Workarounds include using surrogate activation functions or training traditional DNNs and converting them to SNNs.

257
Q

How are evolutionary algorithms used to optimize SNNs?

A

EAs optimize network structure and parameters like weights and delays. Neural architecture search alternates between structure and weight optimization but faces challenges such as high costs and co-evolution complexities.

258
Q

Why are traditional ML approaches often superior to neuromorphic computing?

A

Traditional ML methods outperform neuromorphic computing in general due to more established techniques like backpropagation. Neuromorphic systems, however, excel in energy efficiency and specialized tasks.

259
Q

What are the main usability and benchmarking challenges in neuromorphic computing?

A

Usability: Lack of accessible hardware/software and long training times hinder real-world applications.

Benchmarks: Few challenge problems exist to drive development, especially incorporating temporal aspects.

260
Q

What issues arise in programming neuromorphic systems?

A

Developers must design at the neuron/synapse level, which is error-prone. Existing solutions often focus on graph modeling due to its alignment with SNN architectures.

261
Q

What is the current state and future of neuromorphic computing?

A

Neuromorphic computing, based on SNNs, is a promising but experimental field. Large systems like SpiNNaker and TrueNorth demonstrate potential but face significant challenges in usability and accessibility before widespread adoption.

262
Q

Lecture 19

A
263
Q

What is unsupervised learning, and how does it differ from supervised learning?

A

Unsupervised learning involves finding regularities or patterns in data without target outputs. In contrast, supervised learning involves inputs and target outputs, where a function is learned to fit the given examples.

264
Q

What is the primary aim of Self-Organising Maps (SOM)?

A

The primary aim of SOM is to learn how to map points from a high-dimensional space to a low-dimensional, discrete space while preserving the topological properties of the data.

265
Q

What are the typical uses of Self-Organising Maps?

A

SOMs are used for visualization and discovering regularities in data, making them helpful for tasks like clustering and dimensionality reduction.

266
Q

How are SOMs self-organized?

A

SOMs self-organize through local interactions driven by the input data: nodes compete to match each input and cooperate with their neighbors during weight updates.

267
Q

How does SOM differ from traditional multi-layer perceptrons (MLPs)?

A

Unlike MLPs, SOMs do not use activation functions, linear combinations of inputs, or backpropagation. They rely on competitive learning and topological mapping.

268
Q

What are the key working assumptions of SOM?

A

SOM assumes that:

Input data belonging to the same class share common features.

It can identify key features across different data points.

It can organize the input data meaningfully according to a low-dimensional structure.

269
Q

Can you provide an example application of SOM?

A

An example is grouping countries based on poverty levels using 39 indicators of quality of life. The input is a 39-dimensional dataset, and the output is a 2D map showing clusters of countries from rich to poor.

270
Q

Describe the architecture of SOM.

A

SOM consists of:

Input nodes equal to the number of features in the data.

A map of interconnected nodes.

Every input node connects to each node in the map via weighted edges.

271
Q

How is SOM considered a function?

A

SOM acts as a function where every input pattern corresponds to a node in the output map through a competitive process. The node with the smallest Euclidean distance to the input pattern is the winner.

272
Q

What are the main steps in the SOM learning algorithm?

A

The steps are:

Initialize the network with small random weights and a large neighborhood size.

Sample a random input from the data.

Compute the Euclidean distance of each node from the input.

Identify the best matching unit (BMU).

Update the weights of the BMU and its neighbors.

Repeat until the map converges.
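The steps above can be sketched as a minimal SOM training loop; the linear decay schedules for the learning rate and neighbourhood size, and the Gaussian neighbourhood kernel, are common illustrative choices rather than lecture prescriptions:

```python
import math
import random

def train_som(data, grid_w, grid_h, epochs=100, eta0=0.5, sigma0=None):
    """Minimal SOM training sketch on a grid_w x grid_h map.

    Each map node holds a weight vector the same length as the inputs;
    both the learning rate and the neighbourhood size decay over time.
    """
    dim = len(data[0])
    sigma0 = sigma0 or max(grid_w, grid_h) / 2.0
    weights = {(i, j): [random.random() for _ in range(dim)]
               for i in range(grid_w) for j in range(grid_h)}
    for t in range(epochs):
        eta = eta0 * (1.0 - t / epochs)             # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 1e-9  # shrinking neighbourhood
        x = random.choice(data)                     # sample a random input
        # best matching unit: node with smallest Euclidean distance to x
        bmu = min(weights, key=lambda n: sum((w - xi) ** 2
                                             for w, xi in zip(weights[n], x)))
        for node, w in weights.items():             # cooperative update
            d2 = (node[0] - bmu[0]) ** 2 + (node[1] - bmu[1]) ** 2
            h = math.exp(-d2 / (2 * sigma ** 2))    # neighbourhood kernel
            weights[node] = [wi + eta * h * (xi - wi)
                             for wi, xi in zip(w, x)]
    return weights
```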

273
Q

What is the role of the learning rate (η) in SOM?

A

The learning rate (η) controls the magnitude of weight updates. It decreases over time, allowing large changes initially and smaller adjustments as learning progresses.

274
Q

Why is neighborhood size important in SOM, and how does it change over time?

A

Neighborhood size determines the range of nodes affected by a BMU during learning. It starts large to capture general structures and decreases over time to fine-tune the map.

275
Q

How does SOM adapt its topology during learning?

A

SOM adapts its topology by repeatedly applying competition and cooperation cycles, gradually aligning the output map with the input space distribution.

276
Q

What are some common initialization problems in SOM?

A

Poor initialization can cause the network to unfold badly, resulting in topological knots. This issue can be mitigated by:

Random initialization with data range awareness.

Principal component initialization.

Sample-based initialization.

Pre-training using methods like k-means.

277
Q

List some applications of Self-Organising Maps.

A

Applications include:

Data visualization.

Pattern recognition.

Speech analysis.

Industrial and medical diagnostics.

Data mining.

278
Q

How can SOM be used for visualizing many-objective solutions?

A

SOM can cluster many-objective solutions and color the output map based on the objective that each cluster best optimizes.

279
Q

Summarize the characteristics of neural networks inspired by SOM.

A

Neural networks inspired by SOM are:

Based on the structure and function of neurons in the brain.

Useful for tasks like classification, pattern recognition, and clustering.

Capable of generalization and graceful degradation, similar to human learning.

280
Q
A