GA Case studies Flashcards
Roulette wheel (fitness-proportionate selection)
+ Fittest individual can be picked multiple times
+ Simple
+ Every individual has a chance
- Premature convergence
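A minimal Python sketch of roulette-wheel selection, where each individual's chance of being picked is proportional to its fitness (function and variable names are illustrative):

```python
import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)  # spin the wheel once
    running = 0.0
    for ind, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return ind
    return population[-1]  # guard against floating-point rounding
```

Because a single very fit individual owns most of the wheel, it tends to be selected repeatedly, which is exactly the premature-convergence risk noted above.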
Tournament
+ Diversity
+ Parallel processing
- Large tournament sizes reduce diversity
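A minimal sketch of tournament selection: sample k individuals at random and keep the fittest. Each tournament is independent, which is why the method parallelises well; as k grows, weaker individuals almost never win, hence the diversity loss. Names here are illustrative:

```python
import random

def tournament_select(population, fitnesses, k=3):
    """Sample k contestants uniformly at random; return the fittest of them."""
    contestants = random.sample(range(len(population)), k)
    best = max(contestants, key=lambda i: fitnesses[i])
    return population[best]
```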
Stochastic universal sampling
Similar to roulette, but gives less fit individuals more of a chance / makes it less likely that one individual is chosen multiple times
Truncation
+built in elitism sort of
- Incredibly computationally intensive -> sorting
- low diversity
- not used
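A sketch of truncation selection: rank the whole population by fitness (the sorting cost mentioned above) and keep only the top fraction, which is where the built-in elitism and the low diversity both come from. Names are illustrative:

```python
def truncation_select(population, fitnesses, proportion=0.5):
    """Keep only the top `proportion` of the population, ranked by fitness."""
    ranked = sorted(zip(population, fitnesses),
                    key=lambda pf: pf[1], reverse=True)  # full sort each generation
    cutoff = max(1, int(len(ranked) * proportion))
    return [ind for ind, _ in ranked[:cutoff]]
```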
Selection case study
2005: Tournament converged to a more optimal solution in less than half the time compared to roulette and others.
Elitism case study
1994: Using Markov chain analysis, GAs are not guaranteed to converge to the optimal solution unless elitism is built in.
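A minimal sketch of elitism as described above: copy the best individuals into the next generation unchanged, so the best fitness found can never regress. The `breed` callback is a hypothetical stand-in for whatever selection/crossover/mutation pipeline produces one child:

```python
def next_generation(population, fitnesses, breed, elite_count=1):
    """Carry the top elite_count individuals over unchanged, breed the rest."""
    ranked = sorted(zip(population, fitnesses),
                    key=lambda pf: pf[1], reverse=True)
    elites = [ind for ind, _ in ranked[:elite_count]]  # survive unchanged
    children = [breed(population, fitnesses)
                for _ in range(len(population) - elite_count)]
    return elites + children
```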
Crossover case study
2012: Ordered crossover and its variants almost halved the relative distance for TSP, using roulette selection.
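A sketch of one common ordered crossover (OX) variant for TSP tours: copy a random slice from the first parent, then fill the remaining positions with the second parent's cities in their original order, so every city appears exactly once (plain one-point crossover would duplicate or drop cities). Names are illustrative:

```python
import random

def ordered_crossover(parent1, parent2):
    """OX variant: keep a slice of parent1, fill the rest in parent2's order."""
    size = len(parent1)
    a, b = sorted(random.sample(range(size), 2))
    child = [None] * size
    child[a:b + 1] = parent1[a:b + 1]          # inherited slice
    fill = [c for c in parent2 if c not in child[a:b + 1]]
    idx = 0
    for i in range(size):                      # fill the gaps left to right
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child
```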
Integrated Lin kernighan case study
1995: LK is the de facto algorithm - LK generates the initial population - the GA improves it - within 0.75% of the Held-Karp lower bound. Parthenogenetic (single-parent) with local search = high performance.
Darwinian natural selection
- Heredity
- Variation
- selection
GA vs others
1992: GA much better than greedy, with better time efficiency.
2015: GA much worse than greedy + nearest neighbour.
Advantages of GA
- Work well with large fitness landscapes, i.e. large problem spaces. (From the Discussion section.)
- Their randomness helps avoid local optima, and therefore sub-optimal solutions. (From the Discussion section.)
- There is a natural balance between exploration and exploitation. (From the Discussion section.)
- They explore even further through novelty searches. (From the Discussion section.)
- Their concept is easy to understand.
- Can find an answer fairly quickly.
- Work on a wide variety of problems.
- Coding is relatively easy.
- They are inherently parallel.
Disadvantages of GA
- Don’t find the definitive answer.
For all but the smallest problem instances, a GA will almost surely not find the definitive optimal solution to the defined problem.
- Getting the design right is difficult.
Designing an objective function and getting the representation right can be difficult.
i.e. It’s really hard to come up with a good heuristic which actually reflects what we want the algorithm to do.
- Figuring out ideal parameters is difficult.
And even having decided on the design of a certain representation/algorithmic strategy, you really don’t know what the optimal parameters are, such as how big an initial population to use, the mutation rate, or how many generations to run to get the expected result.
(See the parameter list image below from the Genetic Algorithm program we are using.)
So it’s generally not possible to predict their effects before playing around with the parameters. (From the Discussion section.)
- And so it’s somewhat of an “art”, employing trial and error. (From the Discussion section.)
So one way to look at it in terms of the previous two points is that their implementation is still as much an art as a science.
- Can be a little computationally expensive.
Even though better than most alternatives, they can still be computationally expensive i.e. time-consuming.
GA time complexity
O(n log n) + linear + polynomial
Lin kernighan complexity
O(n^2.2)
Simulated annealing complexity
O(n^2)