Search Algorithms Flashcards

1
Q

Definition of Search algorithms:

A

Given a set of possible solutions, find the best (or at least a reasonably good) one

2
Q

What purpose does Search serve?

A

An algorithm for finding a solution with specified properties among a collection of candidate solutions

3
Q

A Search solution:

A
  • may be stored individually as a record in a database
  • may be an element of a search space defined by a mathematical formula or procedure
  • may be a combination of the two, such as the Hamiltonian circuits of a graph
4
Q

Solution Spaces:

A

  • Ordered space (e.g., a dictionary; see the lookup sketch below)
  • Unordered space (e.g., Where’s Wally?)
  • Higher-dimensional space
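
A minimal Python sketch of why the structure of the space matters, with hypothetical data: an ordered space (like a dictionary) allows binary search, while an unordered one forces a linear scan.

import bisect

# Ordered space: binary search halves the remaining range at each step.
words = ["ant", "bee", "cat", "dog", "eel"]            # hypothetical sorted data
i = bisect.bisect_left(words, "dog")
found_ordered = i < len(words) and words[i] == "dog"

# Unordered space (Where's Wally?): nothing better than checking everything.
scattered = ["eel", "ant", "dog", "cat", "bee"]
found_unordered = any(w == "dog" for w in scattered)

print(found_ordered, found_unordered)                  # True True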

5
Q

Exhaustive (brute force)

A

Try all combinations; usually infeasible for any non-trivial problem size (see the sketch below)
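
A minimal brute-force sketch in Python, assuming a toy objective over bit strings (hypothetical): all 2^n combinations are enumerated, which is fine for n = 12 but hopeless for, say, n = 100.

from itertools import product

def score(bits):                    # hypothetical objective to maximize
    return sum(bits) - 3 * (bits[0] & bits[-1])

n = 12                              # 2**12 = 4096 candidates to evaluate
best = max(product((0, 1), repeat=n), key=score)
print(best, score(best))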

6
Q

Branch and bound

A
  • Divide the problem and compute upper and lower bounds on error/quality (e.g., via linear programming); see the knapsack sketch below
  • Time-consuming
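
A minimal branch-and-bound sketch in Python for the 0/1 knapsack problem (a hypothetical example, not from the flashcards): each branch takes or skips an item, and a fractional (linear-relaxation) upper bound prunes branches that cannot beat the best complete solution found so far.

def knapsack_branch_and_bound(values, weights, capacity):
    # Sort items by value density so the fractional bound is tight.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def upper_bound(i, value, room):
        # Optimistic bound: fill the remaining room with fractional items.
        for v, w in items[i:]:
            if w <= room:
                room -= w
                value += v
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(items) or upper_bound(i, value, room) <= best:
            return                                 # prune: cannot beat the best so far
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)     # branch 1: take item i
        branch(i + 1, value, room)                 # branch 2: skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_branch_and_bound([60, 100, 120], [10, 20, 30], 50))   # 220
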
7
Q

Randomized

A
  • Generate random combinations and pick the best among them
  • Random walk: generate random changes based on the last solution seen and move if the result is better than the current best (see the sketch below)
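
A minimal Python sketch of both variants, assuming a toy one-dimensional objective (hypothetical function and parameters):

import random

def score(x):                                   # hypothetical objective to maximize
    return -(x - 3.2) ** 2

# Pure random search: generate random candidates and keep the best one.
best = max((random.uniform(-10, 10) for _ in range(1000)), key=score)

# Random walk: perturb the last point and move only when it improves.
current = random.uniform(-10, 10)
for _ in range(1000):
    candidate = current + random.gauss(0, 0.5)  # random change based on the last one
    if score(candidate) > score(current):       # current is also the best so far,
        current = candidate                     # since moves happen only on improvement

print(best, current)
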
8
Q

Iterative improvement methods

A
  • Tabu Search
  • Simulated Annealing
  • Evolutionary Algorithm
9
Q

Gradient Descent Approach refers to:

A
  • Refers to learning methods that seek to minimize an objective function by which system performance is measured
  • At each point in time, the policy is to choose the next step that yields the minimal objective function value
  • The learning-rate parameter is the step size taken at each point in time
  • Each step is computed only from local information, which is extremely efficient but introduces the possibility of getting trapped in local minima (see the sketch after this list)
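
A minimal gradient-descent sketch in Python on a one-dimensional quadratic (the function, starting point, and learning rate are hypothetical, chosen only for illustration):

def objective(x):                      # performance measure to minimize
    return (x - 2.0) ** 2 + 1.0

def gradient(x):                       # local information only: d/dx of the objective
    return 2.0 * (x - 2.0)

x = 10.0                               # starting point
learning_rate = 0.1                    # step size taken at each point in time
for _ in range(100):
    x -= learning_rate * gradient(x)   # step toward a lower objective value

print(x, objective(x))                 # ends near x = 2, the minimum
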
10
Q

Hill-Climbing

A

The analogous process in which the objective function is maximized rather than minimized

11
Q

Hill-Climbing Algorithm

A

current ← MAKE-NODE(problem.INITIAL-STATE)
loop do
    neighbor ← a highest-valued successor of current
    if neighbor.VALUE ≤ current.VALUE then return current.STATE
    current ← neighbor
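
A runnable Python sketch of the same steepest-ascent loop, assuming a toy problem (hypothetical): maximize the number of 1s in a bit string, with neighbors generated by flipping one bit.

import random

def value(state):
    return sum(state)                  # objective to maximize

def successors(state):
    # Neighbors of a state: flip one bit at a time.
    return [state[:i] + [1 - state[i]] + state[i + 1:] for i in range(len(state))]

current = [random.randint(0, 1) for _ in range(16)]
while True:
    neighbor = max(successors(current), key=value)   # a highest-valued successor
    if value(neighbor) <= value(current):
        break                                        # no uphill move: local maximum
    current = neighbor

print(current, value(current))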

12
Q

Tabu Search: move selection

A
  • Each move is chosen from a (generated) neighborhood
  • A tabu list holds forbidden moves (for a given duration)
  • A tabu restriction may be overridden when the tabu move leads to a solution better than any visited so far: the aspiration criterion
13
Q

Tabu Search

A
  • Searches the neighborhood of the last solution
  • Tries to overcome local minima by exploring neighbors and deciding moves based on “memory”
  • Semi-random exploration based on maintaining a selective history of the states encountered during the search
  • New states (neighbors) are feasible moves if they are not in the tabu list (tabu rule) → memory (short- and long-term)
  • The neighborhood of a state is not a static set, but one that can change according to the history of the search (see the sketch below)
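
A minimal tabu-search sketch in Python (hypothetical problem and parameters): maximize the number of 1s in a bit string, keeping a short-term memory of recently flipped positions plus an aspiration override.

import random
from collections import deque

def value(state):
    return sum(state)                         # objective to maximize

n, tabu_tenure = 16, 5
current = [random.randint(0, 1) for _ in range(n)]
best = current[:]
tabu = deque(maxlen=tabu_tenure)              # short-term memory of forbidden moves

for _ in range(200):
    candidates = []
    for i in range(n):
        neighbor = current[:]
        neighbor[i] = 1 - neighbor[i]
        # Tabu rule with aspiration: a move is feasible if it is not tabu,
        # or if it yields a solution better than any visited so far.
        if i not in tabu or value(neighbor) > value(best):
            candidates.append((value(neighbor), i, neighbor))
    score, move, current = max(candidates)    # best feasible move in the neighborhood
    tabu.append(move)                         # forbid reversing this move for a while
    if score > value(best):
        best = current[:]

print(best, value(best))
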
14
Q

Simulated Annealing combines:

A

Gradient descent with random walk

15
Q

Simulation of cooling of a heated material

A
  • A solid material is heated past its melting point and then cooled back into a solid state (annealing)
  • The final structure depends on how the cooling is performed
  • Slow cooling → large crystals (low energy)
  • Fast cooling → imperfections (high energy)
  • The Metropolis algorithm simulates the change in energy of the system when subjected to the cooling process, converging to a final state (of a certain energy)
16
Q

Simulated Annealing: a new solution is accepted if

A
  • It has a lower cost than the previous solution
  • When the cost is higher, it is accepted only with a probability based on the difference in cost and the defined “temperature” (see the sketch below)
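
A minimal Python sketch of this acceptance rule (hypothetical names; the difference is new cost minus current cost):

import math
import random

def accept(delta_cost, temperature):
    # Always accept a lower-cost solution; accept a higher-cost one
    # only with probability exp(-delta_cost / temperature).
    if delta_cost <= 0:
        return True
    return random.random() < math.exp(-delta_cost / temperature)
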
17
Q

Simulated Annealing Algorithm

A

current ← MAKE-NODE(problem.INITIAL-STATE)
for t = 1 to ∞ do
    T ← schedule(t)
    if T = 0 then return current
    next ← a randomly selected (similar) successor of current
    ∆E ← next.VALUE – current.VALUE
    if ∆E > 0 then current ← next
    else current ← next only with probability e^(∆E/T)
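
A runnable Python sketch of the loop above, assuming a toy objective to maximize and a simple geometric cooling schedule (both hypothetical):

import math
import random

def value(x):
    return -(x - 3.2) ** 2                     # toy objective; maximum at x = 3.2

def schedule(t, T0=1.0, alpha=0.99):
    T = T0 * alpha ** t                        # "temperature" decreasing over time
    return T if T > 1e-3 else 0

current = random.uniform(-10, 10)
t = 0
while True:
    t += 1
    T = schedule(t)
    if T == 0:
        break
    nxt = current + random.gauss(0, 1.0)       # a randomly selected (similar) successor
    delta_e = value(nxt) - value(current)
    if delta_e > 0 or random.random() < math.exp(delta_e / T):
        current = nxt                          # worse moves accepted with prob. e^(∆E/T)

print(current, value(current))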

18
Q

Simulated Annealing Issues

A
  • Minimum and maximum temperature
  • Scheme for decreasing temperature
  • Temperature length (number of iterations at each temperature)
  • Evaluation (cost) function
  • Initialization
19
Q

Genetic Algorithm

A

Tries to find the best-fitted solutions by:

  • Generating a population of solutions
  • Evaluating all of them (fitness function)
  • Recombining and/or mutating the best individuals in the evolutionary population to create a new generation
20
Q

Genetic Algorithm implementation

A

repeat
    new population ← empty set
    for i = 1 to SIZE(population) do
        x ← RANDOM-SELECTION(population, FITNESS-FN)
        y ← RANDOM-SELECTION(population, FITNESS-FN)
        child ← REPRODUCE(x, y)
        if (small random probability) then child ← MUTATE(child)
        add child to new population
    population ← new population
until some individual is fit enough, or enough time has elapsed
return the best individual in population, according to FITNESS-FN
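
A runnable Python sketch of the loop above on a toy "one-max" problem (the representation, fitness function, and parameters are hypothetical): bit strings evolve toward all ones.

import random

GENOME, POP, MUTATION = 20, 30, 0.05

def fitness(ind):
    return sum(ind)                            # number of 1s in the bit string

def random_selection(population):
    # Fitness-proportionate selection, one common reading of RANDOM-SELECTION.
    return random.choices(population, weights=[fitness(i) + 1 for i in population])[0]

def reproduce(x, y):
    c = random.randint(1, GENOME - 1)          # one-point crossover
    return x[:c] + y[c:]

def mutate(child):
    i = random.randrange(GENOME)               # flip one random bit
    child[i] = 1 - child[i]
    return child

population = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for _ in range(200):                           # or: until enough time has elapsed
    new_population = []
    for _ in range(POP):
        x, y = random_selection(population), random_selection(population)
        child = reproduce(x, y)
        if random.random() < MUTATION:         # small random probability
            child = mutate(child)
        new_population.append(child)
    population = new_population
    if max(map(fitness, population)) == GENOME:
        break                                  # some individual is fit enough

print(max(population, key=fitness))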

21
Q

Important Aspects of Genetic Algorithms

A

Implementation

  • Choose a good representation
  • Decide on the population size, mutation rate, ...
  • How to select/delete individuals
  • The best crossover(s) and kind(s) of mutation
  • Termination criteria
  • Performance and scalability
  • An adequate evaluation function
22
Q

Advantages of Genetic Algorithms

A

  • Are easily understandable
  • Support multi-objective optimization
  • Show good performance in noisy environments
  • Answers get better over time
  • Inherently parallel and easy to distribute