ES-1 Flashcards
(15 cards)
Nondeterministic direct search methods vs. deterministic direct search methods?
One has an element of randomness; the other does not.
Evolution strategies do not make use of which properties of the function?
ES do not use derivatives, smoothness, convexity, or separability.
Describe the (1+1)-ES.
- One parent generates one offspring; the better of the two becomes the parent of the next iteration.
What is the one-fifth rule for step size adaptation in the (1+1)-ES?
- The step size is adapted so that roughly one out of five offspring improves the function value: increase the step size when the success rate is above 1/5, decrease it when below.
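A minimal sketch of a (1+1)-ES with one-fifth-rule step size control, assuming a user-supplied objective f to be minimized; the update factors and iteration budget are illustrative choices, not prescribed by the cards:

```python
import numpy as np

def one_plus_one_es(f, x, sigma=1.0, iterations=1000):
    """(1+1)-ES: one parent produces one offspring per iteration and the
    better of the two survives; sigma follows the one-fifth success rule."""
    for _ in range(iterations):
        offspring = x + sigma * np.random.randn(len(x))
        if f(offspring) <= f(x):
            x = offspring
            sigma *= np.exp(1.0 / 3)    # success: enlarge the step size
        else:
            sigma *= np.exp(-1.0 / 12)  # failure: shrink it; the two factors
                                        # cancel exactly at a 1/5 success rate
    return x, sigma

# usage: minimize the 10-D sphere function
xbest, _ = one_plus_one_es(lambda x: float(np.sum(x**2)), np.ones(10))
```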
Describe the (1, λ)-ES.
- One parent generates λ offspring; the best offspring becomes the new parent (the old parent is discarded).
(1+1)-ES vs (1, λ)-ES?
- The (1, λ)-ES is less efficient than the (1+1)-ES unless offspring can be evaluated in parallel.
Describe the (μ, λ)-ES.
- μ parents generate λ offspring; the best μ offspring are selected as the new parents.
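A sketch of the comma selection scheme of the (μ, λ)-ES, with the step size held fixed so that only the selection mechanism is shown; the function and parameter names are placeholders:

```python
import numpy as np

def mu_comma_lambda_es(f, parents, lam=20, sigma=0.3, iterations=200):
    """(mu, lambda)-ES: mu parents generate lam offspring; the best mu
    offspring become the new parents (the old parents are discarded)."""
    mu, dim = parents.shape
    for _ in range(iterations):
        idx = np.random.randint(mu, size=lam)           # pick a parent per offspring
        offspring = parents[idx] + sigma * np.random.randn(lam, dim)
        order = np.argsort([f(z) for z in offspring])   # rank by objective value
        parents = offspring[order[:mu]]                 # comma selection
    return parents
```

With μ = 1 this reduces to the (1, λ)-ES of the previous card.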
(1, λ)-ES vs (μ, λ)-ES on the sphere problem and in the presence of noise?
- On the sphere problem, the (μ, λ)-ES with μ > 1 is less efficient than the (1, λ)-ES: keeping only the single best offspring is enough.
- In the presence of noise, the (μ, λ)-ES can be superior to the (1, λ)-ES.
What is self-adaptation of the step size in the (μ, λ)-ES?
- Each individual adapts its own step size, so step sizes differ across the population.
- Good strategy parameters tend to produce good objective parameters, so the candidates carrying them survive selection.
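A sketch of self-adaptation in a (μ, λ)-ES: each offspring first mutates its own step size (here log-normally, with learning rate τ = 1/√(2n), a common but assumed choice), then its search point; selection acts only on f, so good step sizes survive indirectly:

```python
import numpy as np

def self_adaptive_es(f, parents, sigmas, lam=20, iterations=200):
    """(mu, lambda)-ES with self-adaptive step sizes."""
    mu, dim = parents.shape
    tau = 1.0 / np.sqrt(2 * dim)                   # learning rate for sigma
    for _ in range(iterations):
        idx = np.random.randint(mu, size=lam)
        # mutate the strategy parameter first (log-normal perturbation) ...
        new_sigmas = sigmas[idx] * np.exp(tau * np.random.randn(lam))
        # ... then the objective parameters with the offspring's own sigma
        offspring = parents[idx] + new_sigmas[:, None] * np.random.randn(lam, dim)
        order = np.argsort([f(z) for z in offspring])
        parents = offspring[order[:mu]]
        sigmas = new_sigmas[order[:mu]]
    return parents, sigmas
```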
What are the problems of self-adaptation?
- It adjusts the mutation step size, but the selection of strategy parameters is indirect and noisy.
- Large populations are required to adapt more than a single parameter.
- Self-adaptation does not perform well for the (μ/μ, λ)-ES: its efficiency remains similar to that of the (1+1)-ES.
What is recombination?
- Create an offspring that is the average of the parents.
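A small illustration of intermediate vs. weighted recombination; the log-rank weights are one common choice, assumed here only for illustration (parents are taken to be sorted from best to worst):

```python
import numpy as np

parents = np.random.randn(5, 10)            # mu = 5 parents in 10 dimensions

# intermediate recombination: plain average of all parents
child_intermediate = parents.mean(axis=0)

# weighted recombination: better-ranked parents receive larger weights
w = np.log(5 + 0.5) - np.log(np.arange(1, 6))
w /= w.sum()
child_weighted = w @ parents
```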
Describe the (μ/μ, λ)-ES and its advantage.
- μ parents are recombined (intermediate or weighted recombination of all μ parents) to generate λ offspring.
- Adding recombination makes the strategy robust in the presence of noise.
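A sketch of a (μ/μ, λ)-ES with intermediate recombination of the μ best offspring; the step size is kept constant here to isolate the recombination idea, and μ = λ/4 is an assumed setting:

```python
import numpy as np

def mu_over_mu_lambda_es(f, x, sigma=0.3, lam=20, iterations=300):
    """(mu/mu, lambda)-ES: all offspring are sampled around the single
    recombined parent; averaging the mu selected offspring damps the
    influence of individual (possibly noisy) evaluations."""
    mu, dim = lam // 4, len(x)
    for _ in range(iterations):
        offspring = x + sigma * np.random.randn(lam, dim)
        order = np.argsort([f(z) for z in offspring])
        x = offspring[order[:mu]].mean(axis=0)   # intermediate recombination
    return x
```

In practice the step size would be adapted as well, e.g. by self-adaptation or by the cumulative step size adaptation of the next card.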
What is cumulative step size adaptation (CSA), and what is its advantage under noise?
- If consecutive steps are positively correlated, the step size should be increased.
- If consecutive steps are negatively correlated, the step size should be decreased.
- The correlation is measured on an evolution path that accumulates the previous steps.
- CSA works well even in the presence of noise if μ and λ are sufficiently large.
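A simplified sketch of cumulative step size adaptation on top of a (μ/μ, λ)-ES; the cumulation factor, the omitted damping (set to 1), and the approximation E‖N(0, I)‖ ≈ √n are assumed simplifications:

```python
import numpy as np

def csa_es(f, x, sigma=0.3, lam=20, iterations=300):
    """(mu/mu, lambda)-ES with simplified cumulative step size adaptation:
    an evolution path accumulates recent mean steps; a longer-than-expected
    path signals positively correlated steps (increase sigma), a shorter
    one negatively correlated steps (decrease sigma)."""
    dim, mu = len(x), lam // 4
    c = 1.0 / np.sqrt(dim)                    # cumulation factor (assumed)
    expected = np.sqrt(dim)                   # ~ E||N(0, I)|| in dim dimensions
    path = np.zeros(dim)
    for _ in range(iterations):
        z = np.random.randn(lam, dim)                      # standard-normal steps
        order = np.argsort([f(x + sigma * zi) for zi in z])
        z_mean = z[order[:mu]].mean(axis=0)                # recombined selected step
        x = x + sigma * z_mean
        # accumulate the (variance-normalized) mean step into the path
        path = (1 - c) * path + np.sqrt(c * (2 - c) * mu) * z_mean
        # path longer than expected -> increase sigma; shorter -> decrease
        sigma *= np.exp(c * (np.linalg.norm(path) / expected - 1))
    return x, sigma
```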
What did we learn about the step size in the presence of noise?
In the presence of noise, it may be useful to make large trial steps, but small search steps.
How do evolution strategies differ from other direct search methods?
- nondeterministic
- use a population of candidate solutions
- not driven by the desire to guarantee convergence to stationary points
- emphasize adaptivity
- strong invariance properties