Lecture 5 Flashcards

1
Q

how does an evolution strategy work?

A
  • initialize the population
  • evaluate the population
  • WHILE termination criterion not met DO:
    • recombination
    • mutation
    • evaluate the new generation (the offspring)
    • selection ((mu + lambda) or (mu, lambda))
    • set the population to the new generation
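A minimal sketch of this loop in Python, assuming a real-valued minimization problem; the objective function, parameter values, and the choice of discrete/intermediate recombination with (mu, lambda) selection are illustrative, not from the card:

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2)  # illustrative objective, to be minimized

def evolution_strategy(f, n=10, mu=15, lam=100, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    tau0 = 1.0 / np.sqrt(n)                                # learning rate (Schwefel)
    pop = [(rng.normal(size=n), 1.0) for _ in range(mu)]   # individuals: (x, sigma)
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            # recombination: discrete for x, intermediate for sigma
            i, j = rng.choice(mu, size=2, replace=False)
            mask = rng.random(n) < 0.5
            x = np.where(mask, pop[i][0], pop[j][0])
            sigma = 0.5 * (pop[i][1] + pop[j][1])
            # one-sigma self-adaptive mutation
            sigma *= np.exp(tau0 * rng.normal())
            x = x + sigma * rng.normal(size=n)
            offspring.append((x, sigma))
        # evaluate the offspring and apply (mu, lambda) selection
        offspring.sort(key=lambda ind: f(ind[0]))
        pop = offspring[:mu]                               # new generation
    return min(pop, key=lambda ind: f(ind[0]))

best_x, best_sigma = evolution_strategy(sphere)
```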
2
Q

what kinds of mutation operators exist for ES?

A
  • one sigma (self-adaptive with one step size)
  • individual sigma (self-adaptive with individual step sizes)
  • correlated mutations
3
Q

how does one sigma work?

A

individual before mutation: a = ((x1, x2, …), sigma)
mutate the step size: sigma’ = sigma * exp(tau_0 * N(0,1))
mutate the search variables: x’_i = x_i + sigma’ * N_i(0,1), with a fresh N_i(0,1) draw per variable
individual after mutation: a’ = ((x’1, x’2, …), sigma’)
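A sketch of this mutation in Python; numpy and the function name are assumptions for illustration:

```python
import numpy as np

def mutate_one_sigma(x, sigma, rng, tau0=None):
    """Self-adaptive mutation with a single step size."""
    n = len(x)
    if tau0 is None:
        tau0 = 1.0 / np.sqrt(n)                       # Schwefel's recommendation
    sigma_new = sigma * np.exp(tau0 * rng.normal())   # mutate the step size first
    x_new = x + sigma_new * rng.normal(size=n)        # then mutate the search variables
    return x_new, sigma_new

# x2, sigma2 = mutate_one_sigma(np.zeros(5), 1.0, np.random.default_rng(0))
```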

4
Q

what is tau_0?

A

it’s the learning rate
- affects speed of the sigma adaptation
- tau_0 bigger: faster but less precise
- tau_0 smaller: slower but more precise
- according to Schwefel: tau_0 = 1/sqrt(n)
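As a quick check of the formula: with n = 10 search variables, tau_0 = 1/sqrt(10) ≈ 0.32, and with n = 100, tau_0 = 0.1.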

5
Q

what are the pros and cons of one sigma?

A

pros:
- simple adaptation mechanism
- self-adaptation usually fast and precise

cons:
- bad adaptation in case of complicated contour lines
- bad adaptation in case of very differently scaled object variables

6
Q

how does individual mutation work?

A

before: a = ((x1, x2, …), (sigma1, sigma2, …))
1. sample a global perturbation: g ~ N(0,1)
2. mutate the individual step sizes:
sigma’_i = sigma_i * exp(tau’ * g + tau * N_i(0,1))
3. mutate the search variables:
x’_i = x_i + sigma’_i * N_i(0,1)
after: a’ = ((x’1, x’2, …), (sigma’1, sigma’2, …))
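A sketch of this mutation in Python; numpy and the function name are assumptions, and the learning rates follow Schwefel's recommendations from the next card:

```python
import numpy as np

def mutate_individual_sigma(x, sigmas, rng):
    """Self-adaptive mutation with one step size per search variable."""
    n = len(x)
    tau_prime = 1.0 / np.sqrt(2.0 * n)            # global learning rate
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))         # local learning rate
    g = rng.normal()                              # one global draw shared by all step sizes
    sigmas_new = sigmas * np.exp(tau_prime * g + tau * rng.normal(size=n))
    x_new = x + sigmas_new * rng.normal(size=n)
    return x_new, sigmas_new
```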

7
Q

what are tau and tau’?

A

tau’ is the global learning rate
tau is the local learning rate
Schwefel: tau’ = 1/sqrt(2n) and
tau = 1/sqrt(2*sqrt(n))
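For example, with n = 100: tau’ = 1/sqrt(200) ≈ 0.071 and tau = 1/sqrt(2*sqrt(100)) = 1/sqrt(20) ≈ 0.22.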

8
Q

what are the pros and cons of individual mutation?

A

pros:
- individual scaling of object variables
- increased global convergence reliability

cons:
- slower convergence due to increased learning effort
- no rotation of coordinate system possible

9
Q

What is correlated mutation?

A
  • changes to the parameters are not independent, but guided by a covariance matrix that captures the correlations between the different elements of the solution vector
  • the rotation angles of the mutation planes are also self-adapted
  • the correlated mutation vector is obtained by multiplying an uncorrelated mutation vector by the rotation matrices; equivalently, the covariance matrix is built from the step sizes and rotation angles
  • the covariance matrix must be positive definite
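In the usual textbook construction (Schwefel), the covariance matrix entries follow from the step sizes and rotation angles: c_ii = sigma_i^2 on the diagonal and c_ij = 1/2 * (sigma_i^2 - sigma_j^2) * tan(2*alpha_ij) off the diagonal.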
10
Q

what are the pros and cons of correlated mutation?

A

pros:
- individual scaling of object variables
- rotation of coordinate system possible
- increased global convergence reliability

cons:
- much slower convergence
- effort for mutations scales quadratically
- self-adaptation very inefficient

11
Q

what is recombination?

A

the step in an ES, directly after selection and before mutation, that iteratively generates the lambda offspring

12
Q

how does discrete recombination work?

A

the variable at position i will be copied at random (uniformly distributed) from position i of parent1 or parent2 to the offspring
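A sketch in Python, assuming the parents are numpy arrays of equal length:

```python
import numpy as np

def discrete_recombination(parent1, parent2, rng):
    # per position, copy the value from parent1 or parent2 with equal probability
    mask = rng.random(len(parent1)) < 0.5
    return np.where(mask, parent1, parent2)
```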

13
Q

how does intermediate recombination work?

A

the variable at position i of the offspring is the arithmetic mean of position i from parent1 and parent2
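A sketch in Python, again assuming numpy-array parents:

```python
def intermediate_recombination(parent1, parent2):
    # position-wise arithmetic mean of the two parents
    return 0.5 * (parent1 + parent2)
```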

14
Q

how does global discrete recombination work?

A

consider all parents and, for each position i, copy the value at position i of a randomly chosen parent to position i of the offspring
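A sketch in Python, assuming the parents are stacked into a (mu, n) numpy array:

```python
import numpy as np

def global_discrete_recombination(parents, rng):
    mu, n = parents.shape
    donor = rng.integers(mu, size=n)       # one randomly chosen parent per position
    return parents[donor, np.arange(n)]
```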

15
Q

how does global intermediary recombination work?

A

take the mean of all parents at position i
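A sketch in Python, assuming the same (mu, n) parent array as above:

```python
def global_intermediate_recombination(parents):
    # position-wise mean over all parents
    return parents.mean(axis=0)
```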

16
Q

what is (mu + lambda) selection?

A
  • mu parents produce lambda offspring by (recombination and) mutation
  • the mu best out of mu + lambda will be selected (deterministic selection)
  • this method guarantees monotonicity: deteriorations will never be accepted
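A sketch in Python, assuming lists of individuals and a fitness function to be minimized:

```python
def plus_selection(parents, offspring, fitness, mu):
    # deterministic: keep the mu best out of parents + offspring,
    # so the best fitness can never deteriorate
    return sorted(parents + offspring, key=fitness)[:mu]
```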
17
Q

what is (mu, lambda) selection?

A
  • mu parents produce lambda >> mu offspring by (recombination and) mutation
  • the mu best out of lambda offspring will be selected
  • this method does not guarantee monotonicity: deteriorations are possible
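A sketch in Python, under the same assumptions as the (mu + lambda) version:

```python
def comma_selection(offspring, fitness, mu):
    # deterministic: keep the mu best of the lambda offspring only;
    # parents are discarded, so deteriorations are possible
    return sorted(offspring, key=fitness)[:mu]
```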
18
Q

what are the possible occurrences of selection in ES?

A
  • (1+1): one parent, one offspring, 1/5 rule
  • (1, lambda): one parent, lambda offspring
  • (mu, lambda): mu > 1 parents, lambda > mu offspring (can overcome local optima)
  • elitist strategies
19
Q

what is the takeover time tau*?

A

the number of generations until repeated application of selection completely fills the population with copies of the initially best individual

for (mu, lambda) selection in ES:
tau* = ln lambda / ln (lambda/mu)

for proportional selection in GA:
tau* = lambda ln lambda
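As an illustrative calculation with these formulas: for a (15, 100)-ES, tau* = ln 100 / ln(100/15) ≈ 4.6 / 1.9 ≈ 2.4 generations, whereas proportional selection with lambda = 100 gives tau* ≈ 100 * ln 100 ≈ 460 generations.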

20
Q

what are the update rules for correlated mutation?

A

step sizes: updated the same way as for individual sigma
rotation angles: alpha’_j = alpha_j + N(0, beta), for j = 1, …, n(n-1)/2
search variables: x’ = x + N(0, C’), i.e. the mutation vector is drawn from a multivariate normal distribution
with C’ (the new covariance matrix) computed from the alpha’_j and sigma’_i
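A sketch of one such mutation in Python; the covariance construction (c_ii = sigma_i^2, c_ij = 1/2*(sigma_i^2 - sigma_j^2)*tan(2*alpha_ij)) is the usual textbook formulation, and beta = 0.0873 (about 5 degrees) is a common choice not given on the card:

```python
import numpy as np

def mutate_correlated(x, sigmas, alphas, rng, beta=0.0873):
    """Correlated mutation: x and sigmas have length n, alphas has length n*(n-1)/2."""
    n = len(x)
    tau_prime = 1.0 / np.sqrt(2.0 * n)
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))
    # step-size update, same as for individual sigma
    g = rng.normal()
    sigmas_new = sigmas * np.exp(tau_prime * g + tau * rng.normal(size=n))
    # rotation-angle update (beta used as the standard deviation here)
    alphas_new = alphas + beta * rng.normal(size=len(alphas))
    # build the new covariance matrix C' from the new sigmas and alphas
    C = np.diag(sigmas_new ** 2)
    k = 0
    for i in range(n):
        for j in range(i + 1, n):
            c = 0.5 * (sigmas_new[i] ** 2 - sigmas_new[j] ** 2) * np.tan(2.0 * alphas_new[k])
            C[i, j] = C[j, i] = c
            k += 1
    # x' = x + N(0, C'); C' must be positive definite for a valid distribution
    x_new = x + rng.multivariate_normal(np.zeros(n), C)
    return x_new, sigmas_new, alphas_new
```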