Lecture 4 Flashcards

1
Q

what is the success probability p_a+?

A

p_a+ = P{ f(m(a)) > f(a) }
the probability that the mutated candidate solution m(a) has a higher fitness than the original candidate solution a
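
A minimal sketch (assumptions, not from the lecture: Python, OneMax as the fitness function, independent bit-flip mutation with rate p) that estimates p_a+ by sampling:

    import random

    def onemax(x):
        # fitness = number of ones in the bit string
        return sum(x)

    def mutate(x, p):
        # flip each bit independently with probability p
        return [1 - b if random.random() < p else b for b in x]

    def estimate_p_plus(a, p, trials=100_000):
        # Monte Carlo estimate of p_a+ = P{ f(m(a)) > f(a) }
        f_a = onemax(a)
        return sum(onemax(mutate(a, p)) > f_a for _ in range(trials)) / trials

    # example: l = 20 bits, 12 of them already correct, mutation rate p = 1/l
    a = [1] * 12 + [0] * 8
    print(estimate_p_plus(a, p=1/20))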

2
Q

what is the convergence velocity of a (1+1) GA?

A

phi(1+1) = sum from k=0 to k_max of k * p_a+(k)
(the k = 0 term contributes nothing, so the sum effectively starts at k = 1)

for Counting Ones, k_max = l - f_a,
where f_a is the fitness, i.e. the number of bits that are already correct and do not have to change

3
Q

what is a (1+1) GA?

A

a genetic algorithm in which the parent population and the offspring population each consist of a single individual: one parent generates one mutated offspring per generation, and the better of the two becomes the next parent
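
A minimal (1+1) EA sketch on OneMax (Python; the string length, the mutation rate 1/l and the acceptance of equal fitness are illustrative assumptions):

    import random

    def one_plus_one_ea(l=50, p=None, max_gens=100_000, seed=0):
        # one parent, one mutated offspring per generation, keep the better of the two
        rng = random.Random(seed)
        p = 1 / l if p is None else p              # common default mutation rate 1/l
        parent = [rng.randint(0, 1) for _ in range(l)]
        f_parent = sum(parent)
        for gen in range(1, max_gens + 1):
            child = [1 - b if rng.random() < p else b for b in parent]
            f_child = sum(child)
            if f_child >= f_parent:                # offspring replaces parent only if not worse
                parent, f_parent = child, f_child
            if f_parent == l:                      # all bits correct: optimum reached
                return gen
        return max_gens

    print(one_plus_one_ea())                       # number of generations needed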

4
Q

for the OneMax problem, what is the probability that we flip i 1s to 0s?

A

(f_a choose i) * p^i * (1 - p)^(f_a - i)
where p is the mutation rate and f_a is the fitness, i.e. the number of ones in the string

5
Q

for the OneMax problem, what is the probability that we flip (i+k) 0s to 1s?

A

(l - f_a choose i + k) * p^(i + k) * (1 - p)^(l - f_a - i - k)
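
A small sketch (Python; l, f_a, i, k and p below are arbitrary example values) that evaluates the binomial expressions from this card and the previous one:

    from math import comb

    def p_flip_ones_to_zeros(i, f_a, p):
        # probability that exactly i of the f_a ones flip to 0
        return comb(f_a, i) * p**i * (1 - p)**(f_a - i)

    def p_flip_zeros_to_ones(i, k, l, f_a, p):
        # probability that exactly i + k of the l - f_a zeros flip to 1
        return comb(l - f_a, i + k) * p**(i + k) * (1 - p)**(l - f_a - i - k)

    l, f_a, p = 20, 12, 1/20
    print(p_flip_ones_to_zeros(1, f_a, p))
    print(p_flip_zeros_to_ones(1, 2, l, f_a, p))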

6
Q

what is absorption in a Markov chain?

A

a state that, once entered, is never left again; for OneMax with the (1+1) EA this is state 0, the state in which all l bits are correct

7
Q

what is the time to absorption of OneMax with the (1+1) EA as a function of l?

A

of the order l ln l, i.e. O(l ln l)
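
An illustrative scaling check, not taken from the lecture (Python; p = 1/l and the chosen lengths are assumptions): average the generations a (1+1) EA needs to solve OneMax for a few string lengths and divide by l ln l; if the O(l ln l) order is right, the ratio stays roughly constant:

    import math, random

    def time_to_optimum(l, p, rng):
        # run a (1+1) EA on OneMax until all l bits are 1, count the generations
        parent = [rng.randint(0, 1) for _ in range(l)]
        f_parent = sum(parent)
        gen = 0
        while f_parent < l:
            gen += 1
            child = [1 - b if rng.random() < p else b for b in parent]
            f_child = sum(child)
            if f_child >= f_parent:
                parent, f_parent = child, f_child
        return gen

    rng = random.Random(1)
    for l in (25, 50, 100, 200):
        runs = [time_to_optimum(l, 1 / l, rng) for _ in range(10)]
        avg = sum(runs) / len(runs)
        print(l, round(avg), round(avg / (l * math.log(l)), 2))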

8
Q

what is the approximated optimum mutation rate?

A

p* = 1 / (2(f_a + 1) - l)
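
A short sketch (Python, example l) tabulating this approximation for several fitness values; note that at f_a = l - 1 it reduces to p* = 1/l, the usual rule-of-thumb mutation rate:

    l = 100
    for f_a in (55, 65, 75, 90, 99):
        # approximated optimal mutation rate p* = 1 / (2 * (f_a + 1) - l)
        p_star = 1 / (2 * (f_a + 1) - l)
        print(f_a, round(p_star, 4))
    # at f_a = l - 1 = 99 this gives p* = 1/100 = 1/l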

9
Q

what is the effect of different mutation rate settings?

A

if p_m is too large, the time to absorption grows exponentially because the search degenerates into random search
if p_m is too small, the time to absorption stays almost constant

10
Q

what is (1 + lambda) GA?

A

one parent is used to generate lambda offspring

the next generation parent is chosen as the best among the offspring and the old parent: a_next = best(O_t U {a_t})

11
Q

what is (1, lambda) GA?

A

one parent is used to generate lambda offspring

the next generation parent is chosen as the best among the offspring only: a_next = best(O_t)
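
A compact sketch (Python, OneMax, illustrative names and parameter values) of a single generation under both strategies; the only difference is whether the old parent competes with its offspring:

    import random

    def next_parent(parent, lam, p, plus, rng):
        # create lam offspring by bit-flip mutation, then select the next parent
        offspring = [[1 - b if rng.random() < p else b for b in parent] for _ in range(lam)]
        pool = (offspring + [parent]) if plus else offspring   # (1 + lambda) vs (1, lambda)
        return max(pool, key=sum)                              # OneMax fitness = number of ones

    rng = random.Random(2)
    parent = [rng.randint(0, 1) for _ in range(30)]
    print(sum(next_parent(parent, lam=5, p=1/30, plus=True, rng=rng)))   # (1 + lambda)
    print(sum(next_parent(parent, lam=5, p=1/30, plus=False, rng=rng)))  # (1, lambda)

Under comma-selection the parent's fitness can decrease from one generation to the next; under plus-selection it never can.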

12
Q

what is the probability of having exactly k improvements in a (1+1)-GA?

A

p_a+(k) = sum from i = 0 to min(f_a, l - f_a - k) of [ (f_a choose i) * p^i * (1 - p)^(f_a - i) ] * [ (l - f_a choose i + k) * p^(i + k) * (1 - p)^(l - f_a - i - k) ]
i.e. the sum over all feasible i of (the probability that i 1s flip to 0s) * (the probability that (i + k) 0s flip to 1s)
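
A sketch (Python; l, f_a and p are example values) that evaluates p_a+(k) exactly and, by summing k * p_a+(k) as in card 2, also the convergence velocity:

    from math import comb

    def p_plus(k, l, f_a, p):
        # exact probability of a net gain of exactly k ones in one mutation
        return sum(comb(f_a, i) * p**i * (1 - p)**(f_a - i)
                   * comb(l - f_a, i + k) * p**(i + k) * (1 - p)**(l - f_a - i - k)
                   for i in range(0, min(f_a, l - f_a - k) + 1))

    def convergence_velocity(l, f_a, p):
        # phi(1+1) = sum over k of k * p_a+(k), with k_max = l - f_a
        return sum(k * p_plus(k, l, f_a, p) for k in range(1, l - f_a + 1))

    l, f_a, p = 20, 12, 1/20
    print(p_plus(1, l, f_a, p))             # probability of gaining exactly one 1
    print(convergence_velocity(l, f_a, p))  # expected improvement per generation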
