PSO Flashcards
(7 cards)
What is Particle Swarm Optimization (PSO)?
A population-based stochastic optimization technique inspired by social behavior like bird flocking or fish schooling. (ACO & PSO Slides, p26)
A metaheuristic.
Each “particle” (candidate solution) “flies” through the multi-dimensional search space. (ACO & PSO Slides, p26)
Particles adjust their paths based on their own experience and the experience of other particles in the swarm.
What properties does each particle have?
Position (x_i): Its current location in the search space (representing a candidate solution).
Velocity (v_i): The speed and direction of its movement.
Fitness Value: The quality of its current position, evaluated by an objective function.
Personal Best (pbest_i or p_i): The best position it has found so far.
(Each particle also knows the Global Best (gbest or g): The best position found so far by any particle in the entire swarm.)
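The properties above can be sketched as a simple data structure (a hedged sketch; the class and field names are illustrative, not from the slides):

```python
from dataclasses import dataclass

@dataclass
class Particle:
    position: list[float]       # x_i: current candidate solution
    velocity: list[float]       # v_i: speed and direction of movement
    fitness: float              # objective value at the current position
    best_position: list[float]  # pbest_i: best position found so far
    best_fitness: float         # fitness at pbest_i

# gbest is tracked at the swarm level, e.g. the best pbest over all particles.
```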
How do particles move/update their state?
Particles move through the search space.
Their movement (velocity and position update) is influenced by:
Their current velocity (inertia).
Their own personal best (pbest) position found so far (cognitive component - individual experience).
The global best (gbest) position found by any particle in the swarm (social component - collective experience).
PSO: What are the main components of a particle’s velocity update?
v_i(t+1) = w*v_i(t) + c1*r1*(pbest_i - x_i(t)) + c2*r2*(gbest - x_i(t))
Inertia term: w*v_i(t) carries forward the previous velocity.
Cognitive term: c1*r1*(pbest_i - x_i(t)) pulls the particle toward its own best.
Social term: c2*r2*(gbest - x_i(t)) pulls the particle toward the swarm's best.
How does a particle update its position?
x_i(t+1) = x_i(t) + v_i(t+1)
The new position is the old position plus the newly calculated velocity.
(ACO & PSO Slides, p29)
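The two update equations can be applied together in one step. A minimal sketch for a single particle in one dimension (the parameter values w=0.7, c1=c2=1.5 are illustrative, not prescribed by the slides):

```python
import random

def step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update for a 1-D particle: new velocity, then new position."""
    r1, r2 = random.random(), random.random()
    # v_i(t+1) = w*v_i(t) + c1*r1*(pbest_i - x_i(t)) + c2*r2*(gbest - x_i(t))
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # x_i(t+1) = x_i(t) + v_i(t+1)
    x_new = x + v_new
    return x_new, v_new
```

Note that when pbest = gbest = x, the cognitive and social terms vanish and the velocity simply decays by the inertia weight.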
PSO: Key parameters?
Swarm Size: Number of particles.
Inertia Weight (w): Controls the impact of the previous velocity.
Cognitive Coefficient (c1): Weight of the particle’s pbest.
Social Coefficient (c2): Weight of the swarm’s gbest.
r1, r2: Random numbers drawn uniformly from [0, 1] at each update; they introduce stochasticity.
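Putting the parameters and update rules together, a minimal global-best PSO loop minimizing the sphere function f(x) = sum of x_j^2 (a sketch; the swarm size, iteration count, bounds, and coefficient values are illustrative choices):

```python
import random

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with a basic global-best PSO."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]  # position update
            val = f(pos[i])
            if val < pbest_val[i]:       # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:      # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = pso(sphere)
```

The swarm should converge close to the minimum at the origin; tuning w, c1, and c2 trades off exploration against exploitation.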
What kind of optimization problems is PSO well-suited for?
Problems with continuous search spaces; the objective need not be differentiable (and PSO can also be adapted for discrete problems).
It does not require knowledge of the gradient (unlike gradient descent).
Can handle complex, non-linear, multi-modal search spaces.