Multimodal Optimization Tutorial
This tutorial tackles the Rastrigin function, a challenging multimodal benchmark with many local optima. You'll learn strategies for escaping local optima and finding global solutions.
The Rastrigin Function
The Rastrigin function is defined as:
f(x) = 10n + Σᵢ₌₁ⁿ [xᵢ² − 10cos(2πxᵢ)]
Properties:
- Global minimum: 0 at origin
- Highly multimodal: ~10ⁿ local minima!
- Separable yet hard: The function is a sum of independent one-dimensional terms, but the dense grid of local minima still defeats naive search
- Deceptive: Local minima have fitness values close to the global minimum's
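The formula maps directly to code. A minimal standalone sketch of the evaluation (independent of the library's built-in `Rastrigin` type):

```rust
use std::f64::consts::PI;

/// Rastrigin value at a point x; the global minimum is f(0, ..., 0) = 0.
fn rastrigin(x: &[f64]) -> f64 {
    10.0 * x.len() as f64
        + x.iter()
            .map(|&xi| xi * xi - 10.0 * (2.0 * PI * xi).cos())
            .sum::<f64>()
}

fn main() {
    // At the origin the cosine terms exactly cancel the 10n offset.
    println!("f(origin)  = {}", rastrigin(&[0.0; 20])); // → 0
    // A single coordinate one basin away already costs ~1 in fitness.
    println!("f(1, 0, 0) = {:.3}", rastrigin(&[1.0, 0.0, 0.0])); // → 1.000
}
```

The second call shows why the function is deceptive: being trapped one basin from the origin changes the fitness by only about 1.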
The Challenge
With 20 dimensions, the Rastrigin function has approximately 10²⁰ local minima. A naive optimization will likely get trapped in one of these local minima.
Complete Example
//! Rastrigin Function Benchmark
//!
//! This example demonstrates optimization of the highly multimodal Rastrigin
//! function, a challenging benchmark that tests the GA's ability to escape
//! local optima.
//!
//! The Rastrigin function has many local minima but a single global minimum
//! at the origin.
use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;
fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Rastrigin Function Benchmark ===\n");

    let mut rng = StdRng::seed_from_u64(12345);

    const DIM: usize = 20;

    // Rastrigin is a highly multimodal function
    let fitness = Rastrigin::new(DIM);
    let bounds = MultiBounds::symmetric(5.12, DIM);

    println!("Problem: {} dimensions", DIM);
    println!("Search space: [-5.12, 5.12]^{}", DIM);
    println!("Global optimum: 0.0 at origin\n");

    // Run GA with larger population for multimodal landscape
    let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
        .population_size(200)
        .bounds(bounds)
        .selection(TournamentSelection::new(5)) // Higher pressure
        .crossover(SbxCrossover::new(15.0)) // More exploration
        .mutation(PolynomialMutation::new(20.0).with_probability(0.1))
        .fitness(fitness)
        .max_generations(500)
        .elitism(true)
        .elite_count(2)
        .build()?
        .run(&mut rng)?;

    println!("Results:");
    println!(" Best fitness: {:.6}", result.best_fitness);
    println!(" Generations: {}", result.generations);
    println!(" Evaluations: {}", result.evaluations);

    // Check how close we got to the global optimum
    let max_deviation = result
        .best_genome
        .genes()
        .iter()
        .map(|x| x.abs())
        .fold(0.0f64, |a, b| a.max(b));

    println!("\nSolution quality:");
    println!(" Max deviation from origin: {:.6}", max_deviation);

    // Print fitness progress
    let history = result.stats.best_fitness_history();
    println!("\nFitness progression (every 50 generations):");
    for (i, fitness) in history.iter().enumerate() {
        if i % 50 == 0 {
            println!(" Gen {:4}: {:.6}", i, fitness);
        }
    }

    Ok(())
}
Source:
examples/rastrigin_benchmark.rs
Running the Example
cargo run --example rastrigin_benchmark
Key Strategies
1. Larger Population
.population_size(200)
More individuals means more parallel exploration of the search space. For multimodal functions, this helps maintain diversity and cover more basins of attraction.
2. Higher Selection Pressure
.selection(TournamentSelection::new(5))
Tournament size of 5 (vs. 3 for Sphere) increases selection pressure, helping the population converge faster on good regions once found.
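Tournament selection itself is straightforward. A self-contained sketch of what a size-k tournament does, using a tiny hand-rolled RNG so the snippet has no dependencies (the library's `TournamentSelection` handles all of this internally):

```rust
/// Tiny deterministic linear congruential RNG so the sketch is fully
/// self-contained; a real run would use the `rand` crate instead.
struct Lcg(u64);

impl Lcg {
    fn next_usize(&mut self, bound: usize) -> usize {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        ((self.0 >> 33) as usize) % bound
    }
}

/// Size-k tournament: sample k distinct competitors, return the index of
/// the one with the lowest fitness (Rastrigin is minimized).
fn tournament(fitnesses: &[f64], k: usize, rng: &mut Lcg) -> usize {
    let mut idx: Vec<usize> = (0..fitnesses.len()).collect();
    // Partial Fisher-Yates shuffle: the first k entries become a uniform
    // sample of k distinct indices.
    for i in 0..k.min(idx.len()) {
        let j = i + rng.next_usize(idx.len() - i);
        idx.swap(i, j);
    }
    idx[..k.min(idx.len())]
        .iter()
        .copied()
        .min_by(|&a, &b| fitnesses[a].partial_cmp(&fitnesses[b]).unwrap())
        .unwrap()
}

fn main() {
    let mut rng = Lcg(42);
    let fitnesses = [3.0, 0.5, 7.0, 2.0, 9.0];
    // The larger k is, the more likely the sampled set contains the
    // population's best individual.
    for k in [2, 5] {
        println!("k = {}: winner {}", k, tournament(&fitnesses, k, &mut rng));
    }
}
```

With k equal to the population size the winner is always the global best, which is why larger tournaments mean higher selection pressure.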
3. More Exploration in Crossover
.crossover(SbxCrossover::new(15.0))
Lower distribution index (15 vs. 20) creates more diverse offspring, helping explore new regions.
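To see what the distribution index controls, here is a minimal single-gene sketch of the standard SBX recurrence (the library's `SbxCrossover` applies this per gene and additionally handles bounds):

```rust
/// Simulated binary crossover on a single gene. `u` is a uniform random
/// draw in (0, 1); `eta` is the distribution index. Smaller eta lets
/// children land further from the parents.
fn sbx_children(p1: f64, p2: f64, eta: f64, u: f64) -> (f64, f64) {
    // Spread factor beta from the standard SBX density.
    let beta = if u <= 0.5 {
        (2.0 * u).powf(1.0 / (eta + 1.0))
    } else {
        (1.0 / (2.0 * (1.0 - u))).powf(1.0 / (eta + 1.0))
    };
    (
        0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2),
        0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2),
    )
}

fn main() {
    // Same parents and same random draw, different eta:
    // lower eta produces children further from the parents.
    for eta in [15.0, 20.0] {
        let (c1, c2) = sbx_children(-1.0, 1.0, eta, 0.9);
        println!("eta = {}: children ({:.3}, {:.3})", eta, c1, c2);
    }
}
```

Note that the children are always symmetric around the parents' midpoint; eta only controls how far they spread.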
4. Elitism
.elitism(true)
.elite_count(2)
Critical for multimodal optimization! Without elitism, the best solution can be lost due to selection randomness.
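Conceptually, elitism just copies the top individuals forward untouched. A minimal sketch, assuming fitness is minimized and genomes are plain `Vec<f64>`:

```rust
/// Copy the `elite_count` best individuals (lowest fitness, since Rastrigin
/// is minimized) straight into the next generation before the rest is
/// filled with offspring.
fn apply_elitism(
    population: &[(Vec<f64>, f64)], // (genome, fitness) pairs
    elite_count: usize,
) -> Vec<(Vec<f64>, f64)> {
    let mut ranked: Vec<_> = population.to_vec();
    ranked.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    ranked.truncate(elite_count);
    ranked // the caller appends offspring until the population is full
}

fn main() {
    let pop = vec![
        (vec![0.9], 4.2),
        (vec![0.1], 0.3), // current best
        (vec![0.4], 1.7),
    ];
    let elites = apply_elitism(&pop, 2);
    println!("carried over: {:?}", elites);
}
```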
5. More Generations
.max_generations(500)
Multimodal problems need more time to find and refine global optima.
Understanding Results
Tracking Progress
let history = result.stats.best_fitness_history();
for (i, fitness) in history.iter().enumerate() {
    if i % 50 == 0 {
        println!(" Gen {:4}: {:.6}", i, fitness);
    }
}
Watch for:
- Rapid early improvement: Finding good basins
- Plateaus: Stuck in local optima
- Jumps: Escaping to better regions
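These patterns can also be detected programmatically. A small sketch that scans a fitness history (such as the vector returned by `best_fitness_history()`) for windows with no meaningful improvement, the stall signature of a local optimum:

```rust
/// Return the start index of every window of `window` generations in which
/// the best fitness improved by less than `tol`.
fn find_plateaus(history: &[f64], window: usize, tol: f64) -> Vec<usize> {
    (0..history.len().saturating_sub(window))
        .filter(|&i| (history[i] - history[i + window]).abs() < tol)
        .collect()
}

fn main() {
    // Synthetic history: quick improvement, a stall, then an escape jump.
    let history = [90.0, 40.0, 20.0, 19.9, 19.9, 19.9, 19.9, 5.0, 4.0];
    let stalls = find_plateaus(&history, 3, 0.5);
    println!("plateau starts at generations: {:?}", stalls); // → [2, 3]
}
```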
Solution Quality
let max_deviation = result
    .best_genome
    .genes()
    .iter()
    .map(|x| x.abs())
    .fold(0.0f64, |a, b| a.max(b));
println!("Max deviation from origin: {:.6}", max_deviation);
For Rastrigin, every gene of the global optimum is 0, so each gene should be close to 0. A large deviation means at least one coordinate has settled into a local optimum, near a nonzero integer.
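Because Rastrigin's local minima sit near integer coordinates, a quick trap diagnostic is to count genes that have settled into a nonzero integer basin. A sketch, assuming the genome is available as a slice of `f64`:

```rust
/// Count genes sitting near a nonzero integer -- the telltale coordinates
/// of Rastrigin's local minima.
fn trapped_genes(genes: &[f64], tol: f64) -> usize {
    genes
        .iter()
        .filter(|&&g| {
            let nearest = g.round();
            nearest != 0.0 && (g - nearest).abs() < tol
        })
        .count()
}

fn main() {
    // Two coordinates stuck one basin away from the origin.
    let genes = [0.002, -0.997, 0.004, 1.001, -0.003];
    println!("genes in a local-minimum basin: {}", trapped_genes(&genes, 0.05)); // → 2
}
```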
Alternative Approaches
Island Model
For heavily multimodal problems, consider the Island Model:
// Multiple populations with periodic migration
let result = IslandModelBuilder::<RealVector, _, _, _, _, f64>::new()
    .num_islands(4)
    .island_population_size(50)
    .migration_interval(25)
    .migration_policy(MigrationPolicy::Best(2))
    // ... operators
    .build(&mut rng)?
    .run(200, &mut rng)?;
See Island Model Tutorial for details.
Restart Strategy
Manual restarts can help escape deep local optima:
let mut best_overall = f64::INFINITY;
let mut best_genome = None;

for restart in 0..5 {
    let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
        // ... configuration
        .build()?
        .run(&mut rng)?;

    // Rastrigin is minimized, so a lower fitness is better
    if result.best_fitness < best_overall {
        best_overall = result.best_fitness;
        best_genome = Some(result.best_genome);
    }

    println!("Restart {}: {:.6}", restart, result.best_fitness);
}
Parameter Guidelines for Multimodal Functions
| Parameter | Unimodal | Multimodal |
|---|---|---|
| Population | 50-100 | 150-300 |
| Tournament size | 2-3 | 4-7 |
| SBX eta | 15-25 | 10-15 |
| Mutation probability | 1/n | 1.5/n - 2/n |
| Generations | 100-300 | 300-1000 |
| Elitism | Optional | Essential |
Diagnosing Problems
Stuck in Local Optima
Symptoms:
- Fitness plateaus early
- Solution values sit near nonzero integers (the locations of Rastrigin's local minima)
Solutions:
- Increase mutation probability
- Use Island Model
- Try different random seeds
- Add restarts
Loss of Diversity
Symptoms:
- All individuals become similar
- No improvement despite many generations
Solutions:
- Increase population size
- Lower selection pressure
- Add diversity maintenance (niching)
Exercises
- Vary dimensions: Compare results for DIM = 10, 20, 30
- Compare seeds: Run with 5 different seeds and analyze variance
- Population study: Try populations of 50, 100, 200, 400
Next Steps
- Island Model Tutorial - Parallel populations for multimodal optimization
- CMA-ES Tutorial - Alternative algorithm for continuous optimization