Introduction

fugue-evo is a probabilistic genetic algorithm library for Rust that treats evolution as Bayesian inference over solution spaces. It provides a flexible, type-safe framework for optimization problems ranging from simple continuous optimization to complex genetic programming.

Why Fugue-Evo?

Fugue-evo differentiates itself from traditional GA libraries through several key design principles:

Probabilistic Foundations

Traditional genetic algorithms use heuristic operators, but fugue-evo views evolution through a probabilistic lens:

  • Fitness as Likelihood: Selection pressure maps directly to Bayesian conditioning
  • Learnable Operators: Automatic inference of optimal crossover, mutation, and selection hyperparameters using conjugate priors
  • Trace-Based Evolution: Deep integration with the fugue-ppl probabilistic programming library enables novel trace-based genetic operators

Type Safety

Rust's type system ensures correctness at compile time:

  • Trait-Based Abstraction: The EvolutionaryGenome trait provides a flexible foundation for any genome type
  • Builder Patterns: Algorithm configuration uses type-safe builders that catch misconfigurations at compile time
  • Generic Operators: Selection, crossover, and mutation operators work across genome types with proper type constraints

Production Ready

Built for real-world use:

  • Checkpointing: Save and restore evolution state for long-running optimizations
  • Parallelism: Optional Rayon-based parallel evaluation
  • WASM Support: Run optimizations in the browser with WebAssembly bindings
  • Interactive Evolution: Human-in-the-loop optimization with multiple evaluation modes

Algorithms

Fugue-evo includes several optimization algorithms:

Algorithm      | Best For                     | Key Features
SimpleGA       | General-purpose optimization | Flexible operators, elitism, parallelism
CMA-ES         | Continuous optimization      | Covariance adaptation, state-of-the-art performance
NSGA-II        | Multi-objective optimization | Pareto ranking, crowding distance
Island Model   | Escaping local optima        | Parallel populations, migration
EDA/UMDA       | Probabilistic model building | Distribution estimation
Interactive GA | Subjective optimization      | Human feedback integration

Genome Types

Built-in genome representations for different problem domains:

  • RealVector: Continuous optimization (function minimization)
  • BitString: Combinatorial optimization (knapsack, feature selection)
  • Permutation: Ordering problems (TSP, scheduling)
  • TreeGenome: Genetic programming (symbolic regression)

Getting Started

Ready to dive in? Head to the Installation guide to add fugue-evo to your project, then follow the Quick Start to run your first optimization.

For a deeper understanding of the library's design, explore the Core Concepts section.

Example

Here's a taste of what optimization looks like with fugue-evo:

use fugue_evo::prelude::*;
use rand::SeedableRng;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut rng = rand::rngs::StdRng::seed_from_u64(42);

    // Define search bounds: 10 dimensions in [-5.12, 5.12]
    let bounds = MultiBounds::symmetric(5.12, 10);

    // Run optimization
    let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
        .population_size(100)
        .bounds(bounds)
        .selection(TournamentSelection::new(3))
        .crossover(SbxCrossover::new(20.0))
        .mutation(PolynomialMutation::new(20.0))
        .fitness(Sphere::new(10))
        .max_generations(200)
        .build()?
        .run(&mut rng)?;

    println!("Best fitness: {:.6}", result.best_fitness);
    Ok(())
}

Documentation Structure

This documentation is organized into several sections:

  • Getting Started: Installation, concepts, and first steps
  • Tutorials: Step-by-step walkthroughs of example problems
  • How-To Guides: Task-oriented guides for specific use cases
  • Reference: Detailed API documentation for algorithms, genomes, and operators
  • Architecture: Design philosophy and system internals

API Reference

For complete API documentation generated from source code, see the Rustdoc API Reference.

Installation

Add fugue-evo to your Rust project using Cargo.

Basic Installation

Add the following to your Cargo.toml:

[dependencies]
fugue-evo = "0.1"
rand = "0.8"

The rand crate is required for random number generation in evolutionary algorithms.

Feature Flags

Fugue-evo provides several optional features:

[dependencies]
fugue-evo = { version = "0.1", features = ["parallel", "checkpoint"] }

Available Features

Feature    | Default | Description
std        | Yes     | Standard library support
parallel   | Yes     | Rayon-based parallel fitness evaluation
checkpoint | Yes     | Save/restore evolution state to files

Minimal Installation

For embedded or no-std environments:

[dependencies]
fugue-evo = { version = "0.1", default-features = false }

Full Installation

Since std, parallel, and checkpoint are all enabled by default, the basic installation already includes everything; to list the features explicitly:

[dependencies]
fugue-evo = { version = "0.1", features = ["std", "parallel", "checkpoint"] }

WASM Support

For browser-based optimization, use the WASM package:

[dependencies]
fugue-evo-wasm = "0.1"

Or install via npm for JavaScript projects:

npm install fugue-evo-wasm

See WASM & Browser Usage for detailed setup instructions.

Verifying Installation

Create a simple test program to verify your installation:

use fugue_evo::prelude::*;

fn main() {
    // Create a simple real vector genome
    let genome = RealVector::new(vec![1.0, 2.0, 3.0]);
    println!("Genome: {:?}", genome.genes());

    // Create bounds for optimization
    let bounds = MultiBounds::symmetric(5.0, 3);
    println!("Bounds: {:?}", bounds);

    println!("fugue-evo is working!");
}

Run with:

cargo run

Development Installation

To work with the latest development version:

[dependencies]
fugue-evo = { git = "https://github.com/fugue-evo/fugue-evo" }

Next Steps

Now that fugue-evo is installed, continue to Core Concepts for the library's fundamental abstractions, then the Quick Start to run your first optimization.

Core Concepts

This guide explains the fundamental concepts and abstractions in fugue-evo. Understanding these will help you use the library effectively and create custom components.

The Evolutionary Loop

At its core, a genetic algorithm follows an iterative process:

┌───────────────────────────────────────────────────────────┐
│                     EVOLUTIONARY LOOP                     │
│                                                           │
│  ┌──────────┐    ┌───────────┐    ┌──────────────────┐    │
│  │Initialize│ -> │ Evaluate  │ -> │    Selection     │    │
│  │Population│    │  Fitness  │    │                  │    │
│  └──────────┘    └───────────┘    └────────┬─────────┘    │
│                                            │              │
│  ┌──────────┐    ┌───────────┐    ┌────────▼─────────┐    │
│  │  Check   │ <- │  Replace  │ <- │    Crossover     │    │
│  │Terminate │    │Population │    │   & Mutation     │    │
│  └──────────┘    └───────────┘    └──────────────────┘    │
└───────────────────────────────────────────────────────────┘

Fugue-evo provides abstractions for each step while handling the loop orchestration.

Genomes

A genome represents a candidate solution to your optimization problem. The EvolutionaryGenome trait is the core abstraction:

pub trait EvolutionaryGenome: Clone + Send + Sync {
    /// Convert genome to a Fugue trace for probabilistic operations
    fn to_trace(&self) -> Trace;

    /// Reconstruct genome from a trace
    fn from_trace(trace: &Trace) -> Result<Self, GenomeError>;
}

Built-in Genome Types

Type        | Use Case                | Example
RealVector  | Continuous optimization | Function minimization
BitString   | Binary optimization     | Feature selection, knapsack
Permutation | Ordering problems       | TSP, job scheduling
TreeGenome  | Genetic programming     | Symbolic regression

Example: RealVector

use fugue_evo::prelude::*;

// Create a 5-dimensional real vector
let genome = RealVector::new(vec![1.0, 2.0, 3.0, 4.0, 5.0]);

// Access genes
println!("Genes: {:?}", genome.genes());
println!("Dimension: {}", genome.dimension());

// Vector operations
let other = RealVector::new(vec![0.5, 0.5, 0.5, 0.5, 0.5]);
let distance = genome.euclidean_distance(&other);

Fitness Functions

A fitness function evaluates how good a solution is. Higher fitness = better solution.

pub trait Fitness<G>: Send + Sync {
    type Value: FitnessValue;

    fn evaluate(&self, genome: &G) -> Self::Value;
}

Built-in Benchmarks

Fugue-evo includes standard optimization benchmarks:

// Sphere function: f(x) = Σxᵢ²
// Global minimum at origin
let sphere = Sphere::new(10);

// Rastrigin function: highly multimodal
let rastrigin = Rastrigin::new(10);

// Rosenbrock function: curved valley
let rosenbrock = Rosenbrock::new(10);

Custom Fitness

Implement Fitness for your own evaluation:

struct MyFitness;

impl Fitness<RealVector> for MyFitness {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        // Your evaluation logic here
        let genes = genome.genes();
        // Return fitness (higher = better)
        -genes.iter().map(|x| x * x).sum::<f64>()
    }
}

Operators

Operators manipulate genomes to explore the solution space.

Selection

Selection chooses parents for reproduction based on fitness:

// Tournament: randomly sample k individuals, select best
let selection = TournamentSelection::new(3);

// Roulette wheel: probability proportional to fitness
let selection = RouletteWheelSelection::new();

// Rank-based: probability based on fitness rank
let selection = RankSelection::new();
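
To make the mechanics concrete, here is a minimal, dependency-free sketch of tournament selection. This is not fugue-evo's actual implementation — the `lcg` helper and function names are illustrative:

```rust
/// Tiny linear congruential generator, used only to keep this sketch
/// free of external dependencies.
fn lcg(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state >> 33
}

/// Tournament selection: sample `k` individuals uniformly at random
/// (with replacement) and return the index of the fittest one
/// (higher fitness = better).
fn tournament_select(fitness: &[f64], k: usize, state: &mut u64) -> usize {
    let mut best = (lcg(state) as usize) % fitness.len();
    for _ in 1..k {
        let challenger = (lcg(state) as usize) % fitness.len();
        if fitness[challenger] > fitness[best] {
            best = challenger;
        }
    }
    best
}
```

Raising `k` raises selection pressure: the winner is the best of `k` random samples, so weak individuals are chosen less often.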

Crossover

Crossover combines two parents to create offspring:

// SBX: Simulated Binary Crossover for real values
let crossover = SbxCrossover::new(20.0); // eta parameter

// Uniform: swap genes with 50% probability
let crossover = UniformCrossover::new();

// One-point: single crossover point
let crossover = OnePointCrossover::new();

Mutation

Mutation introduces random variation:

// Polynomial mutation for real values
let mutation = PolynomialMutation::new(20.0);

// Gaussian noise
let mutation = GaussianMutation::new(0.1); // std dev

// Bit-flip for binary genomes
let mutation = BitFlipMutation::new(0.01); // per-bit rate
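
The per-bit rate is easiest to see written out. This sketch is illustrative (not fugue-evo's BitFlipMutation); to stay deterministic and dependency-free it takes precomputed uniform samples instead of drawing from an RNG:

```rust
/// Bit-flip mutation: flip each bit independently with probability `rate`.
/// `draws` supplies one uniform [0,1) sample per bit.
fn bit_flip(bits: &mut [bool], draws: &[f64], rate: f64) {
    for (bit, &u) in bits.iter_mut().zip(draws) {
        if u < rate {
            *bit = !*bit;
        }
    }
}
```

With a rate of 0.01, each bit has a 1% chance of flipping, so a 100-bit genome flips about one bit per mutation on average.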

Bounds

For continuous optimization, bounds constrain the search space:

// Symmetric bounds: [-5.12, 5.12] for all dimensions
let bounds = MultiBounds::symmetric(5.12, 10);

// Asymmetric bounds per dimension
let bounds = MultiBounds::new(vec![
    Bounds::new(-10.0, 10.0),
    Bounds::new(0.0, 100.0),
    Bounds::new(-1.0, 1.0),
]);

// Uniform bounds for all dimensions
let bounds = MultiBounds::uniform(Bounds::new(0.0, 1.0), 5);
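
A common use of bounds is repairing genomes after crossover or mutation pushes genes outside the search space. A minimal sketch of per-dimension clamping (the `Interval` type is illustrative, not fugue-evo's Bounds):

```rust
/// A closed interval [lo, hi].
#[derive(Clone, Copy)]
struct Interval {
    lo: f64,
    hi: f64,
}

impl Interval {
    fn clamp(&self, x: f64) -> f64 {
        x.max(self.lo).min(self.hi)
    }
}

/// Clamp each gene to its per-dimension interval.
fn repair(genes: &mut [f64], bounds: &[Interval]) {
    for (g, b) in genes.iter_mut().zip(bounds) {
        *g = b.clamp(*g);
    }
}
```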

Population

A population is a collection of individuals being evolved:

// Create random population within bounds
let mut population: Population<RealVector, f64> =
    Population::random(100, &bounds, &mut rng);

// Evaluate all individuals
population.evaluate(&fitness);

// Access best individual
if let Some(best) = population.best() {
    println!("Best fitness: {}", best.fitness_value());
}

Termination Criteria

Define when evolution should stop:

// Fixed number of generations
let term = MaxGenerations::new(500);

// Target fitness achieved
let term = TargetFitness::new(-0.001); // for minimization

// Fitness stagnation (no improvement for N generations)
let term = FitnessStagnation::new(50);

// Combine criteria
let term = AnyOf::new(vec![
    Box::new(MaxGenerations::new(1000)),
    Box::new(TargetFitness::new(0.0)),
]);
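
The combinator pattern behind AnyOf is simple to sketch in plain Rust. The trait and types below are illustrative stand-ins, not fugue-evo's actual termination API:

```rust
/// A termination criterion: should evolution stop now?
trait Terminate {
    fn should_stop(&self, generation: usize, best_fitness: f64) -> bool;
}

/// Stop after a fixed number of generations.
struct MaxGens(usize);
impl Terminate for MaxGens {
    fn should_stop(&self, generation: usize, _best: f64) -> bool {
        generation >= self.0
    }
}

/// Stop once the best fitness reaches a target (maximization).
struct Target(f64);
impl Terminate for Target {
    fn should_stop(&self, _generation: usize, best: f64) -> bool {
        best >= self.0
    }
}

/// Stop as soon as any inner criterion fires.
struct Any(Vec<Box<dyn Terminate>>);
impl Terminate for Any {
    fn should_stop(&self, generation: usize, best: f64) -> bool {
        self.0.iter().any(|t| t.should_stop(generation, best))
    }
}
```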

Builder Pattern

Algorithms are configured using type-safe builders:

let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .population_size(100)      // Population size
    .bounds(bounds)            // Search space
    .selection(selection)      // Selection operator
    .crossover(crossover)      // Crossover operator
    .mutation(mutation)        // Mutation operator
    .fitness(fitness)          // Fitness function
    .max_generations(200)      // Termination criterion
    .elitism(true)             // Keep best individual
    .elite_count(2)            // Number of elites
    .build()?                  // Build algorithm
    .run(&mut rng)?;           // Execute

Results

After evolution completes, results provide comprehensive statistics:

let result = ga.run(&mut rng)?;

println!("Best fitness: {}", result.best_fitness);
println!("Best genome: {:?}", result.best_genome);
println!("Generations: {}", result.generations);
println!("Evaluations: {}", result.evaluations);

// Detailed statistics
println!("{}", result.stats.summary());

Next Steps

Now that you understand the core concepts, continue to the Quick Start to put them into practice.

Quick Start

This guide walks you through running your first optimization with fugue-evo in under 5 minutes.

Prerequisites

Ensure you have installed fugue-evo.

The Problem: Minimize the Sphere Function

The Sphere function is a classic optimization benchmark:

f(x) = x₁² + x₂² + ... + xₙ²

The global minimum is at the origin (all zeros) with a value of 0.
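
The objective itself is a one-liner. Here it is written directly in Rust — just the formula, independent of the library's built-in Sphere type:

```rust
/// Sphere function: f(x) = Σ xᵢ². Global minimum f(0) = 0.
fn sphere(x: &[f64]) -> f64 {
    x.iter().map(|xi| xi * xi).sum()
}
```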

Full Example

Here's the complete code to optimize the Sphere function:

//! Sphere Function Optimization
//!
//! This example demonstrates basic continuous optimization using the Simple GA
//! to minimize the Sphere function (sum of squares).
//!
//! The Sphere function is a simple unimodal, convex, and separable benchmark
//! that's easy to optimize but useful for verifying the GA is working correctly.

use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Sphere Function Optimization ===\n");

    // Create a seeded RNG for reproducibility
    let mut rng = StdRng::seed_from_u64(42);

    // Define the problem dimension
    const DIM: usize = 10;

    // Create the fitness function (Sphere minimizes to 0 at origin).
    // fugue-evo maximizes by default, so Sphere negates its value internally.
    let fitness = Sphere::new(DIM);

    // Define search bounds: each dimension in [-5.12, 5.12]
    let bounds = MultiBounds::symmetric(5.12, DIM);

    // Build and run the Simple GA
    let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
        .population_size(100)
        .bounds(bounds)
        .selection(TournamentSelection::new(3))
        .crossover(SbxCrossover::new(20.0))
        .mutation(PolynomialMutation::new(20.0))
        .fitness(fitness)
        .max_generations(200)
        .build()?
        .run(&mut rng)?;

    // Print results
    println!("Optimization complete!");
    println!("  Best fitness: {:.6}", result.best_fitness);
    println!("  Generations:  {}", result.generations);
    println!("  Evaluations:  {}", result.evaluations);
    println!("\nBest solution:");
    for (i, val) in result.best_genome.genes().iter().enumerate() {
        println!("  x[{}] = {:.6}", i, val);
    }

    // The optimal solution is at the origin with fitness 0
    let distance_from_optimum: f64 = result
        .best_genome
        .genes()
        .iter()
        .map(|x| x * x)
        .sum::<f64>()
        .sqrt();
    println!("\nDistance from optimum: {:.6}", distance_from_optimum);

    // Show convergence statistics
    println!("\n{}", result.stats.summary());

    Ok(())
}

Source: examples/sphere_optimization.rs

Running the Example

Run the example directly:

cargo run --example sphere_optimization

Expected output:

=== Sphere Function Optimization ===

Optimization complete!
  Best fitness: -0.000023
  Generations:  200
  Evaluations:  20000

Best solution:
  x[0] = 0.001234
  x[1] = -0.000567
  ...

Distance from optimum: 0.004567

Code Breakdown

1. Imports and Setup

use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;

let mut rng = StdRng::seed_from_u64(42);

The prelude imports everything you need. We use a seeded RNG for reproducibility.

2. Define the Problem

const DIM: usize = 10;
let fitness = Sphere::new(DIM);
let bounds = MultiBounds::symmetric(5.12, DIM);

  • DIM: Number of variables to optimize
  • Sphere::new(DIM): Built-in benchmark function
  • MultiBounds::symmetric(5.12, DIM): Search space [-5.12, 5.12] per dimension

3. Configure the Algorithm

let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .population_size(100)
    .bounds(bounds)
    .selection(TournamentSelection::new(3))
    .crossover(SbxCrossover::new(20.0))
    .mutation(PolynomialMutation::new(20.0))
    .fitness(fitness)
    .max_generations(200)
    .build()?
    .run(&mut rng)?;

Setting         | Value            | Purpose
population_size | 100              | Number of candidate solutions
selection       | Tournament(3)    | Select best of 3 random individuals
crossover       | SBX(20.0)        | Simulated Binary Crossover
mutation        | Polynomial(20.0) | Polynomial mutation
max_generations | 200              | When to stop

4. Analyze Results

println!("Best fitness: {:.6}", result.best_fitness);
println!("Generations:  {}", result.generations);

for (i, val) in result.best_genome.genes().iter().enumerate() {
    println!("  x[{}] = {:.6}", i, val);
}

Understanding the Output

Both the best fitness and the individual solution values should be close to 0, since the Sphere function's global minimum sits at the origin.

What if Results Aren't Good?

If the solution isn't converging well:

  1. Increase population size: More diversity helps exploration
  2. Increase generations: More time to converge
  3. Adjust mutation: Higher rates for exploration, lower for exploitation
  4. Try different selection pressure: Higher tournament size = more exploitation

Next Steps

Continue to Your First Optimization to build a complete optimization with a custom fitness function.

Your First Optimization

In this guide, you'll create a complete optimization from scratch with a custom fitness function. We'll optimize a simple engineering problem: finding the dimensions of a box with maximum volume given a surface area constraint.

The Problem

Design a rectangular box with:

  • Maximum volume: V = l × w × h
  • Fixed surface area: 2(lw + wh + lh) = 100

This is a constrained optimization problem that we'll handle using a penalty method.
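
The penalty method folds the constraint into the objective: the fitness is the quantity we want to maximize minus a weight times the constraint violation. Stripped of the library types, the idea looks like this (names are illustrative; the full BoxVolumeFitness version appears below):

```rust
/// Penalty-method fitness for the box problem: reward volume, penalize
/// deviation of the surface area from its target.
fn box_fitness(l: f64, w: f64, h: f64, target_area: f64, penalty: f64) -> f64 {
    let volume = l * w * h;
    let area = 2.0 * (l * w + w * h + l * h);
    volume - penalty * (area - target_area).abs()
}
```

A box that badly violates the constraint scores far below any feasible box, so the search is steered back toward the feasible region.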

Step 1: Create the Project

cargo new box_optimizer
cd box_optimizer

Add dependencies to Cargo.toml:

[dependencies]
fugue-evo = "0.1"
rand = "0.8"

Step 2: Define the Fitness Function

Create src/main.rs:

use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;

/// Custom fitness function for box volume optimization
struct BoxVolumeFitness {
    target_surface_area: f64,
    penalty_weight: f64,
}

impl BoxVolumeFitness {
    fn new(target_surface_area: f64) -> Self {
        Self {
            target_surface_area,
            penalty_weight: 1000.0, // Heavy penalty for constraint violation
        }
    }
}

impl Fitness<RealVector> for BoxVolumeFitness {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        let genes = genome.genes();
        let length = genes[0];
        let width = genes[1];
        let height = genes[2];

        // Calculate volume (what we want to maximize)
        let volume = length * width * height;

        // Calculate surface area
        let surface_area = 2.0 * (length * width + width * height + length * height);

        // Constraint violation (should be close to target)
        let violation = (surface_area - self.target_surface_area).abs();

        // Fitness = volume - penalty * violation
        // We maximize fitness, so volume is positive and violation is penalized
        volume - self.penalty_weight * violation
    }
}

Why This Works

  • Volume maximization: Higher volume = higher fitness
  • Constraint handling: Penalty for deviating from target surface area
  • Penalty weight: Large enough to make constraint violations costly

Step 3: Set Up the Optimization

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Box Volume Optimization ===\n");

    let mut rng = StdRng::seed_from_u64(42);

    // Problem setup
    let target_surface_area = 100.0;
    let fitness = BoxVolumeFitness::new(target_surface_area);

    // Search bounds: dimensions between 0.1 and 10
    let bounds = MultiBounds::uniform(Bounds::new(0.1, 10.0), 3);

    println!("Problem: Maximize box volume with surface area = {}", target_surface_area);
    println!("Search space: [0.1, 10.0] for each dimension\n");

    // Configure and run GA
    let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
        .population_size(100)
        .bounds(bounds)
        .selection(TournamentSelection::new(3))
        .crossover(SbxCrossover::new(20.0))
        .mutation(PolynomialMutation::new(20.0).with_probability(0.1))
        .fitness(fitness)
        .max_generations(300)
        .elitism(true)
        .elite_count(2)
        .build()?
        .run(&mut rng)?;

    // Display results
    print_results(&result, target_surface_area);

    Ok(())
}

Step 4: Display Results

fn print_results(result: &EvolutionResult<RealVector, f64>, target_area: f64) {
    let genes = result.best_genome.genes();
    let length = genes[0];
    let width = genes[1];
    let height = genes[2];

    let volume = length * width * height;
    let surface_area = 2.0 * (length * width + width * height + length * height);

    println!("=== Optimization Results ===\n");
    println!("Best solution found:");
    println!("  Length: {:.4}", length);
    println!("  Width:  {:.4}", width);
    println!("  Height: {:.4}", height);
    println!();
    println!("Metrics:");
    println!("  Volume:       {:.4}", volume);
    println!("  Surface Area: {:.4} (target: {})", surface_area, target_area);
    println!("  Constraint violation: {:.4}", (surface_area - target_area).abs());
    println!();
    println!("Evolution statistics:");
    println!("  Generations: {}", result.generations);
    println!("  Evaluations: {}", result.evaluations);
    println!("  Best fitness: {:.4}", result.best_fitness);

    // For a cube with surface area 100, the optimal solution is:
    // 6s² = 100 → s = √(100/6) ≈ 4.08
    // Volume = s³ ≈ 68.04
    let optimal_side = (target_area / 6.0).sqrt();
    let optimal_volume = optimal_side.powi(3);
    println!();
    println!("Theoretical optimum (cube):");
    println!("  Side length: {:.4}", optimal_side);
    println!("  Volume: {:.4}", optimal_volume);
    println!("  Your solution is {:.2}% of optimal", (volume / optimal_volume) * 100.0);
}

Step 5: Run It

cargo run

Expected output:

=== Box Volume Optimization ===

Problem: Maximize box volume with surface area = 100
Search space: [0.1, 10.0] for each dimension

=== Optimization Results ===

Best solution found:
  Length: 4.0824
  Width:  4.0819
  Height: 4.0827

Metrics:
  Volume:       68.0352
  Surface Area: 99.9987 (target: 100)
  Constraint violation: 0.0013

Evolution statistics:
  Generations: 300
  Evaluations: 30000
  Best fitness: 68.0339

Theoretical optimum (cube):
  Side length: 4.0825
  Volume: 68.0414
  Your solution is 99.99% of optimal

Understanding the Result

The GA discovered that a cube maximizes volume for a given surface area - a well-known mathematical result! The three dimensions converged to approximately equal values.
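
The comparison in print_results relies on the closed-form optimum. Since the cube is optimal (by the AM-GM inequality on the three face areas), the best side length is √(S/6), which a couple of lines verify numerically:

```rust
/// Closed-form optimum for the box problem: a cube with side √(S/6)
/// and volume (S/6)^(3/2), where S is the fixed surface area.
fn cube_optimum(surface_area: f64) -> (f64, f64) {
    let side = (surface_area / 6.0).sqrt();
    (side, side.powi(3))
}
```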

Experimenting

Try modifying the problem:

1. Different Surface Area

let fitness = BoxVolumeFitness::new(200.0); // Larger box

2. More Generations

.max_generations(500) // More refinement

3. Constrained Dimensions

What if height must be less than length?

impl Fitness<RealVector> for BoxVolumeFitness {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        let genes = genome.genes();
        let length = genes[0];
        let width = genes[1];
        let height = genes[2];

        // Height constraint: h <= l
        let height_violation = if height > length {
            height - length
        } else {
            0.0
        };

        let volume = length * width * height;
        let surface_area = 2.0 * (length * width + width * height + length * height);
        let area_violation = (surface_area - self.target_surface_area).abs();

        volume - self.penalty_weight * (area_violation + height_violation)
    }
}

Complete Source Code

The full example is available at examples/box_optimization.rs.

Next Steps

Congratulations! You've created your first custom optimization. Continue learning with the Continuous Optimization Tutorial.

Continuous Optimization Tutorial

This tutorial covers optimizing functions with real-valued variables. You'll learn how to set up, run, and analyze continuous optimization problems.

The Sphere Function

We'll start with the Sphere function, a simple unimodal benchmark:

f(x) = Σxᵢ² = x₁² + x₂² + ... + xₙ²

Properties:

  • Global minimum: 0 at origin (all zeros)
  • Unimodal: Single optimum, no local minima
  • Convex: Any local minimum is global
  • Separable: Each variable is independent

Complete Example

//! Sphere Function Optimization
//!
//! This example demonstrates basic continuous optimization using the Simple GA
//! to minimize the Sphere function (sum of squares).
//!
//! The Sphere function is a simple unimodal, convex, and separable benchmark
//! that's easy to optimize but useful for verifying the GA is working correctly.

use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Sphere Function Optimization ===\n");

    // Create a seeded RNG for reproducibility
    let mut rng = StdRng::seed_from_u64(42);

    // Define the problem dimension
    const DIM: usize = 10;

    // Create the fitness function (Sphere minimizes to 0 at origin).
    // fugue-evo maximizes by default, so Sphere negates its value internally.
    let fitness = Sphere::new(DIM);

    // Define search bounds: each dimension in [-5.12, 5.12]
    let bounds = MultiBounds::symmetric(5.12, DIM);

    // Build and run the Simple GA
    let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
        .population_size(100)
        .bounds(bounds)
        .selection(TournamentSelection::new(3))
        .crossover(SbxCrossover::new(20.0))
        .mutation(PolynomialMutation::new(20.0))
        .fitness(fitness)
        .max_generations(200)
        .build()?
        .run(&mut rng)?;

    // Print results
    println!("Optimization complete!");
    println!("  Best fitness: {:.6}", result.best_fitness);
    println!("  Generations:  {}", result.generations);
    println!("  Evaluations:  {}", result.evaluations);
    println!("\nBest solution:");
    for (i, val) in result.best_genome.genes().iter().enumerate() {
        println!("  x[{}] = {:.6}", i, val);
    }

    // The optimal solution is at the origin with fitness 0
    let distance_from_optimum: f64 = result
        .best_genome
        .genes()
        .iter()
        .map(|x| x * x)
        .sum::<f64>()
        .sqrt();
    println!("\nDistance from optimum: {:.6}", distance_from_optimum);

    // Show convergence statistics
    println!("\n{}", result.stats.summary());

    Ok(())
}

Source: examples/sphere_optimization.rs

Running the Example

cargo run --example sphere_optimization

Understanding the Components

Search Bounds

let bounds = MultiBounds::symmetric(5.12, DIM);

This creates bounds [-5.12, 5.12] for each of the 10 dimensions. The Sphere function is typically tested in this range.

Operators for Real-Valued Optimization

Simulated Binary Crossover (SBX)

SbxCrossover::new(20.0)

SBX creates offspring similar to parent values. The distribution index (eta = 20.0) controls spread:

  • Higher eta → offspring closer to parents (exploitation)
  • Lower eta → more diverse offspring (exploration)
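
The spread can be illustrated for a single variable. This is a sketch of the standard SBX equations (given parents and one uniform sample u), not the library's internal code:

```rust
/// One-variable SBX: given parents p1, p2 and a uniform sample u in [0,1),
/// produce two offspring. Larger eta concentrates beta near 1, so the
/// children stay closer to their parents.
fn sbx(p1: f64, p2: f64, eta: f64, u: f64) -> (f64, f64) {
    let beta = if u <= 0.5 {
        (2.0 * u).powf(1.0 / (eta + 1.0))
    } else {
        (1.0 / (2.0 * (1.0 - u))).powf(1.0 / (eta + 1.0))
    };
    (
        0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2),
        0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2),
    )
}
```

Note that the offspring mean always equals the parent mean; eta only controls how far the pair spreads around it.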

Polynomial Mutation

PolynomialMutation::new(20.0)

Adds bounded perturbations to gene values. The distribution index controls mutation magnitude.
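
The perturbation follows the same distribution-index idea as SBX. A simplified single-gene sketch of the standard formulation (the full operator also scales the step by the distance to the nearest bound):

```rust
/// Polynomial mutation for one gene: a bounded perturbation whose typical
/// magnitude shrinks as eta grows. `u` is a uniform sample in [0,1).
fn poly_mutate(x: f64, lo: f64, hi: f64, eta: f64, u: f64) -> f64 {
    let delta = if u < 0.5 {
        (2.0 * u).powf(1.0 / (eta + 1.0)) - 1.0
    } else {
        1.0 - (2.0 * (1.0 - u)).powf(1.0 / (eta + 1.0))
    };
    // delta lies in [-1, 1]; scale by the range and keep the gene in bounds.
    (x + delta * (hi - lo)).clamp(lo, hi)
}
```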

Fitness Function

let fitness = Sphere::new(DIM);

Built-in Sphere computes the sum of squares. Since GAs maximize by default, the function is negated internally.

Analyzing Results

println!("Best fitness: {:.6}", result.best_fitness);
println!("Generations:  {}", result.generations);
println!("Evaluations:  {}", result.evaluations);

Key metrics:

  • Best fitness: Should approach 0 (the global minimum)
  • Generations: How many evolutionary cycles completed
  • Evaluations: Total fitness function calls

Convergence Statistics

println!("{}", result.stats.summary());

The summary shows:

  • Mean and standard deviation per generation
  • Diversity metrics
  • Improvement trends

Parameter Tuning

Population Size

.population_size(100)

Size             | Trade-off
Small (20-50)    | Fast generations, less diversity
Medium (100-200) | Good balance
Large (500+)     | More exploration, slower per generation

For Sphere (unimodal), smaller populations work well. Multimodal functions need larger populations.

Operator Parameters

SBX Distribution Index

.crossover(SbxCrossover::new(eta))

eta   | Effect
1-5   | High exploration
15-25 | Balanced
50+   | High exploitation

Mutation Probability

.mutation(PolynomialMutation::new(20.0).with_probability(0.1))

The default mutation probability is 1/dimension; increase it for more exploration.

Elitism

.elitism(true)
.elite_count(2)

Elitism preserves the best individuals across generations, preventing loss of good solutions.
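
The mechanism is a small post-processing step on each generation. An illustrative sketch (not fugue-evo's implementation): copy the elite_count fittest individuals over the first slots of the new population — a production implementation would typically replace the weakest offspring instead:

```rust
/// Copy the `elite_count` fittest individuals of the old generation
/// unchanged into the next generation.
fn apply_elitism(
    old_fitness: &[f64],
    old_pop: &[Vec<f64>],
    next_pop: &mut [Vec<f64>],
    elite_count: usize,
) {
    // Sort indices by fitness, best (highest) first.
    let mut idx: Vec<usize> = (0..old_fitness.len()).collect();
    idx.sort_by(|&a, &b| old_fitness[b].partial_cmp(&old_fitness[a]).unwrap());
    for (slot, &i) in idx.iter().take(elite_count).enumerate() {
        next_pop[slot] = old_pop[i].clone();
    }
}
```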

Common Issues

Premature Convergence

Symptoms: Population converges too quickly, stuck at suboptimal solution

Solutions:

  1. Increase population size
  2. Lower selection pressure (smaller tournament size)
  3. Increase mutation probability

Slow Convergence

Symptoms: Fitness improves very slowly

Solutions:

  1. Increase selection pressure
  2. Reduce mutation (more exploitation)
  3. Use elitism to preserve progress

Exercises

  1. Change dimensions: Modify DIM to 30 and observe how convergence changes
  2. Adjust operators: Try eta values of 5 and 50 for crossover
  3. Compare selections: Replace TournamentSelection with RouletteWheelSelection

Next Steps

Continue to the Multimodal Optimization Tutorial to see how these techniques hold up on a much harder landscape.

Multimodal Optimization Tutorial

This tutorial tackles the Rastrigin function, a challenging multimodal benchmark with many local optima. You'll learn strategies for escaping local optima and finding global solutions.

The Rastrigin Function

The Rastrigin function is defined as:

f(x) = 10n + Σ[xᵢ² - 10cos(2πxᵢ)]

Properties:

  • Global minimum: 0 at origin
  • Highly multimodal: ~10ⁿ local minima!
  • Non-separable: Variables interact through cosine terms
  • Deceptive: Local minima look similar to global minimum
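
Written out directly in Rust (just the formula, independent of the library's Rastrigin type), the landscape's structure is easy to probe — every integer coordinate sits near a local minimum:

```rust
use std::f64::consts::PI;

/// Rastrigin: f(x) = 10n + Σ [xᵢ² − 10·cos(2πxᵢ)].
/// Global minimum 0 at the origin.
fn rastrigin(x: &[f64]) -> f64 {
    10.0 * x.len() as f64
        + x.iter()
            .map(|xi| xi * xi - 10.0 * (2.0 * PI * xi).cos())
            .sum::<f64>()
}
```

For example, the point (1, 0, …, 0) is a local minimum with value 1 — deceptively close to the global minimum of 0, which is what traps naive optimizers.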

The Challenge

With 20 dimensions, the Rastrigin function has approximately 10²⁰ local minima. A naive optimization will likely get trapped in one of these local minima.

Complete Example

//! Rastrigin Function Benchmark
//!
//! This example demonstrates optimization of the highly multimodal Rastrigin
//! function, a challenging benchmark that tests the GA's ability to escape
//! local optima.
//!
//! The Rastrigin function has many local minima but a single global minimum
//! at the origin.

use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Rastrigin Function Benchmark ===\n");

    let mut rng = StdRng::seed_from_u64(12345);

    const DIM: usize = 20;

    // Rastrigin is a highly multimodal function
    let fitness = Rastrigin::new(DIM);
    let bounds = MultiBounds::symmetric(5.12, DIM);

    println!("Problem: {} dimensions", DIM);
    println!("Search space: [-5.12, 5.12]^{}", DIM);
    println!("Global optimum: 0.0 at origin\n");

    // Run GA with larger population for multimodal landscape
    let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
        .population_size(200)
        .bounds(bounds)
        .selection(TournamentSelection::new(5)) // Higher pressure
        .crossover(SbxCrossover::new(15.0)) // More exploration
        .mutation(PolynomialMutation::new(20.0).with_probability(0.1))
        .fitness(fitness)
        .max_generations(500)
        .elitism(true)
        .elite_count(2)
        .build()?
        .run(&mut rng)?;

    println!("Results:");
    println!("  Best fitness:   {:.6}", result.best_fitness);
    println!("  Generations:    {}", result.generations);
    println!("  Evaluations:    {}", result.evaluations);

    // Check how close we got to the global optimum
    let max_deviation = result
        .best_genome
        .genes()
        .iter()
        .map(|x| x.abs())
        .fold(0.0f64, |a, b| a.max(b));

    println!("\nSolution quality:");
    println!("  Max deviation from origin: {:.6}", max_deviation);

    // Print fitness progress
    let history = result.stats.best_fitness_history();
    println!("\nFitness progression (every 50 generations):");
    for (i, fitness) in history.iter().enumerate() {
        if i % 50 == 0 {
            println!("  Gen {:4}: {:.6}", i, fitness);
        }
    }

    Ok(())
}

Source: examples/rastrigin_benchmark.rs

Running the Example

cargo run --example rastrigin_benchmark

Key Strategies

1. Larger Population

.population_size(200)

More individuals means more parallel exploration of the search space. For multimodal functions, this helps maintain diversity and cover more basins of attraction.

2. Higher Selection Pressure

.selection(TournamentSelection::new(5))

Tournament size of 5 (vs. 3 for Sphere) increases selection pressure, helping the population converge faster on good regions once found.

3. More Exploration in Crossover

.crossover(SbxCrossover::new(15.0))

Lower distribution index (15 vs. 20) creates more diverse offspring, helping explore new regions.

4. Elitism

.elitism(true)
.elite_count(2)

Critical for multimodal optimization! Without elitism, the best solution can be lost due to selection randomness.

5. More Generations

.max_generations(500)

Multimodal problems need more time to find and refine global optima.

Understanding Results

Tracking Progress

let history = result.stats.best_fitness_history();
for (i, fitness) in history.iter().enumerate() {
    if i % 50 == 0 {
        println!("  Gen {:4}: {:.6}", i, fitness);
    }
}

Watch for:

  • Rapid early improvement: Finding good basins
  • Plateaus: Stuck in local optima
  • Jumps: Escaping to better regions

Solution Quality

let max_deviation = result
    .best_genome
    .genes()
    .iter()
    .map(|x| x.abs())
    .fold(0.0f64, |a, b| a.max(b));

println!("Max deviation from origin: {:.6}", max_deviation);

For Rastrigin, each gene should be close to 0. Large deviations indicate the solution is in a local optimum.
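Because those local minima sit near integer coordinates, rounding each gene is a quick diagnostic for which basin it landed in (a hypothetical helper, not part of the library API):

```rust
/// Identify the basin of attraction for each gene: Rastrigin's local
/// minima sit near integer coordinates, so rounding tells us which basin
/// a gene converged into (0 is the global basin).
fn basins(genes: &[f64]) -> Vec<i64> {
    genes.iter().map(|&g| g.round() as i64).collect()
}

fn main() {
    let genes = [0.02, -0.97, 0.01, 1.98];
    let b = basins(&genes);
    // Count how many genes escaped the global basin.
    let trapped = b.iter().filter(|&&v| v != 0).count();
    println!("basins: {:?}, genes in local optima: {}", b, trapped);
}
```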

Alternative Approaches

Island Model

For heavily multimodal problems, consider the Island Model:

// Multiple populations with periodic migration
let result = IslandModelBuilder::<RealVector, _, _, _, _, f64>::new()
    .num_islands(4)
    .island_population_size(50)
    .migration_interval(25)
    .migration_policy(MigrationPolicy::Best(2))
    // ... operators
    .build(&mut rng)?
    .run(200, &mut rng)?;

See Island Model Tutorial for details.

Restart Strategy

Manual restarts can help escape deep local optima:

let mut best_overall = f64::NEG_INFINITY;
let mut best_genome = None;

for restart in 0..5 {
    let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
        // ... configuration
        .build()?
        .run(&mut rng)?;

    if result.best_fitness > best_overall {
        best_overall = result.best_fitness;
        best_genome = Some(result.best_genome);
    }

    println!("Restart {}: {:.6}", restart, result.best_fitness);
}

Parameter Guidelines for Multimodal Functions

Parameter             Unimodal   Multimodal
Population            50-100     150-300
Tournament size       2-3        4-7
SBX eta               15-25      10-15
Mutation probability  1/n        1.5/n - 2/n
Generations           100-300    300-1000
Elitism               Optional   Essential

Diagnosing Problems

Stuck in Local Optima

Symptoms:

  • Fitness plateaus early
  • Solution values sit near nonzero integers (the locations of Rastrigin's local minima)

Solutions:

  1. Increase mutation probability
  2. Use Island Model
  3. Try different random seeds
  4. Add restarts

Loss of Diversity

Symptoms:

  • All individuals become similar
  • No improvement despite many generations

Solutions:

  1. Increase population size
  2. Lower selection pressure
  3. Add diversity maintenance (niching)

Exercises

  1. Vary dimensions: Compare results for DIM = 10, 20, 30
  2. Compare seeds: Run with 5 different seeds and analyze variance
  3. Population study: Try populations of 50, 100, 200, 400

Next Steps

Advanced Algorithms

Fugue-evo includes several advanced optimization algorithms beyond the Simple GA. This guide provides an overview and helps you choose the right algorithm for your problem.

Algorithm Comparison

Algorithm     Best For                    Key Advantage
SimpleGA      General-purpose             Flexible, well-understood
CMA-ES        Continuous (non-separable)  Learns problem structure
NSGA-II       Multi-objective             Pareto front discovery
Island Model  Multimodal                  Maintains diversity
EDA/UMDA      Separable problems          Distribution learning

CMA-ES

Covariance Matrix Adaptation Evolution Strategy is a state-of-the-art algorithm for continuous optimization.

When to use:

  • Continuous real-valued optimization
  • Non-separable problems (variables interact)
  • Problems where second-order information helps
  • When you don't need custom operators

Key features:

  • Adapts step size automatically
  • Learns covariance structure (variable correlations)
  • Typically requires fewer evaluations than GA

See CMA-ES Tutorial for details.

NSGA-II

Non-dominated Sorting Genetic Algorithm II is designed for multi-objective optimization.

When to use:

  • Multiple conflicting objectives
  • Need the full Pareto front
  • Trade-off analysis between objectives

Key features:

  • Pareto dominance ranking
  • Crowding distance for diversity
  • Constraint handling support

See Multi-Objective Optimization for details.

Island Model

Island Model runs multiple populations in parallel with periodic migration.

When to use:

  • Highly multimodal problems
  • Parallel computation available
  • Need to maintain diversity

Key features:

  • Multiple independent populations
  • Configurable topology (Ring, Star, Complete)
  • Migration policies

See Island Model Tutorial for details.

EDA/UMDA

Estimation of Distribution Algorithm builds a probabilistic model of good solutions.

When to use:

  • Separable problems
  • When crossover disrupts good solutions
  • Discrete optimization

Key features:

  • No crossover operator needed
  • Builds statistical model of population
  • Good for problems with building blocks

Algorithm Selection Guide

                    ┌─────────────────────┐
                    │   Is the problem    │
                    │   multi-objective?  │
                    └──────────┬──────────┘
                               │
              ┌────────────────┼────────────────┐
              │ Yes                             │ No
              ▼                                 ▼
        ┌───────────┐                 ┌─────────────────────┐
        │  NSGA-II  │                 │ Is it continuous    │
        └───────────┘                 │ real-valued?        │
                                      └──────────┬──────────┘
                                                 │
                         ┌───────────────────────┼───────────────────────┐
                         │ Yes                                           │ No
                         ▼                                               ▼
               ┌─────────────────────┐                         ┌─────────────────┐
               │ Is it non-separable │                         │    SimpleGA     │
               │ or ill-conditioned? │                         │ (with BitString │
               └──────────┬──────────┘                         │ or Permutation) │
                          │                                    └─────────────────┘
          ┌───────────────┼───────────────┐
          │ Yes                           │ No
          ▼                               ▼
    ┌───────────┐               ┌─────────────────────┐
    │  CMA-ES   │               │ Is it highly        │
    └───────────┘               │ multimodal?         │
                                └──────────┬──────────┘
                                           │
                       ┌───────────────────┼───────────────────┐
                       │ Yes                                   │ No
                       ▼                                       ▼
               ┌─────────────────┐                    ┌─────────────────┐
               │  Island Model   │                    │    SimpleGA     │
               │   or Restarts   │                    └─────────────────┘
               └─────────────────┘

Combining Algorithms

Sometimes the best approach is to combine algorithms:

1. CMA-ES After GA

Use GA for global exploration, then CMA-ES for local refinement:

// Phase 1: GA for exploration
let ga_result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .population_size(100)
    .max_generations(50)
    // ...
    .run(&mut rng)?;

// Phase 2: CMA-ES from best GA solution
let initial_mean = ga_result.best_genome.genes().to_vec();
let mut cmaes = CmaEs::new(initial_mean, 0.5);
let final_result = cmaes.run_generations(&fitness, 100, &mut rng)?;

2. Island Model with Different Algorithms

Run different configurations on different islands:

// Create islands with varied parameters
let model = IslandModelBuilder::<RealVector, _, _, _, _, f64>::new()
    .num_islands(4)
    .migration_interval(25)
    // Each island evolves differently due to RNG
    .build(&mut rng)?;

Performance Considerations

Evaluation Cost

If fitness evaluation is expensive:

  • CMA-ES: Typically needs fewer evaluations
  • Smaller populations: Fewer evaluations per generation
  • Surrogate models: Approximate fitness for pre-selection

Parallelism

If you have multiple CPU cores:

  • Island Model: Natural parallelism
  • Parallel evaluation: Enable with parallel feature
  • Multiple runs: Compare results from different seeds

Memory

For very high-dimensional problems:

  • CMA-ES: O(n²) memory for covariance matrix
  • SimpleGA: O(population × n) memory
  • Consider dimensionality reduction for n > 1000
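A rough back-of-the-envelope for the covariance cost, assuming f64 entries:

```rust
/// Approximate memory for an n x n covariance matrix of f64 values, in bytes.
fn cma_covariance_bytes(n: usize) -> usize {
    n * n * std::mem::size_of::<f64>()
}

fn main() {
    // The O(n^2) matrix dominates CMA-ES memory use at high dimension.
    for n in [100usize, 1_000, 10_000] {
        println!("n = {:5}: ~{} bytes", n, cma_covariance_bytes(n));
    }
}
```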

Next Steps

Dive into specific algorithms:

CMA-ES Tutorial

Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is one of the most powerful algorithms for continuous optimization, especially for non-separable and ill-conditioned problems.

When to Use CMA-ES

Ideal for:

  • Non-separable problems (variables are correlated)
  • Ill-conditioned problems (different scaling per variable)
  • Medium-dimensional problems (10-100 variables)
  • When you want automatic parameter adaptation

Not ideal for:

  • Very high-dimensional problems (n > 1000)
  • Discrete or combinatorial problems
  • Problems requiring custom genetic operators

How CMA-ES Works

CMA-ES maintains a multivariate normal distribution over the search space and iteratively:

  1. Sample new solutions from the distribution
  2. Evaluate and rank solutions by fitness
  3. Update the distribution mean toward better solutions
  4. Adapt the covariance matrix to learn variable correlations
  5. Adjust the step size (sigma) based on progress

The covariance matrix captures the shape of the fitness landscape, allowing efficient search even when variables are correlated.
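To make the loop concrete, here is a deliberately stripped-down evolution strategy on the sphere function — identity covariance and a fixed step-size decay standing in for real covariance and step-size adaptation — showing only the sample/rank/update skeleton (illustrative only, not fugue-evo's `CmaEs`; the inline LCG avoids external crates):

```rust
/// Tiny linear congruential generator so the sketch needs no external crates.
struct Lcg(u64);

impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }

    /// Approximate standard normal via the sum of 12 uniforms.
    fn gauss(&mut self) -> f64 {
        (0..12).map(|_| self.next_f64()).sum::<f64>() - 6.0
    }
}

fn sphere(x: &[f64]) -> f64 {
    x.iter().map(|v| v * v).sum()
}

/// Steps 1-3 of the loop above, with C = I and a fixed sigma decay
/// in place of real covariance/step-size adaptation.
fn simplified_es(dim: usize, generations: usize) -> f64 {
    let (lambda, mu): (usize, usize) = (20, 5);
    let mut rng = Lcg(42);
    let mut mean = vec![1.0; dim];
    let mut sigma = 1.0;
    for _ in 0..generations {
        // 1. Sample lambda offspring around the current mean.
        let mut pop: Vec<Vec<f64>> = (0..lambda)
            .map(|_| mean.iter().map(|m| m + sigma * rng.gauss()).collect())
            .collect();
        // 2. Rank by fitness (minimizing).
        pop.sort_by(|a, b| sphere(a).partial_cmp(&sphere(b)).unwrap());
        // 3. Move the mean toward the mu best samples.
        for d in 0..dim {
            mean[d] = pop[..mu].iter().map(|x| x[d]).sum::<f64>() / mu as f64;
        }
        sigma *= 0.97; // fixed decay stands in for real step-size adaptation
    }
    sphere(&mean)
}

fn main() {
    println!("final sphere value: {:e}", simplified_es(5, 150));
}
```

Full CMA-ES additionally performs steps 4 and 5, updating the covariance matrix and step size from the selected samples.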

Complete Example

//! CMA-ES Optimization Example
//!
//! This example demonstrates the Covariance Matrix Adaptation Evolution
//! Strategy (CMA-ES), a state-of-the-art derivative-free optimization
//! algorithm for continuous domains.
//!
//! CMA-ES adapts both the mean and covariance of a multivariate normal
//! distribution to efficiently search the fitness landscape.

use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== CMA-ES Optimization Example ===\n");

    let mut rng = StdRng::seed_from_u64(42);

    // Rosenbrock function - a classic test for optimization algorithms
    // The global minimum is at (1, 1, ..., 1) with value 0
    const DIM: usize = 10;

    println!("Problem: {}-D Rosenbrock function", DIM);
    println!("Global optimum: 0.0 at (1, 1, ..., 1)\n");

    // Create CMA-ES optimizer
    let initial_mean = vec![0.0; DIM]; // Start at origin
    let initial_sigma = 0.5; // Initial step size
    let bounds = MultiBounds::symmetric(5.0, DIM);

    // Create fitness function that CMA-ES can use
    let fitness = RosenbrockCmaEs { dim: DIM };

    let mut cmaes = CmaEs::new(initial_mean, initial_sigma).with_bounds(bounds);

    // Run optimization
    let best = cmaes.run_generations(&fitness, 1000, &mut rng)?;

    println!("Results:");
    println!("  Best fitness (minimized): {:.10}", best.fitness_value());
    println!("  Generations: {}", cmaes.state.generation);
    println!("  Evaluations: {}", cmaes.state.evaluations);
    println!("  Final sigma: {:.6}", cmaes.state.sigma);

    println!("\nBest solution:");
    for (i, val) in best.genome.genes().iter().enumerate().take(5) {
        println!("  x[{}] = {:.6}", i, val);
    }
    if DIM > 5 {
        println!("  ... ({} more dimensions)", DIM - 5);
    }

    // Calculate distance from optimal solution
    let distance_from_opt: f64 = best
        .genome
        .genes()
        .iter()
        .map(|x| (x - 1.0).powi(2))
        .sum::<f64>()
        .sqrt();

    println!("\nDistance from optimum: {:.10}", distance_from_opt);

    Ok(())
}

/// Rosenbrock function wrapper for CMA-ES
struct RosenbrockCmaEs {
    dim: usize,
}

impl CmaEsFitness for RosenbrockCmaEs {
    fn evaluate(&self, x: &RealVector) -> f64 {
        let genes = x.genes();
        let mut sum = 0.0;
        for i in 0..self.dim - 1 {
            let term1 = genes[i + 1] - genes[i] * genes[i];
            let term2 = 1.0 - genes[i];
            sum += 100.0 * term1 * term1 + term2 * term2;
        }
        sum // CMA-ES minimizes, so return positive value
    }
}

Source: examples/cma_es_example.rs

Running the Example

cargo run --example cma_es_example

Key Components

Initialization

let initial_mean = vec![0.0; DIM];  // Starting point
let initial_sigma = 0.5;             // Initial step size
let bounds = MultiBounds::symmetric(5.0, DIM);

let mut cmaes = CmaEs::new(initial_mean, initial_sigma)
    .with_bounds(bounds);

Parameters:

  • initial_mean: Starting center of the search distribution
  • initial_sigma: Initial standard deviation (step size)
  • bounds: Optional constraints on the search space

Fitness Function

CMA-ES uses a different fitness trait than GA:

struct RosenbrockCmaEs { dim: usize }

impl CmaEsFitness for RosenbrockCmaEs {
    fn evaluate(&self, x: &RealVector) -> f64 {
        // Return value to MINIMIZE (not maximize!)
        let genes = x.genes();
        let mut sum = 0.0;
        for i in 0..self.dim - 1 {
            let term1 = genes[i + 1] - genes[i] * genes[i];
            let term2 = 1.0 - genes[i];
            sum += 100.0 * term1 * term1 + term2 * term2;
        }
        sum
    }
}

Important: CMA-ES minimizes by default (unlike GA which maximizes).
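If your objective naturally maximizes, negating it adapts it for a minimizer; a generic sketch (the `negated` helper is hypothetical, not part of fugue-evo):

```rust
/// Adapt a maximizing objective for a minimizer by negating it.
/// (Hypothetical helper -- fugue-evo's actual traits may differ.)
fn negated<F: Fn(&[f64]) -> f64>(maximize: F) -> impl Fn(&[f64]) -> f64 {
    move |x: &[f64]| -maximize(x)
}

fn main() {
    // A maximizing objective with its peak of 1.0 at the origin.
    let max_obj = |x: &[f64]| -> f64 { 1.0 - x.iter().map(|v| v * v).sum::<f64>() };
    // The minimizer's best point is now also the origin.
    let min_obj = negated(max_obj);
    println!("at origin: {}", min_obj(&[0.0, 0.0]));
}
```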

Running Optimization

let best = cmaes.run_generations(&fitness, 1000, &mut rng)?;

println!("Best fitness: {:.10}", best.fitness_value());
println!("Final sigma: {:.6}", cmaes.state.sigma);

The Rosenbrock Function

The example optimizes the Rosenbrock function:

f(x) = Σ[100(xᵢ₊₁ - xᵢ²)² + (1 - xᵢ)²]

Properties:

  • Global minimum: f(1,1,...,1) = 0
  • Curved valley: The optimum lies in a narrow, curved valley
  • Non-separable: Variables are strongly coupled
  • Tests algorithm's ability to learn correlations

CMA-ES excels at Rosenbrock because it learns the valley's shape through covariance adaptation.

Understanding CMA-ES Output

Sigma (Step Size)

println!("Final sigma: {:.6}", cmaes.state.sigma);

  • High sigma: still exploring with large steps
  • Low sigma: converging, fine-tuning
  • Oscillating sigma: may indicate a scaling problem

Generations vs Evaluations

println!("Generations: {}", cmaes.state.generation);
println!("Evaluations: {}", cmaes.state.evaluations);

CMA-ES typically samples λ individuals per generation:

  • Default λ ≈ 4 + 3ln(n) for n dimensions
  • Total evaluations ≈ generations × λ
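The standard default is λ = 4 + ⌊3·ln(n)⌋, handy for budgeting evaluations:

```rust
/// Default CMA-ES population size: lambda = 4 + floor(3 * ln(n)).
fn default_lambda(n: usize) -> usize {
    4 + (3.0 * (n as f64).ln()).floor() as usize
}

fn main() {
    for n in [10usize, 20, 50, 100] {
        let lambda = default_lambda(n);
        println!(
            "n = {:3}: lambda = {:2}, 1000 generations ~ {} evaluations",
            n, lambda, 1000 * lambda
        );
    }
}
```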

Tuning CMA-ES

Initial Step Size (Sigma)

let initial_sigma = 0.5;

Guidelines:

  • Should cover roughly 1/3 of the search space
  • Too small: Slow convergence, may miss global optimum
  • Too large: Wasted evaluations exploring infeasible regions

For bounds [-5, 5], sigma = (5 - (-5)) / 6 ≈ 1.67 or smaller.
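That rule of thumb — roughly one sixth of the bound width, so ±3σ spans the whole range — is easy to encode:

```rust
/// Heuristic initial step size: one sixth of the search-range width,
/// so that +/- 3 sigma covers roughly the full range.
fn suggested_sigma(lo: f64, hi: f64) -> f64 {
    (hi - lo) / 6.0
}

fn main() {
    println!("bounds [-5, 5]       -> sigma ~ {:.2}", suggested_sigma(-5.0, 5.0));
    println!("bounds [-5.12, 5.12] -> sigma ~ {:.2}", suggested_sigma(-5.12, 5.12));
}
```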

Initial Mean

let initial_mean = vec![0.0; DIM];

If you have prior knowledge about good regions, start there:

let initial_mean = vec![1.0; DIM];  // Near expected optimum

Population Size

Default population is typically sufficient, but can be increased for multimodal problems:

let mut cmaes = CmaEs::new(initial_mean, initial_sigma)
    .with_population_size(100);  // More exploration

Comparison with GA

Aspect        CMA-ES                    Simple GA
Operators     Automatic adaptation      Manual configuration
Correlations  Learns automatically      Requires special operators
Evaluations   Usually fewer             Usually more
Flexibility   Fixed framework           Highly customizable
Memory        O(n²)                     O(population × n)
Best for      Non-separable continuous  General purpose

Advanced Usage

Early Stopping

for gen in 0..max_generations {
    cmaes.step(&fitness, &mut rng)?;

    // Check for convergence
    if cmaes.state.sigma < 1e-8 {
        println!("Converged at generation {}", gen);
        break;
    }
}

Accessing Distribution Parameters

let mean = &cmaes.state.mean;  // Current distribution center
let sigma = cmaes.state.sigma;  // Current step size
let generation = cmaes.state.generation;

Exercises

  1. Compare with GA: Run both CMA-ES and SimpleGA on Rosenbrock, compare evaluations needed
  2. Step size study: Try initial sigma values of 0.1, 0.5, 2.0, 5.0
  3. Higher dimensions: Increase DIM to 20, 50 and observe scaling

Next Steps

Island Model Tutorial

The Island Model runs multiple populations (islands) in parallel with periodic migration of individuals between them. This approach helps maintain diversity and escape local optima in multimodal problems.

When to Use Island Model

Ideal for:

  • Highly multimodal problems
  • When single-population GA gets trapped
  • Parallel computation environments
  • Problems requiring diversity maintenance

Trade-offs:

  • More complex setup
  • Migration parameters require tuning
  • Higher total population size

How It Works

┌─────────────────────────────────────────────────────────────┐
│                       ISLAND MODEL                           │
│                                                              │
│   ┌─────────┐      ┌─────────┐      ┌─────────┐           │
│   │ Island 1│◄────►│ Island 2│◄────►│ Island 3│           │
│   │  Pop=50 │      │  Pop=50 │      │  Pop=50 │           │
│   └────┬────┘      └─────────┘      └────┬────┘           │
│        │                                  │                 │
│        └──────────────────────────────────┘                │
│                      Migration                              │
│                  (every N generations)                      │
└─────────────────────────────────────────────────────────────┘

Each island:

  1. Evolves independently for several generations
  2. Periodically sends/receives individuals to/from neighbors
  3. Continues evolving with new genetic material
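The evolve/migrate cycle can be sketched independently of the library; here an island is just a vector of fitness values, and each island's best individual ring-migrates to replace its neighbor's worst (illustrative only, not fugue-evo's API):

```rust
/// Ring migration sketch: each island's best individual (by fitness,
/// maximizing) is copied to the next island and replaces its worst.
/// Bests are computed before any replacement so migration order
/// doesn't matter.
fn migrate_ring(islands: &mut Vec<Vec<f64>>) {
    let n = islands.len();
    let best: Vec<f64> = islands
        .iter()
        .map(|isl| isl.iter().cloned().fold(f64::NEG_INFINITY, f64::max))
        .collect();
    for i in 0..n {
        let dest = (i + 1) % n;
        // Find the destination island's worst individual...
        let worst_idx = islands[dest]
            .iter()
            .enumerate()
            .min_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(idx, _)| idx);
        // ...and replace it with our best.
        if let Some(idx) = worst_idx {
            islands[dest][idx] = best[i];
        }
    }
}

fn main() {
    let mut islands = vec![vec![5.0, 1.0], vec![3.0, 2.0], vec![9.0, 0.0]];
    migrate_ring(&mut islands);
    println!("{:?}", islands);
}
```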

Complete Example

//! Island Model Parallelism
//!
//! This example demonstrates the island model for parallel evolution,
//! where multiple subpopulations evolve independently with periodic
//! migration of individuals between islands.
//!
//! Island models can help maintain diversity and escape local optima.

use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Island Model Parallelism ===\n");

    let mut rng = StdRng::seed_from_u64(42);

    // Rastrigin - multimodal function benefits from island diversity
    const DIM: usize = 20;
    let fitness = Rastrigin::new(DIM);
    let bounds = MultiBounds::symmetric(5.12, DIM);

    println!("Problem: {}-D Rastrigin", DIM);
    println!("Configuration:");
    println!("  Islands: 4");
    println!("  Population per island: 50");
    println!("  Migration interval: 25 generations");
    println!("  Migration policy: Best(2)\n");

    // Create island model
    let mut island_model = IslandModelBuilder::<RealVector, _, _, _, _, f64>::new()
        .num_islands(4)
        .island_population_size(50)
        .topology(MigrationTopology::Ring)
        .migration_interval(25)
        .migration_policy(MigrationPolicy::Best(2))
        .bounds(bounds.clone())
        .selection(TournamentSelection::new(3))
        .crossover(SbxCrossover::new(15.0))
        .mutation(PolynomialMutation::new(20.0))
        .fitness(fitness)
        .build(&mut rng)?;

    // Run the island model
    let best = island_model.run(200, &mut rng)?;
    let best_fitness = *best.fitness_value();
    let generations = island_model.generation;

    println!("Results:");
    println!("  Best fitness: {:.6}", best_fitness);
    println!("  Generations:  {}", generations);

    // Compare with single-population GA
    println!("\n--- Comparison with single population ---");
    let mut rng2 = StdRng::seed_from_u64(42);
    let fitness2 = Rastrigin::new(DIM);
    let bounds2 = MultiBounds::symmetric(5.12, DIM);

    let single_result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
        .population_size(200) // Same total population
        .bounds(bounds2)
        .selection(TournamentSelection::new(3))
        .crossover(SbxCrossover::new(15.0))
        .mutation(PolynomialMutation::new(20.0))
        .fitness(fitness2)
        .max_generations(200)
        .build()?
        .run(&mut rng2)?;

    println!("Single population best: {:.6}", single_result.best_fitness);
    println!("Island model best:      {:.6}", best_fitness);

    if best_fitness > single_result.best_fitness {
        println!("\nIsland model found better solution!");
    } else {
        println!("\nSingle population found better solution.");
    }

    Ok(())
}

Source: examples/island_model.rs

Running the Example

cargo run --example island_model

Key Components

Configuration

let mut island_model = IslandModelBuilder::<RealVector, _, _, _, _, f64>::new()
    .num_islands(4)                            // Number of populations
    .island_population_size(50)                // Size per island
    .topology(MigrationTopology::Ring)         // Connection pattern
    .migration_interval(25)                    // Generations between migrations
    .migration_policy(MigrationPolicy::Best(2)) // What to migrate
    .bounds(bounds.clone())
    .selection(TournamentSelection::new(3))
    .crossover(SbxCrossover::new(15.0))
    .mutation(PolynomialMutation::new(20.0))
    .fitness(fitness)
    .build(&mut rng)?;

Migration Topologies

MigrationTopology::Ring

Topology        Pattern                              Best For
Ring            Each island connects to 2 neighbors  General purpose, good diversity
Star            Central hub connects to all          Fast information spread
FullyConnected  Everyone connects to everyone        Maximum mixing

Ring Topology:

    1 ←→ 2
    ↕     ↕
    4 ←→ 3

Star Topology:

    2   3
     \ /
      1
     / \
    5   4
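In a ring of n islands, island i trades migrants with its two neighbors, which is just modular arithmetic (illustrative helper, not library code):

```rust
/// The two ring neighbors of island `i` out of `n` islands.
fn ring_neighbors(i: usize, n: usize) -> (usize, usize) {
    ((i + n - 1) % n, (i + 1) % n)
}

fn main() {
    let n = 4;
    for i in 0..n {
        let (left, right) = ring_neighbors(i, n);
        println!("island {} <-> islands {} and {}", i, left, right);
    }
}
```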

Migration Policies

MigrationPolicy::Best(2)

Policy      Description
Best(n)     Send n best individuals
Random(n)   Send n random individuals
Worst(n)    Replace n worst with immigrants

Migration Interval

.migration_interval(25)

  • Short interval (5-10): frequent mixing, faster convergence
  • Long interval (50-100): more independent evolution, more diversity
  • Typical: 20-50 generations

Understanding the Comparison

The example compares Island Model with a single population:

// Island Model: 4 islands × 50 = 200 total
let mut island_model = IslandModelBuilder::new()
    .num_islands(4)
    .island_population_size(50)
    // ...

// Single Population: 200 total
let single_result = SimpleGABuilder::new()
    .population_size(200)
    // ...

Same total population size, different structure. For multimodal problems like Rastrigin, islands often find better solutions because:

  1. Diversity preservation: Islands explore different regions
  2. Niching effect: Each island can specialize in a local optimum
  3. Genetic variety: Migration introduces new genetic material

Tuning Island Model

Number of Islands

.num_islands(4)

Guidelines:

  • 2-4 islands: Good for most problems
  • 4-8 islands: Highly multimodal problems
  • More islands = more diversity but slower convergence

Island Population Size

.island_population_size(50)

Each island should be large enough to:

  • Maintain genetic diversity
  • Support effective selection
  • Typically 30-100 individuals

Migration Rate

The effective migration rate is:

migration_rate = migrants_per_interval / (island_size × interval)

For Best(2) with size 50 and interval 25:

rate = 2 / (50 × 25) = 0.16%

  • Too high: islands become homogeneous
  • Too low: islands don't share discoveries
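The formula above, as a tiny helper:

```rust
/// Fraction of an island's individual-generations replaced by immigrants:
/// migrants per interval / (island size * interval length).
fn migration_rate(migrants: usize, island_size: usize, interval: usize) -> f64 {
    migrants as f64 / (island_size as f64 * interval as f64)
}

fn main() {
    // Best(2) with island size 50 and interval 25, as in the example above.
    let rate = migration_rate(2, 50, 25);
    println!("effective migration rate: {:.2}%", rate * 100.0);
}
```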

Advanced Patterns

Heterogeneous Islands

Run different configurations on each island:

// This is a conceptual example
// Each island uses different operator parameters
let island_configs = vec![
    SbxCrossover::new(10.0),  // Explorative
    SbxCrossover::new(20.0),  // Balanced
    SbxCrossover::new(30.0),  // Exploitative
    SbxCrossover::new(15.0),  // Balanced
];

Adaptive Migration

Adjust migration based on progress:

for gen in 0..max_generations {
    island_model.step(&mut rng)?;

    // Migrate more frequently if stuck
    if gen % check_interval == 0 {
        let improvement = check_improvement(&island_model);
        if improvement < threshold {
            island_model.force_migration(&mut rng);
        }
    }
}

Performance Comparison

For the 20-D Rastrigin function:

Configuration      Typical Best Fitness   Notes
Single Pop (200)   -15 to -25             Often stuck in local optima
Island 4×50        -5 to -15              Better exploration
Island 8×25        -8 to -18              More diversity, slower convergence

Results vary by run due to randomness.

Exercises

  1. Topology comparison: Compare Ring, Star, and FullyConnected on Rastrigin
  2. Migration interval: Try intervals of 10, 25, 50, 100
  3. Island count: Compare 2, 4, 8, 16 islands with same total population

Next Steps

Genetic Programming Tutorial

Genetic Programming (GP) evolves tree-structured programs rather than fixed-length vectors. This tutorial demonstrates symbolic regression - discovering mathematical expressions that fit data.

When to Use Genetic Programming

Ideal for:

  • Symbolic regression (finding formulas)
  • Automated program synthesis
  • Discovery of mathematical relationships
  • Feature construction for machine learning

Challenges:

  • Bloat (trees grow unnecessarily large)
  • More complex operators
  • Larger search spaces

The Problem: Symbolic Regression

Given input-output pairs, find a mathematical expression that fits the data.

Target function:

f(x) = x² + 2x + 1

Training data:

x: -5, -4, ..., 4, 5
y: f(x) for each x

Goal: Rediscover the formula (or an equivalent one) from data alone.

Complete Example

//! Symbolic Regression with Genetic Programming
//!
//! This example demonstrates genetic programming using tree genomes
//! to discover symbolic expressions that fit data.
//!
//! The goal is to find a mathematical expression that approximates
//! a target function from input-output examples.

use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::Rng;
use rand::SeedableRng;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Symbolic Regression with GP ===\n");

    let mut rng = StdRng::seed_from_u64(42);

    // Generate training data from target function: f(x) = x^2 + 2*x + 1
    let training_data: Vec<(f64, f64)> = (-5..=5)
        .map(|i| {
            let x = i as f64;
            let y = x * x + 2.0 * x + 1.0; // Target function
            (x, y)
        })
        .collect();

    println!("Target function: f(x) = x^2 + 2*x + 1");
    println!("Training points: {}", training_data.len());
    println!();

    // Create fitness function for symbolic regression
    let fitness = SymbolicRegressionFitness::new(training_data.clone());

    // Create initial population of random trees
    let mut population: Vec<TreeGenome<ArithmeticTerminal, ArithmeticFunction>> = Vec::new();
    for _ in 0..100 {
        // Mix of full and grow initialization
        let tree = if rng.gen::<bool>() {
            TreeGenome::generate_full(&mut rng, 3, 6)
        } else {
            TreeGenome::generate_grow(&mut rng, 5, 0.3)
        };
        population.push(tree);
    }

    // Evaluate initial population
    let mut pop_with_fitness: Vec<(TreeGenome<ArithmeticTerminal, ArithmeticFunction>, f64)> =
        population
            .iter()
            .map(|tree| (tree.clone(), fitness.evaluate(tree)))
            .collect();

    pop_with_fitness.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());

    println!("Initial population stats:");
    println!("  Best fitness: {:.6}", pop_with_fitness[0].1);
    println!("  Best expr: {}", pop_with_fitness[0].0.to_sexpr());
    println!();

    // Simple evolution loop
    let max_generations = 50;
    let tournament_size = 5;
    let mutation_rate = 0.2;
    let crossover_rate = 0.8;

    for gen in 0..max_generations {
        let mut new_pop = Vec::new();

        // Elitism: keep best
        new_pop.push(pop_with_fitness[0].0.clone());

        while new_pop.len() < 100 {
            // Tournament selection
            let parent1 = tournament_select(&pop_with_fitness, tournament_size, &mut rng);
            let parent2 = tournament_select(&pop_with_fitness, tournament_size, &mut rng);

            let mut child = if rng.gen::<f64>() < crossover_rate {
                // Subtree crossover
                subtree_crossover(parent1, parent2, &mut rng)
            } else {
                parent1.clone()
            };

            // Point mutation
            if rng.gen::<f64>() < mutation_rate {
                point_mutate(&mut child, &mut rng);
            }

            // Limit tree depth
            if child.depth() <= 10 {
                new_pop.push(child);
            } else {
                new_pop.push(parent1.clone());
            }
        }

        // Evaluate new population
        pop_with_fitness = new_pop
            .iter()
            .map(|tree| (tree.clone(), fitness.evaluate(tree)))
            .collect();

        pop_with_fitness.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());

        if (gen + 1) % 10 == 0 {
            println!(
                "Gen {:3}: Best = {:.6}, Size = {}, Expr = {}",
                gen + 1,
                pop_with_fitness[0].1,
                pop_with_fitness[0].0.size(),
                truncate_expr(&pop_with_fitness[0].0.to_sexpr(), 50)
            );
        }
    }

    println!("\n=== Final Result ===");
    let best = &pop_with_fitness[0];
    println!("Best fitness: {:.6}", best.1);
    println!("Best expression: {}", best.0.to_sexpr());
    println!("Tree size: {} nodes", best.0.size());
    println!("Tree depth: {}", best.0.depth());

    // Test the discovered function
    println!("\nComparison on test points:");
    println!(
        "{:>6} {:>12} {:>12} {:>12}",
        "x", "Target", "Predicted", "Error"
    );
    for x in [-3.5, -1.0, 0.0, 1.0, 2.5] {
        let target = x * x + 2.0 * x + 1.0;
        let predicted = best.0.evaluate(&[x]);
        let error = (target - predicted).abs();
        println!(
            "{:6.1} {:12.4} {:12.4} {:12.6}",
            x, target, predicted, error
        );
    }

    Ok(())
}

fn truncate_expr(s: &str, max_len: usize) -> String {
    if s.len() <= max_len {
        s.to_string()
    } else {
        format!("{}...", &s[..max_len - 3])
    }
}

fn tournament_select<'a>(
    pop: &'a [(TreeGenome<ArithmeticTerminal, ArithmeticFunction>, f64)],
    size: usize,
    rng: &mut StdRng,
) -> &'a TreeGenome<ArithmeticTerminal, ArithmeticFunction> {
    use rand::seq::SliceRandom;
    let contestants: Vec<_> = pop.choose_multiple(rng, size).collect();
    let best = contestants
        .iter()
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .unwrap();
    &best.0
}

fn subtree_crossover(
    parent1: &TreeGenome<ArithmeticTerminal, ArithmeticFunction>,
    parent2: &TreeGenome<ArithmeticTerminal, ArithmeticFunction>,
    rng: &mut StdRng,
) -> TreeGenome<ArithmeticTerminal, ArithmeticFunction> {
    let positions1 = parent1.root.positions();
    let positions2 = parent2.root.positions();

    if positions1.is_empty() || positions2.is_empty() {
        return parent1.clone();
    }

    let pos1 = &positions1[rng.gen_range(0..positions1.len())];
    let pos2 = &positions2[rng.gen_range(0..positions2.len())];

    if let Some(subtree) = parent2.root.get_subtree(pos2) {
        let mut new_root = parent1.root.clone();
        new_root.replace_subtree(pos1, subtree.clone());
        TreeGenome::new(new_root, parent1.max_depth.max(parent2.max_depth))
    } else {
        parent1.clone()
    }
}

fn point_mutate(tree: &mut TreeGenome<ArithmeticTerminal, ArithmeticFunction>, rng: &mut StdRng) {
    let positions = tree.root.positions();
    if positions.is_empty() {
        return;
    }

    let pos = &positions[rng.gen_range(0..positions.len())];

    // Replace with random subtree
    let new_subtree = if rng.gen::<bool>() {
        TreeNode::terminal(ArithmeticTerminal::random(rng))
    } else {
        let func = ArithmeticFunction::random(rng);
        let arity = func.arity();
        let children: Vec<_> = (0..arity)
            .map(|_| TreeNode::terminal(ArithmeticTerminal::random(rng)))
            .collect();
        TreeNode::function(func, children)
    };

    tree.root.replace_subtree(pos, new_subtree);
}

/// Fitness function for symbolic regression
struct SymbolicRegressionFitness {
    data: Vec<(f64, f64)>,
}

impl SymbolicRegressionFitness {
    fn new(data: Vec<(f64, f64)>) -> Self {
        Self { data }
    }

    fn evaluate(&self, tree: &TreeGenome<ArithmeticTerminal, ArithmeticFunction>) -> f64 {
        let mse: f64 = self
            .data
            .iter()
            .map(|(x, y)| {
                let predicted = tree.evaluate(&[*x]);
                if predicted.is_finite() {
                    (y - predicted).powi(2)
                } else {
                    1e6 // Penalty for invalid values
                }
            })
            .sum::<f64>()
            / self.data.len() as f64;

        // Negate MSE (we maximize fitness)
        // Add small parsimony pressure to prefer simpler trees
        let complexity_penalty = tree.size() as f64 * 0.001;
        -mse - complexity_penalty
    }
}

Source: examples/symbolic_regression.rs

Running the Example

cargo run --example symbolic_regression

Key Components

Tree Genome

TreeGenome<ArithmeticTerminal, ArithmeticFunction>

A tree genome consists of:

  • Terminals: Leaf nodes (variables like X, constants like 3.14)
  • Functions: Internal nodes (operations like +, *, sin)

Example tree for x² + 1:

       +
      / \
     *   1
    / \
   X   X

Tree Generation

// Full method: all leaves at same depth
let tree = TreeGenome::generate_full(&mut rng, 3, 6);

// Grow method: variable depth
let tree = TreeGenome::generate_grow(&mut rng, 5, 0.3);

Full method:

  • All branches have exactly the specified depth
  • Creates bushy, complete trees
  • Good initial diversity

Grow method:

  • Terminates probabilistically
  • Creates varied shapes
  • More natural tree sizes
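The grow method's probabilistic termination can be sketched outside the library over a toy node type. Everything below (the `Node` enum, the `p_term` parameter, the tiny xorshift RNG) is illustrative scaffolding, not fugue-evo's API:

```rust
// Toy expression node: a terminal leaf or a binary function with children.
#[derive(Debug)]
enum Node {
    Terminal(f64),
    Add(Box<Node>, Box<Node>),
}

// Minimal xorshift RNG so the sketch needs no external crates.
struct Rng(u64);
impl Rng {
    fn next_f64(&mut self) -> f64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

// Grow method: at each node, terminate early with probability `p_term`,
// and always terminate once the depth budget is exhausted.
fn grow(rng: &mut Rng, depth_left: u32, p_term: f64) -> Node {
    if depth_left == 0 || rng.next_f64() < p_term {
        Node::Terminal(rng.next_f64())
    } else {
        Node::Add(
            Box::new(grow(rng, depth_left - 1, p_term)),
            Box::new(grow(rng, depth_left - 1, p_term)),
        )
    }
}

// Depth of a tree, counting the root as depth 1.
fn depth(n: &Node) -> u32 {
    match n {
        Node::Terminal(_) => 1,
        Node::Add(l, r) => 1 + depth(l).max(depth(r)),
    }
}

fn main() {
    let mut rng = Rng(42);
    let tree = grow(&mut rng, 5, 0.3);
    // Grow trees respect the depth budget but vary in shape.
    assert!(depth(&tree) <= 6);
    println!("depth = {}", depth(&tree));
}
```

The full method is the same recursion with the early-termination branch removed, so every path consumes the whole depth budget.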

Fitness for Symbolic Regression

struct SymbolicRegressionFitness {
    data: Vec<(f64, f64)>,  // (x, y) pairs
}

impl SymbolicRegressionFitness {
    fn evaluate(&self, tree: &TreeGenome<...>) -> f64 {
        let mse: f64 = self.data.iter()
            .map(|(x, y)| {
                let predicted = tree.evaluate(&[*x]);
                (y - predicted).powi(2)
            })
            .sum::<f64>() / self.data.len() as f64;

        // Negate MSE (we maximize fitness)
        // Add parsimony pressure for simpler trees
        let complexity_penalty = tree.size() as f64 * 0.001;
        -mse - complexity_penalty
    }
}

Key elements:

  • Mean Squared Error (MSE) measures fit
  • Parsimony pressure discourages bloat
  • Negated because GA maximizes

Genetic Operators for Trees

Subtree Crossover:

fn subtree_crossover(parent1: &Tree, parent2: &Tree, rng: &mut Rng) -> Tree {
    // 1. Select random node in parent1
    // 2. Select random node in parent2
    // 3. Replace subtree at pos1 with subtree from pos2
}

Point Mutation:

fn point_mutate(tree: &mut Tree, rng: &mut Rng) {
    // 1. Select random node
    // 2. Replace with new random subtree
}

Understanding GP Output

Expression Representation

println!("Expression: {}", tree.to_sexpr());

S-expression format (Lisp-like):

  • (+ X 1) means X + 1
  • (* X X) means X * X
  • (+ (* X X) (+ (* 2 X) 1)) means X² + 2X + 1

Tree Metrics

println!("Tree size: {} nodes", tree.size());
println!("Tree depth: {}", tree.depth());

  • Size: Total number of nodes
  • Depth: Longest path from root to leaf

Watch for bloat: trees that keep growing without any corresponding fitness improvement.

Controlling Bloat

1. Parsimony Pressure

Penalize large trees in fitness:

let fitness = raw_fitness - tree.size() as f64 * penalty;

2. Depth Limits

Reject trees exceeding maximum depth:

if child.depth() <= 10 {
    new_pop.push(child);
} else {
    new_pop.push(parent.clone());
}

3. Tournament with Parsimony

Prefer smaller trees when fitness is similar:

fn tournament_with_parsimony(pop: &[Tree], k: usize) -> &Tree {
    let contestants = sample(pop, k);  // pseudocode: draw k random contestants
    contestants.iter()
        .max_by(|a, b| {
            let fitness_cmp = a.fitness.partial_cmp(&b.fitness).unwrap();
            if fitness_cmp == std::cmp::Ordering::Equal {
                b.size().cmp(&a.size())  // Prefer smaller
            } else {
                fitness_cmp
            }
        })
        .unwrap()
}

Available Functions and Terminals

Built-in Arithmetic

Terminals:

  • X: Input variable
  • Constant(f64): Numeric constants

Functions:

  • Add: Binary addition
  • Sub: Binary subtraction
  • Mul: Binary multiplication
  • Div: Protected division (returns 1 if divisor is 0)

Custom Function Sets

Define problem-specific primitives:

enum MyFunction {
    Add, Sub, Mul, Div,
    Sin, Cos, Exp, Log,
}

enum MyTerminal {
    X,
    Y,  // Multiple variables
    Constant(f64),
}
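A custom primitive set needs, at minimum, an arity and an evaluation rule per function. The sketch below shows the shape of that wiring over plain Rust enums; the `arity`/`apply` method names mirror the arithmetic set used earlier in this chapter but are illustrative, not a trait fugue-evo prescribes:

```rust
#[derive(Clone, Copy, Debug)]
enum MyFunction {
    Add,
    Mul,
    Sin,
}

impl MyFunction {
    // Number of child subtrees each function expects.
    fn arity(&self) -> usize {
        match self {
            MyFunction::Add | MyFunction::Mul => 2,
            MyFunction::Sin => 1,
        }
    }

    // Evaluate the function on already-evaluated child values.
    fn apply(&self, args: &[f64]) -> f64 {
        match self {
            MyFunction::Add => args[0] + args[1],
            MyFunction::Mul => args[0] * args[1],
            MyFunction::Sin => args[0].sin(),
        }
    }
}

fn main() {
    assert_eq!(MyFunction::Add.arity(), 2);
    assert_eq!(MyFunction::Add.apply(&[2.0, 3.0]), 5.0);
    assert_eq!(MyFunction::Sin.arity(), 1);
    println!("sin(0) = {}", MyFunction::Sin.apply(&[0.0]));
}
```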

GP Parameters

Parameter               Typical Range   Effect
Population              100-1000        More = better exploration
Tournament size         3-7             Selection pressure
Crossover rate          0.7-0.9         Combination vs mutation
Mutation rate           0.1-0.3         Exploration
Max depth               6-17            Complexity limit
Parsimony coefficient   0.0001-0.01     Bloat control

Common Issues

Bloat

Symptoms: Trees grow huge with no fitness improvement

Solutions:

  • Increase parsimony pressure
  • Stricter depth limits
  • Use tournament with size tie-breaker

Premature Convergence

Symptoms: Population becomes homogeneous early

Solutions:

  • Larger population
  • Higher mutation rate
  • Lower selection pressure

Numeric Instability

Symptoms: NaN or Inf in evaluations

Solutions:

  • Use protected division
  • Clamp outputs to reasonable range
  • Penalize extreme predictions
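The first two mitigations fit in a couple of helper functions. This is a generic sketch of the idea, not fugue-evo's built-in `Div` implementation (the threshold and clamp bounds are arbitrary choices):

```rust
// Protected division: fall back to 1.0 when the divisor is (near) zero,
// so evaluation never produces Inf or NaN from a division.
fn protected_div(a: f64, b: f64) -> f64 {
    if b.abs() < 1e-9 { 1.0 } else { a / b }
}

// Clamp an evaluated output into a sane range, mapping NaN to 0.0.
fn clamp_output(y: f64) -> f64 {
    if y.is_nan() { 0.0 } else { y.clamp(-1e6, 1e6) }
}

fn main() {
    assert_eq!(protected_div(4.0, 0.0), 1.0);
    assert_eq!(protected_div(4.0, 2.0), 2.0);
    assert_eq!(clamp_output(f64::INFINITY), 1e6);
    assert_eq!(clamp_output(f64::NAN), 0.0);
    println!("all protected-evaluation checks passed");
}
```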

Exercises

  1. Different target: Try f(x) = sin(x) or f(x) = x³
  2. Multiple variables: Extend to f(x, y) = x² + y²
  3. More functions: Add sin, cos, exp

Next Steps

Interactive Evolution Tutorial

Interactive Genetic Algorithms (IGA) incorporate human feedback into the evolutionary process. Instead of an automated fitness function, users evaluate candidates through ratings, comparisons, or selections.

When to Use Interactive Evolution

Ideal for:

  • Aesthetic optimization (art, design, music)
  • Subjective preferences (user interfaces)
  • Hard-to-formalize objectives
  • Creative exploration

Challenges:

  • User fatigue limits evaluations
  • Noisy, inconsistent feedback
  • Slower convergence than automated GA

Evaluation Modes

Fugue-evo supports multiple ways to gather user feedback:

Mode              User Action                 Best For
Rating            Score each candidate 1-10   Absolute quality assessment
Pairwise          Choose better of two        Relative comparisons
Batch Selection   Pick best N from batch      Quick approximate ranking

Complete Example

//! Interactive Genetic Algorithm Example
//!
//! This example demonstrates how to use the Interactive GA for human-in-the-loop
//! evolutionary optimization. Instead of an automated fitness function, users
//! provide feedback by rating, comparing, or selecting candidates.
//!
//! In this example, we simulate user feedback with a simple automated scorer,
//! but in a real application, you would present candidates to users via a UI.

use fugue_evo::genome::bounds::Bounds;
use fugue_evo::interactive::prelude::*;
use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;

/// Simulates user preference for solutions close to a target
struct SimulatedUserPreference {
    target: Vec<f64>,
}

impl SimulatedUserPreference {
    fn new(dim: usize) -> Self {
        // User prefers solutions where values are around 0.5
        Self {
            target: vec![0.5; dim],
        }
    }

    /// Simulate a user rating (1-10 scale)
    fn rate(&self, genome: &RealVector) -> f64 {
        let distance: f64 = genome
            .genes()
            .iter()
            .zip(self.target.iter())
            .map(|(g, t)| (g - t).powi(2))
            .sum::<f64>()
            .sqrt();

        // Convert distance to rating (closer = higher rating)
        let rating = 10.0 - (distance * 5.0).min(9.0);
        rating.max(1.0)
    }

    /// Simulate pairwise comparison
    fn compare(&self, a: &RealVector, b: &RealVector) -> std::cmp::Ordering {
        let rating_a = self.rate(a);
        let rating_b = self.rate(b);
        rating_a.partial_cmp(&rating_b).unwrap()
    }
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Interactive Genetic Algorithm Demo ===\n");

    let mut rng = StdRng::seed_from_u64(42);

    const DIM: usize = 5;
    let bounds = MultiBounds::uniform(Bounds::new(0.0, 1.0), DIM);

    // Create the simulated user preference
    let user = SimulatedUserPreference::new(DIM);

    // Build the Interactive GA
    let mut iga = InteractiveGABuilder::<RealVector, (), (), ()>::new()
        .population_size(12)
        .elitism_count(2)
        .evaluation_mode(EvaluationMode::Rating)
        .batch_size(4)
        .min_coverage(0.8)
        .max_generations(5)
        .aggregation_model(AggregationModel::DirectRating {
            default_rating: 5.0,
        })
        .bounds(bounds)
        .selection(TournamentSelection::new(2))
        .crossover(SbxCrossover::new(15.0))
        .mutation(PolynomialMutation::new(20.0))
        .build()?;

    println!("Starting interactive evolution...\n");
    println!(
        "Configuration: {} individuals, {} mode, {} generations max",
        iga.config().population_size,
        match iga.config().evaluation_mode {
            EvaluationMode::Rating => "rating",
            EvaluationMode::Pairwise => "pairwise",
            EvaluationMode::BatchSelection => "batch selection",
            EvaluationMode::Adaptive => "adaptive",
        },
        iga.config().max_generations
    );
    println!();

    // Main evolution loop
    loop {
        match iga.step(&mut rng) {
            StepResult::NeedsEvaluation(request) => {
                // In a real app, you'd present this to a user via UI
                // Here we simulate user feedback
                let response = simulate_user_response(&user, &request);
                iga.provide_response(response);
            }

            StepResult::GenerationComplete {
                generation,
                best_fitness,
                coverage,
            } => {
                println!(
                    "Generation {} complete: best = {:.2}, coverage = {:.0}%",
                    generation,
                    best_fitness.unwrap_or(0.0),
                    coverage * 100.0
                );
            }

            StepResult::Complete(result) => {
                println!("\n=== Evolution Complete ===");
                println!("Reason: {}", result.termination_reason);
                println!("Generations: {}", result.generations);
                println!("Total evaluations: {}", result.total_evaluations);
                println!("\nTop 3 candidates:");

                for (i, candidate) in result.best_candidates.iter().take(3).enumerate() {
                    println!(
                        "  #{}: fitness = {:.2}, genes = {:?}",
                        i + 1,
                        candidate.fitness_estimate.unwrap_or(0.0),
                        candidate
                            .genome
                            .genes()
                            .iter()
                            .map(|g| format!("{:.3}", g))
                            .collect::<Vec<_>>()
                            .join(", ")
                    );
                }
                break;
            }
        }
    }

    // Demonstrate different evaluation modes
    println!("\n=== Batch Selection Mode Demo ===\n");

    let mut iga_batch = InteractiveGABuilder::<RealVector, (), (), ()>::new()
        .population_size(12)
        .evaluation_mode(EvaluationMode::BatchSelection)
        .batch_size(6)
        .select_count(2)
        .min_coverage(0.5)
        .max_generations(3)
        .aggregation_model(AggregationModel::ImplicitRanking {
            selected_bonus: 1.0,
            not_selected_penalty: 0.3,
            base_fitness: 5.0,
        })
        .bounds(MultiBounds::uniform(Bounds::new(0.0, 1.0), DIM))
        .selection(TournamentSelection::new(2))
        .crossover(SbxCrossover::new(15.0))
        .mutation(PolynomialMutation::new(20.0))
        .build()?;

    loop {
        match iga_batch.step(&mut rng) {
            StepResult::NeedsEvaluation(request) => {
                let response = simulate_user_response(&user, &request);
                iga_batch.provide_response(response);
            }

            StepResult::GenerationComplete {
                generation,
                best_fitness,
                ..
            } => {
                println!(
                    "Generation {}: best = {:.2}",
                    generation,
                    best_fitness.unwrap_or(0.0)
                );
            }

            StepResult::Complete(result) => {
                println!("\nBatch selection mode complete!");
                println!(
                    "Best fitness: {:.2}",
                    result.best_candidates[0].fitness_estimate.unwrap_or(0.0)
                );
                break;
            }
        }
    }

    Ok(())
}

/// Simulate user response to an evaluation request
fn simulate_user_response(
    user: &SimulatedUserPreference,
    request: &EvaluationRequest<RealVector>,
) -> EvaluationResponse {
    match request {
        EvaluationRequest::RateCandidates { candidates, .. } => {
            let ratings: Vec<_> = candidates
                .iter()
                .map(|c| (c.id, user.rate(&c.genome)))
                .collect();
            EvaluationResponse::ratings(ratings)
        }

        EvaluationRequest::PairwiseComparison {
            candidate_a,
            candidate_b,
            ..
        } => {
            use std::cmp::Ordering;
            match user.compare(&candidate_a.genome, &candidate_b.genome) {
                Ordering::Greater => EvaluationResponse::winner(candidate_a.id),
                Ordering::Less => EvaluationResponse::winner(candidate_b.id),
                Ordering::Equal => EvaluationResponse::tie(),
            }
        }

        EvaluationRequest::BatchSelection {
            candidates,
            select_count,
            ..
        } => {
            // Sort by rating and select top N
            let mut rated: Vec<_> = candidates
                .iter()
                .map(|c| (c.id, user.rate(&c.genome)))
                .collect();
            rated.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());

            let selected: Vec<_> = rated
                .iter()
                .take(*select_count)
                .map(|(id, _)| *id)
                .collect();
            EvaluationResponse::selected(selected)
        }
    }
}

Source: examples/interactive_evolution.rs

Running the Example

cargo run --example interactive_evolution

This example simulates user feedback. In a real application, you would replace the simulation with actual UI interaction.

Key Components

Building an Interactive GA

let mut iga = InteractiveGABuilder::<RealVector, (), (), ()>::new()
    .population_size(12)
    .elitism_count(2)
    .evaluation_mode(EvaluationMode::Rating)
    .batch_size(4)
    .min_coverage(0.8)
    .max_generations(5)
    .aggregation_model(AggregationModel::DirectRating {
        default_rating: 5.0,
    })
    .bounds(bounds)
    .selection(TournamentSelection::new(2))
    .crossover(SbxCrossover::new(15.0))
    .mutation(PolynomialMutation::new(20.0))
    .build()?;

Key parameters:

  • evaluation_mode: How users provide feedback
  • batch_size: Candidates shown per evaluation round
  • min_coverage: Fraction of population needing evaluation
  • aggregation_model: How to combine multiple evaluations

The Step Loop

loop {
    match iga.step(&mut rng) {
        StepResult::NeedsEvaluation(request) => {
            // Present to user, get feedback
            let response = get_user_response(&request);
            iga.provide_response(response);
        }

        StepResult::GenerationComplete { generation, best_fitness, coverage } => {
            println!("Generation {} complete", generation);
        }

        StepResult::Complete(result) => {
            println!("Evolution complete!");
            break;
        }
    }
}

Handling Evaluation Requests

Rating Mode:

EvaluationRequest::RateCandidates { candidates, .. } => {
    // Show candidates to user
    for candidate in candidates {
        display_candidate(&candidate.genome);
    }
    // Collect ratings
    let ratings: Vec<(CandidateId, f64)> = /* user input */;
    EvaluationResponse::ratings(ratings)
}

Pairwise Mode:

EvaluationRequest::PairwiseComparison { candidate_a, candidate_b, .. } => {
    // Show both candidates
    display_comparison(&candidate_a.genome, &candidate_b.genome);
    // Get user's choice
    let winner = /* user choice */;
    EvaluationResponse::winner(winner)
}

Batch Selection:

EvaluationRequest::BatchSelection { candidates, select_count, .. } => {
    // Show all candidates
    for c in candidates { display_candidate(&c.genome); }
    // User selects best N
    let selected: Vec<CandidateId> = /* user picks */;
    EvaluationResponse::selected(selected)
}

Aggregation Models

How to combine feedback into fitness estimates:

Direct Rating

AggregationModel::DirectRating { default_rating: 5.0 }

Uses ratings directly as fitness. Unevaluated candidates get the default.

Implicit Ranking

AggregationModel::ImplicitRanking {
    selected_bonus: 1.0,
    not_selected_penalty: 0.3,
    base_fitness: 5.0,
}

For batch selection mode:

  • Selected candidates get bonus
  • Non-selected get penalty
  • Accumulates over evaluations
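One plausible reading of this model as arithmetic (the parameter names mirror the builder options above, but the accumulation rule is an illustrative sketch, not the library's exact formula):

```rust
// Accumulate an implicit-ranking fitness estimate from batch-selection
// feedback: each selection adds a bonus, each pass-over subtracts a penalty.
fn implicit_ranking_fitness(
    base_fitness: f64,
    selected_bonus: f64,
    not_selected_penalty: f64,
    times_selected: u32,
    times_shown: u32,
) -> f64 {
    let times_passed_over = times_shown - times_selected;
    base_fitness + selected_bonus * times_selected as f64
        - not_selected_penalty * times_passed_over as f64
}

fn main() {
    // Shown 4 times, selected 3 times: 5.0 + 1.0*3 - 0.3*1 = 7.7
    let f = implicit_ranking_fitness(5.0, 1.0, 0.3, 3, 4);
    assert!((f - 7.7).abs() < 1e-12);
    println!("estimated fitness = {f}");
}
```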

Bradley-Terry Model

For pairwise comparisons, estimates latent "skill" from win/loss records using the Bradley-Terry statistical model.
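To make that concrete, here is a minimal Bradley-Terry fit over a win/loss record using the classic minorization-maximization update. This is a from-scratch sketch, independent of fugue-evo's internal implementation:

```rust
// wins[i][j] = number of times candidate i beat candidate j.
// Returns a latent skill estimate per candidate; higher means stronger.
fn bradley_terry(wins: &[Vec<f64>], iterations: usize) -> Vec<f64> {
    let n = wins.len();
    let mut p = vec![1.0; n];
    for _ in 0..iterations {
        let old = p.clone();
        for i in 0..n {
            let total_wins: f64 = wins[i].iter().sum();
            // MM update: p_i = W_i / sum_{j != i} n_ij / (p_i + p_j)
            let mut denom = 0.0;
            for j in 0..n {
                if i != j {
                    let pairings = wins[i][j] + wins[j][i];
                    denom += pairings / (old[i] + old[j]);
                }
            }
            if denom > 0.0 {
                p[i] = total_wins / denom;
            }
        }
        // Normalize so the scale stays fixed across iterations.
        let sum: f64 = p.iter().sum();
        for v in &mut p {
            *v /= sum / n as f64;
        }
    }
    p
}

fn main() {
    // A beat B 3 times; B beat A once.
    let wins = vec![vec![0.0, 3.0], vec![1.0, 0.0]];
    let skills = bradley_terry(&wins, 50);
    assert!(skills[0] > skills[1]);
    println!("skills = {:?}", skills);
}
```

Under this model the probability that i beats j is p_i / (p_i + p_j), so the fitted skills double as win-probability estimates.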

Reducing User Fatigue

Smaller Population

.population_size(12)

Fewer candidates = fewer evaluations needed.

Coverage Threshold

.min_coverage(0.5)  // Only evaluate 50% of population

Not every candidate needs evaluation each generation.

Batch Size Tuning

.batch_size(4)  // Show 4 at a time

  • Too small: Many rounds, tedious
  • Too large: Overwhelming, poor decisions

Evaluation Budget

.max_evaluations(100)  // Stop after 100 user interactions

Simulating User Feedback

For testing, simulate user preferences:

struct SimulatedUser {
    target: Vec<f64>,
}

impl SimulatedUser {
    fn rate(&self, genome: &RealVector) -> f64 {
        let distance: f64 = genome.genes().iter()
            .zip(self.target.iter())
            .map(|(g, t)| (g - t).powi(2))
            .sum::<f64>()
            .sqrt();

        // Closer to target = higher rating
        10.0 - distance.min(9.0)
    }
}

This allows testing IGA logic without human interaction.

Real-World Integration

Web Application

// Pseudocode for web integration
async fn evolution_endpoint(state: &mut IgaState) -> Response {
    match state.iga.step(&mut state.rng) {
        StepResult::NeedsEvaluation(request) => {
            // Return candidates to frontend
            Json(CandidatesForEvaluation::from(request))
        }
        // ...
    }
}

async fn feedback_endpoint(feedback: UserFeedback, state: &mut IgaState) {
    let response = EvaluationResponse::from(feedback);
    state.iga.provide_response(response);
}

GUI Application

fn update(&mut self, message: Message) {
    match message {
        Message::NextStep => {
            if let StepResult::NeedsEvaluation(req) = self.iga.step(&mut self.rng) {
                self.current_request = Some(req);
            }
        }
        Message::UserRated(ratings) => {
            let response = EvaluationResponse::ratings(ratings);
            self.iga.provide_response(response);
        }
    }
}

Exercises

  1. Different modes: Compare Rating vs Batch Selection modes
  2. Noisy preferences: Add randomness to simulated user, observe robustness
  3. Visualization: Display RealVector as colors/shapes for visual evaluation

Next Steps

Hyperparameter Learning Tutorial

GA performance depends heavily on hyperparameters like mutation rate, crossover probability, and population size. This tutorial demonstrates online Bayesian learning of hyperparameters during evolution.

The Problem

Traditional approach: Set parameters once, hope they work.

Better approach: Learn optimal parameters from feedback during evolution.

Hyperparameter Control Methods

Fugue-evo supports several approaches (following Eiben et al.'s classification):

Method          Description             Example
Deterministic   Pre-defined schedule    Decay mutation over time
Adaptive        Rule-based adjustment   Increase mutation if stagnant
Self-Adaptive   Encode in genome        Parameters evolve with solutions
Bayesian        Statistical learning    Update beliefs from observations

This tutorial focuses on Bayesian learning with conjugate priors.

Complete Example

//! Hyperparameter Learning with Bayesian Adaptation
//!
//! This example demonstrates online Bayesian learning of GA hyperparameters.
//! The system learns optimal mutation rates based on observed fitness improvements.

use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Bayesian Hyperparameter Learning ===\n");

    let mut rng = StdRng::seed_from_u64(42);

    const DIM: usize = 20;
    let fitness = Rastrigin::new(DIM);
    let bounds = MultiBounds::symmetric(5.12, DIM);

    println!("Problem: {}-D Rastrigin", DIM);
    println!("Learning optimal mutation rate via Beta posterior\n");

    // Initialize Bayesian learner for mutation rate
    // Prior: Beta(2, 2) centered around 0.5
    let mut mutation_posterior = BetaPosterior::new(2.0, 2.0);

    // Track statistics
    let mut successful_mutations = 0;
    let mut total_mutations = 0;

    // Initialize population
    let mut population: Population<RealVector, f64> = Population::random(100, &bounds, &mut rng);
    population.evaluate(&fitness);

    let selection = TournamentSelection::new(3);
    let crossover = SbxCrossover::new(15.0);

    // Initial mutation rate from prior
    let mut current_mutation_rate = mutation_posterior.mean();

    println!(
        "Initial mutation rate (prior mean): {:.4}",
        current_mutation_rate
    );
    println!();

    let max_generations = 200;
    let adaptation_interval = 20;

    for gen in 0..max_generations {
        // Sample mutation rate from current posterior every adaptation interval
        if gen > 0 && gen % adaptation_interval == 0 {
            current_mutation_rate = mutation_posterior.sample(&mut rng);
            println!(
                "Gen {:3}: Sampled mutation rate = {:.4} (posterior mean = {:.4})",
                gen,
                current_mutation_rate,
                mutation_posterior.mean()
            );
        }

        let selection_pool: Vec<_> = population.as_fitness_pairs();
        let mut new_pop: Population<RealVector, f64> = Population::with_capacity(100);

        // Elitism
        if let Some(best) = population.best() {
            new_pop.push(best.clone());
        }

        while new_pop.len() < 100 {
            let p1_idx = selection.select(&selection_pool, &mut rng);
            let p2_idx = selection.select(&selection_pool, &mut rng);

            let (mut c1, mut c2) = crossover
                .crossover(
                    &selection_pool[p1_idx].0,
                    &selection_pool[p2_idx].0,
                    &mut rng,
                )
                .genome()
                .unwrap_or_else(|| {
                    (
                        selection_pool[p1_idx].0.clone(),
                        selection_pool[p2_idx].0.clone(),
                    )
                });

            let parent1_fitness = selection_pool[p1_idx].1;
            let parent2_fitness = selection_pool[p2_idx].1;

            // Apply mutation with learned rate
            let mutation = GaussianMutation::new(0.1).with_probability(current_mutation_rate);
            mutation.mutate(&mut c1, &mut rng);
            mutation.mutate(&mut c2, &mut rng);

            // Evaluate children
            let child1_fitness = fitness.evaluate(&c1);
            let child2_fitness = fitness.evaluate(&c2);

            // Update Bayesian posterior based on improvement
            let improved1 = child1_fitness > parent1_fitness;
            let improved2 = child2_fitness > parent2_fitness;

            mutation_posterior.observe(improved1);
            mutation_posterior.observe(improved2);

            total_mutations += 2;
            if improved1 {
                successful_mutations += 1;
            }
            if improved2 {
                successful_mutations += 1;
            }

            new_pop.push(Individual::with_fitness(c1, child1_fitness));
            if new_pop.len() < 100 {
                new_pop.push(Individual::with_fitness(c2, child2_fitness));
            }
        }

        new_pop.set_generation(gen + 1);
        population = new_pop;
    }

    // Results
    println!("\n=== Results ===");
    let best = population.best().unwrap();
    println!("Best fitness: {:.6}", best.fitness_value());
    println!();

    println!("Learned hyperparameters:");
    println!(
        "  Final mutation rate (posterior mean): {:.4}",
        mutation_posterior.mean()
    );
    let ci = mutation_posterior.credible_interval(0.95);
    println!("  95% credible interval: [{:.4}, {:.4}]", ci.0, ci.1);
    println!();

    println!("Mutation statistics:");
    println!("  Total mutations: {}", total_mutations);
    println!("  Successful mutations: {}", successful_mutations);
    println!(
        "  Observed success rate: {:.4}",
        successful_mutations as f64 / total_mutations as f64
    );

    // Compare with fixed rates
    println!("\n--- Comparison with fixed mutation rates ---\n");

    for fixed_rate in [0.05, 0.1, 0.2, 0.5] {
        let result = run_with_fixed_rate(fixed_rate, DIM)?;
        println!("Fixed rate {:.2}: Best = {:.6}", fixed_rate, result);
    }

    println!(
        "\nLearned rate {:.2}: Best = {:.6}",
        mutation_posterior.mean(),
        best.fitness_value()
    );

    Ok(())
}

fn run_with_fixed_rate(rate: f64, dim: usize) -> Result<f64, Box<dyn std::error::Error>> {
    let mut rng = StdRng::seed_from_u64(42); // Same seed for fair comparison

    let fitness = Rastrigin::new(dim);
    let bounds = MultiBounds::symmetric(5.12, dim);

    let mut population: Population<RealVector, f64> = Population::random(100, &bounds, &mut rng);
    population.evaluate(&fitness);

    let selection = TournamentSelection::new(3);
    let crossover = SbxCrossover::new(15.0);
    let mutation = GaussianMutation::new(0.1).with_probability(rate);

    for gen in 0..200 {
        let selection_pool: Vec<_> = population.as_fitness_pairs();
        let mut new_pop: Population<RealVector, f64> = Population::with_capacity(100);

        if let Some(best) = population.best() {
            new_pop.push(best.clone());
        }

        while new_pop.len() < 100 {
            let p1_idx = selection.select(&selection_pool, &mut rng);
            let p2_idx = selection.select(&selection_pool, &mut rng);

            let (mut c1, mut c2) = crossover
                .crossover(
                    &selection_pool[p1_idx].0,
                    &selection_pool[p2_idx].0,
                    &mut rng,
                )
                .genome()
                .unwrap_or_else(|| {
                    (
                        selection_pool[p1_idx].0.clone(),
                        selection_pool[p2_idx].0.clone(),
                    )
                });

            mutation.mutate(&mut c1, &mut rng);
            mutation.mutate(&mut c2, &mut rng);

            new_pop.push(Individual::new(c1));
            if new_pop.len() < 100 {
                new_pop.push(Individual::new(c2));
            }
        }

        new_pop.evaluate(&fitness);
        new_pop.set_generation(gen + 1);
        population = new_pop;
    }

    Ok(*population.best().unwrap().fitness_value())
}

Source: examples/hyperparameter_learning.rs

Running the Example

cargo run --example hyperparameter_learning

Key Components

Beta Posterior for Mutation Rate

// Prior: Beta(2, 2) centered around 0.5
let mut mutation_posterior = BetaPosterior::new(2.0, 2.0);

The Beta distribution is perfect for learning probabilities:

  • Domain: [0, 1] (valid probability range)
  • Conjugate to Bernoulli outcomes (success/failure)
  • Prior parameters encode initial beliefs

Beta(2, 2):

  • Mean = 0.5 (start uncertain)
  • Moderate confidence (equivalent to 4 observations)
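The conjugate update itself is one line of arithmetic. Here is a minimal stand-in with the same `new`/`observe`/`mean` surface used in the example; it is a from-scratch sketch, not the library's actual `BetaPosterior` type:

```rust
// Beta(alpha, beta) posterior over a success probability.
struct BetaPosterior {
    alpha: f64,
    beta: f64,
}

impl BetaPosterior {
    fn new(alpha: f64, beta: f64) -> Self {
        Self { alpha, beta }
    }

    // Conjugate Bernoulli update: a success bumps alpha, a failure bumps beta.
    fn observe(&mut self, success: bool) {
        if success {
            self.alpha += 1.0;
        } else {
            self.beta += 1.0;
        }
    }

    fn mean(&self) -> f64 {
        self.alpha / (self.alpha + self.beta)
    }
}

fn main() {
    let mut post = BetaPosterior::new(2.0, 2.0);
    assert_eq!(post.mean(), 0.5); // prior mean

    // Observe 6 successes and 2 failures.
    for _ in 0..6 { post.observe(true); }
    for _ in 0..2 { post.observe(false); }

    // Posterior is Beta(8, 4): mean = 8/12.
    assert!((post.mean() - 8.0 / 12.0).abs() < 1e-12);
    println!("posterior mean = {:.3}", post.mean());
}
```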

Observing Outcomes

// Check if mutation improved fitness
let improved = child_fitness > parent_fitness;

// Update posterior with observation
mutation_posterior.observe(improved);

Each observation updates the distribution:

  • Success (improvement): Increases mean
  • Failure: Decreases mean
  • More observations → narrower distribution

Sampling Parameters

// Sample mutation rate from current posterior
current_mutation_rate = mutation_posterior.sample(&mut rng);

Thompson Sampling: Sample from posterior, use as parameter.

  • Balances exploration (uncertainty) and exploitation (best estimate)
  • Naturally adapts as confidence grows

Adaptation Interval

if gen % adaptation_interval == 0 {
    current_mutation_rate = mutation_posterior.sample(&mut rng);
}

Don't update every generation:

  • Too frequent: Not enough signal
  • Too rare: Slow adaptation
  • Typical: Every 10-50 generations

Understanding the Output

Initial mutation rate (prior mean): 0.5000

Gen  20: Sampled mutation rate = 0.4123 (posterior mean = 0.3876)
Gen  40: Sampled mutation rate = 0.3456 (posterior mean = 0.3245)
...

=== Results ===
Learned hyperparameters:
  Final mutation rate (posterior mean): 0.2134
  95% credible interval: [0.1823, 0.2445]

Mutation statistics:
  Total mutations: 20000
  Successful mutations: 4268
  Observed success rate: 0.2134

The posterior mean converges to the observed mutation success rate, so the sampled rate settles near the value that evolution itself reports as effective.

Credible Intervals

let ci = mutation_posterior.credible_interval(0.95);
println!("95% CI: [{:.4}, {:.4}]", ci.0, ci.1);

Unlike frequentist confidence intervals, Bayesian credible intervals have a direct interpretation: "95% probability the true value is in this range (given our data)."

Comparing with Fixed Rates

for fixed_rate in [0.05, 0.1, 0.2, 0.5] {
    let result = run_with_fixed_rate(fixed_rate, DIM)?;
    println!("Fixed rate {:.2}: Best = {:.6}", fixed_rate, result);
}

The learned rate often outperforms any single fixed rate because:

  1. It adapts to the problem
  2. It can change as evolution progresses
  3. It handles different phases (exploration vs. exploitation)

Other Learnable Parameters

Crossover Probability

let mut crossover_posterior = BetaPosterior::new(2.0, 2.0);

// Observe: did crossover produce better offspring than parents?
let offspring_better = child_fitness > parent1_fitness.max(parent2_fitness);
crossover_posterior.observe(offspring_better);

Tournament Size

Use a categorical posterior for discrete choices:

let tournament_sizes = [2, 3, 5, 7];
let mut size_weights = vec![1.0; tournament_sizes.len()];

// Update weights based on selection quality
// ... observe which sizes produce better offspring
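One concrete way to close that loop, sketched with a hypothetical roulette-wheel helper (not a fugue-evo API): sample a size in proportion to the weights, then reinforce the chosen size whenever its offspring improves.

```rust
// Hypothetical weight-based adaptation for a discrete choice.
fn sample_index(weights: &[f64], u: f64) -> usize {
    // `u` is a uniform draw in [0, 1); roulette wheel over the weights.
    let total: f64 = weights.iter().sum();
    let mut target = u * total;
    for (i, w) in weights.iter().enumerate() {
        target -= w;
        if target <= 0.0 {
            return i;
        }
    }
    weights.len() - 1
}

fn main() {
    let tournament_sizes = [2usize, 3, 5, 7];
    let mut size_weights = vec![1.0; tournament_sizes.len()];

    // With uniform weights, a draw of 0.6 lands on index 2 (size 5)...
    let chosen = sample_index(&size_weights, 0.6);
    // ...and if that size produced a better offspring this generation:
    let child_improved = true;
    if child_improved {
        size_weights[chosen] += 1.0; // reinforce the choice
    }
    println!("size = {}, weights = {:?}", tournament_sizes[chosen], size_weights);
}
```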

Multiple Parameters

Learn multiple parameters simultaneously:

struct AdaptiveGA {
    mutation_posterior: BetaPosterior,
    crossover_posterior: BetaPosterior,
    mutation_rate: f64,
    crossover_prob: f64,
    adaptation_interval: usize,
    // ... other parameters
}

impl AdaptiveGA {
    fn adapt(&mut self, gen: usize, rng: &mut impl Rng) {
        if gen % self.adaptation_interval == 0 {
            self.mutation_rate = self.mutation_posterior.sample(rng);
            self.crossover_prob = self.crossover_posterior.sample(rng);
        }
    }
}

Deterministic Schedules

For simpler adaptation, use time-based schedules:

use fugue_evo::hyperparameter::schedules::*;

// Linear decay: 0.5 → 0.05 over 500 generations
let schedule = LinearSchedule::new(0.5, 0.05, 500);
let rate = schedule.value_at(gen);

// Exponential decay
let schedule = ExponentialSchedule::new(0.5, 0.99, 500);

// Sigmoid decay
let schedule = SigmoidSchedule::new(0.5, 0.05, 500);
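The linear variant reduces to a clamped interpolation. A sketch of the presumed `value_at` semantics (not the library source):

```rust
// Sketch of a linear decay schedule: start -> end over `duration`
// generations, then held at `end`. Mirrors the presumed semantics of
// LinearSchedule::value_at; the actual implementation may differ.
struct LinearSketch {
    start: f64,
    end: f64,
    duration: usize,
}

impl LinearSketch {
    fn value_at(&self, gen: usize) -> f64 {
        // Progress clamped to [0, 1] so the value plateaus at `end`.
        let t = (gen as f64 / self.duration as f64).min(1.0);
        self.start + (self.end - self.start) * t
    }
}

fn main() {
    let s = LinearSketch { start: 0.5, end: 0.05, duration: 500 };
    println!("{:.3}", s.value_at(0));    // 0.500
    println!("{:.3}", s.value_at(250));  // 0.275 (halfway)
    println!("{:.3}", s.value_at(1000)); // 0.050 (clamped)
}
```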

When to Use Which Method

Scenario                     Recommended
Known good parameters        Fixed
Exploration → exploitation   Deterministic schedule
Problem-dependent optimum    Bayesian learning
No prior knowledge           Bayesian with weak prior
Fast prototyping             Adaptive rules

Exercises

  1. Prior sensitivity: Try Beta(1,1), Beta(5,5), Beta(10,1) priors
  2. Learning speed: Vary adaptation interval (5, 20, 50, 100)
  3. Multiple parameters: Learn both mutation and crossover rates

Next Steps

Choosing an Algorithm

This guide helps you select the right optimization algorithm for your problem.

Quick Decision Guide

By Problem Type

Problem Type             Recommended              Alternative
Continuous, unimodal     SimpleGA or CMA-ES       -
Continuous, multimodal   Island Model             CMA-ES with restarts
Multi-objective          NSGA-II                  -
Discrete/binary          SimpleGA + BitString     EDA
Permutation              SimpleGA + Permutation   -
Symbolic/GP              SimpleGA + TreeGenome    -
Subjective fitness       Interactive GA           -

By Problem Size

Dimensions   Recommended           Notes
1-10         Any                   All algorithms work well
10-100       CMA-ES                Learns correlations efficiently
100-1000     SimpleGA              CMA-ES memory-intensive
1000+        SimpleGA + parallel   Consider dimensionality reduction

By Evaluation Cost

Cost                     Recommended          Strategy
Very cheap (< 1ms)       Any                  Large populations OK
Cheap (1-100ms)          Any with parallel    Enable Rayon parallelism
Expensive (1-10s)        CMA-ES               Fewer evaluations needed
Very expensive (> 10s)   Surrogate + CMA-ES   Approximate fitness

Algorithm Details

SimpleGA

Best for: General-purpose optimization with custom requirements

SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .population_size(100)
    .bounds(bounds)
    .selection(TournamentSelection::new(3))
    .crossover(SbxCrossover::new(20.0))
    .mutation(PolynomialMutation::new(20.0))
    .fitness(fitness)
    .max_generations(200)
    .build()?

Pros:

  • Highly configurable
  • Works with any genome type
  • Custom operators supported

Cons:

  • Requires operator selection
  • May need parameter tuning

CMA-ES

Best for: Non-separable continuous optimization

let mut cmaes = CmaEs::new(initial_mean, initial_sigma)
    .with_bounds(bounds);
let best = cmaes.run_generations(&fitness, 1000, &mut rng)?;

Pros:

  • Learns problem structure automatically
  • Often needs fewer evaluations
  • Self-adapting parameters

Cons:

  • O(n²) memory for covariance matrix
  • Only works with continuous problems
  • Fixed algorithm structure

NSGA-II

Best for: Multi-objective optimization

let result = Nsga2Builder::new()
    .population_size(100)
    .objectives(objectives)
    .max_generations(200)
    .build()?
    .run(&mut rng)?;

// Access Pareto front
for individual in result.pareto_front {
    println!("{:?}", individual.objectives);
}

Pros:

  • Finds entire Pareto front
  • Maintains diversity
  • Well-established algorithm

Cons:

  • Only for multi-objective
  • More complex setup

Island Model

Best for: Highly multimodal problems

let model = IslandModelBuilder::new()
    .num_islands(4)
    .island_population_size(50)
    .topology(MigrationTopology::Ring)
    .migration_interval(25)
    .build(&mut rng)?;

Pros:

  • Maintains diversity
  • Natural parallelism
  • Escapes local optima

Cons:

  • More parameters to tune
  • Higher total population
  • Migration overhead

Interactive GA

Best for: Subjective or hard-to-formalize fitness

let mut iga = InteractiveGABuilder::new()
    .population_size(12)
    .evaluation_mode(EvaluationMode::Rating)
    .batch_size(4)
    .build()?;

Pros:

  • No fitness function needed
  • Captures human preferences
  • Creative exploration

Cons:

  • Slow (human in loop)
  • Limited evaluations
  • Inconsistent feedback

Decision Flowchart

START
  │
  ▼
Is fitness subjective/aesthetic?
  │
  ├─ Yes → Interactive GA
  │
  └─ No
      │
      ▼
    Multiple conflicting objectives?
      │
      ├─ Yes → NSGA-II
      │
      └─ No
          │
          ▼
        Problem type?
          │
          ├─ Continuous → Is it non-separable or ill-conditioned?
          │                 │
          │                 ├─ Yes → CMA-ES
          │                 │
          │                 └─ No → Is it highly multimodal?
          │                           │
          │                           ├─ Yes → Island Model or CMA-ES + restarts
          │                           │
          │                           └─ No → SimpleGA
          │
          ├─ Discrete/Binary → SimpleGA + BitString
          │
          ├─ Permutation → SimpleGA + Permutation
          │
          └─ Trees/Programs → SimpleGA + TreeGenome

Combining Algorithms

Sometimes the best approach combines multiple algorithms:

CMA-ES After GA

Use GA for global exploration, CMA-ES for local refinement:

// Phase 1: Broad search with GA
let ga_result = SimpleGABuilder::new()
    .max_generations(50)
    .build()?.run(&mut rng)?;

// Phase 2: Refine best solution with CMA-ES
let mut cmaes = CmaEs::new(ga_result.best_genome.genes().to_vec(), 0.1);
let final_result = cmaes.run_generations(&fitness, 100, &mut rng)?;

Multiple Restarts

Run algorithm multiple times with different seeds:

let mut best_overall = f64::NEG_INFINITY;
for seed in 0..10 {
    let mut rng = StdRng::seed_from_u64(seed);
    let result = run_optimization(&mut rng)?;
    best_overall = best_overall.max(result.best_fitness);
}

Performance Benchmarks

Typical performance on standard benchmarks (your results may vary):

Function     Dims   SimpleGA   CMA-ES     Island
Sphere       10     ~200 gen   ~50 gen    ~150 gen
Rastrigin    10     ~300 gen   ~100 gen   ~200 gen
Rosenbrock   10     ~500 gen   ~80 gen    ~400 gen

Note: Generations aren't directly comparable (different evaluation counts).

Next Steps

Once you've chosen an algorithm:

Custom Fitness Functions

This guide shows how to implement fitness functions for your optimization problems.

Basic Pattern

Implement the Fitness trait for your fitness function:

use fugue_evo::prelude::*;

struct MyFitness {
    // Your fitness function parameters
}

impl Fitness<RealVector> for MyFitness {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        // Your evaluation logic
        // Return higher values for better solutions
        let genes = genome.genes();
        // ...compute fitness...
        fitness_value
    }
}

Minimization vs Maximization

Fugue-evo maximizes by default. For minimization problems, negate the objective:

impl Fitness<RealVector> for MinimizeProblem {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        let objective = compute_objective(genome); // Value to minimize
        -objective // Negate for maximization
    }
}

Constrained Optimization

Penalty Method

Add penalties for constraint violations:

struct ConstrainedProblem {
    penalty_weight: f64,
}

impl Fitness<RealVector> for ConstrainedProblem {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        let genes = genome.genes();

        // Objective to maximize
        let objective = compute_objective(genes);

        // Constraints (should be >= 0 for feasible solutions)
        let g1 = genes[0] + genes[1] - 10.0; // x0 + x1 <= 10
        let g2 = genes[0] * genes[1] - 5.0;  // x0 * x1 >= 5

        // Penalty for violations
        let violation1 = g1.max(0.0);           // Penalize if > 0
        let violation2 = (-g2).max(0.0);        // Penalize if < 0

        let total_penalty = violation1.powi(2) + violation2.powi(2);

        objective - self.penalty_weight * total_penalty
    }
}

Choosing penalty weight:

  • Too small: Constraints ignored
  • Too large: Search biased toward feasibility over quality
  • Typical: 100-10000 depending on objective scale
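A common heuristic when no single weight works well is to ramp the penalty up over generations, so early search explores freely while late search enforces feasibility. This is a general technique, not a fugue-evo feature; the function below is purely illustrative:

```rust
// Heuristic: grow the penalty weight geometrically each generation so
// constraint violations become increasingly expensive as the run matures.
fn penalty_weight(initial: f64, growth: f64, gen: usize) -> f64 {
    initial * growth.powi(gen as i32)
}

fn main() {
    // With 1% growth per generation the weight roughly doubles every
    // ~70 generations (since 1.01^70 ≈ 2).
    for gen in [0usize, 100, 200] {
        println!("gen {:>3}: weight = {:.1}", gen, penalty_weight(10.0, 1.01, gen));
    }
}
```

The evaluated fitness would then use `penalty_weight(...)` in place of the fixed `self.penalty_weight` field.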

Death Penalty

Reject infeasible solutions entirely:

impl StrictConstraints {
    fn is_feasible(&self, genes: &[f64]) -> bool {
        genes[0] + genes[1] <= 10.0 && genes[0] * genes[1] >= 5.0
    }
}

impl Fitness<RealVector> for StrictConstraints {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        let genes = genome.genes();

        // Reject infeasible solutions outright
        if !self.is_feasible(genes) {
            return f64::NEG_INFINITY; // Or a very low value
        }

        compute_objective(genes)
    }
}

Note that `is_feasible` lives in an inherent `impl` block: trait impls may only contain the trait's own items.

External Simulations

When fitness requires running external code:

struct SimulationFitness {
    simulator_path: PathBuf,
}

impl Fitness<RealVector> for SimulationFitness {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        // Write parameters to file
        let params_file = write_params(genome.genes());

        // Run external simulation
        let output = std::process::Command::new(&self.simulator_path)
            .arg(&params_file)
            .output()
            .expect("Failed to run simulator");

        // Parse result
        let result: f64 = parse_output(&output.stdout);

        // Clean up
        std::fs::remove_file(params_file).ok();

        result
    }
}

Data-Driven Fitness

When fitness is computed from data:

struct RegressionFitness {
    x_data: Vec<f64>,
    y_data: Vec<f64>,
}

impl RegressionFitness {
    fn new(x: Vec<f64>, y: Vec<f64>) -> Self {
        assert_eq!(x.len(), y.len());
        Self { x_data: x, y_data: y }
    }
}

impl Fitness<RealVector> for RegressionFitness {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        let genes = genome.genes();
        let a = genes[0];
        let b = genes[1];

        // Model: y = a*x + b
        let mse: f64 = self.x_data.iter()
            .zip(self.y_data.iter())
            .map(|(x, y)| {
                let predicted = a * x + b;
                (y - predicted).powi(2)
            })
            .sum::<f64>() / self.x_data.len() as f64;

        -mse // Minimize MSE
    }
}

Multi-Objective Fitness

For NSGA-II, return multiple objectives:

struct MultiObjective;

impl Fitness<RealVector> for MultiObjective {
    type Value = ParetoFitness;

    fn evaluate(&self, genome: &RealVector) -> ParetoFitness {
        let genes = genome.genes();

        // Two conflicting objectives
        let obj1 = genes.iter().sum::<f64>();      // Maximize sum
        let obj2 = -genes.iter().product::<f64>(); // Minimize product (negated)

        ParetoFitness::new(vec![obj1, obj2])
    }
}

Caching Fitness

For expensive evaluations, cache results:

use std::collections::HashMap;
use std::sync::RwLock;

struct CachedFitness<F> {
    inner: F,
    cache: RwLock<HashMap<Vec<u64>, f64>>,
}

impl<F: Fitness<RealVector, Value = f64>> CachedFitness<F> {
    fn cache_key(genome: &RealVector) -> Vec<u64> {
        genome.genes().iter()
            .map(|f| f.to_bits())
            .collect()
    }
}

impl<F: Fitness<RealVector, Value = f64>> Fitness<RealVector> for CachedFitness<F> {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        let key = Self::cache_key(genome);

        // Check cache
        if let Ok(cache) = self.cache.read() {
            if let Some(&cached) = cache.get(&key) {
                return cached;
            }
        }

        // Compute and cache
        let result = self.inner.evaluate(genome);

        if let Ok(mut cache) = self.cache.write() {
            cache.insert(key, result);
        }

        result
    }
}

Fitness for Different Genome Types

BitString

impl Fitness<BitString> for KnapsackFitness {
    type Value = f64;

    fn evaluate(&self, genome: &BitString) -> f64 {
        let mut total_value = 0.0;
        let mut total_weight = 0.0;

        for (i, bit) in genome.bits().iter().enumerate() {
            if *bit {
                total_value += self.values[i];
                total_weight += self.weights[i];
            }
        }

        if total_weight > self.capacity {
            0.0 // Infeasible
        } else {
            total_value
        }
    }
}

Permutation

impl Fitness<Permutation> for TSPFitness {
    type Value = f64;

    fn evaluate(&self, genome: &Permutation) -> f64 {
        let order = genome.as_slice();
        let mut total_distance = 0.0;

        for i in 0..order.len() {
            let from = order[i];
            let to = order[(i + 1) % order.len()];
            total_distance += self.distance_matrix[from][to];
        }

        -total_distance // Minimize distance
    }
}

Testing Fitness Functions

Always test your fitness function:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_known_optimum() {
        let fitness = MyFitness::new();
        let optimum = RealVector::new(vec![0.0, 0.0, 0.0]);
        assert!((fitness.evaluate(&optimum) - expected_value).abs() < 1e-6);
    }

    #[test]
    fn test_fitness_ordering() {
        let fitness = MyFitness::new();
        let good = RealVector::new(vec![0.1, 0.1, 0.1]);
        let bad = RealVector::new(vec![5.0, 5.0, 5.0]);
        assert!(fitness.evaluate(&good) > fitness.evaluate(&bad));
    }

    #[test]
    fn test_constraints() {
        let fitness = ConstrainedProblem::new();
        let feasible = RealVector::new(vec![3.0, 2.0]);
        let infeasible = RealVector::new(vec![10.0, 10.0]);
        assert!(fitness.evaluate(&feasible) > fitness.evaluate(&infeasible));
    }
}

Next Steps

Custom Genome Types

This guide shows how to create custom genome representations for domain-specific optimization problems.

When to Create Custom Genomes

Create a custom genome when:

  • Built-in types don't match your problem structure
  • You need domain-specific constraints
  • You want optimized operators for your representation
  • Your solution has a unique structure

Basic Pattern

Implement the EvolutionaryGenome trait:

use fugue_evo::prelude::*;
use fugue::Trace;
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, Serialize, Deserialize)]
struct MyGenome {
    // Your genome data
}

impl EvolutionaryGenome for MyGenome {
    fn to_trace(&self) -> Trace {
        // Convert genome to Fugue trace
        let mut trace = Trace::new();
        // ... add values to trace
        trace
    }

    fn from_trace(trace: &Trace) -> Result<Self, GenomeError> {
        // Reconstruct genome from trace
        // ... extract values from trace
        Ok(MyGenome { /* ... */ })
    }
}

Example: Schedule Genome

Let's create a genome for job scheduling problems:

use fugue_evo::prelude::*;
use fugue::{Trace, addr};
use serde::{Deserialize, Serialize};

/// A schedule assigns jobs to time slots on machines
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct ScheduleGenome {
    /// assignments[job] = (machine, start_time)
    pub assignments: Vec<(usize, f64)>,
    pub num_machines: usize,
}

impl ScheduleGenome {
    pub fn new(num_jobs: usize, num_machines: usize) -> Self {
        Self {
            assignments: vec![(0, 0.0); num_jobs],
            num_machines,
        }
    }

    pub fn random(num_jobs: usize, num_machines: usize, rng: &mut impl Rng) -> Self {
        let assignments = (0..num_jobs)
            .map(|_| {
                let machine = rng.gen_range(0..num_machines);
                let start = rng.gen_range(0.0..100.0);
                (machine, start)
            })
            .collect();

        Self { assignments, num_machines }
    }

    /// Get the makespan (total schedule length)
    pub fn makespan(&self) -> f64 {
        // Simplified: just max end time
        self.assignments.iter()
            .map(|(_, start)| start + 1.0) // Assume unit job duration
            .fold(0.0, f64::max)
    }
}

impl EvolutionaryGenome for ScheduleGenome {
    fn to_trace(&self) -> Trace {
        let mut trace = Trace::new();

        for (i, (machine, start)) in self.assignments.iter().enumerate() {
            // Store machine assignment
            trace.insert(addr!("machine", i), *machine as f64);
            // Store start time
            trace.insert(addr!("start", i), *start);
        }

        // Store metadata
        trace.insert(addr!("num_machines"), self.num_machines as f64);

        trace
    }

    fn from_trace(trace: &Trace) -> Result<Self, GenomeError> {
        let num_machines = trace
            .get(&addr!("num_machines"))
            .ok_or(GenomeError::InvalidTrace("missing num_machines".into()))?
            .round() as usize;

        // Find number of jobs by counting machine entries
        let num_jobs = (0..)
            .take_while(|i| trace.get(&addr!("machine", *i)).is_some())
            .count();

        let assignments = (0..num_jobs)
            .map(|i| {
                let machine = trace
                    .get(&addr!("machine", i))
                    .unwrap()
                    .round() as usize;
                let start = *trace
                    .get(&addr!("start", i))
                    .unwrap();
                (machine, start)
            })
            .collect();

        Ok(Self { assignments, num_machines })
    }
}

Custom Operators

Create operators tailored to your genome:

/// Mutation: randomly reassign one job
pub struct ScheduleMutation {
    pub probability: f64,
}

impl MutationOperator<ScheduleGenome> for ScheduleMutation {
    fn mutate(&self, genome: &mut ScheduleGenome, rng: &mut impl Rng) {
        if rng.gen::<f64>() < self.probability {
            // Pick random job
            let job = rng.gen_range(0..genome.assignments.len());

            // Reassign to random machine and time
            genome.assignments[job] = (
                rng.gen_range(0..genome.num_machines),
                rng.gen_range(0.0..100.0),
            );
        }
    }
}

/// Crossover: combine schedules by job subset
pub struct ScheduleCrossover;

impl CrossoverOperator<ScheduleGenome> for ScheduleCrossover {
    type Output = CrossoverResult<ScheduleGenome>;

    fn crossover(
        &self,
        parent1: &ScheduleGenome,
        parent2: &ScheduleGenome,
        rng: &mut impl Rng,
    ) -> Self::Output {
        let n = parent1.assignments.len();
        let crossover_point = rng.gen_range(0..n);

        // Child 1: first part from p1, second from p2
        let mut child1 = parent1.clone();
        for i in crossover_point..n {
            child1.assignments[i] = parent2.assignments[i];
        }

        // Child 2: first part from p2, second from p1
        let mut child2 = parent2.clone();
        for i in crossover_point..n {
            child2.assignments[i] = parent1.assignments[i];
        }

        CrossoverResult::new(child1, child2)
    }
}

Validity Constraints

Enforce constraints in your genome:

impl ScheduleGenome {
    /// Ensure all assignments are valid
    pub fn repair(&mut self) {
        for (machine, start) in &mut self.assignments {
            // Clamp machine index
            *machine = (*machine).min(self.num_machines - 1);
            // Clamp start time
            *start = start.max(0.0);
        }
    }

    /// Check if schedule is valid
    pub fn is_valid(&self) -> bool {
        self.assignments.iter().all(|(m, s)| {
            *m < self.num_machines && *s >= 0.0
        })
    }
}

// Apply repair at the end of the mutate implementation shown earlier:
fn mutate(&self, genome: &mut ScheduleGenome, rng: &mut impl Rng) {
    // ... mutation logic as above ...
    genome.repair(); // Ensure validity
}

Using Custom Genome with SimpleGA

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut rng = StdRng::seed_from_u64(42);

    let num_jobs = 20;
    let num_machines = 4;

    // Create initial population
    let initial_pop: Vec<ScheduleGenome> = (0..100)
        .map(|_| ScheduleGenome::random(num_jobs, num_machines, &mut rng))
        .collect();

    // Define fitness
    let fitness = ScheduleFitness::new(/* job durations, dependencies, etc. */);

    // Run GA with custom operators
    let result = SimpleGABuilder::<ScheduleGenome, f64, _, _, _, _, _>::new()
        .population_size(100)
        .initial_population(initial_pop)
        .selection(TournamentSelection::new(3))
        .crossover(ScheduleCrossover)
        .mutation(ScheduleMutation { probability: 0.1 })
        .fitness(fitness)
        .max_generations(200)
        .build()?
        .run(&mut rng)?;

    println!("Best makespan: {}", result.best_genome.makespan());
    Ok(())
}

Composite Genomes

Combine multiple genome types:

#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct HybridGenome {
    pub continuous: RealVector,
    pub discrete: BitString,
}

impl EvolutionaryGenome for HybridGenome {
    fn to_trace(&self) -> Trace {
        let mut trace = Trace::new();

        // Namespace continuous genes
        for (i, val) in self.continuous.genes().iter().enumerate() {
            trace.insert(addr!("continuous", i), *val);
        }

        // Namespace discrete genes
        for (i, bit) in self.discrete.bits().iter().enumerate() {
            trace.insert(addr!("discrete", i), if *bit { 1.0 } else { 0.0 });
        }

        trace
    }

    fn from_trace(trace: &Trace) -> Result<Self, GenomeError> {
        // Extract continuous part
        let continuous_len = (0..)
            .take_while(|i| trace.get(&addr!("continuous", *i)).is_some())
            .count();

        let continuous_genes: Vec<f64> = (0..continuous_len)
            .map(|i| *trace.get(&addr!("continuous", i)).unwrap())
            .collect();

        // Extract discrete part
        let discrete_len = (0..)
            .take_while(|i| trace.get(&addr!("discrete", *i)).is_some())
            .count();

        let discrete_bits: Vec<bool> = (0..discrete_len)
            .map(|i| *trace.get(&addr!("discrete", i)).unwrap() > 0.5)
            .collect();

        Ok(Self {
            continuous: RealVector::new(continuous_genes),
            discrete: BitString::new(discrete_bits),
        })
    }
}

Testing Custom Genomes

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_trace_roundtrip() {
        let original = ScheduleGenome::random(10, 4, &mut StdRng::seed_from_u64(42));
        let trace = original.to_trace();
        let reconstructed = ScheduleGenome::from_trace(&trace).unwrap();

        assert_eq!(original.assignments, reconstructed.assignments);
        assert_eq!(original.num_machines, reconstructed.num_machines);
    }

    #[test]
    fn test_mutation_validity() {
        let mut genome = ScheduleGenome::random(10, 4, &mut StdRng::seed_from_u64(42));
        let mutation = ScheduleMutation { probability: 1.0 };
        let mut rng = StdRng::seed_from_u64(123);

        // Reuse one RNG so each iteration applies a different mutation
        for _ in 0..100 {
            mutation.mutate(&mut genome, &mut rng);
            assert!(genome.is_valid());
        }
    }
}

Next Steps

Custom Operators

This guide shows how to create custom selection, crossover, and mutation operators.

Selection Operators

Selection chooses parents for reproduction based on fitness.

Trait Definition

pub trait SelectionOperator<G>: Send + Sync {
    /// Select an individual from the population
    /// Returns the index of the selected individual
    fn select<R: Rng>(
        &self,
        population: &[(G, f64)], // (genome, fitness) pairs
        rng: &mut R,
    ) -> usize;
}

Example: Boltzmann Selection

Temperature-controlled selection pressure:

use fugue_evo::prelude::*;

pub struct BoltzmannSelection {
    temperature: f64,
}

impl BoltzmannSelection {
    pub fn new(temperature: f64) -> Self {
        Self { temperature }
    }
}

impl<G> SelectionOperator<G> for BoltzmannSelection {
    fn select<R: Rng>(&self, population: &[(G, f64)], rng: &mut R) -> usize {
        // Compute Boltzmann probabilities
        let max_fitness = population.iter()
            .map(|(_, f)| *f)
            .fold(f64::NEG_INFINITY, f64::max);

        let weights: Vec<f64> = population.iter()
            .map(|(_, f)| ((f - max_fitness) / self.temperature).exp())
            .collect();

        let total: f64 = weights.iter().sum();

        // Roulette wheel selection
        let mut target = rng.gen::<f64>() * total;
        for (i, w) in weights.iter().enumerate() {
            target -= w;
            if target <= 0.0 {
                return i;
            }
        }

        population.len() - 1
    }
}

Usage:

let selection = BoltzmannSelection::new(1.0); // Higher temp = more random

Crossover Operators

Crossover combines two parents to create offspring.

Trait Definition

pub trait CrossoverOperator<G>: Send + Sync {
    type Output;

    fn crossover<R: Rng>(
        &self,
        parent1: &G,
        parent2: &G,
        rng: &mut R,
    ) -> Self::Output;
}

Example: Blend Crossover (BLX-α)

Creates offspring in an expanded range around parents:

pub struct BlendCrossover {
    alpha: f64,
}

impl BlendCrossover {
    pub fn new(alpha: f64) -> Self {
        Self { alpha }
    }
}

impl CrossoverOperator<RealVector> for BlendCrossover {
    type Output = CrossoverResult<RealVector>;

    fn crossover<R: Rng>(
        &self,
        parent1: &RealVector,
        parent2: &RealVector,
        rng: &mut R,
    ) -> Self::Output {
        let g1 = parent1.genes();
        let g2 = parent2.genes();

        let mut child1_genes = Vec::with_capacity(g1.len());
        let mut child2_genes = Vec::with_capacity(g1.len());

        for i in 0..g1.len() {
            let min_val = g1[i].min(g2[i]);
            let max_val = g1[i].max(g2[i]);
            let range = max_val - min_val;

            // Expanded range: [min - α*range, max + α*range]
            let low = min_val - self.alpha * range;
            let high = max_val + self.alpha * range;

            child1_genes.push(rng.gen_range(low..=high));
            child2_genes.push(rng.gen_range(low..=high));
        }

        CrossoverResult::new(
            RealVector::new(child1_genes),
            RealVector::new(child2_genes),
        )
    }
}

Usage:

let crossover = BlendCrossover::new(0.5); // α = 0.5 is common

Bounded Crossover

For crossover that needs bounds information:

pub trait BoundedCrossoverOperator<G>: Send + Sync {
    type Output;

    fn crossover_bounded<R: Rng>(
        &self,
        parent1: &G,
        parent2: &G,
        bounds: &MultiBounds,
        rng: &mut R,
    ) -> Self::Output;
}

impl BoundedCrossoverOperator<RealVector> for BlendCrossover {
    type Output = CrossoverResult<RealVector>;

    fn crossover_bounded<R: Rng>(
        &self,
        parent1: &RealVector,
        parent2: &RealVector,
        bounds: &MultiBounds,
        rng: &mut R,
    ) -> Self::Output {
        let result = self.crossover(parent1, parent2, rng);

        // Clamp to bounds
        let child1 = clamp_to_bounds(result.genome().unwrap().0, bounds);
        let child2 = clamp_to_bounds(result.genome().unwrap().1, bounds);

        CrossoverResult::new(child1, child2)
    }
}

Mutation Operators

Mutation introduces random variation.

Trait Definition

pub trait MutationOperator<G>: Send + Sync {
    fn mutate<R: Rng>(&self, genome: &mut G, rng: &mut R);
}

Example: Cauchy Mutation

Heavy-tailed mutation for escaping local optima:

use rand_distr::{Cauchy, Distribution};

pub struct CauchyMutation {
    scale: f64,
    probability: f64,
}

impl CauchyMutation {
    pub fn new(scale: f64) -> Self {
        Self {
            scale,
            probability: 1.0,
        }
    }

    pub fn with_probability(mut self, p: f64) -> Self {
        self.probability = p;
        self
    }
}

impl MutationOperator<RealVector> for CauchyMutation {
    fn mutate<R: Rng>(&self, genome: &mut RealVector, rng: &mut R) {
        let cauchy = Cauchy::new(0.0, self.scale).unwrap();

        for gene in genome.genes_mut() {
            if rng.gen::<f64>() < self.probability {
                *gene += cauchy.sample(rng);
            }
        }
    }
}

Adaptive Mutation

Mutation that changes based on progress:

pub struct AdaptiveMutation {
    initial_sigma: f64,
    final_sigma: f64,
    current_generation: usize,
    max_generations: usize,
}

impl AdaptiveMutation {
    pub fn new(initial: f64, final_val: f64, max_gen: usize) -> Self {
        Self {
            initial_sigma: initial,
            final_sigma: final_val,
            current_generation: 0,
            max_generations: max_gen,
        }
    }

    pub fn set_generation(&mut self, gen: usize) {
        self.current_generation = gen;
    }

    fn current_sigma(&self) -> f64 {
        let progress = self.current_generation as f64 / self.max_generations as f64;
        self.initial_sigma + (self.final_sigma - self.initial_sigma) * progress
    }
}

impl MutationOperator<RealVector> for AdaptiveMutation {
    fn mutate<R: Rng>(&self, genome: &mut RealVector, rng: &mut R) {
        let sigma = self.current_sigma();
        let normal = rand_distr::Normal::new(0.0, sigma).unwrap();

        for gene in genome.genes_mut() {
            *gene += normal.sample(rng);
        }
    }
}

Problem-Specific Operators

Permutation: Order Crossover (OX)

use std::collections::HashSet;

pub struct OrderCrossover;

/// Build one OX child: copy the segment [start, end] from `seg`, then
/// fill the remaining positions from `fill` in order, wrapping around.
fn ox_child(seg: &[usize], fill: &[usize], start: usize, end: usize) -> Vec<usize> {
    let n = seg.len();
    let mut child = vec![usize::MAX; n];
    let mut used: HashSet<usize> = HashSet::new();

    // Copy segment from the first parent
    for i in start..=end {
        child[i] = seg[i];
        used.insert(seg[i]);
    }

    // Fill the rest from the second parent, starting after the segment
    let mut pos = (end + 1) % n;
    for &val in fill.iter().cycle().skip(end + 1).take(n) {
        if !used.contains(&val) {
            child[pos] = val;
            used.insert(val);
            pos = (pos + 1) % n;
        }
    }

    child
}

impl CrossoverOperator<Permutation> for OrderCrossover {
    type Output = CrossoverResult<Permutation>;

    fn crossover<R: Rng>(
        &self,
        parent1: &Permutation,
        parent2: &Permutation,
        rng: &mut R,
    ) -> Self::Output {
        let n = parent1.len();
        let p1 = parent1.as_slice();
        let p2 = parent2.as_slice();

        // Select crossover segment [start, end]
        let mut start = rng.gen_range(0..n);
        let mut end = rng.gen_range(0..n);
        if start > end {
            std::mem::swap(&mut start, &mut end);
        }

        // Each child keeps one parent's segment, filled from the other
        let child1 = ox_child(p1, p2, start, end);
        let child2 = ox_child(p2, p1, start, end);

        CrossoverResult::new(
            Permutation::new(child1),
            Permutation::new(child2),
        )
    }
}

BitString: Intelligent Flip

pub struct IntelligentBitFlip {
    /// Probability of flipping 0→1
    p_set: f64,
    /// Probability of flipping 1→0
    p_clear: f64,
}

impl MutationOperator<BitString> for IntelligentBitFlip {
    fn mutate<R: Rng>(&self, genome: &mut BitString, rng: &mut R) {
        for bit in genome.bits_mut() {
            let p = if *bit { self.p_clear } else { self.p_set };
            if rng.gen::<f64>() < p {
                *bit = !*bit;
            }
        }
    }
}

Combining Operators

Composite Mutation

Combine two mutation operators, applying one of them at random on each call:

pub struct CompositeMutation<M1, M2> {
    mutation1: M1,
    mutation2: M2,
    p1: f64, // Probability of using mutation1
}

impl<G, M1, M2> MutationOperator<G> for CompositeMutation<M1, M2>
where
    M1: MutationOperator<G>,
    M2: MutationOperator<G>,
{
    fn mutate<R: Rng>(&self, genome: &mut G, rng: &mut R) {
        if rng.gen::<f64>() < self.p1 {
            self.mutation1.mutate(genome, rng);
        } else {
            self.mutation2.mutate(genome, rng);
        }
    }
}

Testing Operators

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_crossover_preserves_genes() {
        let p1 = RealVector::new(vec![1.0, 2.0, 3.0]);
        let p2 = RealVector::new(vec![4.0, 5.0, 6.0]);

        let crossover = BlendCrossover::new(0.0); // No expansion
        let mut rng = StdRng::seed_from_u64(42);

        let result = crossover.crossover(&p1, &p2, &mut rng);
        let (c1, c2) = result.genome().unwrap();

        // Children should be within parent ranges
        for i in 0..3 {
            let min = p1.genes()[i].min(p2.genes()[i]);
            let max = p1.genes()[i].max(p2.genes()[i]);
            assert!(c1.genes()[i] >= min && c1.genes()[i] <= max);
        }
    }

    #[test]
    fn test_mutation_changes_genome() {
        let mut genome = RealVector::new(vec![0.0; 10]);
        let mutation = CauchyMutation::new(1.0);
        let mut rng = StdRng::seed_from_u64(42);

        let original = genome.clone();
        mutation.mutate(&mut genome, &mut rng);

        assert_ne!(genome.genes(), original.genes());
    }
}
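
Tests for permutation operators like the OX crossover above usually assert validity (every element appears exactly once) rather than exact offspring values. A standalone sketch of such a check, with a plain `Vec<usize>` standing in for the `Permutation` type:

```rust
/// Returns true if `child` is a valid permutation of 0..n,
/// i.e. every value in that range appears exactly once.
fn is_valid_permutation(child: &[usize]) -> bool {
    let n = child.len();
    let mut seen = vec![false; n];
    for &v in child {
        if v >= n || seen[v] {
            // Out of range or duplicated: not a permutation
            return false;
        }
        seen[v] = true;
    }
    true
}
```

Running such a check over many random seeds catches repair bugs that a single fixed-seed assertion would miss.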

Next Steps

Checkpointing & Recovery

This guide shows how to save and restore evolution state for long-running optimizations.

Why Checkpointing?

  • Resume interrupted runs: Continue after crashes or shutdowns
  • Experiment branching: Try different strategies from the same point
  • Progress monitoring: Analyze intermediate states
  • Resource limits: Work within time-limited environments

Basic Checkpointing

Creating Checkpoints

use fugue_evo::prelude::*;
use std::path::PathBuf;

// Create checkpoint manager
let checkpoint_dir = PathBuf::from("./checkpoints");
let mut manager = CheckpointManager::new(&checkpoint_dir, "my_evolution")
    .every(50)    // Save every 50 generations
    .keep(3);     // Keep last 3 checkpoints

// In evolution loop
for gen in 0..max_generations {
    // ... evolution step ...

    if manager.should_save(gen + 1) {
        let individuals: Vec<Individual<RealVector>> = population.iter().cloned().collect();
        let checkpoint = Checkpoint::new(gen + 1, individuals)
            .with_evaluations((gen + 1) * population_size);

        manager.save(&checkpoint)?;
        println!("Saved checkpoint at generation {}", gen + 1);
    }
}

Loading Checkpoints

use fugue_evo::checkpoint::load_checkpoint;

// Load specific checkpoint
let checkpoint: Checkpoint<RealVector> = load_checkpoint("./checkpoints/my_evolution_gen_100.ckpt")?;

println!("Loaded generation: {}", checkpoint.generation);
println!("Population size: {}", checkpoint.population.len());
println!("Evaluations: {}", checkpoint.evaluations);

// Reconstruct population
let mut population: Population<RealVector, f64> =
    Population::with_capacity(checkpoint.population.len());
for ind in checkpoint.population {
    population.push(ind);
}

Complete Example

//! Checkpointing and Recovery
//!
//! This example demonstrates how to save and restore evolution state
//! using checkpoints. This is essential for long-running optimizations
//! that may need to be interrupted and resumed.

use fugue_evo::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;
use std::path::PathBuf;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("=== Checkpointing and Recovery ===\n");

    let checkpoint_dir = PathBuf::from("/tmp/fugue_evo_checkpoints");

    // Clean up any existing checkpoints first
    if checkpoint_dir.exists() {
        std::fs::remove_dir_all(&checkpoint_dir)?;
    }

    // Run with checkpoints
    run_with_checkpoints(&checkpoint_dir)?;

    // Demonstrate resuming (in real usage, this would be after a restart)
    println!("\n--- Simulating resume from checkpoint ---\n");
    resume_from_checkpoint(&checkpoint_dir)?;

    // Clean up
    if checkpoint_dir.exists() {
        std::fs::remove_dir_all(&checkpoint_dir)?;
        println!("\nCheckpoint directory cleaned up.");
    }

    Ok(())
}

fn run_with_checkpoints(checkpoint_dir: &PathBuf) -> Result<(), Box<dyn std::error::Error>> {
    let mut rng = StdRng::seed_from_u64(42);

    const DIM: usize = 10;
    let fitness = Sphere::new(DIM);
    let bounds = MultiBounds::symmetric(5.12, DIM);

    // Run evolution with periodic checkpoints
    let mut population: Population<RealVector, f64> = Population::random(100, &bounds, &mut rng);
    population.evaluate(&fitness);

    let selection = TournamentSelection::new(3);
    let crossover = SbxCrossover::new(20.0);
    let mutation = PolynomialMutation::new(20.0);

    // Create checkpoint manager
    let mut manager = CheckpointManager::new(checkpoint_dir, "evolution")
        .every(50) // Save every 50 generations
        .keep(3); // Keep last 3 checkpoints

    let max_generations = 200;

    for gen in 0..max_generations {
        // Evolution step
        let selection_pool: Vec<_> = population.as_fitness_pairs();
        let mut new_pop: Population<RealVector, f64> = Population::with_capacity(100);

        // Elitism
        if let Some(best) = population.best() {
            new_pop.push(best.clone());
        }

        while new_pop.len() < 100 {
            let p1_idx = selection.select(&selection_pool, &mut rng);
            let p2_idx = selection.select(&selection_pool, &mut rng);

            let (mut c1, mut c2) = crossover
                .crossover(
                    &selection_pool[p1_idx].0,
                    &selection_pool[p2_idx].0,
                    &mut rng,
                )
                .genome()
                .unwrap_or_else(|| {
                    (
                        selection_pool[p1_idx].0.clone(),
                        selection_pool[p2_idx].0.clone(),
                    )
                });

            mutation.mutate(&mut c1, &mut rng);
            mutation.mutate(&mut c2, &mut rng);

            new_pop.push(Individual::new(c1));
            if new_pop.len() < 100 {
                new_pop.push(Individual::new(c2));
            }
        }

        new_pop.evaluate(&fitness);
        new_pop.set_generation(gen + 1);
        population = new_pop;

        // Save checkpoint periodically
        if manager.should_save(gen + 1) {
            let best = population.best().unwrap();
            println!(
                "Gen {:3}: Best = {:.6} - Saving checkpoint...",
                gen + 1,
                best.fitness_value()
            );

            // Create checkpoint with current population
            let individuals: Vec<Individual<RealVector>> = population.iter().cloned().collect();
            let checkpoint =
                Checkpoint::new(gen + 1, individuals).with_evaluations((gen + 1) * 100);

            manager.save(&checkpoint)?;
        }
    }

    let best = population.best().unwrap();
    println!("\nFinal result:");
    println!("  Best fitness: {:.6}", best.fitness_value());
    println!("  Generations:  {}", max_generations);

    Ok(())
}

fn resume_from_checkpoint(checkpoint_dir: &PathBuf) -> Result<(), Box<dyn std::error::Error>> {
    // Find the latest checkpoint file
    let entries: Vec<_> = std::fs::read_dir(checkpoint_dir)?
        .filter_map(|e| e.ok())
        .filter(|e| e.path().extension().is_some_and(|ext| ext == "ckpt"))
        .collect();

    if entries.is_empty() {
        println!("No checkpoint files found!");
        return Ok(());
    }

    // Sort by the numeric generation suffix; a plain lexicographic sort
    // would rank "gen_100" before "gen_50"
    let mut paths: Vec<_> = entries.iter().map(|e| e.path()).collect();
    paths.sort_by_key(|p| {
        p.file_stem()
            .and_then(|s| s.to_str())
            .and_then(|s| s.rsplit('_').next())
            .and_then(|g| g.parse::<usize>().ok())
            .unwrap_or(0)
    });
    let latest_checkpoint = paths.last().unwrap();

    println!("Loading checkpoint: {:?}", latest_checkpoint);

    // Load checkpoint
    let checkpoint: Checkpoint<RealVector> = load_checkpoint(latest_checkpoint)?;

    println!("Loaded checkpoint:");
    println!("  Generation: {}", checkpoint.generation);
    println!("  Population size: {}", checkpoint.population.len());
    println!("  Evaluations: {}", checkpoint.evaluations);

    // Find best in loaded population
    let best_individual = checkpoint
        .population
        .iter()
        .filter_map(|ind| ind.fitness.as_ref().map(|f| (ind, f.to_f64())))
        .max_by(|(_, f1), (_, f2)| f1.partial_cmp(f2).unwrap());

    if let Some((_best, fitness)) = best_individual {
        println!("  Best fitness at checkpoint: {:.6}", fitness);
    }

    // Continue evolution...
    let mut rng = StdRng::seed_from_u64(12345); // Different seed for continuation
    let fitness = Sphere::new(10);

    // Reconstruct population from checkpoint
    let mut population: Population<RealVector, f64> =
        Population::with_capacity(checkpoint.population.len());
    for ind in checkpoint.population {
        population.push(ind);
    }

    let selection = TournamentSelection::new(3);
    let crossover = SbxCrossover::new(20.0);
    let mutation = PolynomialMutation::new(20.0);

    let remaining_gens = 200 - checkpoint.generation;
    println!("\nContinuing for {} more generations...\n", remaining_gens);

    for gen in checkpoint.generation..200 {
        let selection_pool: Vec<_> = population.as_fitness_pairs();
        let mut new_pop: Population<RealVector, f64> = Population::with_capacity(100);

        if let Some(best) = population.best() {
            new_pop.push(best.clone());
        }

        while new_pop.len() < 100 {
            let p1_idx = selection.select(&selection_pool, &mut rng);
            let p2_idx = selection.select(&selection_pool, &mut rng);

            let (mut c1, mut c2) = crossover
                .crossover(
                    &selection_pool[p1_idx].0,
                    &selection_pool[p2_idx].0,
                    &mut rng,
                )
                .genome()
                .unwrap_or_else(|| {
                    (
                        selection_pool[p1_idx].0.clone(),
                        selection_pool[p2_idx].0.clone(),
                    )
                });

            mutation.mutate(&mut c1, &mut rng);
            mutation.mutate(&mut c2, &mut rng);

            new_pop.push(Individual::new(c1));
            if new_pop.len() < 100 {
                new_pop.push(Individual::new(c2));
            }
        }

        new_pop.evaluate(&fitness);
        new_pop.set_generation(gen + 1);
        population = new_pop;

        if (gen + 1) % 50 == 0 {
            let best = population.best().unwrap();
            println!("Gen {:3}: Best = {:.6}", gen + 1, best.fitness_value());
        }
    }

    let best = population.best().unwrap();
    println!("\nFinal result after resumption:");
    println!("  Best fitness: {:.6}", best.fitness_value());

    Ok(())
}

Source: examples/checkpointing.rs

Running the Example

cargo run --example checkpointing

Checkpoint Manager Options

Save Frequency

// Every N generations
CheckpointManager::new(&dir, "name").every(50);

// Only at specific generations
CheckpointManager::new(&dir, "name").at_generations(&[100, 200, 500]);

Retention Policy

// Keep last N checkpoints
CheckpointManager::new(&dir, "name").keep(3);

// Keep all checkpoints
CheckpointManager::new(&dir, "name").keep_all();

// Custom retention
CheckpointManager::new(&dir, "name").keep_every(100); // Keep every 100th

Custom Naming

// Default: name_gen_N.ckpt
let manager = CheckpointManager::new(&dir, "experiment_1");
// Creates: experiment_1_gen_50.ckpt, experiment_1_gen_100.ckpt, etc.

Checkpoint Contents

The Checkpoint struct stores:

pub struct Checkpoint<G: EvolutionaryGenome> {
    /// Current generation number
    pub generation: usize,

    /// Full population with fitness values
    pub population: Vec<Individual<G>>,

    /// Total fitness evaluations so far
    pub evaluations: usize,

    /// Optional metadata
    pub metadata: Option<CheckpointMetadata>,
}

Adding Metadata

let checkpoint = Checkpoint::new(gen, individuals)
    .with_evaluations(evaluations)
    .with_metadata(CheckpointMetadata {
        timestamp: chrono::Utc::now(),
        best_fitness: population.best().map(|b| *b.fitness_value()),
        config: serde_json::to_string(&config).ok(),
    });

Resume Strategy

Find Latest Checkpoint

fn find_latest_checkpoint(dir: &Path, prefix: &str) -> Option<PathBuf> {
    std::fs::read_dir(dir)
        .ok()?
        .filter_map(|e| e.ok())
        .map(|e| e.path())
        .filter(|p| {
            p.file_name()
                .and_then(|n| n.to_str())
                .map(|n| n.starts_with(prefix) && n.ends_with(".ckpt"))
                .unwrap_or(false)
        })
        // Compare by the numeric generation suffix; lexicographic order
        // would rank "gen_100" before "gen_50"
        .max_by_key(|p| {
            p.file_stem()
                .and_then(|s| s.to_str())
                .and_then(|s| s.rsplit('_').next())
                .and_then(|g| g.parse::<usize>().ok())
                .unwrap_or(0)
        })
}

Resume or Start Fresh

fn run_evolution(checkpoint_dir: &Path) -> Result<(), Box<dyn Error>> {
    let latest = find_latest_checkpoint(checkpoint_dir, "my_evolution");

    let (mut population, start_gen) = if let Some(path) = latest {
        println!("Resuming from {:?}", path);
        let ckpt: Checkpoint<RealVector> = load_checkpoint(&path)?;
        let pop = reconstruct_population(ckpt.population);
        (pop, ckpt.generation)
    } else {
        println!("Starting fresh");
        let pop = Population::random(100, &bounds, &mut rng);
        (pop, 0)
    };

    // Continue evolution from start_gen
    for gen in start_gen..max_generations {
        // ... evolution ...
    }

    Ok(())
}

Saving Algorithm State

For algorithms with internal state (like CMA-ES):

#[derive(Serialize, Deserialize)]
struct CmaEsCheckpoint {
    generation: usize,
    mean: Vec<f64>,
    sigma: f64,
    covariance: Vec<Vec<f64>>,
    // ... other CMA-ES state
}

impl CmaEsCheckpoint {
    fn from_cmaes(cmaes: &CmaEs) -> Self {
        Self {
            generation: cmaes.state.generation,
            mean: cmaes.state.mean.clone(),
            sigma: cmaes.state.sigma,
            covariance: cmaes.state.covariance.clone(),
        }
    }

    fn restore(&self) -> CmaEs {
        let mut cmaes = CmaEs::new(self.mean.clone(), self.sigma);
        cmaes.state.generation = self.generation;
        cmaes.state.covariance = self.covariance.clone();
        cmaes
    }
}

Error Handling

match load_checkpoint::<RealVector>(&path) {
    Ok(checkpoint) => {
        println!("Loaded successfully");
    }
    Err(CheckpointError::FileNotFound(path)) => {
        println!("Checkpoint not found: {:?}", path);
    }
    Err(CheckpointError::DeserializationFailed(err)) => {
        println!("Corrupted checkpoint: {}", err);
    }
    Err(e) => {
        println!("Unknown error: {}", e);
    }
}

Best Practices

1. Checkpoint Frequently Enough

// For long runs, checkpoint every ~5-10% of expected runtime
let interval = max_generations / 20;
manager.every(interval);

2. Verify Checkpoints

// After saving, verify it can be loaded
manager.save(&checkpoint)?;
let verified: Checkpoint<RealVector> = load_checkpoint(&manager.latest_path())?;
assert_eq!(verified.generation, checkpoint.generation);

3. Include Random State

For reproducible resumption, save RNG state:

use rand::SeedableRng;

#[derive(Serialize, Deserialize)]
struct FullCheckpoint<G> {
    evolution: Checkpoint<G>,
    rng_seed: u64, // Or full RNG state
}

4. Use Atomic Writes

Prevent corruption from interrupted saves:

// Write to temp file, then rename
let temp_path = path.with_extension("tmp");
write_checkpoint(&checkpoint, &temp_path)?;
std::fs::rename(temp_path, path)?;

Feature Flag

Checkpointing requires the checkpoint feature:

[dependencies]
fugue-evo = { version = "0.1", features = ["checkpoint"] }

Next Steps

Parallel Evolution

This guide shows how to speed up evolution using parallel computation.

Enabling Parallelism

Parallel evaluation is enabled by default with the parallel feature:

[dependencies]
fugue-evo = { version = "0.1", features = ["parallel"] }

Parallel Fitness Evaluation

The simplest form of parallelism: evaluate multiple individuals simultaneously.

Using SimpleGA

let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .population_size(100)
    .parallel(true)  // Enable parallel evaluation
    // ... other configuration
    .build()?
    .run(&mut rng)?;

Manual Parallel Evaluation

Using Rayon directly:

use rayon::prelude::*;

// Parallel evaluation
let fitnesses: Vec<f64> = population
    .par_iter()  // Parallel iterator
    .map(|individual| fitness.evaluate(individual.genome()))
    .collect();

// Update population with fitnesses
for (individual, fit) in population.iter_mut().zip(fitnesses) {
    individual.set_fitness(fit);
}

When Parallelism Helps

Good Candidates

| Scenario | Speedup Potential |
|----------|-------------------|
| Expensive fitness (>10ms) | High |
| Large population (>100) | Medium-High |
| Independent evaluations | High |
| Many CPU cores | Scales with cores |

Poor Candidates

| Scenario | Why |
|----------|-----|
| Very cheap fitness (<1ms) | Parallelization overhead dominates |
| Small population (<20) | Not enough work to distribute |
| Dependent evaluations | Can't parallelize |
| Memory-bound fitness | Cores compete for memory |

Tuning Parallelism

Thread Pool Size

// Set number of threads
rayon::ThreadPoolBuilder::new()
    .num_threads(4)
    .build_global()
    .unwrap();

Chunk Size

For fine-grained control:

use rayon::prelude::*;

// Process in chunks
population
    .par_chunks(10)  // Process 10 at a time
    .for_each(|chunk| {
        for individual in chunk {
            // Evaluate
        }
    });

Island Model Parallelism

Islands naturally parallelize across populations:

use rayon::prelude::*;

// Each island evolves independently
islands.par_iter_mut().for_each(|island| {
    island.evolve_generation(&mut thread_rng());
});

// Synchronous migration
migrate_between_islands(&mut islands);

Thread Safety Considerations

Fitness Functions

Your fitness function must be thread-safe (Send + Sync):

// Good: Stateless fitness
struct MyFitness;

impl Fitness<RealVector> for MyFitness {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        // Pure computation, no shared state
        genome.genes().iter().sum()
    }
}

// Good: Shared read-only data
struct DataDrivenFitness {
    data: Arc<Vec<f64>>,  // Shared immutable data
}

// Bad: Mutable shared state
struct BadFitness {
    counter: Cell<usize>,  // Not thread-safe!
}

Thread-Local State

For fitness functions needing mutable state:

use std::cell::RefCell;

thread_local! {
    static CACHE: RefCell<HashMap<u64, f64>> = RefCell::new(HashMap::new());
}

impl Fitness<RealVector> for CachedFitness {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        let key = hash(genome);

        CACHE.with(|cache| {
            if let Some(&cached) = cache.borrow().get(&key) {
                return cached;
            }

            let result = expensive_compute(genome);
            cache.borrow_mut().insert(key, result);
            result
        })
    }
}

Benchmarking Parallelism

Measure actual speedup:

use std::time::Instant;

fn benchmark_parallel_vs_sequential(population: &[Individual<RealVector>], fitness: &impl Fitness<RealVector>) {
    // Sequential
    let start = Instant::now();
    for ind in population {
        fitness.evaluate(ind.genome());
    }
    let sequential_time = start.elapsed();

    // Parallel
    let start = Instant::now();
    population.par_iter().for_each(|ind| {
        fitness.evaluate(ind.genome());
    });
    let parallel_time = start.elapsed();

    println!("Sequential: {:?}", sequential_time);
    println!("Parallel: {:?}", parallel_time);
    println!("Speedup: {:.2}x", sequential_time.as_secs_f64() / parallel_time.as_secs_f64());
}

GPU Acceleration

For very expensive computations, consider GPU:

// Pseudocode for GPU fitness
struct GpuFitness {
    device: CudaDevice,
    kernel: CompiledKernel,
}

impl GpuFitness {
    fn evaluate_batch(&self, genomes: &[RealVector]) -> Vec<f64> {
        // Copy genomes to GPU
        let gpu_data = self.device.copy_to_device(genomes);

        // Run kernel
        let gpu_results = self.kernel.run(&gpu_data);

        // Copy results back
        self.device.copy_from_device(&gpu_results)
    }
}

Parallel Evolution Patterns

Pattern 1: Parallel Evaluation Only

for gen in 0..max_generations {
    // Sequential selection and reproduction
    let offspring = create_offspring(&population, &mut rng);

    // Parallel evaluation
    offspring.par_iter_mut().for_each(|ind| {
        let fit = fitness.evaluate(ind.genome());
        ind.set_fitness(fit);
    });

    population = offspring;
}

Pattern 2: Multiple Independent Runs

let results: Vec<_> = (0..num_runs)
    .into_par_iter()
    .map(|run| {
        let mut rng = StdRng::seed_from_u64(run as u64);
        run_evolution(&fitness, &mut rng)
    })
    .collect();

let best = results.iter()
    .max_by(|a, b| a.best_fitness.partial_cmp(&b.best_fitness).unwrap());

Pattern 3: Speculative Execution

Run multiple strategies, keep best:

let strategies = vec![
    Strategy::HighMutation,
    Strategy::HighCrossover,
    Strategy::Balanced,
];

let results: Vec<_> = strategies
    .par_iter()
    .map(|strategy| {
        run_with_strategy(&fitness, strategy, &mut thread_rng())
    })
    .collect();

Common Issues

Race Conditions

Symptom: Non-deterministic results, crashes

Solution: Ensure thread-safe fitness functions

High Overhead

Symptom: Parallel slower than sequential

Solution:

  • Increase population size
  • Use more expensive fitness
  • Reduce thread count

Memory Pressure

Symptom: Slowdown with many threads

Solution:

  • Limit thread count
  • Reduce population size
  • Stream results instead of collecting

Next Steps

Multi-Objective Optimization

This guide shows how to optimize problems with multiple conflicting objectives using NSGA-II.

When to Use Multi-Objective Optimization

Use multi-objective optimization when:

  • You have 2+ objectives that conflict
  • You need the full trade-off surface (Pareto front)
  • No single "best" solution exists
  • Stakeholders need to choose among alternatives

Examples:

  • Cost vs. quality
  • Speed vs. accuracy
  • Performance vs. power consumption

Pareto Dominance

Solution A dominates solution B if:

  • A is at least as good as B in all objectives
  • A is strictly better than B in at least one objective

The Pareto front contains all non-dominated solutions.

Objective 2
    ↑
    │   ★ ← Non-dominated (Pareto front)
    │  ★
    │ ★   ● ← Dominated
    │★      ●
    │         ●
    └──────────→ Objective 1

★ = Pareto-optimal solutions
● = Dominated solutions
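
In code, the dominance test is a pairwise comparison of objective vectors. A standalone sketch under the maximization convention used throughout this guide, written as a free function over raw objective slices (the library's own `dominates` method is assumed to behave the same way):

```rust
/// Returns true if `a` dominates `b` under maximization:
/// `a` is at least as good in every objective and strictly
/// better in at least one.
fn dominates(a: &[f64], b: &[f64]) -> bool {
    assert_eq!(a.len(), b.len());
    let mut strictly_better = false;
    for (x, y) in a.iter().zip(b) {
        if x < y {
            // Worse in some objective: cannot dominate
            return false;
        }
        if x > y {
            strictly_better = true;
        }
    }
    strictly_better
}
```

Note that dominance is a partial order: two solutions can be mutually non-dominated, which is exactly what makes the Pareto front a set rather than a single point.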

Basic NSGA-II Usage

use fugue_evo::prelude::*;

// Define multi-objective fitness
struct BiObjective;

impl Fitness<RealVector> for BiObjective {
    type Value = ParetoFitness;

    fn evaluate(&self, genome: &RealVector) -> ParetoFitness {
        let genes = genome.genes();

        // Objective 1: Minimize sum of squares
        let obj1 = -genes.iter().map(|x| x * x).sum::<f64>();

        // Objective 2: Minimize distance from (1,1,...)
        let obj2 = -genes.iter().map(|x| (x - 1.0).powi(2)).sum::<f64>();

        ParetoFitness::new(vec![obj1, obj2])
    }
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut rng = StdRng::seed_from_u64(42);
    let bounds = MultiBounds::symmetric(5.0, 10);
    let fitness = BiObjective;

    let result = Nsga2Builder::<RealVector, _, _, _, _>::new()
        .population_size(100)
        .bounds(bounds)
        .selection(TournamentSelection::new(2))
        .crossover(SbxCrossover::new(20.0))
        .mutation(PolynomialMutation::new(20.0))
        .fitness(fitness)
        .max_generations(200)
        .build()?
        .run(&mut rng)?;

    // Access Pareto front
    println!("Pareto front size: {}", result.pareto_front.len());
    for (i, solution) in result.pareto_front.iter().enumerate() {
        println!(
            "Solution {}: objectives = {:?}",
            i, solution.fitness_value().objectives()
        );
    }

    Ok(())
}

ZDT Test Problems

Fugue-evo includes standard ZDT test problems:

// ZDT1: Convex Pareto front
let fitness = Zdt1::new(30);

// ZDT2: Non-convex Pareto front
let fitness = Zdt2::new(30);

// ZDT3: Disconnected Pareto front
let fitness = Zdt3::new(30);
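
For reference, ZDT1 is simple to state directly. A standalone sketch in its standard minimization form over x ∈ [0, 1]^n with n ≥ 2 (how the library's `Zdt1` maps this onto its maximization convention is not shown here and is an assumption):

```rust
/// ZDT1 objectives in their standard minimization form, x_i ∈ [0, 1].
/// The Pareto-optimal set is x_1 ∈ [0, 1] with all other x_i = 0,
/// giving the convex front f2 = 1 - sqrt(f1).
fn zdt1(x: &[f64]) -> (f64, f64) {
    let n = x.len();
    let f1 = x[0];
    // g = 1 on the Pareto set, > 1 elsewhere
    let g = 1.0 + 9.0 * x[1..].iter().sum::<f64>() / (n as f64 - 1.0);
    let f2 = g * (1.0 - (f1 / g).sqrt());
    (f1, f2)
}
```

Evaluating `zdt1` along the Pareto set is a quick sanity check for any multi-objective setup: the returned points should trace f2 = 1 - sqrt(f1).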

Analyzing Results

Pareto Front Coverage

// Spread metric: how well-distributed are solutions?
let spread = compute_spread(&result.pareto_front);

// Hypervolume: size of the objective-space region dominated by the front,
// measured against a reference point that every front member dominates.
// With the maximized, negative-valued objectives used here, that point
// must be worse than all solutions, not (0, 0).
let reference_point = vec![-1000.0, -1000.0];
let hypervolume = compute_hypervolume(&result.pareto_front, &reference_point);
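
For two objectives, hypervolume reduces to a sweep over the sorted front. A standalone sketch on raw (obj1, obj2) pairs, assuming maximization and that every front point dominates the reference point; `compute_hypervolume` above is assumed to generalize this:

```rust
/// 2-D hypervolume dominated by `front` relative to `reference`,
/// both objectives maximized. Assumes `front` is mutually
/// non-dominated and every point dominates `reference`.
fn hypervolume_2d(front: &[(f64, f64)], reference: (f64, f64)) -> f64 {
    let mut pts: Vec<(f64, f64)> = front.to_vec();
    // Sort by obj1 descending; along a Pareto front obj2 then increases
    pts.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
    let mut volume = 0.0;
    let mut prev_y = reference.1;
    for &(x, y) in &pts {
        // Add the rectangle this point contributes above the height
        // already covered by points with larger obj1
        volume += (x - reference.0) * (y - prev_y);
        prev_y = y;
    }
    volume
}
```

A growing hypervolume across generations is a convenient scalar signal that the front is both advancing and spreading.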

Visualizing Trade-offs

// Extract objectives for plotting
let points: Vec<(f64, f64)> = result.pareto_front.iter()
    .map(|sol| {
        let objs = sol.fitness_value().objectives();
        (objs[0], objs[1])
    })
    .collect();

// Export to CSV for plotting
let mut file = File::create("pareto_front.csv")?;
writeln!(file, "obj1,obj2")?;
for (o1, o2) in points {
    writeln!(file, "{},{}", o1, o2)?;
}

Decision Making

After obtaining the Pareto front, choose a solution:

Weighted Sum

fn weighted_solution(pareto_front: &[Individual<RealVector>], weights: &[f64]) -> &Individual<RealVector> {
    pareto_front.iter()
        .max_by(|a, b| {
            let score_a: f64 = a.fitness_value().objectives().iter()
                .zip(weights)
                .map(|(o, w)| o * w)
                .sum();
            let score_b: f64 = b.fitness_value().objectives().iter()
                .zip(weights)
                .map(|(o, w)| o * w)
                .sum();
            score_a.partial_cmp(&score_b).unwrap()
        })
        .unwrap()
}

// Equal weight to both objectives
let balanced = weighted_solution(&result.pareto_front, &[0.5, 0.5]);

Knee Point

Find the "knee" - maximum curvature in Pareto front:

fn find_knee(pareto_front: &[Individual<RealVector>]) -> &Individual<RealVector> {
    // Simplified: find point furthest from line connecting extremes
    let min_obj1 = pareto_front.iter()
        .min_by(|a, b| {
            a.fitness_value().objectives()[0]
                .partial_cmp(&b.fitness_value().objectives()[0])
                .unwrap()
        })
        .unwrap();
    let min_obj2 = pareto_front.iter()
        .min_by(|a, b| {
            a.fitness_value().objectives()[1]
                .partial_cmp(&b.fitness_value().objectives()[1])
                .unwrap()
        })
        .unwrap();

    // Find point with maximum perpendicular distance to line
    pareto_front.iter()
        .max_by(|a, b| {
            let dist_a = perpendicular_distance(a, min_obj1, min_obj2);
            let dist_b = perpendicular_distance(b, min_obj1, min_obj2);
            dist_a.partial_cmp(&dist_b).unwrap()
        })
        .unwrap()
}
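
The `perpendicular_distance` helper above is left undefined; one way to write it on plain (obj1, obj2) pairs extracted from each solution (a hypothetical standalone version, not a library function):

```rust
/// Perpendicular distance from point `p` to the line through `a` and `b`,
/// all given as (obj1, obj2) pairs.
fn perpendicular_distance(p: (f64, f64), a: (f64, f64), b: (f64, f64)) -> f64 {
    let (dx, dy) = (b.0 - a.0, b.1 - a.1);
    let norm = (dx * dx + dy * dy).hypot(0.0).sqrt().max(0.0); // length of a->b
    let norm = (dx * dx + dy * dy).sqrt();
    if norm == 0.0 {
        // Degenerate line (extremes coincide): fall back to distance from `a`
        return ((p.0 - a.0).powi(2) + (p.1 - a.1).powi(2)).sqrt();
    }
    // Standard cross-product formula for point-to-line distance
    (dy * (p.0 - a.0) - dx * (p.1 - a.1)).abs() / norm
}
```

The knee point found this way is scale-sensitive, so normalizing both objectives to [0, 1] before computing distances is usually worthwhile.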

Constraint-Based Selection

fn select_with_constraints(
    pareto_front: &[Individual<RealVector>],
    obj1_threshold: f64,
) -> Vec<&Individual<RealVector>> {
    pareto_front.iter()
        .filter(|sol| sol.fitness_value().objectives()[0] >= obj1_threshold)
        .collect()
}

Constrained Multi-Objective

Handle constraints with penalty or constraint-dominance:

impl Fitness<RealVector> for ConstrainedMultiObj {
    type Value = ParetoFitness;

    fn evaluate(&self, genome: &RealVector) -> ParetoFitness {
        let genes = genome.genes();

        // Objectives
        let obj1 = compute_obj1(genes);
        let obj2 = compute_obj2(genes);

        // Constraint violation
        let violation = compute_constraint_violation(genes);

        if violation > 0.0 {
            // Infeasible: assign poor objectives
            ParetoFitness::new(vec![f64::NEG_INFINITY, f64::NEG_INFINITY])
                .with_constraint_violation(violation)
        } else {
            ParetoFitness::new(vec![obj1, obj2])
        }
    }
}

Many-Objective Optimization

For 3+ objectives, NSGA-II may struggle. Consider:

Reference Point Methods

// Define reference directions
let reference_directions = generate_reference_directions(3, 12);

// Use reference-point based selection
let result = Nsga3Builder::new()
    .reference_directions(reference_directions)
    // ...
    .run(&mut rng)?;

Decomposition

Convert to multiple single-objective problems:

let weight_vectors = vec![
    vec![1.0, 0.0, 0.0],
    vec![0.0, 1.0, 0.0],
    vec![0.0, 0.0, 1.0],
    vec![0.33, 0.33, 0.34],
    // ... more weight combinations
];

// Run separate optimizations
let results: Vec<_> = weight_vectors.par_iter()
    .map(|weights| {
        let scalarized = ScalarizedFitness::new(&multi_obj, weights);
        run_single_objective(&scalarized, &mut thread_rng())
    })
    .collect();

Performance Tips

Population Sizing

More objectives need larger populations:

| Objectives | Recommended Population |
|------------|------------------------|
| 2 | 100-200 |
| 3 | 200-500 |
| 4+ | 500-1000+ |

Archive Strategy

Maintain external archive of non-dominated solutions:

let mut archive: Vec<Individual<RealVector>> = Vec::new();

for gen in 0..max_generations {
    // Evolution step...

    // Update archive with new non-dominated solutions
    for solution in population.iter() {
        if !is_dominated_by_archive(solution, &archive) {
            // Remove solutions dominated by new one
            archive.retain(|a| !solution.dominates(a));
            archive.push(solution.clone());
        }
    }
}
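
The archive update above relies on `is_dominated_by_archive` and `dominates`, which this guide does not define. A standalone sketch of the same bookkeeping over raw objective vectors (maximization convention; helper names are illustrative):

```rust
/// Insert `objs` into `archive`, keeping the archive non-dominated:
/// ignore the candidate if an archived vector dominates it, otherwise
/// evict everything the candidate dominates and add it.
fn update_archive(archive: &mut Vec<Vec<f64>>, objs: Vec<f64>) {
    let dominated = |a: &[f64], b: &[f64]| {
        a.iter().zip(b).all(|(x, y)| x >= y) && a.iter().zip(b).any(|(x, y)| x > y)
    };
    // Skip candidates already covered by the archive
    if archive.iter().any(|a| dominated(a, &objs)) {
        return;
    }
    // Remove archived vectors the candidate dominates
    archive.retain(|a| !dominated(&objs, a));
    archive.push(objs);
}
```

Unbounded archives can grow large on continuous problems; capping the archive size and pruning by crowding distance is a common refinement.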

Next Steps

WASM & Browser Usage

This guide shows how to use fugue-evo in web applications via WebAssembly.

Overview

The fugue-evo-wasm package provides JavaScript/TypeScript bindings for running evolutionary optimization in the browser or Node.js.

Installation

NPM

npm install fugue-evo-wasm

From Source

cd crates/fugue-evo-wasm
wasm-pack build --target web

Basic Usage

JavaScript

import init, { SimpleGAOptimizer, OptimizationConfig } from 'fugue-evo-wasm';

async function runOptimization() {
  // Initialize WASM module
  await init();

  // Define configuration
  const config = new OptimizationConfig();
  config.population_size = 100;
  config.max_generations = 200;
  config.dimensions = 10;
  config.bounds_min = -5.12;
  config.bounds_max = 5.12;

  // Create optimizer
  const optimizer = new SimpleGAOptimizer(config);

  // Define fitness function
  const fitness = (genes) => {
    // Sphere function: minimize sum of squares
    return -genes.reduce((sum, x) => sum + x * x, 0);
  };

  // Run optimization
  const result = optimizer.run(fitness);

  console.log('Best fitness:', result.best_fitness);
  console.log('Best solution:', result.best_genome);
  console.log('Generations:', result.generations);
}

runOptimization();

TypeScript

import init, {
  SimpleGAOptimizer,
  OptimizationConfig,
  OptimizationResult
} from 'fugue-evo-wasm';

async function runOptimization(): Promise<void> {
  await init();

  // wasm-bindgen classes must be constructed, not written as object literals
  const config = new OptimizationConfig();
  config.population_size = 100;
  config.max_generations = 200;
  config.dimensions = 10;
  config.bounds_min = -5.12;
  config.bounds_max = 5.12;
  config.mutation_rate = 0.1;
  config.crossover_rate = 0.9;

  const optimizer = new SimpleGAOptimizer(config);

  const fitness = (genes: Float64Array): number => {
    let sum = 0;
    for (let i = 0; i < genes.length; i++) {
      sum += genes[i] * genes[i];
    }
    return -sum;
  };

  const result: OptimizationResult = optimizer.run(fitness);

  console.log(`Best fitness: ${result.best_fitness}`);
}

Step-by-Step Evolution

For UI updates during evolution:

import init, { SimpleGAOptimizer, OptimizationConfig } from 'fugue-evo-wasm';

async function interactiveEvolution() {
  await init();

  const config = new OptimizationConfig();
  config.population_size = 50;
  config.dimensions = 5;

  const optimizer = new SimpleGAOptimizer(config);

  const fitness = (genes) => -genes.reduce((s, x) => s + x * x, 0);

  // Initialize
  optimizer.initialize(fitness);

  // Step through evolution
  for (let gen = 0; gen < 100; gen++) {
    const state = optimizer.step(fitness);

    // Update UI
    document.getElementById('generation').textContent = gen;
    document.getElementById('best-fitness').textContent = state.best_fitness.toFixed(6);

    // Allow UI to update
    await new Promise(resolve => setTimeout(resolve, 10));
  }

  const result = optimizer.get_result();
  console.log('Final result:', result);
}

Web Worker Integration

For heavy computations, use a Web Worker:

main.js

const worker = new Worker('optimizer-worker.js');

worker.onmessage = (event) => {
  const { type, data } = event.data;

  switch (type) {
    case 'progress':
      updateProgress(data.generation, data.best_fitness);
      break;
    case 'complete':
      displayResult(data.result);
      break;
  }
};

function startOptimization(config) {
  worker.postMessage({ type: 'start', config });
}

optimizer-worker.js

importScripts('fugue_evo_wasm.js');

let optimizer = null;

self.onmessage = async (event) => {
  const { type, config } = event.data;

  if (type === 'start') {
    await wasm_bindgen('fugue_evo_wasm_bg.wasm');

    const optConfig = new wasm_bindgen.OptimizationConfig();
    Object.assign(optConfig, config);

    optimizer = new wasm_bindgen.SimpleGAOptimizer(optConfig);

    const fitness = (genes) => {
      // Your fitness function
      return -genes.reduce((s, x) => s + x * x, 0);
    };

    optimizer.initialize(fitness);

    for (let gen = 0; gen < config.max_generations; gen++) {
      const state = optimizer.step(fitness);

      // Report progress every 10 generations
      if (gen % 10 === 0) {
        self.postMessage({
          type: 'progress',
          data: { generation: gen, best_fitness: state.best_fitness }
        });
      }
    }

    self.postMessage({
      type: 'complete',
      data: { result: optimizer.get_result() }
    });
  }
};

Interactive Evolution in Browser

For human-in-the-loop optimization:

import init, { InteractiveOptimizer, EvaluationMode } from 'fugue-evo-wasm';

class InteractiveEvolutionUI {
  constructor() {
    this.optimizer = null;
    this.currentCandidates = [];
  }

  async initialize() {
    await init();

    this.optimizer = new InteractiveOptimizer({
      population_size: 12,
      evaluation_mode: EvaluationMode.Rating,
      batch_size: 4,
    });

    this.optimizer.initialize();
    this.showNextBatch();
  }

  showNextBatch() {
    const request = this.optimizer.get_evaluation_request();

    if (request.type === 'complete') {
      this.showResults(request.result);
      return;
    }

    this.currentCandidates = request.candidates;
    this.renderCandidates(request.candidates);
  }

  renderCandidates(candidates) {
    const container = document.getElementById('candidates');
    container.innerHTML = '';

    candidates.forEach((candidate, index) => {
      const div = document.createElement('div');
      div.className = 'candidate';
      div.innerHTML = `
        <div class="visualization">${this.visualize(candidate.genome)}</div>
        <input type="range" min="1" max="10" value="5"
               data-id="${candidate.id}" class="rating">
      `;
      container.appendChild(div);
    });
  }

  submitRatings() {
    const ratings = [];
    document.querySelectorAll('.rating').forEach(input => {
      ratings.push({
        id: parseInt(input.dataset.id),
        rating: parseFloat(input.value)
      });
    });

    this.optimizer.provide_ratings(ratings);
    this.showNextBatch();
  }

  visualize(genome) {
    // Convert genome to visual representation
    // Array.from works even when genome is a Float64Array,
    // whose own .map() can only yield numbers.
    const colors = Array.from(genome, g => {
      const hue = ((g + 5) / 10) * 360;
      return `hsl(${hue}, 70%, 50%)`;
    });
    return colors.map(c => `<span style="background:${c}">■</span>`).join('');
  }

  showResults(result) {
    console.log('Evolution complete!', result);
  }
}

Performance Considerations

Memory Management

WASM memory is limited, and objects exported from Rust hold allocations that JavaScript's garbage collector may not reclaim promptly. For large populations, free objects explicitly when done:

// Free memory when done
optimizer.free();
config.free();

Typed Arrays

Use typed arrays for efficiency:

// Efficient: Float64Array
const typedGenes = new Float64Array([1.0, 2.0, 3.0]);

// Less efficient: regular array (converted internally)
const plainGenes = [1.0, 2.0, 3.0];

Batch Fitness Evaluation

Reduce JS↔WASM calls:

// Less efficient: evaluate one at a time
for (const individual of population) {
  const fitness = evaluateFitness(individual);
}

// More efficient: batch evaluation
const fitnesses = evaluateBatch(population);

Framework Integration

React

import { useEffect, useState } from 'react';
import init, { SimpleGAOptimizer, OptimizationConfig } from 'fugue-evo-wasm';

function EvolutionComponent() {
  const [result, setResult] = useState(null);
  const [progress, setProgress] = useState(0);

  useEffect(() => {
    let cancelled = false;

    async function runEvolution() {
      await init();

      const config = new OptimizationConfig();
      config.population_size = 100;
      config.dimensions = 10;
      config.max_generations = 100;

      const optimizer = new SimpleGAOptimizer(config);
      const fitness = (genes) => -genes.reduce((s, x) => s + x * x, 0);

      optimizer.initialize(fitness);

      for (let gen = 0; gen < 100 && !cancelled; gen++) {
        optimizer.step(fitness);
        setProgress(gen + 1);
        await new Promise(r => setTimeout(r, 0)); // Yield to React
      }

      if (!cancelled) {
        setResult(optimizer.get_result());
      }

      optimizer.free();
      config.free();
    }

    runEvolution();
    return () => { cancelled = true; };
  }, []);

  return (
    <div>
      <p>Progress: {progress}/100</p>
      {result && <p>Best fitness: {result.best_fitness}</p>}
    </div>
  );
}

Vue

<template>
  <div>
    <p>Progress: {{ progress }}/{{ maxGenerations }}</p>
    <p v-if="result">Best fitness: {{ result.best_fitness }}</p>
  </div>
</template>

<script setup>
import { ref, onMounted, onUnmounted } from 'vue';
import init, { SimpleGAOptimizer, OptimizationConfig } from 'fugue-evo-wasm';

const progress = ref(0);
const maxGenerations = ref(100);
const result = ref(null);
let cancelled = false;

onMounted(async () => {
  await init();

  const config = new OptimizationConfig();
  config.population_size = 100;
  config.dimensions = 10;
  config.max_generations = maxGenerations.value;

  const optimizer = new SimpleGAOptimizer(config);
  const fitness = (genes) => -genes.reduce((s, x) => s + x * x, 0);

  optimizer.initialize(fitness);

  for (let gen = 0; gen < maxGenerations.value && !cancelled; gen++) {
    optimizer.step(fitness);
    progress.value = gen + 1;
    await new Promise(r => setTimeout(r, 0));
  }

  if (!cancelled) {
    result.value = optimizer.get_result();
  }

  optimizer.free();
  config.free();
});

onUnmounted(() => {
  cancelled = true;
});
</script>

Limitations

  • No file I/O: Can't use checkpointing
  • Single-threaded: No Rayon parallelism
  • Memory limits: ~4GB maximum
  • Fitness callback overhead: JS calls add latency

Next Steps

Algorithms Reference

This section provides detailed reference documentation for all optimization algorithms in fugue-evo.

Algorithm Overview

| Algorithm | Module | Use Case |
|---|---|---|
| SimpleGA | algorithms::simple_ga | General-purpose optimization |
| CMA-ES | algorithms::cmaes | Continuous non-separable optimization |
| NSGA-II | algorithms::nsga2 | Multi-objective optimization |
| Island Model | algorithms::island | Parallel multimodal optimization |
| EDA/UMDA | algorithms::eda | Probabilistic model-based optimization |

Common Patterns

Builder Pattern

All algorithms use type-safe builders:

let result = SomeAlgorithmBuilder::new()
    .population_size(100)
    .max_generations(200)
    // ... configuration
    .build()?
    .run(&mut rng)?;

Result Structure

Most algorithms return an EvolutionResult:

pub struct EvolutionResult<G, F> {
    pub best_genome: G,
    pub best_fitness: F,
    pub generations: usize,
    pub evaluations: usize,
    pub stats: EvolutionStats,
}

Termination

Algorithms support pluggable termination criteria:

.max_generations(200)
// or
.termination(TargetFitness::new(-0.001))
// or
.termination(AnyOf::new(vec![
    Box::new(MaxGenerations::new(500)),
    Box::new(FitnessStagnation::new(50)),
]))

Algorithm Selection

                  Problem Type
                       │
    ┌──────────────────┼──────────────────┐
    │                  │                  │
Multi-Objective    Continuous        Discrete/Other
    │                  │                  │
    ▼                  ▼                  ▼
 NSGA-II          Separable?          SimpleGA
                       │
              ┌────────┼────────┐
              │ Yes    │        │ No
              ▼        │        ▼
          SimpleGA     │     CMA-ES
           or EDA      │
                       │
                  Multimodal?
                       │
              ┌────────┼────────┐
              │ Yes            │ No
              ▼                ▼
        Island Model       SimpleGA

See Also

SimpleGA Reference

The Simple Genetic Algorithm is a flexible, general-purpose evolutionary optimization algorithm.

Module

use fugue_evo::algorithms::simple_ga::{SimpleGA, SimpleGABuilder, SimpleGAConfig};

Builder API

Required Configuration

| Method | Type | Description |
|---|---|---|
| bounds(bounds) | MultiBounds | Search space bounds |
| selection(sel) | impl SelectionOperator | Selection operator |
| crossover(cx) | impl CrossoverOperator | Crossover operator |
| mutation(mut) | impl MutationOperator | Mutation operator |
| fitness(fit) | impl Fitness | Fitness function |

Optional Configuration

| Method | Type | Default | Description |
|---|---|---|---|
| population_size(n) | usize | 100 | Population size |
| max_generations(n) | usize | 100 | Max generations |
| elitism(b) | bool | false | Enable elitism |
| elite_count(n) | usize | 1 | Number of elites |
| parallel(b) | bool | false | Parallel evaluation |
| initial_population(pop) | Vec<G> | Random | Custom initial population |

Usage

Basic Example

use fugue_evo::prelude::*;

let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .population_size(100)
    .bounds(MultiBounds::symmetric(5.12, 10))
    .selection(TournamentSelection::new(3))
    .crossover(SbxCrossover::new(20.0))
    .mutation(PolynomialMutation::new(20.0))
    .fitness(Sphere::new(10))
    .max_generations(200)
    .elitism(true)
    .build()?
    .run(&mut rng)?;

With Custom Termination

let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    // ... configuration
    .termination(AnyOf::new(vec![
        Box::new(MaxGenerations::new(1000)),
        Box::new(TargetFitness::new(-0.001)),
        Box::new(FitnessStagnation::new(50)),
    ]))
    .build()?
    .run(&mut rng)?;

Step-by-Step Execution

let mut ga = SimpleGABuilder::new()
    // ... configuration
    .build()?;

ga.initialize(&mut rng)?;

while !ga.should_terminate() {
    ga.step(&mut rng)?;

    // Access current state
    println!("Generation {}: best = {:.6}",
        ga.generation(),
        ga.best_fitness().unwrap_or(0.0));
}

let result = ga.into_result();

Configuration Struct

pub struct SimpleGAConfig {
    pub population_size: usize,
    pub elitism: bool,
    pub elite_count: usize,
    pub parallel: bool,
}

Generic Parameters

The builder has extensive generics for type safety:

SimpleGABuilder::<G, F, S, C, M, Fit, Term>

| Parameter | Constraint | Description |
|---|---|---|
| G | EvolutionaryGenome | Genome type |
| F | FitnessValue | Fitness value type |
| S | SelectionOperator<G> | Selection operator |
| C | CrossoverOperator<G> | Crossover operator |
| M | MutationOperator<G> | Mutation operator |
| Fit | Fitness<G, Value=F> | Fitness function |
| Term | TerminationCriterion | Termination criterion |

Algorithm Flow

┌─────────────────────────────────────────┐
│           SimpleGA Flow                  │
│                                          │
│  1. Initialize random population         │
│  2. Evaluate fitness                     │
│  3. while not terminated:                │
│     a. Select parents                    │
│     b. Apply crossover                   │
│     c. Apply mutation                    │
│     d. Evaluate offspring                │
│     e. Replace population (with elitism) │
│     f. Update statistics                 │
│  4. Return best solution                 │
└─────────────────────────────────────────┘
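The loop above can be sketched in plain Rust. This is an illustrative stand-in, not fugue-evo's implementation: the genome is a bare `Vec<f64>`, the RNG a tiny hand-rolled LCG, and the operators (binary tournament, gene averaging, uniform perturbation) are deliberately simple.

```rust
// Minimal generational GA matching the flow above: tournament selection,
// averaging crossover, perturbation mutation, and one-individual elitism.
struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        // Simple LCG; adequate for a sketch, not for real experiments.
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

fn sphere_fitness(g: &[f64]) -> f64 {
    // Maximize the negated sum of squares (optimum 0.0 at the origin).
    -g.iter().map(|x| x * x).sum::<f64>()
}

fn run_ga(dims: usize, pop_size: usize, gens: usize, rng: &mut Lcg) -> f64 {
    // 1. Initialize random population in [-5, 5]^dims, 2. evaluate.
    let mut pop: Vec<Vec<f64>> = (0..pop_size)
        .map(|_| (0..dims).map(|_| rng.next_f64() * 10.0 - 5.0).collect())
        .collect();
    let mut fits: Vec<f64> = pop.iter().map(|g| sphere_fitness(g)).collect();

    for _ in 0..gens {
        let best_idx = (0..pop_size)
            .max_by(|&a, &b| fits[a].partial_cmp(&fits[b]).unwrap())
            .unwrap();
        let mut next = vec![pop[best_idx].clone()]; // e. elitism: carry the best

        while next.len() < pop_size {
            // a. binary tournament selection
            let pick = |rng: &mut Lcg| {
                let i = (rng.next_f64() * pop_size as f64) as usize % pop_size;
                let j = (rng.next_f64() * pop_size as f64) as usize % pop_size;
                if fits[i] >= fits[j] { i } else { j }
            };
            let p1 = pick(rng);
            let p2 = pick(rng);
            // b. crossover: per-gene average; c. mutation: small perturbation
            let child: Vec<f64> = (0..dims)
                .map(|k| (pop[p1][k] + pop[p2][k]) / 2.0 + (rng.next_f64() - 0.5) * 0.2)
                .collect();
            next.push(child);
        }
        pop = next;
        fits = pop.iter().map(|g| sphere_fitness(g)).collect(); // d. evaluate offspring
    }
    fits.iter().cloned().fold(f64::NEG_INFINITY, f64::max)
}

fn main() {
    let mut rng = Lcg(42);
    let best = run_ga(5, 40, 100, &mut rng);
    println!("best fitness after 100 generations: {best:.4}");
}
```

Because step (e) always carries the elite forward, the best fitness in the population is non-decreasing from one generation to the next.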

Error Handling

let result = SimpleGABuilder::new()
    .population_size(100)
    // Missing required configuration
    .build(); // Returns Err(ConfigurationError)

match result {
    Ok(ga) => { /* run */ }
    Err(e) => eprintln!("Configuration error: {}", e),
}

Performance Tips

  1. Population size: Start with 100, increase for multimodal problems
  2. Selection pressure: Higher tournament size = faster convergence
  3. Elitism: Almost always beneficial
  4. Parallelism: Enable for expensive fitness functions

See Also

CMA-ES Reference

Covariance Matrix Adaptation Evolution Strategy for continuous optimization.

Module

use fugue_evo::algorithms::cmaes::{CmaEs, CmaEsState, CmaEsFitness};

Constructor

pub fn new(initial_mean: Vec<f64>, initial_sigma: f64) -> Self

| Parameter | Description |
|---|---|
| initial_mean | Starting point (center of search distribution) |
| initial_sigma | Initial step size (standard deviation) |

Configuration Methods

| Method | Description |
|---|---|
| with_bounds(bounds) | Set search space bounds |
| with_population_size(n) | Override default population size |

Fitness Trait

CMA-ES uses a different fitness trait (minimization):

pub trait CmaEsFitness: Send + Sync {
    /// Evaluate solution - return value to MINIMIZE
    fn evaluate(&self, x: &RealVector) -> f64;
}

Usage

Basic Example

use fugue_evo::prelude::*;

struct MyFitness;

impl CmaEsFitness for MyFitness {
    fn evaluate(&self, x: &RealVector) -> f64 {
        // Return value to minimize
        x.genes().iter().map(|g| g * g).sum()
    }
}

let mut cmaes = CmaEs::new(vec![0.0; 10], 0.5)
    .with_bounds(MultiBounds::symmetric(5.0, 10));

let best = cmaes.run_generations(&MyFitness, 1000, &mut rng)?;

println!("Best fitness: {}", best.fitness_value());

Step-by-Step

let mut cmaes = CmaEs::new(vec![0.0; 10], 1.0);

for _ in 0..100 {
    cmaes.step(&fitness, &mut rng)?;

    println!("Generation {}: sigma = {:.6}",
        cmaes.state.generation,
        cmaes.state.sigma);

    // Early stopping
    if cmaes.state.sigma < 1e-10 {
        break;
    }
}

State Structure

pub struct CmaEsState {
    pub generation: usize,
    pub evaluations: usize,
    pub mean: Vec<f64>,
    pub sigma: f64,
    pub covariance: Vec<Vec<f64>>,
    // ... internal adaptation parameters
}

Algorithm Parameters

CMA-ES automatically sets most parameters based on problem dimension:

| Parameter | Default | Formula |
|---|---|---|
| λ (population) | auto | 4 + floor(3 ln(n)) |
| μ (parents) | auto | λ / 2 |
| c_σ | auto | (μ_eff + 2) / (n + μ_eff + 5) |
| d_σ | auto | 1 + 2 max(0, √((μ_eff-1)/(n+1)) - 1) + c_σ |
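The default population size formula can be computed directly; a one-line sketch (the function name is ours, not fugue-evo's):

```rust
/// Default CMA-ES population size: λ = 4 + floor(3 ln n).
fn default_lambda(n: usize) -> usize {
    4 + (3.0 * (n as f64).ln()).floor() as usize
}

fn main() {
    // λ grows only logarithmically with problem dimension.
    println!("λ(10) = {}, λ(100) = {}", default_lambda(10), default_lambda(100));
}
```

Even in 100 dimensions the default population is only 17, which is why CMA-ES is sample-efficient on smooth continuous problems.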

Convergence Criteria

CMA-ES converges when:

  • sigma becomes very small (< 1e-12)
  • Condition number of covariance matrix is too high
  • No improvement for many generations

Memory Usage

CMA-ES stores a full covariance matrix:

| Dimensions | Memory (approx) |
|---|---|
| 10 | ~1 KB |
| 100 | ~80 KB |
| 1000 | ~8 MB |
| 10000 | ~800 MB |

For high dimensions (>1000), consider alternatives.

Best Practices

  1. Initial sigma: Should cover ~1/3 of the search range
  2. Initial mean: Start near expected optimum if known
  3. Bounds: Use soft bounds (CMA-ES handles them gracefully)
  4. Restarts: For multimodal problems, use multiple restarts

See Also

NSGA-II Reference

Non-dominated Sorting Genetic Algorithm II for multi-objective optimization.

Module

use fugue_evo::algorithms::nsga2::{Nsga2, Nsga2Builder, Nsga2Result};
use fugue_evo::fitness::ParetoFitness;

Builder API

Required Configuration

| Method | Type | Description |
|---|---|---|
| bounds(bounds) | MultiBounds | Search space bounds |
| selection(sel) | impl SelectionOperator | Selection operator |
| crossover(cx) | impl CrossoverOperator | Crossover operator |
| mutation(mut) | impl MutationOperator | Mutation operator |
| fitness(fit) | impl Fitness<Value=ParetoFitness> | Multi-objective fitness |

Optional Configuration

| Method | Type | Default | Description |
|---|---|---|---|
| population_size(n) | usize | 100 | Population size |
| max_generations(n) | usize | 100 | Max generations |

ParetoFitness

Multi-objective fitness values:

pub struct ParetoFitness {
    objectives: Vec<f64>,
    constraint_violation: Option<f64>,
}

impl ParetoFitness {
    pub fn new(objectives: Vec<f64>) -> Self;
    pub fn with_constraint_violation(self, violation: f64) -> Self;
    pub fn objectives(&self) -> &[f64];
    pub fn dominates(&self, other: &Self) -> bool;
}

Usage

Basic Example

use fugue_evo::prelude::*;

struct BiObjective;

impl Fitness<RealVector> for BiObjective {
    type Value = ParetoFitness;

    fn evaluate(&self, genome: &RealVector) -> ParetoFitness {
        let g = genome.genes();
        ParetoFitness::new(vec![
            g.iter().sum(),        // Maximize sum
            g.iter().product(),    // Maximize product
        ])
    }
}

let result = Nsga2Builder::<RealVector, _, _, _, _>::new()
    .population_size(100)
    .bounds(MultiBounds::symmetric(5.0, 10))
    .selection(TournamentSelection::new(2))
    .crossover(SbxCrossover::new(20.0))
    .mutation(PolynomialMutation::new(20.0))
    .fitness(BiObjective)
    .max_generations(200)
    .build()?
    .run(&mut rng)?;

// Access Pareto front
for solution in &result.pareto_front {
    println!("{:?}", solution.fitness_value().objectives());
}

Result Structure

pub struct Nsga2Result<G> {
    pub pareto_front: Vec<Individual<G>>,
    pub generations: usize,
    pub evaluations: usize,
}

Algorithm Details

Non-dominated Sorting

Ranks population into fronts:

  • Front 0: All non-dominated solutions
  • Front 1: Non-dominated after removing Front 0
  • etc.

Crowding Distance

Within each front, calculates spacing:

  • Higher crowding distance = more isolated
  • Preserves diversity on Pareto front
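The spacing computation can be sketched as follows, assuming the standard NSGA-II definition: boundary points per objective receive infinite distance, and interior points accumulate the normalized gap between their two neighbors, summed over objectives. Names are illustrative, not fugue-evo's internals.

```rust
/// Crowding distance for one front. `front[i]` is the objective vector
/// of solution i; the result has one distance per solution.
fn crowding_distance(front: &[Vec<f64>]) -> Vec<f64> {
    let n = front.len();
    let m = front[0].len(); // number of objectives
    let mut dist = vec![0.0_f64; n];

    for obj in 0..m {
        // Sort solution indices by this objective's value.
        let mut idx: Vec<usize> = (0..n).collect();
        idx.sort_by(|&a, &b| front[a][obj].partial_cmp(&front[b][obj]).unwrap());

        let (lo, hi) = (front[idx[0]][obj], front[idx[n - 1]][obj]);
        dist[idx[0]] = f64::INFINITY;     // boundary solutions are
        dist[idx[n - 1]] = f64::INFINITY; // always preserved
        if hi - lo == 0.0 { continue; }

        for k in 1..n - 1 {
            // Normalized gap between the two neighbors along this objective.
            dist[idx[k]] += (front[idx[k + 1]][obj] - front[idx[k - 1]][obj]) / (hi - lo);
        }
    }
    dist
}

fn main() {
    // Three points on a front: the boundaries are infinite, the interior finite.
    let front = vec![vec![0.0, 2.0], vec![1.0, 1.0], vec![2.0, 0.0]];
    println!("{:?}", crowding_distance(&front));
}
```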

Selection

Binary tournament using:

  1. Lower rank preferred
  2. If same rank, higher crowding distance preferred

Constraints

Handle constraints via constraint-dominance:

impl Fitness<RealVector> for ConstrainedBiObjective {
    type Value = ParetoFitness;

    fn evaluate(&self, genome: &RealVector) -> ParetoFitness {
        let g = genome.genes();
        let violation = compute_violation(g);

        ParetoFitness::new(vec![obj1, obj2])
            .with_constraint_violation(violation)
    }
}

Constraint-dominance rules:

  1. Feasible always dominates infeasible
  2. Between infeasible: lower violation wins
  3. Between feasible: normal Pareto dominance
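These three rules can be sketched as a comparison function. Maximization of objectives is assumed here, matching the BiObjective example above; a violation of 0.0 means feasible, and the function names are illustrative rather than fugue-evo API.

```rust
use std::cmp::Ordering;

/// Pareto dominance under maximization: `a` dominates `b` when it is
/// at least as good on every objective and strictly better on one.
fn pareto_dominates(a: &[f64], b: &[f64]) -> bool {
    a.iter().zip(b).all(|(x, y)| x >= y) && a.iter().zip(b).any(|(x, y)| x > y)
}

/// Constraint-dominance: each argument is (objectives, constraint violation).
fn constrained_compare(a: (&[f64], f64), b: (&[f64], f64)) -> Ordering {
    match (a.1 == 0.0, b.1 == 0.0) {
        (true, false) => Ordering::Greater,               // 1. feasible beats infeasible
        (false, true) => Ordering::Less,
        (false, false) => b.1.partial_cmp(&a.1).unwrap(), // 2. lower violation wins
        (true, true) => {
            // 3. both feasible: ordinary Pareto dominance
            if pareto_dominates(a.0, b.0) { Ordering::Greater }
            else if pareto_dominates(b.0, a.0) { Ordering::Less }
            else { Ordering::Equal }                      // mutually non-dominated
        }
    }
}

fn main() {
    let feasible = (&[1.0, 1.0][..], 0.0);
    let infeasible = (&[5.0, 5.0][..], 0.3);
    // A feasible solution beats an infeasible one regardless of objectives.
    println!("{:?}", constrained_compare(feasible, infeasible));
}
```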

See Also

Island Model Reference

Parallel evolution with multiple populations and migration.

Module

use fugue_evo::algorithms::island::{
    IslandModel, IslandModelBuilder, IslandConfig,
    MigrationTopology, MigrationPolicy
};

Builder API

Required Configuration

| Method | Type | Description |
|---|---|---|
| bounds(bounds) | MultiBounds | Search space bounds |
| selection(sel) | impl SelectionOperator | Selection operator |
| crossover(cx) | impl CrossoverOperator | Crossover operator |
| mutation(mut) | impl MutationOperator | Mutation operator |
| fitness(fit) | impl Fitness | Fitness function |

Island Configuration

| Method | Type | Default | Description |
|---|---|---|---|
| num_islands(n) | usize | 4 | Number of islands |
| island_population_size(n) | usize | 50 | Population per island |
| topology(t) | MigrationTopology | Ring | Connection pattern |
| migration_interval(n) | usize | 25 | Generations between migrations |
| migration_policy(p) | MigrationPolicy | Best(2) | What to migrate |

Migration Topology

pub enum MigrationTopology {
    /// Each island connects to 2 neighbors
    Ring,
    /// Central hub connects to all
    Star,
    /// Every island connects to every other
    FullyConnected,
}

Ring Topology

1 ←→ 2
↕     ↕
4 ←→ 3

Star Topology

  2   3
   \ /
    1 (hub)
   / \
  5   4

Fully Connected

1 ←→ 2
↕ ✕ ↕
4 ←→ 3

Migration Policy

pub enum MigrationPolicy {
    /// Send n best individuals
    Best(usize),
    /// Send n random individuals
    Random(usize),
    /// Replace n worst with immigrants
    ReplaceWorst(usize),
}

Usage

Basic Example

use fugue_evo::prelude::*;

let mut model = IslandModelBuilder::<RealVector, _, _, _, _, f64>::new()
    .num_islands(4)
    .island_population_size(50)
    .topology(MigrationTopology::Ring)
    .migration_interval(25)
    .migration_policy(MigrationPolicy::Best(2))
    .bounds(MultiBounds::symmetric(5.12, 20))
    .selection(TournamentSelection::new(3))
    .crossover(SbxCrossover::new(15.0))
    .mutation(PolynomialMutation::new(20.0))
    .fitness(Rastrigin::new(20))
    .build(&mut rng)?;

let best = model.run(200, &mut rng)?;
println!("Best fitness: {}", best.fitness_value());

Step-by-Step

let mut model = IslandModelBuilder::new()
    // ... configuration
    .build(&mut rng)?;

for gen in 0..max_generations {
    model.step(&mut rng)?;

    if gen % 50 == 0 {
        println!("Generation {}: best on each island:", gen);
        for (i, island) in model.islands.iter().enumerate() {
            let best = island.best().unwrap();
            println!("  Island {}: {:.6}", i, best.fitness_value());
        }
    }
}

Force Migration

// Trigger migration outside normal interval
model.migrate(&mut rng);

State Access

pub struct IslandModel<G, F, S, C, M, Fit> {
    pub islands: Vec<Population<G, F>>,
    pub generation: usize,
    pub config: IslandConfig,
    // ...
}

Configuration Struct

pub struct IslandConfig {
    pub num_islands: usize,
    pub island_population_size: usize,
    pub topology: MigrationTopology,
    pub migration_interval: usize,
    pub migration_policy: MigrationPolicy,
}

Migration Mechanics

Each migration event:

  1. Each island selects emigrants (per policy)
  2. Emigrants sent to neighbors (per topology)
  3. Immigrants replace worst individuals

Total population remains constant.
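For the Ring topology, the neighbor lookup in step 2 reduces to index arithmetic; a minimal sketch (not fugue-evo's internal representation):

```rust
/// Ring topology neighbors: island i exchanges migrants with
/// (i - 1) and (i + 1), wrapping around modulo the island count.
fn ring_neighbors(island: usize, num_islands: usize) -> (usize, usize) {
    let left = (island + num_islands - 1) % num_islands;
    let right = (island + 1) % num_islands;
    (left, right)
}

fn main() {
    // 4 islands in a ring: 0 ↔ 1 ↔ 2 ↔ 3 ↔ 0
    for i in 0..4 {
        println!("island {} ↔ {:?}", i, ring_neighbors(i, 4));
    }
}
```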

Performance Tips

  1. Island count: 4-8 typically sufficient
  2. Migration interval: 20-50 generations
  3. Migration size: 1-5 individuals
  4. Topology: Ring for diversity, Fully Connected for speed

See Also

EDA/UMDA Reference

Estimation of Distribution Algorithms using probabilistic models.

Module

use fugue_evo::algorithms::eda::{Umda, UmdaBuilder, UmdaConfig};

Overview

EDAs replace crossover/mutation with probabilistic model building:

  1. Select promising individuals
  2. Build probabilistic model from selected
  3. Sample new individuals from model
  4. Repeat

UMDA (Univariate Marginal Distribution Algorithm) assumes variables are independent.

Builder API

Required Configuration

| Method | Type | Description |
|---|---|---|
| bounds(bounds) | MultiBounds | Search space bounds |
| fitness(fit) | impl Fitness | Fitness function |

Optional Configuration

| Method | Type | Default | Description |
|---|---|---|---|
| population_size(n) | usize | 100 | Population size |
| selection_size(n) | usize | 50 | Individuals for model building |
| max_generations(n) | usize | 100 | Max generations |

Usage

Basic Example

use fugue_evo::prelude::*;

let result = UmdaBuilder::<RealVector, f64, _>::new()
    .population_size(100)
    .selection_size(30)  // Top 30% for model
    .bounds(MultiBounds::symmetric(5.12, 10))
    .fitness(Sphere::new(10))
    .max_generations(200)
    .build()?
    .run(&mut rng)?;

Algorithm Details

Model Building (Univariate)

For continuous variables:

μᵢ = mean of selected individuals' gene i
σᵢ = std dev of selected individuals' gene i

Sampling

New individuals sampled from:

xᵢ ~ N(μᵢ, σᵢ²)
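The model-building equations above amount to a per-gene mean and standard deviation over the selected individuals; new individuals are then sampled gene-by-gene from N(μᵢ, σᵢ²). A sketch of the model-building half (illustrative names, not the fugue-evo API):

```rust
/// Build the univariate Gaussian model: per-gene mean μᵢ and standard
/// deviation σᵢ over the selected individuals (population variance).
fn build_univariate_model(selected: &[Vec<f64>]) -> (Vec<f64>, Vec<f64>) {
    let n = selected.len() as f64;
    let dims = selected[0].len();
    let mut mu = vec![0.0; dims];
    let mut sigma = vec![0.0; dims];

    for i in 0..dims {
        mu[i] = selected.iter().map(|ind| ind[i]).sum::<f64>() / n;
        let var = selected.iter().map(|ind| (ind[i] - mu[i]).powi(2)).sum::<f64>() / n;
        sigma[i] = var.sqrt();
    }
    (mu, sigma)
}

fn main() {
    // Two selected individuals: gene 0 varies, gene 1 is fixed at 4.0.
    let selected = vec![vec![1.0, 4.0], vec![3.0, 4.0]];
    let (mu, sigma) = build_univariate_model(&selected);
    println!("μ = {:?}, σ = {:?}", mu, sigma);
}
```

Note that σᵢ collapsing to zero on a gene freezes that gene in all future samples, which is why practical UMDA implementations often enforce a minimum σ.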

When to Use

Good for:

  • Separable problems (variables independent)
  • Problems where crossover is disruptive
  • Understanding variable distributions

Not good for:

  • Non-separable problems (use CMA-ES instead)
  • Problems with variable interactions

Comparison with GA

| Aspect | GA | EDA (UMDA) |
|---|---|---|
| Variation | Crossover + Mutation | Model sampling |
| Interactions | Crossover preserves building blocks | Assumes independence |
| Parameters | Crossover rate, mutation rate | Selection ratio |

See Also

Genome Types Reference

This section documents all built-in genome types in fugue-evo.

Overview

| Genome | Module | Use Case |
|---|---|---|
| RealVector | genome::real_vector | Continuous optimization |
| BitString | genome::bit_string | Binary/discrete optimization |
| Permutation | genome::permutation | Ordering problems |
| TreeGenome | genome::tree | Genetic programming |

Core Trait

All genomes implement EvolutionaryGenome:

pub trait EvolutionaryGenome: Clone + Send + Sync + Serialize + DeserializeOwned {
    /// Convert to Fugue trace representation
    fn to_trace(&self) -> Trace;

    /// Reconstruct from trace
    fn from_trace(trace: &Trace) -> Result<Self, GenomeError>;
}

Specialized Traits

Additional traits for specific genome types:

/// Real-valued genomes
pub trait RealValuedGenome: EvolutionaryGenome {
    fn genes(&self) -> &[f64];
    fn genes_mut(&mut self) -> &mut [f64];
    fn dimension(&self) -> usize;
}

/// Binary genomes
pub trait BinaryGenome: EvolutionaryGenome {
    fn bits(&self) -> &[bool];
    fn bits_mut(&mut self) -> &mut [bool];
    fn length(&self) -> usize;
}

/// Permutation genomes
pub trait PermutationGenome: EvolutionaryGenome {
    fn as_slice(&self) -> &[usize];
    fn len(&self) -> usize;
}

Genome Selection Guide

What are you optimizing?
        │
        ├── Continuous variables → RealVector
        │
        ├── Yes/No choices → BitString
        │
        ├── Ordering/arrangement → Permutation
        │
        └── Programs/expressions → TreeGenome

See Also

RealVector Reference

Fixed-length vector of real-valued genes for continuous optimization.

Module

use fugue_evo::genome::real_vector::RealVector;

Construction

// From vector
let genome = RealVector::new(vec![1.0, 2.0, 3.0]);

// With dimension
let genome = RealVector::zeros(10);

// Random within bounds
let genome = RealVector::random(&bounds, &mut rng);

Methods

Access

| Method | Return | Description |
|---|---|---|
| genes() | &[f64] | Immutable gene access |
| genes_mut() | &mut [f64] | Mutable gene access |
| dimension() | usize | Number of genes |
| get(i) | Option<f64> | Get gene at index |

Vector Operations

| Method | Description |
|---|---|
| add(&other) | Element-wise addition |
| sub(&other) | Element-wise subtraction |
| scale(factor) | Multiply all genes by factor |
| norm() | Euclidean norm |
| euclidean_distance(&other) | Distance between vectors |
| dot(&other) | Dot product |

Statistics

| Method | Description |
|---|---|
| mean() | Mean of genes |
| variance() | Variance of genes |
| sum() | Sum of genes |

Trace Representation

Genes are stored at indexed addresses:

// Trace structure
addr!("gene", 0) → genes[0]
addr!("gene", 1) → genes[1]
// ...

Usage with Operators

// Simulated Binary Crossover
SbxCrossover::new(20.0)

// Blend Crossover
BlendCrossover::new(0.5)

// Uniform Crossover
UniformCrossover::new()
// Polynomial Mutation
PolynomialMutation::new(20.0)

// Gaussian Mutation
GaussianMutation::new(0.1)

Example

use fugue_evo::prelude::*;

let bounds = MultiBounds::symmetric(5.12, 10);
let mut genome = RealVector::random(&bounds, &mut rng);

// Modify genes
genome.genes_mut()[0] = 1.0;

// Vector operations
let other = RealVector::random(&bounds, &mut rng);
let distance = genome.euclidean_distance(&other);

// Use in GA
let result = SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .bounds(bounds)
    // ...

See Also

BitString Reference

Fixed-length string of bits for binary/discrete optimization.

Module

use fugue_evo::genome::bit_string::BitString;

Construction

// From vector
let genome = BitString::new(vec![true, false, true, true]);

// All zeros
let genome = BitString::zeros(10);

// All ones
let genome = BitString::ones(10);

// Random
let genome = BitString::random(10, &mut rng);

// Random with probability
let genome = BitString::random_with_probability(10, 0.3, &mut rng);

Methods

Access

| Method | Return | Description |
|---|---|---|
| bits() | &[bool] | Immutable bit access |
| bits_mut() | &mut [bool] | Mutable bit access |
| length() | usize | Number of bits |
| get(i) | Option<bool> | Get bit at index |

Bit Operations

| Method | Description |
|---|---|
| flip(i) | Flip bit at index |
| set(i, val) | Set bit to value |
| count_ones() | Number of true bits |
| count_zeros() | Number of false bits |

Binary Operations

| Method | Description |
|---|---|
| and(&other) | Bitwise AND |
| or(&other) | Bitwise OR |
| xor(&other) | Bitwise XOR |
| not() | Bitwise NOT |

Distance

| Method | Description |
|---|---|
| hamming_distance(&other) | Number of differing bits |

Trace Representation

Bits stored as 0.0 or 1.0:

addr!("bit", 0) → 0.0 or 1.0
addr!("bit", 1) → 0.0 or 1.0
// ...

Crossover

// Uniform: each bit from random parent
UniformCrossover::new()

// One-point crossover
OnePointCrossover::new()

// Two-point crossover
TwoPointCrossover::new()

Mutation

// Flip each bit with probability
BitFlipMutation::new(0.01)  // 1% per bit

Example: Knapsack Problem

use fugue_evo::prelude::*;

struct KnapsackFitness {
    values: Vec<f64>,
    weights: Vec<f64>,
    capacity: f64,
}

impl Fitness<BitString> for KnapsackFitness {
    type Value = f64;

    fn evaluate(&self, genome: &BitString) -> f64 {
        let mut total_value = 0.0;
        let mut total_weight = 0.0;

        for (i, &selected) in genome.bits().iter().enumerate() {
            if selected {
                total_value += self.values[i];
                total_weight += self.weights[i];
            }
        }

        if total_weight > self.capacity {
            0.0  // Infeasible
        } else {
            total_value
        }
    }
}

// Usage
let fitness = KnapsackFitness {
    values: vec![60.0, 100.0, 120.0],
    weights: vec![10.0, 20.0, 30.0],
    capacity: 50.0,
};

let result = SimpleGABuilder::<BitString, f64, _, _, _, _, _>::new()
    .population_size(50)
    .initial_population((0..50).map(|_| BitString::random(3, &mut rng)).collect())
    .selection(TournamentSelection::new(3))
    .crossover(UniformCrossover::new())
    .mutation(BitFlipMutation::new(0.05))
    .fitness(fitness)
    .max_generations(100)
    .build()?
    .run(&mut rng)?;

See Also

Permutation Reference

Ordered arrangement of elements for ordering/scheduling problems.

Module

use fugue_evo::genome::permutation::Permutation;

Construction

// From vector
let genome = Permutation::new(vec![2, 0, 3, 1]);

// Identity permutation [0, 1, 2, ..., n-1]
let genome = Permutation::identity(5);

// Random permutation
let genome = Permutation::random(5, &mut rng);

Methods

Access

| Method | Return | Description |
|---|---|---|
| as_slice() | &[usize] | Immutable element access |
| as_mut_slice() | &mut [usize] | Mutable element access |
| len() | usize | Number of elements |
| get(i) | Option<usize> | Get element at position |

Permutation Operations

| Method | Description |
|---|---|
| swap(i, j) | Swap elements at positions i and j |
| reverse_segment(i, j) | Reverse segment [i, j] |
| insert(from, to) | Remove element at from, insert at to |
| inverse() | Compute inverse permutation |
| compose(&other) | Compose permutations |

Validation

| Method | Return | Description |
|---|---|---|
| is_valid() | bool | Check if valid permutation |
| contains(elem) | bool | Check if contains element |

Distance

| Method | Description |
|---|---|
| kendall_tau_distance(&other) | Number of pairwise disagreements |
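Kendall tau counts the element pairs that the two permutations order differently; an illustrative O(n²) sketch (not fugue-evo's implementation):

```rust
/// Kendall tau distance between two permutations of [0, n):
/// the number of element pairs whose relative order disagrees.
fn kendall_tau_distance(p: &[usize], q: &[usize]) -> usize {
    let n = p.len();
    // pos_q[e] = position of element e in q
    let mut pos_q = vec![0; n];
    for (i, &e) in q.iter().enumerate() { pos_q[e] = i; }

    let mut disagreements = 0;
    for i in 0..n {
        for j in i + 1..n {
            // p puts p[i] before p[j]; count the pair if q disagrees.
            if pos_q[p[i]] > pos_q[p[j]] { disagreements += 1; }
        }
    }
    disagreements
}

fn main() {
    // Identical permutations are at distance 0; a full reversal of
    // three elements disagrees on all three pairs.
    println!("{}", kendall_tau_distance(&[0, 1, 2], &[2, 1, 0]));
}
```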

Trace Representation

Elements stored at indexed addresses:

addr!("pos", 0) → permutation[0]
addr!("pos", 1) → permutation[1]
// ...

Crossover

// Order Crossover (OX)
OrderCrossover::new()

// Partially Mapped Crossover (PMX)
PmxCrossover::new()

// Cycle Crossover
CycleCrossover::new()

Mutation

// Swap two random positions
SwapMutation::new(0.1)

// Reverse a segment
InversionMutation::new(0.1)

// Move element to new position
InsertionMutation::new(0.1)

Example: TSP

use fugue_evo::prelude::*;

struct TSPFitness {
    distances: Vec<Vec<f64>>,  // Distance matrix
}

impl Fitness<Permutation> for TSPFitness {
    type Value = f64;

    fn evaluate(&self, genome: &Permutation) -> f64 {
        let tour = genome.as_slice();
        let n = tour.len();

        let mut total_distance = 0.0;
        for i in 0..n {
            let from = tour[i];
            let to = tour[(i + 1) % n];
            total_distance += self.distances[from][to];
        }

        -total_distance  // Negate for maximization
    }
}

// Usage
let n_cities = 10;
let distances = generate_distance_matrix(n_cities);
let fitness = TSPFitness { distances };

let result = SimpleGABuilder::<Permutation, f64, _, _, _, _, _>::new()
    .population_size(100)
    .initial_population((0..100).map(|_| Permutation::random(n_cities, &mut rng)).collect())
    .selection(TournamentSelection::new(5))
    .crossover(OrderCrossover::new())
    .mutation(InversionMutation::new(0.2))
    .fitness(fitness)
    .max_generations(500)
    .build()?
    .run(&mut rng)?;

println!("Best tour: {:?}", result.best_genome.as_slice());
println!("Tour length: {}", -result.best_fitness);

Validity Guarantees

Permutation-specific operators maintain validity:

  • Every element appears exactly once
  • No duplicates
  • Range is [0, n)

See Also

TreeGenome Reference

Tree-structured genome for genetic programming.

Module

use fugue_evo::genome::tree::{TreeGenome, TreeNode, ArithmeticTerminal, ArithmeticFunction};

Construction

// Generate full tree (all branches same depth)
let tree = TreeGenome::generate_full(&mut rng, min_depth, max_depth);

// Generate grow tree (variable depth)
let tree = TreeGenome::generate_grow(&mut rng, max_depth, terminal_prob);

// From root node
let root = TreeNode::function(ArithmeticFunction::Add, vec![
    TreeNode::terminal(ArithmeticTerminal::X),
    TreeNode::terminal(ArithmeticTerminal::Constant(1.0)),
]);
let tree = TreeGenome::new(root, 10);

TreeNode Types

pub enum TreeNode<T, F> {
    Terminal(T),           // Leaf node
    Function(F, Vec<Self>), // Internal node with children
}

Built-in Terminals

pub enum ArithmeticTerminal {
    X,              // Variable
    Constant(f64),  // Numeric constant
}

Built-in Functions

pub enum ArithmeticFunction {
    Add,  // Binary +
    Sub,  // Binary -
    Mul,  // Binary *
    Div,  // Protected division
}

Methods

Tree Properties

| Method | Return | Description |
|---|---|---|
| size() | usize | Total number of nodes |
| depth() | usize | Maximum depth |
| height() | usize | Same as depth() |

Evaluation

| Method | Description |
|---|---|
| evaluate(inputs) | Evaluate the tree with input values |
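
Evaluation is a recursive fold over the tree. A minimal self-contained sketch of the idea; the Node enum here is illustrative, not the library's TreeNode:

```rust
/// Toy expression tree: terminals are leaves, functions recurse on
/// their children, mirroring the TreeNode shape above.
enum Node {
    X,
    Constant(f64),
    Add(Box<Node>, Box<Node>),
    Mul(Box<Node>, Box<Node>),
}

/// Evaluate the tree bottom-up for a given input x.
fn eval(node: &Node, x: f64) -> f64 {
    match node {
        Node::X => x,
        Node::Constant(c) => *c,
        Node::Add(a, b) => eval(a, x) + eval(b, x),
        Node::Mul(a, b) => eval(a, x) * eval(b, x),
    }
}
```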

Representation

| Method | Return | Description |
|---|---|---|
| to_sexpr() | String | S-expression format |
| to_infix() | String | Infix notation |

Node navigation (on the root TreeNode):

| Method | Description |
|---|---|
| root.positions() | Get all node positions |
| root.get_subtree(pos) | Get the subtree at a position |
| root.replace_subtree(pos, new) | Replace the subtree at a position |

Operators

Crossover

fn subtree_crossover<T, F>(
    parent1: &TreeGenome<T, F>,
    parent2: &TreeGenome<T, F>,
    rng: &mut impl Rng,
) -> TreeGenome<T, F> {
    // Select random subtrees and exchange
}

Mutation

// Point mutation: change single node
fn point_mutate<T, F>(tree: &mut TreeGenome<T, F>, rng: &mut impl Rng);

// Subtree mutation: replace subtree with random
fn subtree_mutate<T, F>(tree: &mut TreeGenome<T, F>, rng: &mut impl Rng);

// Hoist mutation: replace tree with subtree
fn hoist_mutate<T, F>(tree: &mut TreeGenome<T, F>, rng: &mut impl Rng);

Custom Primitives

Define your own terminals and functions:

#[derive(Clone, Debug)]
pub enum MyTerminal {
    X, Y,  // Two variables
    Constant(f64),
}

#[derive(Clone, Debug)]
pub enum MyFunction {
    Add, Sub, Mul, Div,
    Sin, Cos, Exp, Log,
}

impl MyFunction {
    fn arity(&self) -> usize {
        use MyFunction::*;
        match self {
            Add | Sub | Mul | Div => 2,
            Sin | Cos | Exp | Log => 1,
        }
    }

    fn apply(&self, args: &[f64]) -> f64 {
        use MyFunction::*;
        match self {
            Add => args[0] + args[1],
            Sub => args[0] - args[1],
            Mul => args[0] * args[1],
            // Protected division: avoid dividing by (near-)zero
            Div => if args[1].abs() < 1e-10 { 1.0 } else { args[0] / args[1] },
            Sin => args[0].sin(),
            Cos => args[0].cos(),
            // Clamp to avoid overflow to infinity
            Exp => args[0].exp().min(1e10),
            // Protected log: defined only for positive inputs
            Log => if args[0] > 0.0 { args[0].ln() } else { 0.0 },
        }
    }
}

Bloat Control

Depth Limiting

if child.depth() > max_depth {
    child = parent.clone();  // Reject oversized trees
}

Parsimony Pressure

let fitness = raw_fitness - tree.size() as f64 * parsimony_coeff;

Example

See Genetic Programming Tutorial for a complete symbolic regression example.

See Also

Operators Reference

Genetic operators for selection, crossover, and mutation.

Overview

| Category | Purpose | Trait |
|---|---|---|
| Selection | Choose parents | SelectionOperator |
| Crossover | Combine parents | CrossoverOperator |
| Mutation | Add variation | MutationOperator |

Operator Traits

Selection

pub trait SelectionOperator<G>: Send + Sync {
    fn select<R: Rng>(&self, population: &[(G, f64)], rng: &mut R) -> usize;
}

Crossover

pub trait CrossoverOperator<G>: Send + Sync {
    type Output;
    fn crossover<R: Rng>(&self, p1: &G, p2: &G, rng: &mut R) -> Self::Output;
}

// For bounded genomes
pub trait BoundedCrossoverOperator<G>: Send + Sync {
    type Output;
    fn crossover_bounded<R: Rng>(
        &self, p1: &G, p2: &G, bounds: &MultiBounds, rng: &mut R
    ) -> Self::Output;
}

Mutation

pub trait MutationOperator<G>: Send + Sync {
    fn mutate<R: Rng>(&self, genome: &mut G, rng: &mut R);
}

// For bounded genomes
pub trait BoundedMutationOperator<G>: Send + Sync {
    fn mutate_bounded<R: Rng>(
        &self, genome: &mut G, bounds: &MultiBounds, rng: &mut R
    );
}

Operator Selection Guide

For RealVector

| Operation | Recommended | Alternative |
|---|---|---|
| Selection | TournamentSelection(3-5) | RankSelection |
| Crossover | SbxCrossover(15-25) | BlendCrossover(0.5) |
| Mutation | PolynomialMutation(15-25) | GaussianMutation(0.1) |

For BitString

| Operation | Recommended | Alternative |
|---|---|---|
| Selection | TournamentSelection(3) | RouletteWheelSelection |
| Crossover | UniformCrossover | OnePointCrossover |
| Mutation | BitFlipMutation(0.01) | - |

For Permutation

| Operation | Recommended | Alternative |
|---|---|---|
| Selection | TournamentSelection(5) | - |
| Crossover | OrderCrossover | PmxCrossover |
| Mutation | InversionMutation | SwapMutation |

See Also

Selection Operators Reference

Selection operators choose parents for reproduction.

Module

use fugue_evo::operators::selection::{
    TournamentSelection, RouletteWheelSelection, RankSelection
};

Tournament Selection

Select best from k random individuals.

let selection = TournamentSelection::new(k);

| Parameter | Type | Description |
|---|---|---|
| k | usize | Tournament size |

Characteristics:

  • Higher k = more selection pressure
  • k=2 is minimal pressure
  • k=5-7 for high pressure
  • Efficient: O(k) per selection

Example:

let selection = TournamentSelection::new(3);
// Selects best of 3 random individuals
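
The mechanics can be sketched without the library: given candidate indices (drawn uniformly at random in the real operator, passed in here so the example is deterministic), return the one with the highest fitness.

```rust
/// Tournament selection sketch: pick the candidate index with the
/// highest fitness. The candidates slice stands in for k random draws.
fn tournament(fitness: &[f64], candidates: &[usize]) -> usize {
    candidates
        .iter()
        .copied()
        .max_by(|&a, &b| fitness[a].partial_cmp(&fitness[b]).unwrap())
        .unwrap()
}
```

Because only the winner of each tournament matters, the cost per selection is O(k) regardless of population size.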

Roulette Wheel Selection

Probability proportional to fitness.

let selection = RouletteWheelSelection::new();

Characteristics:

  • Fitness proportionate
  • Can lose selection pressure with similar fitnesses
  • Sensitive to fitness scaling
  • O(n) setup, O(log n) per selection

Requirements:

  • Fitness must be positive
  • Works best with transformed fitness
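
The O(n) setup / O(log n) per-selection costs come from precomputing a cumulative fitness sum once, then binary-searching it per draw. A self-contained sketch, with u standing in for a uniform random draw in [0, 1):

```rust
/// Build the cumulative fitness sums once per generation. Fitness
/// values must be positive, as noted above.
fn roulette_cdf(fitness: &[f64]) -> Vec<f64> {
    let mut cdf = Vec::with_capacity(fitness.len());
    let mut acc = 0.0;
    for &f in fitness {
        acc += f;
        cdf.push(acc);
    }
    cdf
}

/// Map a uniform draw u in [0, 1) to an individual: the first index
/// whose cumulative sum exceeds u * total (binary search).
fn roulette_pick(cdf: &[f64], u: f64) -> usize {
    let target = u * cdf.last().copied().unwrap();
    cdf.partition_point(|&c| c <= target)
}
```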

Rank Selection

Probability based on fitness rank.

let selection = RankSelection::new();
// or with pressure parameter
let selection = RankSelection::with_pressure(2.0);

| Parameter | Type | Default | Description |
|---|---|---|---|
| pressure | f64 | 2.0 | Selection pressure (1.0-2.0) |

Characteristics:

  • Robust to fitness scaling
  • Maintains selection pressure
  • O(n log n) for ranking
  • Good for multimodal problems
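
Under the standard linear-ranking scheme (the library's exact weighting may differ), selection probability is an affine function of rank, which is why rank selection is robust to fitness scaling:

```rust
/// Linear-ranking probabilities (Baker's formulation): with pressure
/// sp in [1, 2], the worst individual gets (2 - sp)/n and the best
/// gets sp/n. Assumes n >= 2; rank 0 = worst, rank n-1 = best.
fn rank_probabilities(n: usize, sp: f64) -> Vec<f64> {
    (0..n)
        .map(|rank| {
            ((2.0 - sp) + 2.0 * (sp - 1.0) * rank as f64 / (n as f64 - 1.0)) / n as f64
        })
        .collect()
}
```

The probabilities sum to 1 for any sp, so the pressure parameter shifts weight from worst to best without rescaling fitness.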

Comparison

| Operator | Pressure Control | Scaling Sensitive | Efficiency |
|---|---|---|---|
| Tournament | Tournament size | No | O(k) |
| Roulette | Fitness transformation | Yes | O(log n) |
| Rank | Pressure parameter | No | O(n log n) |

Usage

SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .selection(TournamentSelection::new(3))
    // ...

See Also

Crossover Operators Reference

Crossover operators combine two parents to create offspring.

Module

use fugue_evo::operators::crossover::{
    SbxCrossover, UniformCrossover, OnePointCrossover, TwoPointCrossover
};

SBX (Simulated Binary Crossover)

For real-valued genomes. Mimics the spread of one-point binary crossover in continuous space.

let crossover = SbxCrossover::new(eta);

| Parameter | Type | Range | Description |
|---|---|---|---|
| eta | f64 | 1-100+ | Distribution index |

Distribution index effects:

  • Low eta (1-5): Large spread, more exploration
  • Medium eta (15-25): Balanced
  • High eta (50+): Children near parents

Example:

let crossover = SbxCrossover::new(20.0);
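
The distribution-index effects above follow from how SBX derives its spread factor beta from a uniform draw u. A per-gene sketch of the unbounded form, with u as a parameter instead of a random draw so the example is deterministic:

```rust
/// Per-gene SBX sketch: beta is shaped by the distribution index eta,
/// then the two children are placed symmetrically around the parents'
/// midpoint (their mean equals the parents' mean).
fn sbx_gene(x1: f64, x2: f64, eta: f64, u: f64) -> (f64, f64) {
    let beta = if u <= 0.5 {
        (2.0 * u).powf(1.0 / (eta + 1.0))
    } else {
        (1.0 / (2.0 * (1.0 - u))).powf(1.0 / (eta + 1.0))
    };
    let c1 = 0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2);
    let c2 = 0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2);
    (c1, c2)
}
```

Higher eta concentrates beta near 1, so children land near the parents; lower eta lets beta stray, spreading children out.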

Uniform Crossover

Each gene is taken from a randomly chosen parent (50% each by default).

let crossover = UniformCrossover::new();
// or with custom probability
let crossover = UniformCrossover::with_probability(0.6);

Works with: RealVector, BitString

One-Point Crossover

Single crossover point, swap tails.

let crossover = OnePointCrossover::new();

Parent 1: A A A | B B B
Parent 2: 1 1 1 | 2 2 2
                ↓
Child 1:  A A A | 2 2 2
Child 2:  1 1 1 | B B B

Works with: RealVector, BitString

Two-Point Crossover

Two crossover points, swap middle segment.

let crossover = TwoPointCrossover::new();

Parent 1: A A | B B B | C C
Parent 2: 1 1 | 2 2 2 | 3 3
              ↓
Child 1:  A A | 2 2 2 | C C
Child 2:  1 1 | B B B | 3 3

Order Crossover (OX)

For permutations. Preserves relative ordering.

let crossover = OrderCrossover::new();

Preserves: Relative order of elements not in copied segment.

PMX (Partially Mapped Crossover)

For permutations. Preserves absolute positions.

let crossover = PmxCrossover::new();

Preserves: Absolute positions where possible.

Crossover Result

All crossover operators return:

pub struct CrossoverResult<G> {
    child1: Option<G>,
    child2: Option<G>,
}

impl<G> CrossoverResult<G> {
    pub fn genome(self) -> Option<(G, G)>;
    pub fn first(self) -> Option<G>;
    pub fn second(self) -> Option<G>;
}

Usage

SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .crossover(SbxCrossover::new(20.0))
    // ...

Comparison

| Operator | Genome Type | Exploration | Preserves |
|---|---|---|---|
| SBX | RealVector | Controllable | Value ranges |
| Uniform | Any | High | Nothing specific |
| One-Point | Sequence | Medium | Segments |
| Two-Point | Sequence | Medium | Segments |
| OX | Permutation | Medium | Relative order |
| PMX | Permutation | Medium | Positions |

See Also

Mutation Operators Reference

Mutation operators add random variation to genomes.

Module

use fugue_evo::operators::mutation::{
    PolynomialMutation, GaussianMutation, BitFlipMutation
};

Polynomial Mutation

For real-valued genomes. Bounded perturbations.

let mutation = PolynomialMutation::new(eta);
let mutation = PolynomialMutation::new(eta).with_probability(0.1);

| Parameter | Type | Range | Description |
|---|---|---|---|
| eta | f64 | 1-100+ | Distribution index |
| probability | f64 | 0-1 | Per-gene mutation probability |

Distribution index effects:

  • Low eta (1-5): Large mutations
  • Medium eta (15-25): Balanced
  • High eta (50+): Small mutations

Default probability: 1/n (one gene on average)
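
The perturbation is drawn from a polynomial distribution shaped by eta. A simplified per-gene sketch (the full operator also factors in the distance to the bounds; u replaces the uniform random draw):

```rust
/// Simplified polynomial-mutation perturbation: returns delta in
/// [-1, 1], to be scaled by the variable's range. Small |delta| is
/// most likely; higher eta concentrates delta near zero.
fn polynomial_delta(eta: f64, u: f64) -> f64 {
    if u < 0.5 {
        (2.0 * u).powf(1.0 / (eta + 1.0)) - 1.0
    } else {
        1.0 - (2.0 * (1.0 - u)).powf(1.0 / (eta + 1.0))
    }
}
```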

Gaussian Mutation

Adds Gaussian noise to genes.

let mutation = GaussianMutation::new(sigma);
let mutation = GaussianMutation::new(sigma).with_probability(0.1);

| Parameter | Type | Description |
|---|---|---|
| sigma | f64 | Standard deviation of noise |
| probability | f64 | Per-gene mutation probability |

Note: May violate bounds. Use with BoundedMutationOperator or clamp manually.

Bit-Flip Mutation

For binary genomes. Flips each bit with probability.

let mutation = BitFlipMutation::new(probability);

| Parameter | Type | Description |
|---|---|---|
| probability | f64 | Per-bit flip probability |

Typical values: 0.01-0.05 (1-5% per bit)

Swap Mutation

For permutations. Swaps two random positions.

let mutation = SwapMutation::new(probability);

Inversion Mutation

For permutations. Reverses a random segment.

let mutation = InversionMutation::new(probability);

Before: [1, 2, 3, 4, 5, 6]
              ↓ reverse positions 1..=4
After:  [1, 5, 4, 3, 2, 6]

Insertion Mutation

For permutations. Moves element to new position.

let mutation = InsertionMutation::new(probability);

Mutation Probability Guidelines

| Genome | Operator | Typical Probability |
|---|---|---|
| RealVector | Polynomial | 1/n or 0.1 |
| RealVector | Gaussian | 1/n or 0.1 |
| BitString | BitFlip | 0.01-0.05 |
| Permutation | Swap | 0.1-0.3 |
| Permutation | Inversion | 0.1-0.3 |

Usage

SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .mutation(PolynomialMutation::new(20.0).with_probability(0.1))
    // ...

Bounded Mutation

For mutations respecting bounds:

// Polynomial automatically respects bounds when used with SimpleGA

// Manual bounded mutation
mutation.mutate_bounded(&mut genome, &bounds, &mut rng);

See Also

Fitness Functions Reference

Fitness functions evaluate solution quality.

Module

use fugue_evo::fitness::{Fitness, FitnessValue};
use fugue_evo::fitness::benchmarks::{Sphere, Rastrigin, Rosenbrock, Ackley};

Fitness Trait

pub trait Fitness<G>: Send + Sync {
    type Value: FitnessValue;

    fn evaluate(&self, genome: &G) -> Self::Value;
}

FitnessValue Trait

pub trait FitnessValue: Clone + PartialOrd + Send + Sync {
    fn to_f64(&self) -> f64;
}

Implemented for: f64, f32, i64, i32, ParetoFitness

Built-in Benchmarks

Sphere

let fitness = Sphere::new(dim);

Formula: f(x) = Σxᵢ²

| Property | Value |
|---|---|
| Optimum | 0 at origin |
| Modality | Unimodal |
| Separable | Yes |
| Bounds | [-5.12, 5.12] |

Rastrigin

let fitness = Rastrigin::new(dim);

Formula: f(x) = 10n + Σ[xᵢ² - 10cos(2πxᵢ)]

| Property | Value |
|---|---|
| Optimum | 0 at origin |
| Modality | Highly multimodal (~10ⁿ local minima) |
| Separable | Yes |
| Bounds | [-5.12, 5.12] |
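
The formula translates directly to code; a self-contained sketch (the tables here describe minimization, so a maximizing GA would negate the value, as in the TSP example):

```rust
/// Rastrigin function: f(x) = 10n + Σ[xᵢ² - 10·cos(2πxᵢ)].
/// Global minimum 0 at the origin; the cosine term carves a dense
/// grid of local minima around it.
fn rastrigin(x: &[f64]) -> f64 {
    let n = x.len() as f64;
    10.0 * n
        + x.iter()
            .map(|&xi| xi * xi - 10.0 * (2.0 * std::f64::consts::PI * xi).cos())
            .sum::<f64>()
}
```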

Rosenbrock

let fitness = Rosenbrock::new(dim);

Formula: f(x) = Σ[100(xᵢ₊₁ - xᵢ²)² + (1 - xᵢ)²]

| Property | Value |
|---|---|
| Optimum | 0 at (1,1,...,1) |
| Modality | Unimodal (curved valley) |
| Separable | No |
| Bounds | [-5, 10] |

Ackley

let fitness = Ackley::new(dim);

Formula: f(x) = -20·exp(-0.2·√(Σxᵢ²/n)) - exp(Σcos(2πxᵢ)/n) + 20 + e

| Property | Value |
|---|---|
| Optimum | 0 at origin |
| Modality | Multimodal |
| Separable | No |
| Bounds | [-32.768, 32.768] |

ZDT Test Suite (Multi-Objective)

use fugue_evo::fitness::benchmarks::{Zdt1, Zdt2, Zdt3};

let fitness = Zdt1::new(30);  // 30 variables

| Function | Pareto Front Shape |
|---|---|
| ZDT1 | Convex |
| ZDT2 | Non-convex |
| ZDT3 | Disconnected |

Custom Fitness

Single Objective

struct MyFitness;

impl Fitness<RealVector> for MyFitness {
    type Value = f64;

    fn evaluate(&self, genome: &RealVector) -> f64 {
        // Higher = better
        let genes = genome.genes();
        -genes.iter().map(|x| x * x).sum::<f64>()
    }
}

Multi-Objective

struct MultiObjFitness;

impl Fitness<RealVector> for MultiObjFitness {
    type Value = ParetoFitness;

    fn evaluate(&self, genome: &RealVector) -> ParetoFitness {
        ParetoFitness::new(vec![obj1, obj2])
    }
}

CMA-ES Fitness

CMA-ES uses a separate trait (minimization):

pub trait CmaEsFitness: Send + Sync {
    fn evaluate(&self, x: &RealVector) -> f64;  // Returns value to MINIMIZE
}

See Also

Termination Criteria Reference

Termination criteria define when evolution should stop.

Module

use fugue_evo::termination::{
    MaxGenerations, MinGenerations, MaxEvaluations, TargetFitness,
    FitnessStagnation, DiversityThreshold, AnyOf, AllOf
};

Trait

pub trait TerminationCriterion<G, F>: Send + Sync {
    fn should_terminate(&self, state: &EvolutionState<G, F>) -> bool;
}

Built-in Criteria

MaxGenerations

Stop after N generations.

let term = MaxGenerations::new(500);

MaxEvaluations

Stop after N fitness evaluations.

let term = MaxEvaluations::new(100000);

TargetFitness

Stop when fitness threshold reached.

let term = TargetFitness::new(threshold);

For minimization (negated fitness):

let term = TargetFitness::new(-0.001);  // Stop when fitness >= -0.001

FitnessStagnation

Stop if no improvement for N generations.

let term = FitnessStagnation::new(generations);

| Parameter | Description |
|---|---|
| generations | Generations without improvement |

DiversityThreshold

Stop when population diversity falls below threshold.

let term = DiversityThreshold::new(threshold);

Combinators

AnyOf

Stop when ANY criterion is met (OR).

let term = AnyOf::new(vec![
    Box::new(MaxGenerations::new(1000)),
    Box::new(TargetFitness::new(-0.001)),
    Box::new(FitnessStagnation::new(50)),
]);

AllOf

Stop when ALL criteria are met (AND).

let term = AllOf::new(vec![
    Box::new(MinGenerations::new(100)),  // At least 100 generations
    Box::new(FitnessStagnation::new(20)), // And stagnated
]);

Usage

With Builder

SimpleGABuilder::new()
    .max_generations(200)  // Shorthand for MaxGenerations
    // ...

Custom Termination

SimpleGABuilder::new()
    .termination(AnyOf::new(vec![
        Box::new(MaxGenerations::new(1000)),
        Box::new(TargetFitness::new(-0.001)),
    ]))
    // ...

Termination Reason

Results include termination reason:

let result = ga.run(&mut rng)?;
println!("Stopped because: {:?}", result.termination_reason);

Custom Criteria

struct TimeBudget {
    start: Instant,
    budget: Duration,
}

impl<G, F> TerminationCriterion<G, F> for TimeBudget {
    fn should_terminate(&self, _state: &EvolutionState<G, F>) -> bool {
        self.start.elapsed() >= self.budget
    }
}

// Usage
let term = TimeBudget {
    start: Instant::now(),
    budget: Duration::from_secs(60),
};

Recommendations

| Scenario | Recommended |
|---|---|
| Unknown runtime | AnyOf(MaxGen, Stagnation) |
| Known optimum | AnyOf(MaxGen, TargetFitness) |
| Time-limited | Custom TimeBudget |
| Research | MaxGenerations for reproducibility |

See Also

Design Philosophy

Fugue-evo is built on several core design principles that distinguish it from traditional GA libraries.

Evolution as Bayesian Inference

The central insight of fugue-evo is that evolutionary algorithms can be understood through a probabilistic lens:

Fitness as Likelihood

In Bayesian terms:

  • Prior: Initial population distribution
  • Likelihood: Fitness function (how well does this solution explain our objective?)
  • Posterior: Population after selection

Selection acts as conditioning: we observe that solutions should have high fitness, and update our population distribution accordingly.

Learnable Operators

Traditional GAs use fixed operators with hand-tuned parameters. Fugue-evo treats operator parameters as uncertain quantities that can be learned:

// Traditional: fixed mutation rate
let mutation_rate = 0.1;

// Fugue-evo: learned mutation rate
let mut posterior = BetaPosterior::new(2.0, 2.0);
// ... observe outcomes, update posterior
let learned_rate = posterior.mean();
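
A minimal sketch of the conjugate update behind this idea (the names here are illustrative, not necessarily the library's API): each observed improvement increments alpha, each failure increments beta, and the posterior mean serves as the adapted rate.

```rust
/// Beta-Bernoulli conjugate update: the Beta(alpha, beta) posterior
/// over a rate is updated by counting successes and failures.
struct BetaPosterior {
    alpha: f64,
    beta: f64,
}

impl BetaPosterior {
    fn new(alpha: f64, beta: f64) -> Self {
        Self { alpha, beta }
    }

    /// Record whether a mutated offspring improved on its parent.
    fn observe(&mut self, success: bool) {
        if success {
            self.alpha += 1.0;
        } else {
            self.beta += 1.0;
        }
    }

    /// Posterior mean, usable directly as the adapted mutation rate.
    fn mean(&self) -> f64 {
        self.alpha / (self.alpha + self.beta)
    }
}
```

Starting from Beta(2, 2) encodes a weak prior centered at 0.5; as evidence accumulates, the mean drifts toward the empirically successful rate.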

Type Safety

Rust's type system enables compile-time correctness guarantees.

Generic Constraints

Operators are typed to their applicable genomes:

// SBX only works with real-valued genomes
impl CrossoverOperator<RealVector> for SbxCrossover { ... }

// Order crossover only works with permutations
impl CrossoverOperator<Permutation> for OrderCrossover { ... }

Builder Pattern

Configuration errors are caught at compile time:

let ga = SimpleGABuilder::new()
    .population_size(100)
    // Missing required configuration = compile error
    .build()?;  // Only compiles when complete

Trait-Based Abstraction

Core abstractions are defined as traits, enabling extensibility.

EvolutionaryGenome

Any type can be a genome if it implements:

pub trait EvolutionaryGenome: Clone + Send + Sync {
    fn to_trace(&self) -> Trace;
    fn from_trace(trace: &Trace) -> Result<Self, GenomeError>;
}

Operator Traits

Custom operators implement standard traits:

pub trait MutationOperator<G>: Send + Sync {
    fn mutate<R: Rng>(&self, genome: &mut G, rng: &mut R);
}

Composability

Components are designed to compose cleanly.

Operator Composition

// Combine mutations
let mutation = CompositeMutation::new(
    GaussianMutation::new(0.1),
    PolynomialMutation::new(20.0),
    0.5, // 50% chance of each
);

Termination Composition

// Complex stopping conditions
let term = AnyOf::new(vec![
    Box::new(MaxGenerations::new(1000)),
    Box::new(AllOf::new(vec![
        Box::new(MinGenerations::new(100)),
        Box::new(FitnessStagnation::new(20)),
    ])),
]);

Reproducibility

Scientific use requires reproducibility.

Seeded RNG

All randomness flows through explicit RNG:

let mut rng = StdRng::seed_from_u64(42);
let result = ga.run(&mut rng)?;
// Same seed = same result

Checkpointing

State can be saved and restored:

manager.save(&checkpoint)?;
// Later...
let checkpoint = load_checkpoint(&path)?;

Performance

Optional Parallelism

Parallelism is opt-in and doesn't affect correctness:

// Sequential (deterministic)
.parallel(false)

// Parallel (faster, same final result distribution)
.parallel(true)

Efficient Representations

Genomes use efficient storage:

// RealVector: contiguous Vec<f64>
// BitString: Vec<bool> (could use bitpacking)
// Permutation: Vec<usize>

Interoperability

Fugue Integration

Deep integration with probabilistic programming:

// Genomes convert to traces
let trace = genome.to_trace();

// Enables trace-based operators
let mutated = trace_mutation(&trace, &mut rng);

WASM Support

Same algorithms run in browser:

const optimizer = new SimpleGAOptimizer(config);
const result = optimizer.run(fitnessFunction);

Simplicity

Avoid Over-Engineering

  • Minimal dependencies
  • Clear, focused APIs
  • Documentation over abstraction

Progressive Disclosure

Simple cases are simple:

// Simplest usage
let result = SimpleGABuilder::new()
    .population_size(100)
    .bounds(bounds)
    .fitness(fitness)
    .max_generations(200)
    .defaults()  // Use sensible defaults
    .run(&mut rng)?;

Advanced features are available when needed:

// Full control
let result = SimpleGABuilder::new()
    .population_size(100)
    .bounds(bounds)
    .selection(custom_selection)
    .crossover(custom_crossover)
    .mutation(custom_mutation)
    .fitness(custom_fitness)
    .termination(complex_termination)
    .parallel(true)
    .elitism(true)
    .elite_count(5)
    .build()?
    .run(&mut rng)?;

Fugue Integration

Fugue-evo integrates deeply with fugue-ppl, a probabilistic programming library. This enables novel trace-based genetic operators.

Core Concept: Genomes as Traces

In fugue-ppl, a trace records all random choices made during program execution. Fugue-evo represents genomes as traces:

// A genome's genes become trace entries
genome.to_trace() → {
    addr!("gene", 0) → 1.234,
    addr!("gene", 1) → -0.567,
    addr!("gene", 2) → 3.890,
    // ...
}

Why Traces?

1. Selective Resampling

Mutation becomes selective resampling of addresses:

// Traditional mutation: perturb random genes
genes[i] += noise;

// Trace mutation: resample selected addresses
let addresses_to_mutate = select_addresses(&trace, probability);
let mutated_trace = resample(trace, addresses_to_mutate);

2. Structured Crossover

Crossover can respect genome structure:

// Exchange subtrees of traces
let child_trace = merge_traces(
    parent1_trace,
    parent2_trace,
    &crossover_points,
);

3. Probabilistic Interpretation

Genetic operators have probabilistic semantics:

  • Selection = Conditioning on high fitness
  • Mutation = Partial resampling from prior
  • Crossover = Trace merging

The EvolutionaryGenome Trait

pub trait EvolutionaryGenome: Clone + Send + Sync {
    /// Convert genome to Fugue trace
    fn to_trace(&self) -> Trace;

    /// Reconstruct genome from trace
    fn from_trace(trace: &Trace) -> Result<Self, GenomeError>;
}

Example: RealVector Implementation

impl EvolutionaryGenome for RealVector {
    fn to_trace(&self) -> Trace {
        let mut trace = Trace::new();
        for (i, &gene) in self.genes.iter().enumerate() {
            trace.insert(addr!("gene", i), gene);
        }
        trace
    }

    fn from_trace(trace: &Trace) -> Result<Self, GenomeError> {
        let mut genes = Vec::new();
        let mut i = 0;
        while let Some(&value) = trace.get(&addr!("gene", i)) {
            genes.push(value);
            i += 1;
        }
        Ok(RealVector::new(genes))
    }
}

Trace-Based Operators

Trace Mutation

use fugue_evo::fugue_integration::trace_operators::TraceMutation;

let mutation = TraceMutation::new(mutation_probability);
let mutated_genome = mutation.mutate_via_trace(&genome, &bounds, &mut rng);

How it works:

  1. Convert genome to trace
  2. For each address, decide whether to resample
  3. Resample selected addresses from prior (uniform within bounds)
  4. Reconstruct genome from mutated trace
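
The four steps can be sketched on a toy trace, with a HashMap standing in for the Fugue Trace and the per-address resample decision supplied explicitly (the real operator would draw it, and the new value, at random within the bounds):

```rust
use std::collections::HashMap;

/// Trace-mutation sketch: for each address, either keep the current
/// value or replace it with a freshly resampled one. `resample`
/// returns Some(new_value) for addresses selected for mutation.
fn trace_mutate(
    trace: &HashMap<String, f64>,
    resample: impl Fn(&str) -> Option<f64>,
) -> HashMap<String, f64> {
    trace
        .iter()
        .map(|(addr, &v)| (addr.clone(), resample(addr.as_str()).unwrap_or(v)))
        .collect()
}
```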

Trace Crossover

use fugue_evo::fugue_integration::trace_operators::TraceCrossover;

let crossover = TraceCrossover::new(crossover_type);
let (child1, child2) = crossover.crossover_via_trace(&p1, &p2, &mut rng);

How it works:

  1. Convert both parents to traces
  2. For each address, select source parent
  3. Merge into child traces
  4. Reconstruct child genomes
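
The merge step, sketched on toy traces; the per-address parent choice is explicit here where the real operator would randomize it, and both parents are assumed to share the same address set:

```rust
use std::collections::HashMap;

/// Trace-crossover merge sketch: for each address, the child takes the
/// value from parent 1 or parent 2 according to `take_from_p1`.
fn merge_traces(
    p1: &HashMap<String, f64>,
    p2: &HashMap<String, f64>,
    take_from_p1: impl Fn(&str) -> bool,
) -> HashMap<String, f64> {
    p1.iter()
        .map(|(addr, &v1)| {
            let v = if take_from_p1(addr.as_str()) { v1 } else { p2[addr] };
            (addr.clone(), v)
        })
        .collect()
}
```

Swapping the predicate's truth value yields the complementary sibling, which is how two children come out of one merge pass.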

Effect Handlers

Fugue uses effect handlers (poutine-style) for program transformation. Fugue-evo provides evolution-specific handlers:

Conditioning Handler

Interprets selection as conditioning:

use fugue_evo::fugue_integration::effect_handlers::ConditioningHandler;

// Selection biases toward high fitness
let handler = ConditioningHandler::new(fitness_function);
let selected_trace = handler.condition(trace, fitness_threshold);

Resampling Handler

Implements mutation as partial resampling:

use fugue_evo::fugue_integration::effect_handlers::ResamplingHandler;

let handler = ResamplingHandler::new(resample_probability);
let mutated_trace = handler.resample(trace, &prior, &mut rng);

Evolution Model

The full probabilistic evolution model:

use fugue_evo::fugue_integration::evolution_model::EvolutionModel;

let model = EvolutionModel::new()
    .with_prior(UniformPrior::new(&bounds))
    .with_likelihood(fitness_function)
    .with_mutation_kernel(GaussianKernel::new(sigma));

// One generation = one inference step
let posterior_population = model.step(prior_population, &mut rng);

Advanced: Custom Trace Structures

For complex genomes, design meaningful trace structures:

// Neural network genome
impl EvolutionaryGenome for NeuralNetwork {
    fn to_trace(&self) -> Trace {
        let mut trace = Trace::new();

        // Layer structure
        for (layer_idx, layer) in self.layers.iter().enumerate() {
            for (neuron_idx, neuron) in layer.neurons.iter().enumerate() {
                // Weight addresses
                for (weight_idx, &weight) in neuron.weights.iter().enumerate() {
                    trace.insert(
                        addr!("layer", layer_idx, "neuron", neuron_idx, "weight", weight_idx),
                        weight,
                    );
                }
                // Bias address
                trace.insert(
                    addr!("layer", layer_idx, "neuron", neuron_idx, "bias"),
                    neuron.bias,
                );
            }
        }

        trace
    }
}

This enables:

  • Layer-aware mutation (mutate one layer at a time)
  • Structural crossover (exchange layers between parents)
  • Hierarchical analysis of evolved networks

See Also

Type System

Fugue-evo uses Rust's type system to ensure correctness and provide good developer experience.

Generic Algorithm Builders

Algorithm builders use extensive generics:

pub struct SimpleGABuilder<G, F, S, C, M, Fit, Term> {
    // G: Genome type
    // F: Fitness value type
    // S: Selection operator
    // C: Crossover operator
    // M: Mutation operator
    // Fit: Fitness function
    // Term: Termination criterion
}

Why So Many Generics?

Type safety: Each component's compatibility is checked at compile time.

// This compiles: SBX works with RealVector
SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .crossover(SbxCrossover::new(20.0))

// This wouldn't compile: SBX doesn't implement CrossoverOperator<BitString>
SimpleGABuilder::<BitString, f64, _, _, _, _, _>::new()
    .crossover(SbxCrossover::new(20.0))  // Compile error!

Type Inference

Rust often infers generic parameters:

// Full explicit types (verbose)
SimpleGABuilder::<
    RealVector,
    f64,
    TournamentSelection,
    SbxCrossover,
    PolynomialMutation,
    Sphere,
    MaxGenerations,
>::new()

// With inference (same thing, cleaner)
SimpleGABuilder::<RealVector, f64, _, _, _, _, _>::new()
    .selection(TournamentSelection::new(3))
    .crossover(SbxCrossover::new(20.0))
    // Types inferred from usage

Trait Bounds

Genome Constraints

pub trait EvolutionaryGenome:
    Clone           // Population copying
    + Send          // Thread safety
    + Sync          // Thread safety
    + Serialize     // Checkpointing
    + DeserializeOwned
{
    // ...
}

Operator Constraints

pub trait SelectionOperator<G>: Send + Sync {
    fn select<R: Rng>(&self, population: &[(G, f64)], rng: &mut R) -> usize;
}

The Send + Sync bounds enable parallel evaluation.

Fitness Constraints

pub trait FitnessValue:
    Clone           // Result copying
    + PartialOrd    // Comparison for selection
    + Send + Sync   // Thread safety
{
    fn to_f64(&self) -> f64;
}

Associated Types

Some traits use associated types for output:

pub trait Fitness<G>: Send + Sync {
    type Value: FitnessValue;  // Associated type

    fn evaluate(&self, genome: &G) -> Self::Value;
}

// Usage
impl Fitness<RealVector> for Sphere {
    type Value = f64;  // Sphere returns f64
    // ...
}

impl Fitness<RealVector> for MultiObjective {
    type Value = ParetoFitness;  // Multi-objective returns ParetoFitness
    // ...
}

Error Handling

Result Types

Operations that can fail return Result:

pub type EvoResult<T> = Result<T, EvoError>;

pub fn build(self) -> EvoResult<SimpleGA<G, F, S, C, M, Fit, Term>>;

Error Types

#[derive(Debug, thiserror::Error)]
pub enum EvoError {
    #[error("Configuration error: {0}")]
    Configuration(String),

    #[error("Genome error: {0}")]
    Genome(#[from] GenomeError),

    #[error("Operator error: {0}")]
    Operator(String),

    // ...
}

Marker Traits

Some traits are markers for capabilities:

// Indicates genome supports bounds
pub trait BoundedGenome: EvolutionaryGenome {
    fn clamp_to_bounds(&mut self, bounds: &MultiBounds);
}

// Operators that respect bounds
pub trait BoundedMutationOperator<G: BoundedGenome>: MutationOperator<G> {
    fn mutate_bounded<R: Rng>(
        &self,
        genome: &mut G,
        bounds: &MultiBounds,
        rng: &mut R,
    );
}

Phantom Types

For type-level information without runtime cost:

pub struct Population<G, F> {
    individuals: Vec<Individual<G>>,
    _fitness: PhantomData<F>,  // Track fitness type without storing
}

Conditional Compilation

Features enable/disable functionality:

#[cfg(feature = "parallel")]
impl<G, F, Fit> Population<G, F>
where
    G: EvolutionaryGenome,
    F: FitnessValue,
    Fit: Fitness<G, Value = F>,
{
    pub fn evaluate_parallel(&mut self, fitness: &Fit) {
        // Rayon parallel implementation
    }
}

Type Aliases

For common patterns:

// Convenience aliases (generic over the fitness function)
pub type RealGA<Fit> = SimpleGA<RealVector, f64, TournamentSelection, SbxCrossover, PolynomialMutation, Fit, MaxGenerations>;

pub type BinaryGA<Fit> = SimpleGA<BitString, f64, TournamentSelection, UniformCrossover, BitFlipMutation, Fit, MaxGenerations>;

Best Practices

1. Let Inference Work

// Good: minimal type annotations
let ga = SimpleGABuilder::new()
    .bounds(bounds)
    .fitness(Sphere::new(10))
    // types inferred

// Verbose: unnecessary annotations
let ga: SimpleGA<RealVector, f64, ...> = SimpleGABuilder::new()

2. Use Concrete Types in Applications

// In library code: generic
fn run_optimization<G, F, Fit>(fitness: &Fit) -> EvoResult<G>
where
    G: EvolutionaryGenome,
    F: FitnessValue,
    Fit: Fitness<G, Value = F>,

// In application code: concrete
fn optimize_my_problem() -> EvoResult<RealVector> {
    let fitness = MyFitness::new();
    // ...
}

3. Understand Error Messages

Generic-heavy code can produce complex error messages. Key hints:

  • "trait bound not satisfied" = wrong operator for genome type
  • "cannot infer type" = add type annotations
  • "lifetime" issues = usually Send/Sync bounds

See Also

API Documentation

The complete API documentation is generated from rustdoc comments in the source code.

Viewing API Docs

Online

Visit the docs.rs page (when published):

https://docs.rs/fugue-evo

Local Generation

Generate and view locally:

cargo doc --open

Or generate without opening:

cargo doc --no-deps

Docs will be in target/doc/fugue_evo/index.html.

Module Structure

fugue_evo
├── algorithms          # Optimization algorithms
│   ├── simple_ga       # Simple Genetic Algorithm
│   ├── cmaes           # CMA-ES
│   ├── nsga2           # NSGA-II (multi-objective)
│   ├── island          # Island Model
│   └── eda             # EDA/UMDA
├── genome              # Genome types
│   ├── real_vector     # Continuous optimization
│   ├── bit_string      # Binary optimization
│   ├── permutation     # Ordering problems
│   ├── tree            # Genetic programming
│   └── bounds          # Search space bounds
├── operators           # Genetic operators
│   ├── selection       # Selection operators
│   ├── crossover       # Crossover operators
│   └── mutation        # Mutation operators
├── fitness             # Fitness functions
│   ├── traits          # Fitness trait
│   └── benchmarks      # Standard benchmarks
├── population          # Population management
├── termination         # Stopping criteria
├── hyperparameter      # Parameter adaptation
├── interactive         # Human-in-the-loop
├── checkpoint          # State persistence
├── diagnostics         # Statistics and tracking
├── fugue_integration   # PPL integration
└── error               # Error types

Prelude

For convenience, import everything from the prelude:

use fugue_evo::prelude::*;

This imports:

  • All algorithm builders
  • All genome types
  • All operators
  • All fitness traits and benchmarks
  • Common utility types

Key Entry Points

Algorithms

| Type | Description |
|------|-------------|
| SimpleGABuilder | General-purpose GA |
| CmaEs | CMA-ES optimizer |
| Nsga2Builder | Multi-objective NSGA-II |
| IslandModelBuilder | Parallel island model |
| InteractiveGABuilder | Human-in-the-loop |

Genomes

| Type | Description |
|------|-------------|
| RealVector | Real-valued vector |
| BitString | Binary string |
| Permutation | Ordering |
| TreeGenome | Tree structure |

Operators

| Type | Description |
|------|-------------|
| TournamentSelection | Tournament selection |
| SbxCrossover | Simulated binary crossover |
| PolynomialMutation | Polynomial mutation |
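
To make the selection entry concrete, here is a minimal self-contained sketch of the tournament idea (not the fugue-evo implementation): draw k random candidates and keep the fittest. A tiny deterministic LCG stands in for a real RNG crate.

```rust
// Deterministic linear congruential generator, used here only so the
// sketch runs without external crates.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self, bound: usize) -> usize {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        ((self.0 >> 33) as usize) % bound
    }
}

// Tournament selection: sample k indices, return the one with the
// highest fitness (maximization).
fn tournament_select(fitness: &[f64], k: usize, rng: &mut Lcg) -> usize {
    let mut best = rng.next(fitness.len());
    for _ in 1..k {
        let challenger = rng.next(fitness.len());
        if fitness[challenger] > fitness[best] {
            best = challenger;
        }
    }
    best
}

fn main() {
    let fitness = [0.1, 0.9, 0.4, 0.7];
    let mut rng = Lcg(42);
    let winner = tournament_select(&fitness, 3, &mut rng);
    // The winner is always a valid index; larger k raises selection pressure.
    assert!(winner < fitness.len());
}
```

Larger tournament sizes make selection greedier; k = 1 degenerates to uniform random selection.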

Documentation Conventions

Example Code

All examples in rustdoc are tested (where possible):

/// # Example
///
/// ```
/// use fugue_evo::prelude::*;
/// let genome = RealVector::new(vec![1.0, 2.0, 3.0]);
/// assert_eq!(genome.dimension(), 3);
/// ```

Feature Gates

Feature-gated items are marked:

/// Saves checkpoint to file.
///
/// **Requires feature:** `checkpoint`
#[cfg(feature = "checkpoint")]
pub fn save(&self, path: &Path) -> Result<(), Error>

See Also

Cross-references to related items:

/// Tournament selection operator.
///
/// # See Also
///
/// - [`RouletteWheelSelection`] - Alternative selection
/// - [`RankSelection`] - Rank-based alternative

Contributing

Thank you for your interest in contributing to fugue-evo! This guide will help you get started.

Getting Started

Prerequisites

  • Rust stable (1.70+)
  • Git

Setup

# Clone the repository
git clone https://github.com/fugue-evo/fugue-evo
cd fugue-evo

# Build
cargo build

# Run tests
cargo test

# Run examples
cargo run --example sphere_optimization

Development Workflow

Running Tests

# All tests
cargo test

# Specific test
cargo test test_name

# With output
cargo test -- --nocapture

# Property tests
cargo test --test property_tests

Code Quality

# Format code
cargo fmt

# Lint
cargo clippy

# Check all features
cargo check --all-features

Documentation

# Generate docs
cargo doc --open

# Build mdbook
cd docs && mdbook build

Code Style

Formatting

We use rustfmt with default settings:

cargo fmt

Linting

We use clippy:

cargo clippy -- -D warnings

Documentation

  • All public items should have doc comments
  • Include examples where helpful
  • Use # Panics, # Errors, # Safety sections as appropriate

/// Brief description.
///
/// Longer description if needed.
///
/// # Arguments
///
/// * `param` - Description
///
/// # Returns
///
/// Description of return value.
///
/// # Example
///
/// ```
/// // Example code
/// ```
pub fn function(param: Type) -> ReturnType {
    // ...
}

Making Changes

1. Create a Branch

git checkout -b feature/my-feature
# or
git checkout -b fix/my-fix

2. Make Changes

  • Write tests for new functionality
  • Update documentation
  • Follow existing code patterns

3. Test Thoroughly

cargo test
cargo clippy
cargo fmt -- --check

4. Commit

Use clear, descriptive commit messages:

feat: add new crossover operator for permutations

- Implement cycle crossover (CX)
- Add tests for edge cases
- Update operator reference docs

5. Submit PR

  • Describe what the PR does
  • Reference any related issues
  • Ensure CI passes

Types of Contributions

Bug Fixes

  1. Create an issue describing the bug
  2. Write a failing test
  3. Fix the bug
  4. Ensure test passes

New Features

  1. Discuss in an issue first
  2. Design API carefully
  3. Implement with tests
  4. Add documentation
  5. Add to mdbook if appropriate

Documentation

  • Fix typos and unclear explanations
  • Add examples
  • Improve API docs
  • Expand tutorials

Benchmarks

  • Add new benchmark functions
  • Performance comparisons
  • Algorithm benchmarks

Architecture Guidelines

Adding a New Algorithm

  1. Create module in src/algorithms/
  2. Implement algorithm struct
  3. Create builder with type-safe API
  4. Add to prelude
  5. Write tests
  6. Document thoroughly
  7. Add tutorial if complex
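
Step 3's "type-safe builder" typically means a type-state builder. A hedged sketch of the pattern (all names hypothetical, not the actual fugue-evo builders): `build` is only defined once the required field has been set, so a misconfigured builder fails to compile rather than erroring at runtime.

```rust
// Type-state markers: the builder's generic parameter records whether
// a required field has been provided yet.
struct Missing;
struct Set<T>(T);

struct AlgoBuilder<P> {
    population_size: P,
}

impl AlgoBuilder<Missing> {
    fn new() -> Self {
        AlgoBuilder { population_size: Missing }
    }

    // Setting the field transitions the type from Missing to Set.
    fn population_size(self, n: usize) -> AlgoBuilder<Set<usize>> {
        AlgoBuilder { population_size: Set(n) }
    }
}

impl AlgoBuilder<Set<usize>> {
    // `build` exists only on the fully-configured state.
    fn build(self) -> usize {
        self.population_size.0
    }
}

fn main() {
    let n = AlgoBuilder::new().population_size(100).build();
    assert_eq!(n, 100);
    // AlgoBuilder::new().build(); // does not compile: field not set
}
```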

Adding a New Genome Type

  1. Create module in src/genome/
  2. Implement EvolutionaryGenome trait
  3. Implement appropriate operators
  4. Add serialization support
  5. Add to prelude
  6. Write tests including trace roundtrip
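
The trace-roundtrip requirement in step 6 can be illustrated with a simplified stand-in (not the real EvolutionaryGenome trait): whatever representation a genome serializes to, deserializing it must reproduce an equal genome.

```rust
// Simplified genome with an explicit trace representation.
#[derive(Clone, Debug, PartialEq)]
struct BitGenome(Vec<bool>);

impl BitGenome {
    // Encode the genome as a byte trace.
    fn to_trace(&self) -> Vec<u8> {
        self.0.iter().map(|&b| b as u8).collect()
    }

    // Decode a trace; rejects bytes that are not 0 or 1.
    fn from_trace(trace: &[u8]) -> Option<Self> {
        trace
            .iter()
            .map(|&v| match v {
                0 => Some(false),
                1 => Some(true),
                _ => None,
            })
            .collect::<Option<Vec<_>>>()
            .map(BitGenome)
    }
}

fn main() {
    let genome = BitGenome(vec![true, false, true]);
    let trace = genome.to_trace();
    // Roundtrip property: decode(encode(g)) == g.
    assert_eq!(BitGenome::from_trace(&trace), Some(genome));
}
```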

Adding a New Operator

  1. Implement appropriate trait(s)
  2. Make Send + Sync
  3. Document parameters and behavior
  4. Add tests
  5. Add to reference docs
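
Steps 1 and 2 together look roughly like the following self-contained sketch (trait name and shape illustrative, not the fugue-evo API): the operator implements a trait whose supertraits are Send + Sync, so parallel evaluation can share it across threads.

```rust
// Operator trait with Send + Sync supertraits, so any implementor is
// safe to share across Rayon worker threads.
trait Mutation<G>: Send + Sync {
    fn mutate(&self, genome: &mut G);
}

// Swap mutation for permutations. The two positions are fixed here for
// determinism; a real operator would draw them from an RNG.
struct SwapMutation {
    a: usize,
    b: usize,
}

impl Mutation<Vec<usize>> for SwapMutation {
    fn mutate(&self, genome: &mut Vec<usize>) {
        genome.swap(self.a, self.b);
    }
}

// Compile-time check that the operator really is Send + Sync.
fn assert_send_sync<T: Send + Sync>() {}

fn main() {
    assert_send_sync::<SwapMutation>();
    let op = SwapMutation { a: 0, b: 2 };
    let mut perm = vec![0, 1, 2, 3];
    op.mutate(&mut perm);
    assert_eq!(perm, vec![2, 1, 0, 3]);
}
```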

Testing Guidelines

Unit Tests

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_basic_functionality() {
        // Arrange
        let input = setup();

        // Act
        let result = function_under_test(input);

        // Assert
        assert_eq!(result, expected);
    }
}

Property Tests

We use proptest for property-based testing:

proptest! {
    #[test]
    fn test_trace_roundtrip(genome in any_genome()) {
        let trace = genome.to_trace();
        let reconstructed = Genome::from_trace(&trace).unwrap();
        prop_assert_eq!(genome, reconstructed);
    }
}

Questions?

  • Open an issue for bugs or feature requests
  • Use discussions for questions
  • Check existing issues before creating new ones

License

By contributing, you agree that your contributions will be licensed under the project's license.

Changelog

All notable changes to fugue-evo will be documented here.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[Unreleased]

Added

  • Comprehensive mdbook documentation
  • Getting Started guide
  • Tutorials for all major features
  • How-To guides for common tasks
  • Reference documentation for all modules
  • Architecture documentation

[0.1.0] - Initial Release

Added

Algorithms

  • SimpleGA: General-purpose genetic algorithm with builder pattern
  • CMA-ES: Covariance Matrix Adaptation Evolution Strategy
  • NSGA-II: Non-dominated Sorting Genetic Algorithm II for multi-objective optimization
  • Island Model: Parallel evolution with migration
  • EDA/UMDA: Estimation of Distribution Algorithm
  • Steady-State GA: Non-generational evolution
  • Evolution Strategy: (μ+λ) and (μ,λ) strategies
  • Interactive GA: Human-in-the-loop optimization

Genome Types

  • RealVector: Continuous optimization
  • BitString: Binary optimization
  • Permutation: Ordering problems
  • TreeGenome: Genetic programming
  • DynamicRealVector: Variable-length real vectors
  • Composite genomes

Operators

  • Selection: Tournament, Roulette Wheel, Rank-based
  • Crossover: SBX, Uniform, One-point, Two-point, Order (OX), PMX
  • Mutation: Polynomial, Gaussian, Bit-flip, Swap, Inversion

Fitness

  • Fitness trait with generic value types
  • ParetoFitness for multi-objective
  • Benchmark functions: Sphere, Rastrigin, Rosenbrock, Ackley, ZDT suite

Features

  • Checkpointing and recovery
  • Parallel evaluation via Rayon
  • Termination criteria combinators
  • Hyperparameter adaptation (schedules, Bayesian learning)
  • Fugue PPL integration for trace-based operators

WASM Support

  • fugue-evo-wasm package for browser/Node.js
  • JavaScript bindings via wasm-bindgen

Dependencies

  • nalgebra 0.33 for linear algebra
  • rand 0.8 for random number generation
  • rayon 1.10 for parallelism
  • serde for serialization
  • fugue-ppl 0.1.0 for probabilistic programming integration

Version History

| Version | Date | Highlights |
|---------|------|------------|
| 0.1.0 | TBD | Initial release |

Upgrading

From Pre-release to 0.1.0

If you were using a pre-release version:

  1. Update Cargo.toml:

    fugue-evo = "0.1"
    
  2. Check for API changes in builders

  3. Update any custom operators to new trait signatures

Deprecations

None currently.

Security

No security advisories.