
Feature Space Illumination with MAPElites

MAPElites (stylized as MAP-Elites in the original paper [1]) is an algorithm in which the feature space (sometimes called the behavior space) is organized into cells, and each cell hosts its own local competition: if a new solution's features fall into a cell and the new solution has a better fitness than the cell's current occupant, the new solution replaces the old one. Organizing the population this way enforces diversity, which in turn allows one to illuminate the search space according to the criteria specified by the user.
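As an illustration (a minimal sketch, not EvoTorch's actual implementation), the local competition rule at the heart of the algorithm can be expressed as follows, where archive, cell_index, and try_insert are hypothetical names:

def try_insert(archive: dict, cell_index: tuple, solution, fitness: float):
    # Sketch of MAP-Elites' local competition rule (minimization).
    # `archive` maps a cell index (derived from a solution's features)
    # to the (solution, fitness) pair currently occupying that cell.
    occupant = archive.get(cell_index)
    if occupant is None or fitness < occupant[1]:
        # The cell is empty, or the new solution is better: it takes the cell
        archive[cell_index] = (solution, fitness)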

In this notebook, we use MAPElites to illuminate the search space of the bi-objective test problem of Kursawe [2]. Given these two functions:

\[ \begin{array}{r c l} f_1(x) &=& \sum_{i \in \{1, 2\}} \Big[ -10 \cdot \exp \big( -0.2 \cdot \sqrt{x^2_i + x^2_{i+1}} \big) \Big] \\ f_2(x) &=& \sum_{i \in \{1, 2, 3\}} \Big[ |x_i|^{0.8} + 5 \cdot \sin \big( x^3_i \big) \Big] \end{array} \]

Kursawe's test problem, in its original form, is defined as:

\[ \begin{array}{r l} \text{minimize} & f_1(x) \\ \text{minimize} & f_2(x) \\ \text{subject to} & -5 \leq x_i \leq 5 \quad \forall i \in \{1, 2, 3\} \end{array} \]

In this notebook, however, we modify this problem such that the goal is to minimize \(f_1(x) + f_2(x)\), and the two features are \(f_1(x)\) and \(f_2(x)\).
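In other words, the problem tackled in this notebook is:

\[ \begin{array}{r l} \text{minimize} & f_1(x) + f_2(x) \\ \text{features} & f_1(x), \; f_2(x) \\ \text{subject to} & -5 \leq x_i \leq 5 \quad \forall i \in \{1, 2, 3\} \end{array} \]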

from evotorch import Problem
from evotorch.algorithms import MAPElites
from evotorch.operators import GaussianMutation
from evotorch.logging import StdOutLogger, PandasLogger
import matplotlib.pyplot as plt
import torch

Below is the definition of the bi-objective Kursawe function. The input of this function is a tensor x of shape (n, 3), where n is the number of solutions. The returned tensor is of shape (n, 2), in which the column with index 0 stores \(f_1(\cdot)\) and the column with index 1 stores \(f_2(\cdot)\).

def kursawe(x: torch.Tensor) -> torch.Tensor:
    # f1: sum over i in {1, 2} of -10 * exp(-0.2 * sqrt(x_i^2 + x_{i+1}^2))
    f1 = torch.sum(
        -10 * torch.exp(
            -0.2 * torch.sqrt(x[:, 0:2] ** 2.0 + x[:, 1:3] ** 2.0)
        ),
        dim=-1,
    )
    # f2: sum over i in {1, 2, 3} of |x_i|^0.8 + 5 * sin(x_i^3)
    f2 = torch.sum(
        (torch.abs(x) ** 0.8) + (5 * torch.sin(x ** 3)),
        dim=-1,
    )
    # Stack the two objectives column-wise: resulting shape is (n, 2)
    fitnesses = torch.stack([f1, f2], dim=-1)
    return fitnesses

Below is a wrapper for the kursawe function. The return value of the wrapped/modified Kursawe function is a tensor of shape (n, 3) where the column with index 0 stores \(f_1(\cdot)+f_2(\cdot)\), column with index 1 stores \(f_1(\cdot)\), and column with index 2 stores \(f_2(\cdot)\).

def kursawe_with_features(x: torch.Tensor) -> torch.Tensor:
    # Compute the two original objectives: shape (n, 2)
    fitnesses = kursawe(x)
    # The single objective to minimize is f1 + f2: shape (n, 1)
    total_fitness = torch.sum(fitnesses, dim=-1).reshape(-1, 1)
    # Columns of the result: [f1 + f2, f1, f2]
    return torch.hstack([total_fitness, fitnesses])
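As a quick sanity check (this snippet is our addition, not part of the original setup), we can feed a small random batch into the wrapper and confirm the expected shape:

# Evaluate 5 random solutions drawn uniformly from [-5, 5]
x = (torch.rand(5, 3) * 10.0) - 5.0
print(kursawe_with_features(x).shape)  # expected: torch.Size([5, 3])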

We now define our optimization problem. Notice that:

  • We have only one objective with objective sense "min"
  • We have extra evaluation data whose length is 2 (eval_data_length=2)

With this configuration, we declare that we expect evaluation result tensors to have 3 columns: leftmost column for the objective, and the remaining columns for features.

problem = Problem(
    "min",
    kursawe_with_features,
    solution_length=3,
    eval_data_length=2,
    bounds=(-5.0, 5.0),
    vectorized=True,
)

problem
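To see the three-column layout in action, one can optionally generate and evaluate a small batch of solutions. The snippet below assumes EvoTorch's standard Problem API (generate_batch, evaluate, and the evals property of a solution batch):

# Optional check: generate and evaluate 10 random solutions
batch = problem.generate_batch(10)
problem.evaluate(batch)
print(batch.evals.shape)  # expected: torch.Size([10, 3])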

Below, we create a hypergrid for our feature space. In our hypergrid, the first feature (\(f_1\)) spans from -20 to -10, and the second feature (\(f_2\)) spans from -14 to 4. For each feature, we declare that we want 50 bins.

feature_grid = MAPElites.make_feature_grid(
    lower_bounds=[-20, -14],
    upper_bounds=[-10, 4],
    num_bins=50,
    dtype="float32",
)

feature_grid.shape

Now that we have our hypergrid, we can instantiate MAPElites.

searcher = MAPElites(
    problem,
    operators=[GaussianMutation(problem, stdev=0.1)],
    re_evaluate=False,
    feature_grid=feature_grid,
)

searcher

Now we run our evolutionary search.

StdOutLogger(searcher, interval=10)
pandas_logger = PandasLogger(searcher)

searcher.run(100)

Below we can see how many of the cells within our hypergrid are filled:

torch.count_nonzero(searcher.filled)
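Since we attached a PandasLogger earlier, we can also plot how the mean evaluation of the population evolved over the generations. This assumes the default mean_eval status item is among the logged metrics:

# Plot the progress recorded by the PandasLogger
pandas_logger.to_dataframe()["mean_eval"].plot()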

Now we plot our solutions. In the plot, darker colors represent better solutions. The darkest points, therefore, should trace an outline of the Pareto front.

all_f1 = []
all_f2 = []
all_sumf = []

# Collect the feature values of the solutions stored in filled cells
for i, solution in enumerate(searcher.population):
    if searcher.filled[i]:
        f1 = float(solution.evals[1])  # feature 0: f1(x)
        f2 = float(solution.evals[2])  # feature 1: f2(x)
        all_f1.append(f1)
        all_f2.append(f2)
        all_sumf.append(f1 + f2)

# Range of the total fitness values, used for normalization below
min_sumf = min(all_sumf)
max_sumf = max(all_sumf)
diff_sumf = max_sumf - min_sumf

# Map each solution's total fitness to a grayscale hex color:
# lower (i.e. better) totals map to darker colors.
c = []
for sumf in all_sumf:
    norm_f = (sumf - min_sumf) / diff_sumf
    color = hex(int(norm_f * 255))[2:]
    if len(color) == 1:
        color = "0" + color
    color = f"#{color}{color}{color}"
    c.append(color)

plt.scatter(all_f1, all_f2, c=c)

References

[1] Mouret, J. B., & Clune, J. (2015). Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909.

[2] Kursawe, F. (1991). A variant of evolution strategies for vector optimization. In: Schwefel, HP., Männer, R. (eds) Parallel Problem Solving from Nature. PPSN 1990. Lecture Notes in Computer Science, vol 496. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0029752

