Index
Purely functional implementations of optimization algorithms.
Reasoning.
PyTorch has a functional API within its namespace `torch.func`. In addition to allowing one to choose a pure functional programming style, `torch.func` enables powerful batched operations via `torch.func.vmap`.

To be able to work with the functional programming style of `torch.func`, EvoTorch introduces functional implementations of evolutionary search algorithms and optimizers within the namespace `evotorch.algorithms.functional`. These algorithm implementations are compatible with `torch.func.vmap`, and therefore they can perform batched evolutionary searches (e.g. they can work not just on a single population, but on batches of populations). Such batched searches can be helpful in the following scenarios:
Scenario 1: Nested optimization. The main optimization problem at hand might have internal optimization problems, so that whenever the fitness function of the main problem is evaluated, an internal optimization problem has to be solved for each solution of the main problem. In such a scenario, one might want to use a functional evolutionary search for the inner optimization problem, so that a batch of populations is formed, where each batch item represents a separate population associated with a separate solution of the main problem.
Scenario 2: Batched hyperparameter search. If the user is interested in using a search algorithm that has a functional implementation, the user might want to implement a hyperparameter search in such a way that there is a batch of hyperparameters (instead of just a single set of hyperparameters), and the search is performed on a batch of populations. In such a setting, each population within the population batch is associated with a different hyperparameter set within the hyperparameter batch.
Example: cross entropy method. Let us assume that we have the following fitness function, whose output we wish to minimize:
import torch


def f(x: torch.Tensor) -> torch.Tensor:
    assert x.ndim == 2, "Please pass `x` as a 2-dimensional tensor"
    return torch.sum(x**2, dim=-1)
Let us initialize our search from a random point:
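For example, mirroring the initialization used in the gradient-based example further below (the solution length of 1000 is an arbitrary choice):

solution_length = 1000
center_init = torch.randn(solution_length, dtype=torch.float32) * 10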
Now we can initialize our cross entropy method like this:
from evotorch.algorithms.functional import cem, cem_ask, cem_tell

state = cem(
    # Center point of the initial search distribution:
    center_init=center_init,
    #
    # Standard deviation of the initial search distribution:
    stdev_init=10.0,
    #
    # Top half of the population are to be chosen as parents:
    parenthood_ratio=0.5,
    #
    # We wish to minimize the fitnesses:
    objective_sense="min",
    #
    # A standard deviation item is not allowed to change more than
    # 1% of its original value:
    stdev_max_change=0.01,
)
At this point, we have an initial state of our cross entropy method search, stored by the variable `state`. Now, we can implement a loop and perform multiple generations of evolutionary search like this:
num_generations = 1000

for generation in range(1, 1 + num_generations):
    # Ask for a new population (of size 1000) from cross entropy method
    solutions = cem_ask(state, popsize=1000)

    # At this point, `solutions` is a regular PyTorch tensor, ready to be
    # passed to the function `f`.
    # `solutions` is a 2-dimensional tensor of shape (N, L) where `N`
    # is the number of solutions, and `L` is the length of a solution.
    # Our example fitness function `f` is implemented in such a way that
    # we can pass our 2-dimensional `solutions` tensor into it directly.
    # We will receive `fitnesses` as a 1-dimensional tensor of length `N`.
    fitnesses = f(solutions)

    # Let us report the mean of fitnesses to see the progress
    print("Generation:", generation, " Mean of fitnesses:", torch.mean(fitnesses))

    # Now, we inform cross entropy method of the latest state of the search,
    # the latest population, and the latest fitnesses, so that it can give us
    # the next state of the search.
    state = cem_tell(state, solutions, fitnesses)
At the end of the evolutionary search (or, actually, at any point), one can analyze the `state` tuple to get information about the current status of the search distribution. These state tuples are named tuples, and therefore, the data they store are labeled.
In the case of cross entropy method, the latest center of the search
distribution can be obtained via:
latest_center = state.center
# Note, in the case of pgpe, this would be:
# latest_center = state.optimizer_state.center
Notes on manipulating the evolutionary search.
If, at any point of the search, you would like to change a hyperparameter, you can do so by creating a modified copy of your latest `state` tuple and passing it to the ask method of your evolutionary search (which, in the case of the cross entropy method, is `cem_ask`). Similarly, if you wish to change the center point of the search, you can pass a modified state tuple containing the new center point to `cem_ask`.
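Since these states are named tuples, one convenient way of creating such a modified copy is the standard `_replace(...)` method of named tuples. Below is a minimal sketch that continues the cross entropy method example above (the new values are arbitrary):

# Copy of the state with a looser limit on the standard deviation change.
# `stdev_max_change` is stored within `CEMState` as a vector, so we build
# a new vector with the same shape as `state.stdev`:
modified_state = state._replace(stdev_max_change=torch.full_like(state.stdev, 0.05))

# Copy of the state with a new center point:
moved_state = modified_state._replace(center=torch.zeros_like(state.center))

# The modified state tuple is then passed to the ask function as usual:
solutions = cem_ask(moved_state, popsize=1000)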
Notes on batching.
In regular non-batched cases, functional search algorithms expect the `center_init` argument as a 1-dimensional tensor. If `center_init` is given as a tensor with 2 or more dimensions, the extra leftmost dimensions will be considered as batch dimensions, and therefore the evolutionary search itself will be batched (which means that the ask method of the search algorithm will return a batch of populations). Furthermore, certain hyperparameters can also be given in batches. See the specific documentation of the functional algorithms to see which hyperparameters support batching.
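As an illustrative sketch (the hyperparameter values below are arbitrary), giving `center_init` with a leading batch dimension makes the search batched, so that `cem_ask` returns a batch of populations:

import torch
from evotorch.algorithms.functional import cem, cem_ask, cem_tell

batch_size, solution_length = 3, 20

# A batch of 3 starting points, shaped (3, 20). The extra leftmost
# dimension is interpreted as a batch dimension.
batched_centers = torch.randn(batch_size, solution_length) * 10

state = cem(
    center_init=batched_centers,
    stdev_init=10.0,
    parenthood_ratio=0.5,
    objective_sense="min",
)

# A fitness function that reduces over the rightmost dimension, so that it
# works both for (N, L)-shaped and for (B, N, L)-shaped populations:
def sphere(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x**2, dim=-1)

for _ in range(10):
    solutions = cem_ask(state, popsize=50)  # a population batch of shape (3, 50, 20)
    fitnesses = sphere(solutions)           # fitnesses of shape (3, 50)
    state = cem_tell(state, solutions, fitnesses)

# One center point per batch item:
print(state.center.shape)  # expected: torch.Size([3, 20])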
When working with batched populations, it is important to make sure that the fitness function can work with an arbitrary number of dimensions (not just 2 dimensions). One way to implement such fitness functions is with the help of the `rowwise` decorator:
from evotorch.decorators import rowwise


@rowwise
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x**2)
When decorated with `@rowwise`, we can implement our function as if the tensor `x` were a 1-dimensional tensor. If the decorated `f` receives `x` not as a vector, but as a matrix, then it will perform the same operation on each row of the matrix, in a vectorized manner. If `x` has 3 or more dimensions, the extra leftmost dimensions will be considered as batch dimensions, affecting the shape of the resulting tensor accordingly.
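As a quick illustration of this behavior, the shapes below follow directly from the description above (a small sketch, assuming `f` is the `@rowwise`-decorated function defined just before):

vector = torch.randn(5)
matrix = torch.randn(10, 5)
batch_of_matrices = torch.randn(3, 10, 5)

print(f(vector).shape)             # expected: torch.Size([])       (a scalar fitness)
print(f(matrix).shape)             # expected: torch.Size([10])     (one fitness per row)
print(f(batch_of_matrices).shape)  # expected: torch.Size([3, 10])  (a batch of row-wise results)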
Example: gradient-based search.
This namespace also provides functional implementations of various gradient-based optimizers. The reasoning behind the existence of these implementations is two-fold: (i) these optimizers are used by the functional `pgpe` implementation (for handling the momentum); and (ii) having these optimizers available with a similar API allows the user to switch back and forth between evolutionary and gradient-based search for solving the same problem, hopefully without having to change the code too much.
Let us consider the same fitness function again, in its `@rowwise` form, so that it can work with a single vector or with a batch of such vectors:
from evotorch.decorators import rowwise


@rowwise
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x**2)
To solve this optimization problem using the Adam optimizer, one can do the following:
from evotorch.algorithms.functional import adam, adam_ask, adam_tell
from torch.func import grad

# Prepare an initial search point
solution_length = 1000
center_init = torch.randn(solution_length, dtype=torch.float32) * 10

# Initialize the Adam optimizer
state = adam(
    center_init=center_init,
    center_learning_rate=0.001,
    beta1=0.9,
    beta2=0.999,
    epsilon=1e-8,
)

num_iterations = 1000

for iteration in range(1, 1 + num_iterations):
    # Get the current search point of the Adam search
    center = adam_ask(state)

    # Get the gradient.
    # Negative, because we want to minimize f.
    gradient = -(grad(f)(center))

    # Inform the Adam optimizer of the gradient to follow, and get the next
    # state of the search
    state = adam_tell(state, follow_grad=gradient)

# Store the final solution
final_solution = adam_ask(state)

# or, alternatively:
# final_solution = state.center
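Since the functional optimizers share the same initialize/ask/tell interface, switching from Adam to another optimizer mostly amounts to changing the initialization. For example, here is a sketch of the same loop using the functional `sgd` with momentum (the hyperparameter values are arbitrary):

from evotorch.algorithms.functional import sgd, sgd_ask, sgd_tell

state = sgd(center_init=center_init, center_learning_rate=0.001, momentum=0.9)

for iteration in range(1, 1 + num_iterations):
    center = sgd_ask(state)
    gradient = -(grad(f)(center))
    state = sgd_tell(state, follow_grad=gradient)

final_solution = sgd_ask(state)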
Solving a stateful Problem object using functional algorithms. If you wish to solve a stateful Problem using a functional optimization algorithm, you can obtain a callable evaluator out of that Problem object, and then use it for computing the fitnesses. See the following example:
from evotorch import Problem, SolutionBatch
from evotorch.algorithms.functional import cem, cem_ask, cem_tell


class MyProblem(Problem):
    def __init__(self): ...

    def _evaluate_batch(self, batch: SolutionBatch):
        # Stateful batch evaluation code goes here
        ...


# Instantiate the problem
problem = MyProblem()

# Make a callable fitness evaluator
fproblem = problem.make_callable_evaluator()

# Make an initial solution
center_init = torch.randn(problem.solution_length, dtype=torch.float32) * 10

# Prepare a cross entropy method search
state = cem(
    center_init=center_init,
    stdev_init=10.0,
    parenthood_ratio=0.5,
    objective_sense="min",
    stdev_max_change=0.01,
)

num_generations = 1000

for generation in range(1, 1 + num_generations):
    # Get a population
    solutions = cem_ask(state, popsize=1000)

    # Call the evaluator to get the fitnesses
    fitnesses = fproblem(solutions)

    # Let us report the mean of fitnesses to see the progress
    print("Generation:", generation, " Mean of fitnesses:", torch.mean(fitnesses))

    # Now, we inform cross entropy method of the latest state of the search,
    # the latest population, and the latest fitnesses, so that it can give us
    # the next state of the search.
    state = cem_tell(state, solutions, fitnesses)

# Center of the latest search distribution
latest_center = state.center
funcadam
AdamState (tuple)
AdamState(center, center_learning_rate, beta1, beta2, epsilon, m, v, t)
Source code in evotorch/algorithms/functional/funcadam.py
adam(*, center_init, center_learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08)
Initialize an Adam optimizer and return its initial state.
Reference:
Kingma, D. P. and J. Ba (2015).
Adam: A method for stochastic optimization.
In Proceedings of 3rd International Conference on Learning Representations.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
center_init | Union[torch.Tensor, numpy.ndarray] | Starting point for the Adam search. Expected as a PyTorch tensor with at least 1 dimension. If there are 2 or more dimensions, the extra leftmost dimensions are interpreted as batch dimensions. | required |
center_learning_rate | Union[numbers.Number, numpy.ndarray, torch.Tensor] | Learning rate (i.e. the step size) for the Adam updates. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | 0.001 |
beta1 | Union[numbers.Number, numpy.ndarray, torch.Tensor] | beta1 hyperparameter for the Adam optimizer. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | 0.9 |
beta2 | Union[numbers.Number, numpy.ndarray, torch.Tensor] | beta2 hyperparameter for the Adam optimizer. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | 0.999 |
epsilon | Union[numbers.Number, numpy.ndarray, torch.Tensor] | epsilon hyperparameter for the Adam optimizer. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | 1e-08 |
Returns:
Type | Description |
---|---|
AdamState | A named tuple of type AdamState, representing the initial state of the Adam optimizer. |
Source code in evotorch/algorithms/functional/funcadam.py
def adam(
*,
center_init: BatchableVector,
center_learning_rate: BatchableScalar = 0.001,
beta1: BatchableScalar = 0.9,
beta2: BatchableScalar = 0.999,
epsilon: BatchableScalar = 1e-8,
) -> AdamState:
"""
Initialize an Adam optimizer and return its initial state.
Reference:
Kingma, D. P. and J. Ba (2015).
Adam: A method for stochastic optimization.
In Proceedings of 3rd International Conference on Learning Representations.
Args:
center_init: Starting point for the Adam search.
Expected as a PyTorch tensor with at least 1 dimension.
If there are 2 or more dimensions, the extra leftmost dimensions
are interpreted as batch dimensions.
center_learning_rate: Learning rate (i.e. the step size) for the Adam
updates. Can be a scalar or a multidimensional tensor.
If given as a tensor with multiple dimensions, those dimensions
will be interpreted as batch dimensions.
beta1: beta1 hyperparameter for the Adam optimizer.
Can be a scalar or a multidimensional tensor.
If given as a tensor with multiple dimensions, those dimensions
will be interpreted as batch dimensions.
beta2: beta2 hyperparameter for the Adam optimizer.
Can be a scalar or a multidimensional tensor.
If given as a tensor with multiple dimensions, those dimensions
will be interpreted as batch dimensions.
epsilon: epsilon hyperparameter for the Adam optimizer.
Can be a scalar or a multidimensional tensor.
If given as a tensor with multiple dimensions, those dimensions
will be interpreted as batch dimensions.
Returns:
A named tuple of type `AdamState`, representing the initial state
of the Adam optimizer.
"""
center_init = torch.as_tensor(center_init)
dtype = center_init.dtype
device = center_init.device
def as_tensor(x) -> torch.Tensor:
return torch.as_tensor(x, dtype=dtype, device=device)
center_learning_rate = as_tensor(center_learning_rate)
beta1 = as_tensor(beta1)
beta2 = as_tensor(beta2)
epsilon = as_tensor(epsilon)
m = torch.zeros_like(center_init)
v = torch.zeros_like(center_init)
t = torch.zeros(center_init.shape[:-1], dtype=dtype, device=device)
return AdamState(
center=center_init,
center_learning_rate=center_learning_rate,
beta1=beta1,
beta2=beta2,
epsilon=epsilon,
m=m,
v=v,
t=t,
)
adam_ask(state)
Get the search point stored by the given `AdamState`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
state | AdamState | The current state of the Adam optimizer. | required |
Returns:
Type | Description |
---|---|
Tensor | The search point as a 1-dimensional tensor in the non-batched case, or as a multi-dimensional tensor if the Adam search is batched. |
Source code in evotorch/algorithms/functional/funcadam.py
def adam_ask(state: AdamState) -> torch.Tensor:
"""
Get the search point stored by the given `AdamState`.
Args:
state: The current state of the Adam optimizer.
Returns:
The search point as a 1-dimensional tensor in the non-batched case,
or as a multi-dimensional tensor if the Adam search is batched.
"""
return state.center
adam_tell(state, *, follow_grad)
Tell the Adam optimizer the current gradient to get its next state.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
state | AdamState | The current state of the Adam optimizer. | required |
follow_grad | Union[torch.Tensor, numpy.ndarray] | Gradient at the current point of the Adam search. Can be a 1-dimensional tensor in the non-batched case, or a multi-dimensional tensor in the batched case. | required |
Returns:
Type | Description |
---|---|
AdamState | The updated state of Adam with the given gradient applied. |
Source code in evotorch/algorithms/functional/funcadam.py
def adam_tell(state: AdamState, *, follow_grad: BatchableVector) -> AdamState:
"""
Tell the Adam optimizer the current gradient to get its next state.
Args:
state: The current state of the Adam optimizer.
follow_grad: Gradient at the current point of the Adam search.
Can be a 1-dimensional tensor in the non-batched case,
or a multi-dimensional tensor in the batched case.
Returns:
The updated state of Adam with the given gradient applied.
"""
new_center, new_m, new_v, new_t = _adam_step(
follow_grad,
state.center,
state.center_learning_rate,
state.beta1,
state.beta2,
state.epsilon,
state.m,
state.v,
state.t,
)
return AdamState(
center=new_center,
center_learning_rate=state.center_learning_rate,
beta1=state.beta1,
beta2=state.beta2,
epsilon=state.epsilon,
m=new_m,
v=new_v,
t=new_t,
)
funccem
CEMState (tuple)
CEMState(center, stdev, stdev_min, stdev_max, stdev_max_change, parenthood_ratio, maximize)
Source code in evotorch/algorithms/functional/funccem.py
cem(*, center_init, parenthood_ratio, objective_sense, stdev_init=None, radius_init=None, stdev_min=None, stdev_max=None, stdev_max_change=None)
Get an initial state for the cross entropy method (CEM).
The received initial state, a named tuple of type `CEMState`, is to be passed to the function `cem_ask(...)` to receive the solutions belonging to the first generation of the evolutionary search.
References:
Rubinstein, R. (1999). The cross-entropy method for combinatorial
and continuous optimization.
Methodology and computing in applied probability, 1(2), 127-190.
Duan, Y., Chen, X., Houthooft, R., Schulman, J., Abbeel, P. (2016).
Benchmarking deep reinforcement learning for continuous control.
International conference on machine learning. PMLR, 2016.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
center_init | Union[torch.Tensor, numpy.ndarray] | Center (i.e. mean) of the initial search distribution. Expected as a PyTorch tensor with at least 1 dimension. If the given center tensor has more than 1 dimensions, the extra leftmost dimensions will be interpreted as batch dimensions. | required |
stdev_init | Union[float, torch.Tensor, numpy.ndarray] | Standard deviation of the initial search distribution. If this is given as a scalar s, the standard deviation for the search distribution will be interpreted as [s, s, ..., s] whose length is the same with the length of center_init. If this is given as a 1-dimensional tensor, the given tensor will be interpreted as the standard deviation vector. If this is given as a tensor with at least 2 dimensions, the extra leftmost dimension(s) will be interpreted as batch dimensions. If you wish to express the coverage area of the initial search distribution in terms of "radius" instead, you can leave stdev_init as None, and provide a value for the argument radius_init. | None |
radius_init | Union[float, numbers.Number, numpy.ndarray, torch.Tensor] | Radius for the initial search distribution, representing the euclidean norm for the first standard deviation vector. Setting this value as r means that the standard deviation vector will be initialized as a vector [s, s, ..., s] whose norm will be equal to r. In the non-batched case, radius_init is expected as a scalar value. If radius_init is given as a tensor with 1 or more dimensions, those dimensions will be considered as batch dimensions. If you wish to express the coverage area of the initial search distribution in terms of the standard deviation values instead, you can leave radius_init as None, and provide a value for the argument stdev_init. | None |
parenthood_ratio | float | Proportion of the solutions that will be chosen as the parents for the next generation. For example, if this is given as 0.5, the top 50% of the solutions will be chosen as parents. | required |
objective_sense | str | Expected as a string, either as 'min' or as 'max'. Determines if the goal is to minimize or is to maximize. | required |
stdev_min | Union[float, torch.Tensor, numpy.ndarray] | Minimum allowed standard deviation for the search distribution. Can be given as a scalar or as a tensor with one or more dimensions. When given with at least 2 dimensions, the extra leftmost dimensions will be interpreted as batch dimensions. | None |
stdev_max | Union[float, torch.Tensor, numpy.ndarray] | Maximum allowed standard deviation for the search distribution. Can be given as a scalar or as a tensor with one or more dimensions. When given with at least 2 dimensions, the extra leftmost dimensions will be interpreted as batch dimensions. | None |
stdev_max_change | Union[float, torch.Tensor, numpy.ndarray] | Maximum allowed change for the standard deviation vector. If this is given as a scalar, this scalar will serve as a limiter for the change of the entire standard deviation vector. For example, a scalar value of 0.2 means that the elements of the standard deviation vector cannot change more than the 20% of their original values. If this is given as a vector (i.e. as a 1-dimensional tensor), each element of stdev_max_change will serve as a limiter to its corresponding element within the standard deviation vector. If stdev_max_change is given as a tensor with at least 2 dimensions, the extra leftmost dimension(s) will be interpreted as batch dimensions. If you do not wish to have such a limiter, you can leave this as None. | None |
Returns:
Type | Description |
---|---|
CEMState | A named tuple, of type CEMState, storing the hyperparameters and the initial state of the cross entropy method. |
Source code in evotorch/algorithms/functional/funccem.py
def cem(
*,
center_init: BatchableVector,
parenthood_ratio: float,
objective_sense: str,
stdev_init: Optional[Union[float, BatchableVector]] = None,
radius_init: Optional[Union[float, BatchableScalar]] = None,
stdev_min: Optional[Union[float, BatchableVector]] = None,
stdev_max: Optional[Union[float, BatchableVector]] = None,
stdev_max_change: Optional[Union[float, BatchableVector]] = None,
) -> CEMState:
"""
Get an initial state for the cross entropy method (CEM).
The received initial state, a named tuple of type `CEMState`, is to be
passed to the function `cem_ask(...)` to receive the solutions belonging
to the first generation of the evolutionary search.
References:
Rubinstein, R. (1999). The cross-entropy method for combinatorial
and continuous optimization.
Methodology and computing in applied probability, 1(2), 127-190.
Duan, Y., Chen, X., Houthooft, R., Schulman, J., Abbeel, P. (2016).
Benchmarking deep reinforcement learning for continuous control.
International conference on machine learning. PMLR, 2016.
Args:
center_init: Center (i.e. mean) of the initial search distribution.
Expected as a PyTorch tensor with at least 1 dimension.
If the given `center` tensor has more than 1 dimensions, the extra
leftmost dimensions will be interpreted as batch dimensions.
stdev_init: Standard deviation of the initial search distribution.
If this is given as a scalar `s`, the standard deviation for the
search distribution will be interpreted as `[s, s, ..., s]` whose
length is the same with the length of `center_init`.
If this is given as a 1-dimensional tensor, the given tensor will
be interpreted as the standard deviation vector.
If this is given as a tensor with at least 2 dimensions, the extra
leftmost dimension(s) will be interpreted as batch dimensions.
If you wish to express the coverage area of the initial search
distribution in terms of "radius" instead, you can leave
`stdev_init` as None, and provide a value for the argument
`radius_init`.
radius_init: Radius for the initial search distribution, representing
the euclidean norm for the first standard deviation vector.
Setting this value as `r` means that the standard deviation
vector will be initialized as a vector `[s, s, ..., s]`
whose norm will be equal to `r`. In the non-batched case,
`radius_init` is expected as a scalar value.
If `radius_init` is given as a tensor with 1 or more
dimensions, those dimensions will be considered as batch
dimensions. If you wish to express the coverage area of the initial
search distribution in terms of the standard deviation values
instead, you can leave `radius_init` as None, and provide a value
for the argument `stdev_init`.
parenthood_ratio: Proportion of the solutions that will be chosen as
the parents for the next generation. For example, if this is
given as 0.5, the top 50% of the solutions will be chosen as
parents.
objective_sense: Expected as a string, either as 'min' or as 'max'.
Determines if the goal is to minimize or is to maximize.
stdev_min: Minimum allowed standard deviation for the search
distribution. Can be given as a scalar or as a tensor with one or
more dimensions. When given with at least 2 dimensions, the extra
leftmost dimensions will be interpreted as batch dimensions.
stdev_max: Maximum allowed standard deviation for the search
distribution. Can be given as a scalar or as a tensor with one or
more dimensions. When given with at least 2 dimensions, the extra
leftmost dimensions will be interpreted as batch dimensions.
stdev_max_change: Maximum allowed change for the standard deviation
vector. If this is given as a scalar, this scalar will serve as a
limiter for the change of the entire standard deviation vector.
For example, a scalar value of 0.2 means that the elements of the
standard deviation vector cannot change more than the 20% of their
original values. If this is given as a vector (i.e. as a
1-dimensional tensor), each element of `stdev_max_change` will
serve as a limiter to its corresponding element within the standard
deviation vector. If `stdev_max_change` is given as a tensor with
at least 2 dimensions, the extra leftmost dimension(s) will be
interpreted as batch dimensions.
If you do not wish to have such a limiter, you can leave this as
None.
Returns:
A named tuple, of type `CEMState`, storing the hyperparameters and the
initial state of the cross entropy method.
"""
from .misc import _get_stdev_init
center_init = torch.as_tensor(center_init)
if center_init.ndim < 1:
raise ValueError(
"The center of the search distribution for the functional CEM was expected"
" as a tensor with at least 1 dimension."
f" However, the encountered `center_init` is {center_init}, of shape {center_init.shape}."
)
solution_length = center_init.shape[-1]
if solution_length == 0:
raise ValueError("Solution length cannot be 0")
stdev_init = _get_stdev_init(center_init=center_init, stdev_init=stdev_init, radius_init=radius_init)
device = center_init.device
dtype = center_init.dtype
def as_vector_like_center(x: Iterable, vector_name: str) -> torch.Tensor:
x = torch.as_tensor(x, dtype=dtype, device=device)
if x.ndim == 0:
x = x.repeat(solution_length)
else:
if x.shape[-1] != solution_length:
raise ValueError(
f"`{vector_name}` has an incompatible length."
f" The length of `{vector_name}`: {x.shape[-1]},"
f" but the solution length implied by the provided `center_init` is {solution_length}."
)
return x
if stdev_min is None:
stdev_min = 0.0
stdev_min = as_vector_like_center(stdev_min, "stdev_min")
if stdev_max is None:
stdev_max = float("inf")
stdev_max = as_vector_like_center(stdev_max, "stdev_max")
if stdev_max_change is None:
stdev_max_change = float("inf")
stdev_max_change = as_vector_like_center(stdev_max_change, "stdev_max_change")
parenthood_ratio = float(parenthood_ratio)
if objective_sense == "min":
maximize = False
elif objective_sense == "max":
maximize = True
else:
raise ValueError(
f"`objective_sense` was expected as 'min' or 'max', but it was received as {repr(objective_sense)}"
)
return CEMState(
center=center_init,
stdev=stdev_init,
stdev_min=stdev_min,
stdev_max=stdev_max,
stdev_max_change=stdev_max_change,
parenthood_ratio=parenthood_ratio,
maximize=maximize,
)
cem_ask(state, *, popsize)
Obtain a population from cross entropy method, given the state.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
state | CEMState | The current state of the cross entropy method search. | required |
popsize | int | Number of solutions to be generated for the requested population. | required |
Returns:
Type | Description |
---|---|
Tensor | Population, as a tensor of at least 2 dimensions. |
Source code in evotorch/algorithms/functional/funccem.py
def cem_ask(state: CEMState, *, popsize: int) -> torch.Tensor:
"""
Obtain a population from cross entropy method, given the state.
Args:
state: The current state of the cross entropy method search.
popsize: Number of solutions to be generated for the requested
population.
Returns:
Population, as a tensor of at least 2 dimensions.
"""
return _cem_ask(state.center, state.stdev, state.parenthood_ratio, popsize)
cem_tell(state, values, evals)
Given the old state and the evals (fitnesses), obtain the next state.
From this state tuple, the center point of the search distribution can be obtained via the field `.center`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
state | CEMState | The old state of the cross entropy method search. | required |
values | Tensor | The most recent population, as a PyTorch tensor. | required |
evals | Tensor | Evaluation results (i.e. fitnesses) for the solutions expressed by values. For example, if values is shaped (N, L), this means that there are N solutions (of length L). So, evals is expected as a 1-dimensional tensor of length N, where evals[i] expresses the fitness of the solution values[i, :]. If values is shaped (B, N, L), then there is also a batch dimension, so, evals is expected as a 2-dimensional tensor of shape (B, N). | required |
Returns:
Type | Description |
---|---|
CEMState | The new state of the cross entropy method search. |
Source code in evotorch/algorithms/functional/funccem.py
def cem_tell(state: CEMState, values: torch.Tensor, evals: torch.Tensor) -> CEMState:
"""
Given the old state and the evals (fitnesses), obtain the next state.
From this state tuple, the center point of the search distribution can be
obtained via the field `.center`.
Args:
state: The old state of the cross entropy method search.
values: The most recent population, as a PyTorch tensor.
evals: Evaluation results (i.e. fitnesses) for the solutions expressed
by `values`. For example, if `values` is shaped `(N, L)`, this means
that there are `N` solutions (of length `L`). So, `evals` is
expected as a 1-dimensional tensor of length `N`, where `evals[i]`
expresses the fitness of the solution `values[i, :]`.
If `values` is shaped `(B, N, L)`, then there is also a batch
dimension, so, `evals` is expected as a 2-dimensional tensor of
shape `(B, N)`.
Returns:
The new state of the cross entropy method search.
"""
new_center, new_stdev = _cem_tell(
state.stdev_min,
state.stdev_max,
state.stdev_max_change,
state.parenthood_ratio,
state.maximize,
state.center,
state.stdev,
values,
evals,
)
return CEMState(
center=new_center,
stdev=new_stdev,
stdev_min=state.stdev_min,
stdev_max=state.stdev_max,
stdev_max_change=state.stdev_max_change,
parenthood_ratio=state.parenthood_ratio,
maximize=state.maximize,
)
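To make the expected shapes concrete, here is a minimal single-step sketch (the sizes are arbitrary):

import torch
from evotorch.algorithms.functional import cem, cem_ask, cem_tell

state = cem(center_init=torch.zeros(8), stdev_init=1.0, parenthood_ratio=0.5, objective_sense="min")

values = cem_ask(state, popsize=10)   # shape (N, L) == (10, 8)
evals = torch.sum(values**2, dim=-1)  # shape (N,)   == (10,)

state = cem_tell(state, values, evals)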
funcclipup
ClipUpState (tuple)
ClipUpState(center, velocity, center_learning_rate, momentum, max_speed)
Source code in evotorch/algorithms/functional/funcclipup.py
clipup(*, center_init, momentum=0.9, center_learning_rate=None, max_speed=None)
Initialize the ClipUp optimizer and return its initial state.
Reference:
Toklu, N. E., Liskowski, P., & Srivastava, R. K. (2020, September).
ClipUp: A Simple and Powerful Optimizer for Distribution-Based Policy Evolution.
In International Conference on Parallel Problem Solving from Nature (pp. 515-527).
Springer, Cham.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
center_init | Union[torch.Tensor, numpy.ndarray] | Starting point for the ClipUp search. Expected as a PyTorch tensor with at least 1 dimension. If there are 2 or more dimensions, the extra leftmost dimensions are interpreted as batch dimensions. | required |
center_learning_rate | Union[numbers.Number, numpy.ndarray, torch.Tensor] | Learning rate (i.e. the step size) for the ClipUp updates. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | None |
max_speed | Union[numbers.Number, numpy.ndarray, torch.Tensor] | Maximum speed, expected as a scalar. The euclidean norm of the velocity (i.e. of the update vector) is not allowed to exceed max_speed. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | None |
Source code in evotorch/algorithms/functional/funcclipup.py
def clipup(
*,
center_init: BatchableVector,
momentum: BatchableScalar = 0.9,
center_learning_rate: Optional[BatchableScalar] = None,
max_speed: Optional[BatchableScalar] = None,
) -> ClipUpState:
"""
Initialize the ClipUp optimizer and return its initial state.
Reference:
Toklu, N. E., Liskowski, P., & Srivastava, R. K. (2020, September).
ClipUp: A Simple and Powerful Optimizer for Distribution-Based Policy Evolution.
In International Conference on Parallel Problem Solving from Nature (pp. 515-527).
Springer, Cham.
Args:
center_init: Starting point for the ClipUp search.
Expected as a PyTorch tensor with at least 1 dimension.
If there are 2 or more dimensions, the extra leftmost dimensions
are interpreted as batch dimensions.
center_learning_rate: Learning rate (i.e. the step size) for the ClipUp
updates. Can be a scalar or a multidimensional tensor.
If given as a tensor with multiple dimensions, those dimensions
will be interpreted as batch dimensions.
max_speed: Maximum speed, expected as a scalar. The euclidean norm
of the velocity (i.e. of the update vector) is not allowed to
exceed `max_speed`.
If given as a tensor with multiple dimensions, those dimensions
will be interpreted as batch dimensions.
"""
center_init = torch.as_tensor(center_init)
dtype = center_init.dtype
device = center_init.device
def as_tensor(x) -> torch.Tensor:
return torch.as_tensor(x, dtype=dtype, device=device)
if (center_learning_rate is None) and (max_speed is None):
raise ValueError("Both `center_learning_rate` and `max_speed` is missing. At least one of them is needed.")
elif (center_learning_rate is not None) and (max_speed is None):
center_learning_rate = as_tensor(center_learning_rate)
max_speed = center_learning_rate * 2.0
elif (center_learning_rate is None) and (max_speed is not None):
max_speed = as_tensor(max_speed)
center_learning_rate = max_speed / 2.0
else:
center_learning_rate = as_tensor(center_learning_rate)
max_speed = as_tensor(max_speed)
velocity = torch.zeros_like(center_init)
momentum = as_tensor(momentum)
return ClipUpState(
center=center_init,
velocity=velocity,
center_learning_rate=center_learning_rate,
momentum=momentum,
max_speed=max_speed,
)
clipup_ask(state)
Get the search point stored by the given `ClipUpState`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
state | ClipUpState | The current state of the ClipUp optimizer. | required |
Returns:
Type | Description |
---|---|
Tensor | The search point as a 1-dimensional tensor in the non-batched case, or as a multi-dimensional tensor if the ClipUp search is batched. |
Source code in evotorch/algorithms/functional/funcclipup.py
def clipup_ask(state: ClipUpState) -> torch.Tensor:
"""
Get the search point stored by the given `ClipUpState`.
Args:
state: The current state of the ClipUp optimizer.
Returns:
The search point as a 1-dimensional tensor in the non-batched case,
or as a multi-dimensional tensor if the ClipUp search is batched.
"""
return state.center
clipup_tell(state, *, follow_grad)
Tell the ClipUp optimizer the current gradient to get its next state.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
state | ClipUpState | The current state of the ClipUp optimizer. | required |
follow_grad | Union[torch.Tensor, numpy.ndarray] | Gradient at the current point of the ClipUp search. Can be a 1-dimensional tensor in the non-batched case, or a multi-dimensional tensor in the batched case. | required |
Returns:
Type | Description |
---|---|
ClipUpState | The updated state of ClipUp with the given gradient applied. |
Source code in evotorch/algorithms/functional/funcclipup.py
def clipup_tell(state: ClipUpState, *, follow_grad: BatchableVector) -> ClipUpState:
"""
Tell the ClipUp optimizer the current gradient to get its next state.
Args:
state: The current state of the ClipUp optimizer.
follow_grad: Gradient at the current point of the ClipUp search.
Can be a 1-dimensional tensor in the non-batched case,
or a multi-dimensional tensor in the batched case.
Returns:
The updated state of ClipUp with the given gradient applied.
"""
velocity, center = _clipup_step(
follow_grad,
state.center,
state.velocity,
state.center_learning_rate,
state.momentum,
state.max_speed,
)
return ClipUpState(
center=center,
velocity=velocity,
center_learning_rate=state.center_learning_rate,
momentum=state.momentum,
max_speed=state.max_speed,
)
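As a brief usage sketch (the values are arbitrary), ClipUp follows the same initialize/ask/tell pattern as the other functional optimizers; note that at least one of `center_learning_rate` and `max_speed` must be provided:

import torch
from evotorch.algorithms.functional import clipup, clipup_ask, clipup_tell

state = clipup(
    center_init=torch.zeros(10),
    center_learning_rate=0.15,
    max_speed=0.3,  # if omitted, this would default to twice the learning rate
    momentum=0.9,
)

center = clipup_ask(state)                       # current search point
state = clipup_tell(state, follow_grad=-center)  # follow a (dummy) gradient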
funcpgpe
PGPEState (tuple)
PGPEState(optimizer, optimizer_state, stdev, stdev_learning_rate, stdev_min, stdev_max, stdev_max_change, ranking_method, maximize, symmetric)
Source code in evotorch/algorithms/functional/funcpgpe.py
class PGPEState(NamedTuple):
optimizer: Union[str, tuple] # "adam" or (adam, adam_ask, adam_tell)
optimizer_state: tuple
stdev: torch.Tensor
stdev_learning_rate: torch.Tensor
stdev_min: torch.Tensor
stdev_max: torch.Tensor
stdev_max_change: torch.Tensor
ranking_method: str
maximize: bool
symmetric: bool
pgpe(*, center_init, center_learning_rate, stdev_learning_rate, objective_sense, ranking_method='centered', optimizer='clipup', optimizer_config=None, stdev_init=None, radius_init=None, stdev_min=None, stdev_max=None, stdev_max_change=0.2, symmetric=True)
Get an initial state for the PGPE algorithm.
The received initial state, a named tuple of type `PGPEState`, is to be passed to the function `pgpe_ask(...)` to receive the solutions belonging to the first generation of the evolutionary search.
Inspired by the PGPE implementations used in the studies of Ha (2017, 2019), and by the evolution strategy variant of Salimans et al. (2017), this PGPE implementation uses 0-centered ranking by default. The default optimizer for this PGPE implementation is ClipUp (Toklu et al., 2020).
References:
Frank Sehnke, Christian Osendorfer, Thomas Ruckstiess,
Alex Graves, Jan Peters, Jurgen Schmidhuber (2010).
Parameter-exploring Policy Gradients.
Neural Networks 23(4), 551-559.
David Ha (2017). Evolving Stable Strategies.
<http://blog.otoro.net/2017/11/12/evolving-stable-strategies/>
Salimans, T., Ho, J., Chen, X., Sidor, S. and Sutskever, I. (2017).
Evolution Strategies as a Scalable Alternative to
Reinforcement Learning.
David Ha (2019). Reinforcement Learning for Improving Agent Design.
Artificial life 25 (4), 352-365.
Toklu, N.E., Liskowski, P., Srivastava, R.K. (2020).
ClipUp: A Simple and Powerful Optimizer
for Distribution-based Policy Evolution.
Parallel Problem Solving from Nature (PPSN 2020).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
center_init | Union[torch.Tensor, numpy.ndarray] | Center (i.e. mean) of the initial search distribution. Expected as a PyTorch tensor with at least 1 dimension. If the given center tensor has more than 1 dimensions, the extra leftmost dimensions will be interpreted as batch dimensions. | required |
center_learning_rate | Union[numbers.Number, numpy.ndarray, torch.Tensor] | Learning rate for when updating the center of the search distribution. For normal cases, this is expected as a scalar. If given as an n-dimensional tensor (where n>0), the extra dimensions will be considered as batch dimensions. | required |
stdev_learning_rate | Union[numbers.Number, numpy.ndarray, torch.Tensor] | Learning rate for when updating the standard deviation of the search distribution. For normal cases, this is expected as a scalar. If given as an n-dimensional tensor (where n>0), the extra dimensions will be considered as batch dimensions. | required |
objective_sense | str | Expected as a string, either as 'min' or as 'max'. Determines if the goal is to minimize or is to maximize. | required |
ranking_method | str | Determines how the fitnesses will be ranked before computing the gradients. Among the choices are "centered" (a linear ranking where the worst solution gets the rank -0.5 and the best solution gets the rank +0.5), "linear" (a linear ranking where the worst solution gets the rank 0 and the best solution gets the rank 1), "nes" (the ranking method that is used by the natural evolution strategies), and "raw" (no ranking). | 'centered' |
optimizer | Union[str, tuple] | Functional optimizer to use when updating the center of the search distribution. The functional optimizer can be expressed via a string, or via a tuple. If given as a string, the valid choices are: "clipup" (for the ClipUp optimizer), "adam" (for the Adam optimizer), "sgd" (for regular gradient ascent/descent). If given as a tuple, the tuple should be in the form (optim, optim_ask, optim_tell), where the objects optim, optim_ask, and optim_tell are the functions for initializing the optimizer, asking (for the current search point), and telling (the gradient to follow). The function optim should expect keyword arguments for its hyperparameters, and should return a state tuple of the optimizer. The function optim_ask should expect the state tuple of the optimizer, and should return the current search point as a tensor. The function optim_tell should expect the state tuple of the optimizer as a positional argument, and the gradient via the keyword argument follow_grad. | 'clipup' |
optimizer_config | Optional[dict] | Optionally a dictionary, containing the hyperparameters for the optimizer. | None |
stdev_init | Union[float, torch.Tensor, numpy.ndarray] | Standard deviation of the initial search distribution. If this is given as a scalar s, the standard deviation for the search distribution will be interpreted as [s, s, ..., s] whose length is the same with the length of center_init. If this is given as a 1-dimensional tensor, the given tensor will be interpreted as the standard deviation vector. If this is given as a tensor with at least 2 dimensions, the extra leftmost dimension(s) will be interpreted as batch dimensions. If you wish to express the coverage area of the initial search distribution in terms of "radius" instead, you can leave stdev_init as None, and provide a value for the argument radius_init. | None |
radius_init | Union[float, numbers.Number, numpy.ndarray, torch.Tensor] | Radius for the initial search distribution, representing the euclidean norm for the first standard deviation vector. Setting this value as r means that the standard deviation vector will be initialized as a vector [s, s, ..., s] whose norm will be equal to r. In the non-batched case, radius_init is expected as a scalar value. If radius_init is given as a tensor with 1 or more dimensions, those dimensions will be considered as batch dimensions. If you wish to express the coverage area of the initial search distribution in terms of the standard deviation values instead, you can leave radius_init as None, and provide a value for the argument stdev_init. | None |
stdev_min | Union[float, torch.Tensor, numpy.ndarray] | Minimum allowed standard deviation for the search distribution. Can be given as a scalar or as a tensor with one or more dimensions. When given with at least 2 dimensions, the extra leftmost dimensions will be interpreted as batch dimensions. | None |
stdev_max | Union[float, torch.Tensor, numpy.ndarray] | Maximum allowed standard deviation for the search distribution. Can be given as a scalar or as a tensor with one or more dimensions. When given with at least 2 dimensions, the extra leftmost dimensions will be interpreted as batch dimensions. | None |
stdev_max_change | Union[float, torch.Tensor, numpy.ndarray] | Maximum allowed change for the standard deviation vector. If this is given as a scalar, this scalar will serve as a limiter for the change of the entire standard deviation vector. For example, a scalar value of 0.2 means that the elements of the standard deviation vector cannot change more than the 20% of their original values. If this is given as a vector (i.e. as a 1-dimensional tensor), each element of stdev_max_change will serve as a limiter to its corresponding element within the standard deviation vector. If stdev_max_change is given as a tensor with at least 2 dimensions, the extra leftmost dimension(s) will be interpreted as batch dimensions. If you do not wish to have such a limiter, you can leave this as None. | 0.2 |
symmetric | bool | Whether or not symmetric (i.e. antithetic) sampling will be done while generating a new population. | True |
Returns:
Type | Description |
---|---|
PGPEState | A named tuple, of type PGPEState, storing the hyperparameters and the initial state of PGPE. |
Source code in evotorch/algorithms/functional/funcpgpe.py
def pgpe(
*,
center_init: BatchableVector,
center_learning_rate: BatchableScalar,
stdev_learning_rate: BatchableScalar,
objective_sense: str,
ranking_method: str = "centered",
optimizer: Union[str, tuple] = "clipup", # or "adam" or "sgd"
optimizer_config: Optional[dict] = None,
stdev_init: Optional[Union[float, BatchableVector]] = None,
radius_init: Optional[Union[float, BatchableScalar]] = None,
stdev_min: Optional[Union[float, BatchableVector]] = None,
stdev_max: Optional[Union[float, BatchableVector]] = None,
stdev_max_change: Optional[Union[float, BatchableVector]] = 0.2,
symmetric: bool = True,
) -> PGPEState:
"""
Get an initial state for the PGPE algorithm.
The received initial state, a named tuple of type `PGPEState`, is to be
passed to the function `pgpe_ask(...)` to receive the solutions belonging
to the first generation of the evolutionary search.
Inspired by the PGPE implementations used in the studies
of Ha (2017, 2019), and by the evolution strategy variant of
Salimans et al. (2017), this PGPE implementation uses 0-centered
ranking by default.
The default optimizer for this PGPE implementation is ClipUp
(Toklu et al., 2020).
References:
Frank Sehnke, Christian Osendorfer, Thomas Ruckstiess,
Alex Graves, Jan Peters, Jurgen Schmidhuber (2010).
Parameter-exploring Policy Gradients.
Neural Networks 23(4), 551-559.
David Ha (2017). Evolving Stable Strategies.
<http://blog.otoro.net/2017/11/12/evolving-stable-strategies/>
Salimans, T., Ho, J., Chen, X., Sidor, S. and Sutskever, I. (2017).
Evolution Strategies as a Scalable Alternative to
Reinforcement Learning.
David Ha (2019). Reinforcement Learning for Improving Agent Design.
Artificial life 25 (4), 352-365.
Toklu, N.E., Liskowski, P., Srivastava, R.K. (2020).
ClipUp: A Simple and Powerful Optimizer
for Distribution-based Policy Evolution.
Parallel Problem Solving from Nature (PPSN 2020).
Args:
center_init: Center (i.e. mean) of the initial search distribution.
Expected as a PyTorch tensor with at least 1 dimension.
If the given `center` tensor has more than 1 dimensions, the extra
leftmost dimensions will be interpreted as batch dimensions.
center_learning_rate: Learning rate for when updating the center of the
search distribution.
For normal cases, this is expected as a scalar. If given as an
n-dimensional tensor (where n>0), the extra dimensions will be
considered as batch dimensions.
stdev_learning_rate: Learning rate for when updating the standard
deviation of the search distribution.
For normal cases, this is expected as a scalar. If given as an
n-dimensional tensor (where n>0), the extra dimensions will be
considered as batch dimensions.
objective_sense: Expected as a string, either as 'min' or as 'max'.
Determines if the goal is to minimize or is to maximize.
ranking_method: Determines how the fitnesses will be ranked before
computing the gradients. Among the choices are
"centered" (a linear ranking where the worst solution gets the rank
-0.5 and the best solution gets the rank +0.5),
"linear" (a linear ranking where the worst solution gets the rank
0 and the best solution gets the rank 1),
"nes" (the ranking method that is used by the natural evolution
strategies), and
"raw" (no ranking).
optimizer: Functional optimizer to use when updating the center of the
search distribution. The functional optimizer can be expressed via
a string, or via a tuple.
If given as string, the valid choices are:
"clipup" (for the ClipUp optimizer),
"adam" (for the Adam optimizer),
"sgd" (for regular gradient ascent/descent).
If given as a tuple, the tuple should be in the form
`(optim, optim_ask, optim_tell)`, where the objects
`optim`, `optim_ask`, and `optim_tell` are the functions for
initializing the optimizer, asking (for the current search point),
and telling (the gradient to follow).
The function `optim` should expect keyword arguments for its
hyperparameters, and should return a state tuple of the optimizer.
The function `optim_ask` should expect the state tuple of the
optimizer, and should return the current search point as a tensor.
The function `optim_tell` should expect the state tuple of the
optimizer as a positional argument, and the gradient via the
keyword argument `follow_grad`.
optimizer_config: Optionally a dictionary, containing the
hyperparameters for the optimizer.
stdev_init: Standard deviation of the initial search distribution.
If this is given as a scalar `s`, the standard deviation for the
search distribution will be interpreted as `[s, s, ..., s]` whose
length is the same with the length of `center_init`.
If this is given as a 1-dimensional tensor, the given tensor will
be interpreted as the standard deviation vector.
If this is given as a tensor with at least 2 dimensions, the extra
leftmost dimension(s) will be interpreted as batch dimensions.
If you wish to express the coverage area of the initial search
distribution in terms of "radius" instead, you can leave
`stdev_init` as None, and provide a value for the argument
`radius_init`.
radius_init: Radius for the initial search distribution, representing
the euclidean norm for the first standard deviation vector.
Setting this value as `r` means that the standard deviation
vector will be initialized as a vector `[s, s, ..., s]`
whose norm will be equal to `r`. In the non-batched case,
`radius_init` is expected as a scalar value.
If `radius_init` is given as a tensor with 1 or more
dimensions, those dimensions will be considered as batch
dimensions. If you wish to express the coverage area of the initial
search distribution in terms of the standard deviation values
instead, you can leave `radius_init` as None, and provide a value
for the argument `stdev_init`.
stdev_min: Minimum allowed standard deviation for the search
distribution. Can be given as a scalar or as a tensor with one or
more dimensions. When given with at least 2 dimensions, the extra
leftmost dimensions will be interpreted as batch dimensions.
stdev_max: Maximum allowed standard deviation for the search
distribution. Can be given as a scalar or as a tensor with one or
more dimensions. When given with at least 2 dimensions, the extra
leftmost dimensions will be interpreted as batch dimensions.
stdev_max_change: Maximum allowed change for the standard deviation
vector. If this is given as a scalar, this scalar will serve as a
limiter for the change of the entire standard deviation vector.
For example, a scalar value of 0.2 means that the elements of the
standard deviation vector cannot change more than the 20% of their
original values. If this is given as a vector (i.e. as a
1-dimensional tensor), each element of `stdev_max_change` will
serve as a limiter to its corresponding element within the standard
deviation vector. If `stdev_max_change` is given as a tensor with
at least 2 dimensions, the extra leftmost dimension(s) will be
interpreted as batch dimensions.
If you do not wish to have such a limiter, you can leave this as
None.
symmetric: Whether or not symmetric (i.e. antithetic) sampling will be
done while generating a new population.
Returns:
A named tuple, of type `PGPEState`, storing the hyperparameters and the
initial state of PGPE.
"""
from .misc import _get_stdev_init, get_functional_optimizer
center_init = torch.as_tensor(center_init)
if center_init.ndim < 1:
raise ValueError(
"The center of the search distribution for the functional PGPE was expected"
" as a tensor with at least 1 dimension."
f" However, the encountered `center` is {center_init}, of shape {center_init.shape}."
)
solution_length = center_init.shape[-1]
if solution_length == 0:
raise ValueError("Solution length cannot be 0")
stdev_init = _get_stdev_init(center_init=center_init, stdev_init=stdev_init, radius_init=radius_init)
device = center_init.device
dtype = center_init.dtype
def as_tensor(x) -> torch.Tensor:
return torch.as_tensor(x, dtype=dtype, device=device)
def as_vector_like_center(x: Iterable, vector_name: str) -> torch.Tensor:
x = as_tensor(x)
if x.ndim == 0:
x = x.repeat(solution_length)
else:
if x.shape[-1] != solution_length:
raise ValueError(
f"`{vector_name}` has an incompatible length."
f" The length of `{vector_name}`: {x.shape[-1]},"
f" but the solution length implied by the provided `center_init` is {solution_length}."
)
return x
center_learning_rate = as_tensor(center_learning_rate)
stdev_learning_rate = as_tensor(stdev_learning_rate)
if objective_sense == "min":
maximize = False
elif objective_sense == "max":
maximize = True
else:
raise ValueError(
f"`objective_sense` was expected as 'min' or 'max', but it was received as {repr(objective_sense)}"
)
ranking_method = str(ranking_method)
if stdev_min is None:
stdev_min = 0.0
stdev_min = as_vector_like_center(stdev_min, "stdev_min")
if stdev_max is None:
stdev_max = float("inf")
stdev_max = as_vector_like_center(stdev_max, "stdev_max")
if stdev_max_change is None:
stdev_max_change = float("inf")
stdev_max_change = as_vector_like_center(stdev_max_change, "stdev_max_change")
if optimizer_config is None:
optimizer_config = {}
optimizer_init_func, _, _ = get_functional_optimizer(optimizer)
optimizer_state = optimizer_init_func(
center_init=center_init, center_learning_rate=center_learning_rate, **optimizer_config
)
symmetric = bool(symmetric)
return PGPEState(
optimizer=optimizer,
optimizer_state=optimizer_state,
stdev=stdev_init,
stdev_learning_rate=stdev_learning_rate,
stdev_min=stdev_min,
stdev_max=stdev_max,
stdev_max_change=stdev_max_change,
ranking_method=ranking_method,
maximize=maximize,
symmetric=symmetric,
)
pgpe_ask(state, *, popsize)
Obtain a population from the PGPE algorithm.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
state | PGPEState | The current state of PGPE. | required |
popsize | int | Number of solutions to be generated for the requested population. | required |
Returns:
Type | Description |
---|---|
Tensor | Population, as a tensor of at least 2 dimensions. |
Source code in evotorch/algorithms/functional/funcpgpe.py
def pgpe_ask(state: PGPEState, *, popsize: int) -> torch.Tensor:
"""
Obtain a population from the PGPE algorithm.
Args:
state: The current state of PGPE.
popsize: Number of solutions to be generated for the requested
population.
Returns:
Population, as a tensor of at least 2 dimensions.
"""
from .misc import get_functional_optimizer
_, optimizer_ask, _ = get_functional_optimizer(state.optimizer)
center = optimizer_ask(state.optimizer_state)
stdev = state.stdev
sample_func = _symmetic_sample if state.symmetric else _nonsymmetric_sample
return sample_func(popsize, mu=center, sigma=stdev)
pgpe_tell(state, values, evals)
Given the old state and the evals (fitnesses), obtain the next state.
From this state tuple, the center point of the search distribution can be obtained via the field `.optimizer_state.center`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
state | PGPEState | The old state of the PGPE search. | required |
values | Tensor | The most recent population, as a PyTorch tensor. | required |
evals | Tensor | Evaluation results (i.e. fitnesses) for the solutions expressed by values. For example, if values is shaped (N, L), this means that there are N solutions (of length L). So, evals is expected as a 1-dimensional tensor of length N, where evals[i] expresses the fitness of the solution values[i, :]. If values is shaped (B, N, L), then there is also a batch dimension, so, evals is expected as a 2-dimensional tensor of shape (B, N). | required |
Returns:
Type | Description |
---|---|
PGPEState | The new state of PGPE. |
Source code in evotorch/algorithms/functional/funcpgpe.py
def pgpe_tell(state: PGPEState, values: torch.Tensor, evals: torch.Tensor) -> PGPEState:
"""
Given the old state and the evals (fitnesses), obtain the next state.
From this state tuple, the center point of the search distribution can be
obtained via the field `.optimizer_state.center`.
Args:
state: The old state of the PGPE search.
values: The most recent population, as a PyTorch tensor.
evals: Evaluation results (i.e. fitnesses) for the solutions expressed
by `values`. For example, if `values` is shaped `(N, L)`, this means
that there are `N` solutions (of length `L`). So, `evals` is
expected as a 1-dimensional tensor of length `N`, where `evals[i]`
expresses the fitness of the solution `values[i, :]`.
If `values` is shaped `(B, N, L)`, then there is also a batch
dimension, so, `evals` is expected as a 2-dimensional tensor of
shape `(B, N)`.
Returns:
The new state of PGPE.
"""
from .misc import get_functional_optimizer
_, optimizer_ask, optimizer_tell = get_functional_optimizer(state.optimizer)
grad_func = _symmetric_grad if state.symmetric else _nonsymmetric_grad
objective_sense = "max" if state.maximize else "min"
grads = grad_func(
values,
evals,
mu=optimizer_ask(state.optimizer_state),
sigma=state.stdev,
objective_sense=objective_sense,
ranking_method=state.ranking_method,
)
new_optimizer_state = optimizer_tell(state.optimizer_state, follow_grad=grads["mu"])
target_stdev = _follow_stdev_grad(state.stdev, state.stdev_learning_rate, grads["sigma"])
new_stdev = modify_vector(
state.stdev, target_stdev, lb=state.stdev_min, ub=state.stdev_max, max_change=state.stdev_max_change
)
return PGPEState(
optimizer=state.optimizer,
optimizer_state=new_optimizer_state,
stdev=new_stdev,
stdev_learning_rate=state.stdev_learning_rate,
stdev_min=state.stdev_min,
stdev_max=state.stdev_max,
stdev_max_change=state.stdev_max_change,
ranking_method=state.ranking_method,
maximize=state.maximize,
symmetric=state.symmetric,
)
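For completeness, here is a minimal sketch of how `pgpe`, `pgpe_ask`, and `pgpe_tell` fit together (the hyperparameter values are arbitrary; the fitness function is the same sphere function used earlier):

import torch
from evotorch.algorithms.functional import pgpe, pgpe_ask, pgpe_tell


def sphere(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x**2, dim=-1)


state = pgpe(
    center_init=torch.randn(100) * 10,
    center_learning_rate=0.1,
    stdev_learning_rate=0.1,
    stdev_init=10.0,
    objective_sense="min",
    optimizer="clipup",
)

for generation in range(1, 101):
    values = pgpe_ask(state, popsize=200)  # popsize kept even, for symmetric sampling
    evals = sphere(values)
    state = pgpe_tell(state, values, evals)

# Center of the latest search distribution (note the `optimizer_state` field):
latest_center = state.optimizer_state.center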
funcsgd
SGDState (tuple)
SGDState(center, velocity, center_learning_rate, momentum)
Source code in evotorch/algorithms/functional/funcsgd.py
sgd(*, center_init, center_learning_rate, momentum=None)
Initialize the gradient ascent/descent search and get its initial state.
Reference regarding the momentum behavior:
Polyak, B. T. (1964).
Some methods of speeding up the convergence of iteration methods.
USSR Computational Mathematics and Mathematical Physics, 4(5):1–17.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
center_init | Union[torch.Tensor, numpy.ndarray] | Starting point for the gradient ascent/descent. Expected as a PyTorch tensor with at least 1 dimension. If there are 2 or more dimensions, the extra leftmost dimensions are interpreted as batch dimensions. | required |
center_learning_rate | Union[numbers.Number, numpy.ndarray, torch.Tensor] | Learning rate (i.e. the step size) for gradient ascent/descent. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | required |
momentum | Union[numbers.Number, numpy.ndarray, torch.Tensor] | Momentum coefficient, expected as a scalar. If provided as a scalar, Polyak-style momentum will be enabled. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | None |
Source code in evotorch/algorithms/functional/funcsgd.py
def sgd(
*,
center_init: BatchableVector,
center_learning_rate: BatchableScalar,
momentum: Optional[BatchableScalar] = None,
) -> SGDState:
"""
Initialize the gradient ascent/descent search and get its initial state.
Reference regarding the momentum behavior:
Polyak, B. T. (1964).
Some methods of speeding up the convergence of iteration methods.
USSR Computational Mathematics and Mathematical Physics, 4(5):1–17.
Args:
center_init: Starting point for the gradient ascent/descent.
Expected as a PyTorch tensor with at least 1 dimension.
If there are 2 or more dimensions, the extra leftmost dimensions
are interpreted as batch dimensions.
center_learning_rate: Learning rate (i.e. the step size) for gradient
ascent/descent. Can be a scalar or a multidimensional tensor.
If given as a tensor with multiple dimensions, those dimensions
will be interpreted as batch dimensions.
momentum: Momentum coefficient, expected as a scalar.
If provided as a scalar, Polyak-style momentum will be enabled.
If given as a tensor with multiple dimensions, those dimensions
will be interpreted as batch dimensions.
"""
center_init = torch.as_tensor(center_init)
dtype = center_init.dtype
device = center_init.device
def as_tensor(x) -> torch.Tensor:
return torch.as_tensor(x, dtype=dtype, device=device)
velocity = torch.zeros_like(center_init)
center_learning_rate = as_tensor(center_learning_rate)
momentum = as_tensor(0.0) if momentum is None else as_tensor(momentum)
return SGDState(
center=center_init,
velocity=velocity,
center_learning_rate=center_learning_rate,
momentum=momentum,
)
sgd_ask(state)
Get the search point stored by the given `SGDState`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
state | SGDState | The current state of gradient ascent/descent. | required |
Returns:
Type | Description |
---|---|
Tensor | The search point as a 1-dimensional tensor in the non-batched case, or as a multi-dimensional tensor if the search is batched. |
Source code in evotorch/algorithms/functional/funcsgd.py
def sgd_ask(state: SGDState) -> torch.Tensor:
"""
Get the search point stored by the given `SGDState`.
Args:
state: The current state of gradient ascent/descent.
Returns:
The search point as a 1-dimensional tensor in the non-batched case,
or as a multi-dimensional tensor if the search is batched.
"""
return state.center
sgd_tell(state, *, follow_grad)
Tell the gradient ascent/descent the current gradient to get its next state.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
state | SGDState | The current state of gradient ascent/descent. | required |
follow_grad | Union[torch.Tensor, numpy.ndarray] | Gradient at the current point of the search. Can be a 1-dimensional tensor in the non-batched case, or a multi-dimensional tensor in the batched case. | required |
Returns:
Type | Description |
---|---|
SGDState | The updated state of gradient ascent/descent, with the given gradient applied. |
Source code in evotorch/algorithms/functional/funcsgd.py
def sgd_tell(state: SGDState, *, follow_grad: BatchableVector) -> SGDState:
"""
Tell the gradient ascent/descent the current gradient to get its next state.
Args:
state: The current state of gradient ascent/descent.
follow_grad: Gradient at the current point of the search.
Can be a 1-dimensional tensor in the non-batched case,
or a multi-dimensional tensor in the batched case.
Returns:
The updated state of gradient ascent/descent, with the given gradient
applied.
"""
velocity, center = _sgd_step(
follow_grad,
state.center,
state.velocity,
state.center_learning_rate,
state.momentum,
)
return SGDState(
center=center,
velocity=velocity,
center_learning_rate=state.center_learning_rate,
momentum=state.momentum,
)
misc
OptimizerFunctions (tuple)
get_functional_optimizer(optimizer)
Get a tuple of optimizer-related functions, from the given optimizer name.
For example, if the given string is "adam", the returned tuple will be `(adam, adam_ask, adam_tell)`, where `adam` is the function that will initialize the Adam optimizer, `adam_ask` is the function that will get the current search point as a tensor, and `adam_tell` is the function that will expect the gradient and will return the updated state of the Adam search after applying the given gradient.
In addition to "adam", the strings "clipup" and "sgd" are also supported.
If the given optimizer is a 3-element tuple, then the three elements within the tuple are assumed to be the initialization, ask, and tell functions of a custom optimizer, and those functions are returned in the same order.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
optimizer | Union[str, tuple] | The optimizer name as a string, or a 3-element tuple representing the functions related to the optimizer. | required |
Returns:
Type | Description |
---|---|
tuple | A 3-element tuple in the form (optimizer, optimizer_ask, optimizer_tell), where each element is a function, the first one being responsible for initializing the optimizer and returning its first state. |
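A short usage sketch (the values are arbitrary), showing both the by-name form and the custom 3-element tuple form described above:

import torch
from evotorch.algorithms.functional import sgd, sgd_ask, sgd_tell
from evotorch.algorithms.functional.misc import get_functional_optimizer

# By name:
init_func, ask_func, tell_func = get_functional_optimizer("adam")
state = init_func(center_init=torch.zeros(5), center_learning_rate=0.01)
state = tell_func(state, follow_grad=torch.ones(5))
print(ask_func(state))

# As a custom 3-element tuple (here simply reusing the functional sgd trio):
init_func, ask_func, tell_func = get_functional_optimizer((sgd, sgd_ask, sgd_tell))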
Source code in evotorch/algorithms/functional/misc.py
def get_functional_optimizer(optimizer: Union[str, tuple]) -> tuple:
"""
Get a tuple of optimizer-related functions, from the given optimizer name.
For example, if the given string is "adam", the returned tuple will be
`(adam, adam_ask, adam_tell)`, where
[adam][evotorch.algorithms.functional.funcadam.adam]
is the function that will initialize the Adam optimizer,
[adam_ask][evotorch.algorithms.functional.funcadam.adam_ask]
is the function that will get the current search point as a tensor, and
[adam_tell][evotorch.algorithms.functional.funcadam.adam_tell]
is the function that will expect the gradient and will return the updated
state of the Adam search after applying the given gradient.
In addition to "adam", the strings "clipup" and "sgd" are also supported.
If the given optimizer is a 3-element tuple, then, the three elements
within the tuple are assumed to be the initialization, ask, and tell
functions of a custom optimizer, and those functions are returned
in the same order.
Args:
optimizer: The optimizer name as a string, or a 3-element tuple
representing the functions related to the optimizer.
Returns:
A 3-element tuple in the form
`(optimizer, optimizer_ask, optimizer_tell)`, where each element
is a function, the first one being responsible for initializing
the optimizer and returning its first state.
"""
from .funcadam import adam, adam_ask, adam_tell
from .funcclipup import clipup, clipup_ask, clipup_tell
from .funcsgd import sgd, sgd_ask, sgd_tell
if optimizer == "adam":
return OptimizerFunctions(initialize=adam, ask=adam_ask, tell=adam_tell)
elif optimizer == "clipup":
return OptimizerFunctions(initialize=clipup, ask=clipup_ask, tell=clipup_tell)
elif optimizer in ("sgd", "sga", "momentum"):
return OptimizerFunctions(initialize=sgd, ask=sgd_ask, tell=sgd_tell)
elif isinstance(optimizer, str):
raise ValueError(f"Unrecognized functional optimizer name: {optimizer}")
elif isinstance(optimizer, Iterable):
a, b, c = optimizer
return OptimizerFunctions(initialize=a, ask=b, tell=c)
else:
raise TypeError(
f"`get_functional_optimizer(...)` received an unrecognized argument: {repr(optimizer)}"
f" (of type {type(optimizer)})"
)