Index
Purely functional implementations of optimization algorithms.
Reasoning.
PyTorch has a functional API within its namespace `torch.func`. In addition to allowing one to choose a purely functional programming style, `torch.func` enables powerful batched operations via `torch.func.vmap`.

To be able to work with the functional programming style of `torch.func`, EvoTorch introduces functional implementations of evolutionary search algorithms and optimizers within the namespace `evotorch.algorithms.functional`. These algorithm implementations are compatible with `torch.func.vmap`, and can therefore perform batched evolutionary searches (e.g. they can work not just on a single population, but on batches of populations). Such batched searches can be helpful in the following scenarios:
Scenario 1: Nested optimization. The main optimization problem at hand might have internal optimization problems, meaning that whenever the fitness function of the main problem is evaluated, an internal optimization problem has to be solved for each solution of the main problem. In such a scenario, one might want to use a functional evolutionary search for the inner optimization problem, so that a batch of populations is formed, where each batch item represents a separate population associated with a separate solution of the main problem.
Scenario 2: Batched hyperparameter search. When using a search algorithm that has a functional implementation, one might want to implement a hyperparameter search in such a way that there is a batch of hyperparameters (instead of just a single set of hyperparameters), and the search is performed on a batch of populations. In such a setting, each population within the population batch is associated with a different hyperparameter set within the hyperparameter batch.
Example: cross entropy method.
Let us assume that we have the following fitness function, whose output we wish to minimize:
```python
import torch


def f(x: torch.Tensor) -> torch.Tensor:
    assert x.ndim == 2, "Please pass `x` as a 2-dimensional tensor"
    return torch.sum(x**2, dim=-1)
```
Let us initialize our search from a random point:
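```python
# A random initial center point (the same initialization is used
# in the Adam example further below):
solution_length = 1000
center_init = torch.randn(solution_length, dtype=torch.float32) * 10
```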
Now we can initialize our cross entropy method like this:
```python
from evotorch.algorithms.functional import cem, cem_ask, cem_tell

state = cem(
    # Center point of the initial search distribution:
    center_init=center_init,

    # Standard deviation of the initial search distribution:
    stdev_init=10.0,

    # Top half of the population are to be chosen as parents:
    parenthood_ratio=0.5,

    # We wish to minimize the fitnesses:
    objective_sense="min",

    # A standard deviation item is not allowed to change more than
    # 1% of its original value:
    stdev_max_change=0.01,
)
```
At this point, we have an initial state for our cross entropy method search, stored by the variable `state`. Now we can implement a loop and perform multiple generations of evolutionary search like this:
```python
num_generations = 1000

for generation in range(1, 1 + num_generations):
    # Ask for a new population (of size 1000) from the cross entropy method
    solutions = cem_ask(state, popsize=1000)

    # At this point, `solutions` is a regular PyTorch tensor, ready to be
    # passed to the function `f`.
    # `solutions` is a 2-dimensional tensor of shape (N, L) where `N`
    # is the number of solutions, and `L` is the length of a solution.
    # Our example fitness function `f` is implemented in such a way that
    # we can pass our 2-dimensional `solutions` tensor into it directly.
    # We will receive `fitnesses` as a 1-dimensional tensor of length `N`.
    fitnesses = f(solutions)

    # Let us report the mean of fitnesses to see the progress
    print("Generation:", generation, " Mean of fitnesses:", torch.mean(fitnesses))

    # Now we inform the cross entropy method of the latest state of the search,
    # the latest population, and the latest fitnesses, so that it can give us
    # the next state of the search.
    state = cem_tell(state, solutions, fitnesses)
```
At the end of the evolutionary search (or, actually, at any point), one can analyze the `state` tuple to get information about the current status of the search distribution. These state tuples are named tuples, and therefore the data they store are labeled. In the case of the cross entropy method, the latest center of the search distribution can be obtained via:

```python
latest_center = state.center

# Note: in the case of pgpe, this would be:
# latest_center = state.optimizer_state.center
```
Notes on manipulating the evolutionary search.
If, at any point of the search, you would like to change a hyperparameter, you can do so by creating a modified copy of your latest `state` tuple and passing it to the ask function of your evolutionary search (which, in the case of the cross entropy method, is `cem_ask`). Similarly, if you wish to change the center point of the search, you can pass a modified state tuple containing the new center point to `cem_ask`, as sketched below.
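Since these states are named tuples, a modified copy can be obtained with the standard `_replace` method. Below is a minimal sketch; the field `center` is documented above, whereas any other field name would need to be checked against the actual state tuple type (e.g. `CEMState`) of your algorithm:

```python
# Move the center of the search distribution to a new point.
# `_replace` returns a modified copy, leaving the old state untouched.
new_center = torch.zeros_like(state.center)
state = state._replace(center=new_center)

# The modified state is then passed to the ask function as usual:
solutions = cem_ask(state, popsize=1000)
```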
Notes on batching.
In regular (non-batched) cases, functional search algorithms expect the `center_init` argument as a 1-dimensional tensor. If `center_init` is given as a tensor with 2 or more dimensions, the extra leftmost dimensions will be considered batch dimensions, and therefore the evolutionary search itself will be batched (which means that the ask function of the search algorithm will return a batch of populations). Furthermore, certain hyperparameters can also be given in batches. See the documentation of the specific functional algorithms to learn which hyperparameters support batching.
When working with batched populations, it is important to make sure that the fitness function can work with an arbitrary number of dimensions (not just 2). One way to implement such a fitness function is with the help of the `rowwise` decorator:
```python
from evotorch.decorators import rowwise


@rowwise
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x**2)
```
When decorated with `@rowwise`, we can implement our function as if the tensor `x` were a 1-dimensional tensor. If the decorated `f` receives `x` not as a vector but as a matrix, it will perform the same operation on each row of the matrix, in a vectorized manner. If `x` has 3 or more dimensions, the extra leftmost dimensions will be considered batch dimensions, affecting the shape of the resulting tensor accordingly.
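As a minimal sketch of these batching rules (the shapes, population size, and hyperparameter values below are illustrative assumptions, not recommendations):

```python
import torch
from evotorch.algorithms.functional import cem, cem_ask, cem_tell
from evotorch.decorators import rowwise


@rowwise
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x**2)


# A batch of 4 initial center points, each of length 10.
# The extra leftmost dimension becomes a batch dimension of the search.
batched_center_init = torch.randn(4, 10, dtype=torch.float32) * 10

state = cem(
    center_init=batched_center_init,
    stdev_init=10.0,
    parenthood_ratio=0.5,
    objective_sense="min",
)

# Asking for a population of size 20 should now yield a batch of
# populations: a tensor of shape (4, 20, 10).
solutions = cem_ask(state, popsize=20)

# Thanks to @rowwise, `f` handles the extra batch dimension transparently,
# returning fitnesses of shape (4, 20).
fitnesses = f(solutions)

state = cem_tell(state, solutions, fitnesses)
```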
Example: gradient-based search.
This namespace also provides functional implementations of various gradient-based optimizers. The reasoning behind these implementations is two-fold: (i) they are used by the functional `pgpe` implementation (for handling the momentum); and (ii) exposing these optimizers through a similar API allows the user to switch back and forth between evolutionary and gradient-based search for solving the same problem, hopefully without having to change the code too much.

Let us consider the same fitness function again, in its `@rowwise` form, so that it can work with a single vector or a batch of such vectors:
```python
from evotorch.decorators import rowwise


@rowwise
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x**2)
```
To solve this optimization problem using the Adam optimizer, one can do the following:
```python
from evotorch.algorithms.functional import adam, adam_ask, adam_tell
from torch.func import grad

# Prepare an initial search point
solution_length = 1000
center_init = torch.randn(solution_length, dtype=torch.float32) * 10

# Initialize the Adam optimizer
state = adam(
    center_init=center_init,
    center_learning_rate=0.001,
    beta1=0.9,
    beta2=0.999,
    epsilon=1e-8,
)

num_iterations = 1000

for iteration in range(1, 1 + num_iterations):
    # Get the current search point of the Adam search
    center = adam_ask(state)

    # Get the gradient.
    # Negative, because we want to minimize f.
    gradient = -(grad(f)(center))

    # Inform the Adam optimizer of the gradient to follow, and get the
    # next state of the search
    state = adam_tell(state, follow_grad=gradient)

# Store the final solution
final_solution = adam_ask(state)

# or, alternatively:
# final_solution = state.center
```
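Thanks to the shared ask/tell interface, switching to another functional optimizer mostly amounts to swapping the functions. For example, with the `sgd` functions documented further below (the hyperparameter values here are only illustrative):

```python
from evotorch.algorithms.functional import sgd, sgd_ask, sgd_tell
from torch.func import grad

# Initialize gradient descent with Polyak-style momentum
state = sgd(center_init=center_init, center_learning_rate=0.001, momentum=0.9)

# Inside the loop, sgd_ask / sgd_tell take the place of adam_ask / adam_tell:
center = sgd_ask(state)
gradient = -(grad(f)(center))
state = sgd_tell(state, follow_grad=gradient)
```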
Solving a stateful Problem object using functional algorithms.
If you wish to solve a stateful Problem using a functional optimization algorithm, you can obtain a callable evaluator out of that Problem object and then use it for computing the fitnesses. See the following example:
```python
import torch

from evotorch import Problem, SolutionBatch
from evotorch.algorithms.functional import cem, cem_ask, cem_tell


class MyProblem(Problem):
    def __init__(self): ...

    def _evaluate_batch(self, batch: SolutionBatch):
        # Stateful batch evaluation code goes here
        ...


# Instantiate the problem
problem = MyProblem()

# Make a callable fitness evaluator
fproblem = problem.make_callable_evaluator()

# Make an initial solution
center_init = torch.randn(problem.solution_length, dtype=torch.float32) * 10

# Prepare a cross entropy method search
state = cem(
    center_init=center_init,
    stdev_init=10.0,
    parenthood_ratio=0.5,
    objective_sense="min",
    stdev_max_change=0.01,
)

num_generations = 1000
for generation in range(1, 1 + num_generations):
    # Get a population
    solutions = cem_ask(state, popsize=1000)

    # Call the evaluator to get the fitnesses
    fitnesses = fproblem(solutions)

    # Let us report the mean of fitnesses to see the progress
    print("Generation:", generation, " Mean of fitnesses:", torch.mean(fitnesses))

    # Now we inform the cross entropy method of the latest state of the search,
    # the latest population, and the latest fitnesses, so that it can give us
    # the next state of the search.
    state = cem_tell(state, solutions, fitnesses)

# Center of the latest search distribution
latest_center = state.center
```
`adam(*, center_init, center_learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08)`

Initialize an Adam optimizer and return its initial state.

Reference:
Kingma, D. P. and J. Ba (2015). Adam: A method for stochastic optimization. In Proceedings of 3rd International Conference on Learning Representations.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`center_init` | `BatchableVector` | Starting point for the Adam search. Expected as a PyTorch tensor with at least 1 dimension. If there are 2 or more dimensions, the extra leftmost dimensions are interpreted as batch dimensions. | required |
`center_learning_rate` | `BatchableScalar` | Learning rate (i.e. the step size) for the Adam updates. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | 0.001 |
`beta1` | `BatchableScalar` | beta1 hyperparameter for the Adam optimizer. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | 0.9 |
`beta2` | `BatchableScalar` | beta2 hyperparameter for the Adam optimizer. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | 0.999 |
`epsilon` | `BatchableScalar` | epsilon hyperparameter for the Adam optimizer. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | 1e-08 |
Source code in evotorch/algorithms/functional/funcadam.py
`adam_ask(state)`

Get the search point stored by the given `AdamState`.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`state` | `AdamState` | The current state of the Adam optimizer. | required |
Source code in evotorch/algorithms/functional/funcadam.py
`adam_tell(state, *, follow_grad)`

Tell the Adam optimizer the current gradient to get its next state.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`state` | `AdamState` | The current state of the Adam optimizer. | required |
`follow_grad` | `BatchableVector` | Gradient at the current point of the Adam search. Can be a 1-dimensional tensor in the non-batched case, or a multi-dimensional tensor in the batched case. | required |
Source code in evotorch/algorithms/functional/funcadam.py
`cem(*, center_init, parenthood_ratio, objective_sense, stdev_init=None, radius_init=None, stdev_min=None, stdev_max=None, stdev_max_change=None)`

Get an initial state for the cross entropy method (CEM).

The received initial state, a named tuple of type `CEMState`, is to be passed to the function `cem_ask(...)` to receive the solutions belonging to the first generation of the evolutionary search.

References:
Rubinstein, R. (1999). The cross-entropy method for combinatorial and continuous optimization. Methodology and Computing in Applied Probability, 1(2), 127-190.
Duan, Y., Chen, X., Houthooft, R., Schulman, J., Abbeel, P. (2016). Benchmarking deep reinforcement learning for continuous control. International Conference on Machine Learning. PMLR, 2016.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`center_init` | `BatchableVector` | Center (i.e. mean) of the initial search distribution. Expected as a PyTorch tensor with at least 1 dimension. If the given tensor has 2 or more dimensions, the extra leftmost dimensions are interpreted as batch dimensions. | required |
`stdev_init` | `Optional[Union[float, BatchableVector]]` | Standard deviation of the initial search distribution. If this is given as a scalar, that standard deviation is used for every element of a solution; if given as a 1-dimensional tensor, it is used as the standard deviation vector itself. Either `stdev_init` or `radius_init` is expected, but not both. | None |
`radius_init` | `Optional[Union[float, BatchableScalar]]` | Radius for the initial search distribution, representing the euclidean norm for the first standard deviation vector. Setting this value as something other than None implies that `stdev_init` is to be left as None. | None |
`parenthood_ratio` | `float` | Proportion of the solutions that will be chosen as the parents for the next generation. For example, if this is given as 0.5, the top 50% of the solutions will be chosen as parents. | required |
`objective_sense` | `str` | Expected as a string, either 'min' or 'max'. Determines whether the goal is to minimize or to maximize. | required |
`stdev_min` | `Optional[Union[float, BatchableVector]]` | Minimum allowed standard deviation for the search distribution. Can be given as a scalar or as a tensor with one or more dimensions. When given with at least 2 dimensions, the extra leftmost dimensions will be interpreted as batch dimensions. | None |
`stdev_max` | `Optional[Union[float, BatchableVector]]` | Maximum allowed standard deviation for the search distribution. Can be given as a scalar or as a tensor with one or more dimensions. When given with at least 2 dimensions, the extra leftmost dimensions will be interpreted as batch dimensions. | None |
`stdev_max_change` | `Optional[Union[float, BatchableVector]]` | Maximum allowed change for the standard deviation vector. If this is given as a scalar, this scalar serves as a limiter for the change of the entire standard deviation vector. For example, a scalar value of 0.2 means that the elements of the standard deviation vector cannot change more than 20% of their original values. If this is given as a vector (i.e. a 1-dimensional tensor), each element of the vector limits the change of the corresponding element of the search distribution's standard deviation vector. | None |
Source code in evotorch/algorithms/functional/funccem.py
`cem_ask(state, *, popsize)`

Obtain a population from the cross entropy method, given the state.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`state` | `CEMState` | The current state of the cross entropy method search. | required |
`popsize` | `int` | Number of solutions to be generated for the requested population. | required |
Source code in evotorch/algorithms/functional/funccem.py
`cem_tell(state, values, evals)`

Given the old state and the evals (fitnesses), obtain the next state.

From this state tuple, the center point of the search distribution can be obtained via the field `.center`.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`state` | `CEMState` | The old state of the cross entropy method search. | required |
`values` | `Tensor` | The most recent population, as a PyTorch tensor. | required |
`evals` | `Tensor` | Evaluation results (i.e. fitnesses) for the solutions expressed by `values`. | required |
Source code in evotorch/algorithms/functional/funccem.py
`clipup(*, center_init, momentum=0.9, center_learning_rate=None, max_speed=None)`

Initialize the ClipUp optimizer and return its initial state.

Reference:
Toklu, N. E., Liskowski, P., & Srivastava, R. K. (2020, September). ClipUp: A Simple and Powerful Optimizer for Distribution-Based Policy Evolution. In International Conference on Parallel Problem Solving from Nature (pp. 515-527). Springer, Cham.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`center_init` | `BatchableVector` | Starting point for the ClipUp search. Expected as a PyTorch tensor with at least 1 dimension. If there are 2 or more dimensions, the extra leftmost dimensions are interpreted as batch dimensions. | required |
`center_learning_rate` | `Optional[BatchableScalar]` | Learning rate (i.e. the step size) for the ClipUp updates. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | None |
`max_speed` | `Optional[BatchableScalar]` | Maximum speed, expected as a scalar. The euclidean norm of the velocity (i.e. of the update vector) is not allowed to exceed `max_speed`. | None |
Source code in evotorch/algorithms/functional/funcclipup.py
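For orientation, a minimal ask/tell sketch for ClipUp, reusing `f` and `center_init` from the examples above (the hyperparameter values are illustrative assumptions, not recommended settings):

```python
from evotorch.algorithms.functional import clipup, clipup_ask, clipup_tell
from torch.func import grad

state = clipup(center_init=center_init, center_learning_rate=0.15, max_speed=0.3)

center = clipup_ask(state)                        # current search point
gradient = -(grad(f)(center))                     # negated gradient, to minimize f
state = clipup_tell(state, follow_grad=gradient)  # next state of the search
```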
`clipup_ask(state)`

Get the search point stored by the given `ClipUpState`.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`state` | `ClipUpState` | The current state of the ClipUp optimizer. | required |
Source code in evotorch/algorithms/functional/funcclipup.py
`clipup_tell(state, *, follow_grad)`

Tell the ClipUp optimizer the current gradient to get its next state.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`state` | `ClipUpState` | The current state of the ClipUp optimizer. | required |
`follow_grad` | `BatchableVector` | Gradient at the current point of the ClipUp search. Can be a 1-dimensional tensor in the non-batched case, or a multi-dimensional tensor in the batched case. | required |
Source code in evotorch/algorithms/functional/funcclipup.py
`pgpe(*, center_init, center_learning_rate, stdev_learning_rate, objective_sense, ranking_method='centered', optimizer='clipup', optimizer_config=None, stdev_init=None, radius_init=None, stdev_min=None, stdev_max=None, stdev_max_change=0.2, symmetric=True)`

Get an initial state for the PGPE algorithm.

The received initial state, a named tuple of type `PGPEState`, is to be passed to the function `pgpe_ask(...)` to receive the solutions belonging to the first generation of the evolutionary search.

Inspired by the PGPE implementations used in the studies of Ha (2017, 2019), and by the evolution strategy variant of Salimans et al. (2017), this PGPE implementation uses 0-centered ranking by default. The default optimizer for this PGPE implementation is ClipUp (Toklu et al., 2020).

References:
Frank Sehnke, Christian Osendorfer, Thomas Ruckstiess, Alex Graves, Jan Peters, Jurgen Schmidhuber (2010). Parameter-exploring Policy Gradients. Neural Networks 23(4), 551-559.
David Ha (2017). Evolving Stable Strategies. <http://blog.otoro.net/2017/11/12/evolving-stable-strategies/>
Salimans, T., Ho, J., Chen, X., Sidor, S. and Sutskever, I. (2017). Evolution Strategies as a Scalable Alternative to Reinforcement Learning.
David Ha (2019). Reinforcement Learning for Improving Agent Design. Artificial Life 25(4), 352-365.
Toklu, N.E., Liskowski, P., Srivastava, R.K. (2020). ClipUp: A Simple and Powerful Optimizer for Distribution-based Policy Evolution. Parallel Problem Solving from Nature (PPSN 2020).

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`center_init` | `BatchableVector` | Center (i.e. mean) of the initial search distribution. Expected as a PyTorch tensor with at least 1 dimension. If the given tensor has 2 or more dimensions, the extra leftmost dimensions are interpreted as batch dimensions. | required |
`center_learning_rate` | `BatchableScalar` | Learning rate for when updating the center of the search distribution. For normal cases, this is expected as a scalar. If given as an n-dimensional tensor (where n>0), the extra dimensions will be considered as batch dimensions. | required |
`stdev_learning_rate` | `BatchableScalar` | Learning rate for when updating the standard deviation of the search distribution. For normal cases, this is expected as a scalar. If given as an n-dimensional tensor (where n>0), the extra dimensions will be considered as batch dimensions. | required |
`objective_sense` | `str` | Expected as a string, either 'min' or 'max'. Determines whether the goal is to minimize or to maximize. | required |
`ranking_method` | `str` | Determines how the fitnesses will be ranked before computing the gradients. Among the choices are "centered" (a linear ranking where the worst solution gets the rank -0.5 and the best solution gets the rank +0.5), "linear" (a linear ranking where the worst solution gets the rank 0 and the best solution gets the rank 1), "nes" (the ranking method used by natural evolution strategies), and "raw" (no ranking). | 'centered' |
`optimizer` | `Union[str, tuple]` | Functional optimizer to use when updating the center of the search distribution. The functional optimizer can be expressed via a string or via a tuple. If given as a string, the valid choices are: "clipup" (for the ClipUp optimizer), "adam" (for the Adam optimizer), "sgd" (for regular gradient ascent/descent). If given as a tuple, the tuple should contain the optimizer's initialization, ask, and tell functions, e.g. `(adam, adam_ask, adam_tell)`. | 'clipup' |
`optimizer_config` | `Optional[dict]` | Optionally a dictionary, containing the hyperparameters for the optimizer. | None |
`stdev_init` | `Optional[Union[float, BatchableVector]]` | Standard deviation of the initial search distribution. If this is given as a scalar, that standard deviation is used for every element of a solution; if given as a 1-dimensional tensor, it is used as the standard deviation vector itself. Either `stdev_init` or `radius_init` is expected, but not both. | None |
`radius_init` | `Optional[Union[float, BatchableScalar]]` | Radius for the initial search distribution, representing the euclidean norm for the first standard deviation vector. Setting this value as something other than None implies that `stdev_init` is to be left as None. | None |
`stdev_min` | `Optional[Union[float, BatchableVector]]` | Minimum allowed standard deviation for the search distribution. Can be given as a scalar or as a tensor with one or more dimensions. When given with at least 2 dimensions, the extra leftmost dimensions will be interpreted as batch dimensions. | None |
`stdev_max` | `Optional[Union[float, BatchableVector]]` | Maximum allowed standard deviation for the search distribution. Can be given as a scalar or as a tensor with one or more dimensions. When given with at least 2 dimensions, the extra leftmost dimensions will be interpreted as batch dimensions. | None |
`stdev_max_change` | `Optional[Union[float, BatchableVector]]` | Maximum allowed change for the standard deviation vector. If this is given as a scalar, this scalar serves as a limiter for the change of the entire standard deviation vector. For example, a scalar value of 0.2 means that the elements of the standard deviation vector cannot change more than 20% of their original values. If this is given as a vector (i.e. a 1-dimensional tensor), each element of the vector limits the change of the corresponding element of the search distribution's standard deviation vector. | 0.2 |
`symmetric` | `bool` | Whether or not symmetric (i.e. antithetic) sampling will be done while generating a new population. | True |
Source code in evotorch/algorithms/functional/funcpgpe.py
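For orientation, a minimal ask/tell sketch for PGPE, reusing `f` and `center_init` from the examples above (the hyperparameter values are illustrative assumptions, not recommended settings):

```python
from evotorch.algorithms.functional import pgpe, pgpe_ask, pgpe_tell

state = pgpe(
    center_init=center_init,
    center_learning_rate=0.075,
    stdev_learning_rate=0.1,
    stdev_init=0.08,
    objective_sense="min",
)

solutions = pgpe_ask(state, popsize=1000)
fitnesses = f(solutions)
state = pgpe_tell(state, solutions, fitnesses)

# As noted earlier, the latest center lives within the optimizer state:
latest_center = state.optimizer_state.center
```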
`pgpe_ask(state, *, popsize)`

Obtain a population from the PGPE algorithm.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`state` | `PGPEState` | The current state of PGPE. | required |
`popsize` | `int` | Number of solutions to be generated for the requested population. | required |
Source code in evotorch/algorithms/functional/funcpgpe.py
`pgpe_tell(state, values, evals)`

Given the old state and the evals (fitnesses), obtain the next state.

From this state tuple, the center point of the search distribution can be obtained via the field `.optimizer_state.center`.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`state` | `PGPEState` | The old state of the PGPE search. | required |
`values` | `Tensor` | The most recent population, as a PyTorch tensor. | required |
`evals` | `Tensor` | Evaluation results (i.e. fitnesses) for the solutions expressed by `values`. | required |
Source code in evotorch/algorithms/functional/funcpgpe.py
`sgd(*, center_init, center_learning_rate, momentum=None)`

Initialize the gradient ascent/descent search and get its initial state.

Reference regarding the momentum behavior:
Polyak, B. T. (1964). Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5), 1-17.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`center_init` | `BatchableVector` | Starting point for the gradient ascent/descent. Expected as a PyTorch tensor with at least 1 dimension. If there are 2 or more dimensions, the extra leftmost dimensions are interpreted as batch dimensions. | required |
`center_learning_rate` | `BatchableScalar` | Learning rate (i.e. the step size) for gradient ascent/descent. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | required |
`momentum` | `Optional[BatchableScalar]` | Momentum coefficient, expected as a scalar. If provided as a scalar, Polyak-style momentum will be enabled. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | None |
Source code in evotorch/algorithms/functional/funcsgd.py
`sgd_ask(state)`

Get the search point stored by the given `SGDState`.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`state` | `SGDState` | The current state of gradient ascent/descent. | required |
Source code in evotorch/algorithms/functional/funcsgd.py
`sgd_tell(state, *, follow_grad)`

Tell the gradient ascent/descent the current gradient to get its next state.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`state` | `SGDState` | The current state of gradient ascent/descent. | required |
`follow_grad` | `BatchableVector` | Gradient at the current point of the search. Can be a 1-dimensional tensor in the non-batched case, or a multi-dimensional tensor in the batched case. | required |