funcclipup

clipup(*, center_init, momentum=0.9, center_learning_rate=None, max_speed=None)

Initialize the ClipUp optimizer and return its initial state.
Reference:
Toklu, N. E., Liskowski, P., & Srivastava, R. K. (2020, September).
ClipUp: A Simple and Powerful Optimizer for Distribution-Based Policy Evolution.
In International Conference on Parallel Problem Solving from Nature (pp. 515-527).
Springer, Cham.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `center_init` | `BatchableVector` | Starting point for the ClipUp search. Expected as a PyTorch tensor with at least 1 dimension. If there are 2 or more dimensions, the extra leftmost dimensions are interpreted as batch dimensions. | required |
| `center_learning_rate` | `Optional[BatchableScalar]` | Learning rate (i.e. the step size) for the ClipUp updates. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | `None` |
| `max_speed` | `Optional[BatchableScalar]` | Maximum speed, expected as a scalar. The Euclidean norm of the velocity (i.e. of the update vector) is not allowed to exceed this maximum speed. | `None` |
Source code in evotorch/algorithms/functional/funcclipup.py
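The parameters above map directly onto ClipUp's update rule (Toklu et al., 2020): the gradient is normalized and scaled by the learning rate, accumulated into a momentum velocity, and the velocity's Euclidean norm is clipped to `max_speed`. The sketch below illustrates that rule in plain NumPy; it is a minimal, non-batched illustration, not evotorch's actual implementation (which operates on PyTorch tensors and supports batch dimensions).

```python
import numpy as np

def clipup_step(center, velocity, grad, *, lr=0.15, momentum=0.9, max_speed=0.3):
    """One ClipUp update: normalize the gradient, accumulate momentum,
    then clip the velocity's Euclidean norm to max_speed."""
    step = lr * grad / np.linalg.norm(grad)   # gradient direction, fixed step length lr
    velocity = momentum * velocity + step     # heavy-ball momentum accumulation
    speed = np.linalg.norm(velocity)
    if speed > max_speed:                     # the clipping that gives ClipUp its name
        velocity = velocity * (max_speed / speed)
    return center + velocity, velocity
```

Because the gradient is normalized, `center_learning_rate` directly controls the per-step displacement, and `max_speed` bounds how much momentum can amplify it; the paper suggests choosing `max_speed` as a small multiple of the learning rate.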
clipup_ask(state)

Get the search point stored by the given ClipUpState.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `state` | `ClipUpState` | The current state of the ClipUp optimizer. | required |
Source code in evotorch/algorithms/functional/funcclipup.py
clipup_tell(state, *, follow_grad)

Tell the ClipUp optimizer the current gradient to get its next state.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `state` | `ClipUpState` | The current state of the ClipUp optimizer. | required |
| `follow_grad` | `BatchableVector` | Gradient at the current point of the ClipUp search. Can be a 1-dimensional tensor in the non-batched case, or a multi-dimensional tensor in the batched case. | required |
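Taken together, `clipup`, `clipup_ask`, and `clipup_tell` form a functional ask-tell loop: `clipup` builds the initial state, `clipup_ask` reads out the current center, and `clipup_tell` returns a new state from the current gradient. The sketch below shows that loop shape with minimal local stand-ins for the three functions (pure NumPy, no batching); these are illustrative substitutes, not the evotorch implementations, which live in `evotorch.algorithms.functional` and work on PyTorch tensors.

```python
from collections import namedtuple
import numpy as np

# Local stand-in for the state object; the real ClipUpState differs.
ClipUpState = namedtuple("ClipUpState", ["center", "velocity", "lr", "momentum", "max_speed"])

def clipup(*, center_init, momentum=0.9, center_learning_rate=0.15, max_speed=0.3):
    center = np.asarray(center_init, dtype=float)
    return ClipUpState(center, np.zeros_like(center), center_learning_rate, momentum, max_speed)

def clipup_ask(state):
    return state.center  # the current search point

def clipup_tell(state, *, follow_grad):
    g = np.asarray(follow_grad, dtype=float)
    step = state.lr * g / np.linalg.norm(g)   # normalized gradient step
    v = state.momentum * state.velocity + step
    speed = np.linalg.norm(v)
    if speed > state.max_speed:               # clip the velocity's norm
        v = v * (state.max_speed / speed)
    return state._replace(center=state.center + v, velocity=v)

# Ask-tell loop: maximize f(x) = -||x - target||^2 by following its gradient.
target = np.array([1.0, -2.0])
state = clipup(center_init=np.zeros(2))
for _ in range(100):
    x = clipup_ask(state)
    grad = -2.0 * (x - target)                # ascent direction of f at x
    state = clipup_tell(state, follow_grad=grad)
```

Note that the loop never mutates `state` in place; each `clipup_tell` call returns a fresh state, which is the pattern a functional optimizer API implies.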