Funcadam
adam(*, center_init, center_learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08)
Initialize an Adam optimizer and return its initial state.
Reference:
Kingma, D. P. and J. Ba (2015).
Adam: A method for stochastic optimization.
In Proceedings of the 3rd International Conference on Learning Representations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `center_init` | `BatchableVector` | Starting point for the Adam search. Expected as a PyTorch tensor with at least 1 dimension. If there are 2 or more dimensions, the extra leftmost dimensions are interpreted as batch dimensions. | *required* |
| `center_learning_rate` | `BatchableScalar` | Learning rate (i.e. the step size) for the Adam updates. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | `0.001` |
| `beta1` | `BatchableScalar` | `beta1` hyperparameter for the Adam optimizer. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | `0.9` |
| `beta2` | `BatchableScalar` | `beta2` hyperparameter for the Adam optimizer. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | `0.999` |
| `epsilon` | `BatchableScalar` | `epsilon` hyperparameter for the Adam optimizer. Can be a scalar or a multidimensional tensor. If given as a tensor with multiple dimensions, those dimensions will be interpreted as batch dimensions. | `1e-08` |
Source code in evotorch/algorithms/functional/funcadam.py
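As a quick usage sketch (assuming `adam` is importable from `evotorch.algorithms.functional`, alongside the other functional algorithms on this page), initializing an optimizer state could look like this:

```python
import torch
from evotorch.algorithms.functional import adam

# Begin the Adam search at the origin of a 5-dimensional space,
# with a larger step size than the default of 0.001.
state = adam(center_init=torch.zeros(5), center_learning_rate=0.01)
```

In keeping with the functional style of this API, the returned `AdamState` is not updated in place; each call to `adam_tell(...)` below returns the next state.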
adam_ask(state)
Get the search point stored by the given `AdamState`.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `state` | `AdamState` | The current state of the Adam optimizer. | *required* |
Source code in evotorch/algorithms/functional/funcadam.py
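For example, continuing the sketch above, retrieving the current search point is a single call:

```python
center = adam_ask(state)  # the search point stored by `state`, as a PyTorch tensor
```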
adam_tell(state, *, follow_grad)
Tell the Adam optimizer the current gradient to get its next state.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `state` | `AdamState` | The current state of the Adam optimizer. | *required* |
| `follow_grad` | `BatchableVector` | Gradient at the current point of the Adam search. Can be a 1-dimensional tensor in the non-batched case, or a multi-dimensional tensor in the batched case. | *required* |
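Putting the three functions together, a minimal ask-and-tell loop might look as follows. This is an illustrative sketch only: the quadratic objective and iteration count are made up, and it assumes that `follow_grad` is the direction the optimizer follows (so passing the gradient of `f` ascends `f`; to minimize a loss, one would pass the negative gradient of that loss).

```python
import torch
from evotorch.algorithms.functional import adam, adam_ask, adam_tell

# Concave objective to maximize: f(x) = -||x - target||^2.
# Its gradient at x is -2 * (x - target).
target = torch.tensor([3.0, -1.0, 2.0])

state = adam(center_init=torch.zeros(3), center_learning_rate=0.1)
for _ in range(1000):
    x = adam_ask(state)                         # current search point
    grad = -2.0 * (x - target)                  # gradient of f at x
    state = adam_tell(state, follow_grad=grad)  # follow the gradient, get the next state

print(adam_ask(state))  # converges toward `target`
```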