Parser
Utilities for parsing string representations of neural net policies
NetParsingError (Exception)
Representation of a parsing error
Source code in evotorch/neuroevolution/net/parser.py
class NetParsingError(Exception):
    """
    Representation of a parsing error
    """

    def __init__(
        self,
        message: str,
        lineno: Optional[int] = None,
        col_offset: Optional[int] = None,
        original_error: Optional[Exception] = None,
    ):
        """
        `__init__(...)`: Initialize the NetParsingError.

        Args:
            message: Error message, as string.
            lineno: Erroneous line number in the string representation of the
                neural network structure.
            col_offset: Erroneous column number in the string representation
                of the neural network structure.
            original_error: If another error caused this parsing error,
                that original error can be attached to this `NetParsingError`
                instance via this argument.
        """
        super().__init__()
        self.message = message
        self.lineno = lineno
        self.col_offset = col_offset
        self.original_error = original_error

    def _to_string(self) -> str:
        parts = []

        parts.append(type(self).__name__)

        if self.lineno is not None:
            # Subtract 1 because `str_to_net(...)` wraps the user's string as
            # "(\n...\n)", which shifts the line numbers reported by `ast` up by one.
            parts.append(" at line(")
            parts.append(str(self.lineno - 1))
            parts.append(")")

        if self.col_offset is not None:
            # `ast` column offsets are 0-based; report a 1-based column instead.
            parts.append(" at column(")
            parts.append(str(self.col_offset + 1))
            parts.append(")")

        parts.append(": ")
        parts.append(self.message)

        return "".join(parts)

    def __str__(self) -> str:
        return self._to_string()

    def __repr__(self) -> str:
        return self._to_string()
__init__(self, message, lineno=None, col_offset=None, original_error=None)
__init__(...): Initialize the NetParsingError.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `message` | `str` | Error message, as string. | *required* |
| `lineno` | `Optional[int]` | Erroneous line number in the string representation of the neural network structure. | `None` |
| `col_offset` | `Optional[int]` | Erroneous column number in the string representation of the neural network structure. | `None` |
| `original_error` | `Optional[Exception]` | If another error caused this parsing error, that original error can be attached to this `NetParsingError` instance via this argument. | `None` |
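As a quick illustration of how a `NetParsingError` renders, given the offsets applied in `_to_string` above:

```python
from evotorch.neuroevolution.net.parser import NetParsingError

err = NetParsingError("unexpected token", lineno=3, col_offset=4)
print(err)  # NetParsingError at line(2) at column(5): unexpected token
```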
Source code in evotorch/neuroevolution/net/parser.py
def __init__(
    self,
    message: str,
    lineno: Optional[int] = None,
    col_offset: Optional[int] = None,
    original_error: Optional[Exception] = None,
):
    """
    `__init__(...)`: Initialize the NetParsingError.

    Args:
        message: Error message, as string.
        lineno: Erroneous line number in the string representation of the
            neural network structure.
        col_offset: Erroneous column number in the string representation
            of the neural network structure.
        original_error: If another error caused this parsing error,
            that original error can be attached to this `NetParsingError`
            instance via this argument.
    """
    super().__init__()
    self.message = message
    self.lineno = lineno
    self.col_offset = col_offset
    self.original_error = original_error
str_to_net(s, **constants)
Read a string representation of a neural net structure,
and return a `torch.nn.Module` instance out of it.

Let us imagine that one wants to describe the following neural network structure:

```python
from torch import nn
from evotorch.neuroevolution.net import MultiLayered

net = MultiLayered(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 4, bias=False), nn.ReLU())
```
By using `str_to_net(...)` one can construct an equivalent module via:

```python
from evotorch.neuroevolution.net import str_to_net

net = str_to_net("Linear(8, 16) >> Tanh() >> Linear(16, 4, bias=False) >> ReLU()")
```
The string can also be multi-line:
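```python
net = str_to_net(
    '''
    Linear(8, 16)
    >> Tanh()
    >> Linear(16, 4, bias=False)
    >> ReLU()
    '''
)
```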
One can also define constants for using them in strings:

```python
net = str_to_net(
    '''
    Linear(input_size, hidden_size)
    >> Tanh()
    >> Linear(hidden_size, output_size, bias=False)
    >> ReLU()
    ''',
    input_size=8,
    hidden_size=16,
    output_size=4,
)
```
In the neural net structure string, when one refers to a module type,
say, `Linear`, the name `Linear` is first searched for in the namespace
`evotorch.neuroevolution.net.layers`, and then in the namespace `torch.nn`.
In the case of `Linear`, the searched name exists in `torch.nn`,
and therefore, the layer type to be instantiated is accepted as
`torch.nn.Linear`.
If one had instead used the name, say, `StructuredControlNet`, then the
layer type to be instantiated would be
`evotorch.neuroevolution.net.layers.StructuredControlNet`.
The namespace `evotorch.neuroevolution.net.layers` contains its own
implementations for RNN and LSTM. These recurrent layer implementations
work similarly to their counterparts `torch.nn.RNN` and `torch.nn.LSTM`,
except that EvoTorch's implementations do not expect the data with extra
leftmost dimensions for batching and for timesteps. Instead, they expect
to receive a single input and a single current hidden state, and produce
a single output and a single new hidden state. These recurrent layer
implementations of EvoTorch can be used within a neural net structure
string. Therefore, the following examples are valid:
```python
rnn1 = str_to_net("RNN(4, 8) >> Linear(8, 2)")

rnn2 = str_to_net(
    '''
    Linear(4, 10)
    >> Tanh()
    >> RNN(input_size=10, hidden_size=24, nonlinearity='tanh')
    >> Linear(24, 2)
    '''
)

lstm1 = str_to_net("LSTM(4, 32) >> Linear(32, 2)")
lstm2 = str_to_net("LSTM(input_size=4, hidden_size=32) >> Linear(32, 2)")
```
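Since these recurrent layers operate on a single timestep at a time, they can be stepped manually. The following is a minimal sketch, assuming that the layer's forward pass accepts the current input together with the previous hidden state (`None` before the first step) and returns the output along with the new hidden state, as described above:

```python
import torch
from evotorch.neuroevolution.net.layers import RNN

rnn = RNN(input_size=4, hidden_size=8)
h = None  # assumption: no hidden state exists before the first timestep
for _ in range(10):
    x = torch.randn(4)  # a single observation; no batch or time dimensions
    y, h = rnn(x, h)  # a single output, plus the new hidden state
```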
**Notes regarding usage with `evotorch.neuroevolution.GymNE`
or with `evotorch.neuroevolution.VecGymNE`:**

While instantiating a `GymNE` or a `VecGymNE`, one can specify a neural
net structure string as the policy. Therefore, while filling the policy
string for a `GymNE`, all the rules mentioned above apply. Additionally,
while using `str_to_net(...)` internally, `GymNE` and `VecGymNE` define
these extra constants:

- `obs_length` (length of the observation vector),
- `act_length` (length of the action vector for continuous-action
  environments, or number of actions for discrete-action environments), and
- `obs_shape` (shape of the observation as a tuple, assuming that the
  observation space is of type `gym.spaces.Box`, usable within the string
  like `obs_shape[0]`, `obs_shape[1]`, etc., or simply `obs_shape` to refer
  to the entire tuple).
Therefore, while instantiating a `GymNE` or a `VecGymNE`, one can define a
single-hidden-layered policy via this string:
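```
"Linear(obs_length, 16) >> Tanh() >> Linear(16, act_length) >> Tanh()"
```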
In the policy string above, one might choose to omit the last `Tanh()`, as
`GymNE` and `VecGymNE` will clip the final output of the policy to conform
to the action boundaries defined by the target reinforcement learning
environment, and such a clipping operation might be seen as using an
activation function similar to hard-tanh anyway.
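To make this concrete, a hypothetical instantiation might look like the sketch below; the environment id is an arbitrary example, and the exact `GymNE` keyword arguments (here `env` and `network`) may differ across EvoTorch versions:

```python
from evotorch.neuroevolution import GymNE

problem = GymNE(
    env="LunarLanderContinuous-v2",  # assumed environment id, for illustration only
    network="Linear(obs_length, 16) >> Tanh() >> Linear(16, act_length)",
)
```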
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `s` | `str` | The string which expresses the neural net structure. | *required* |

Returns:

| Type | Description |
|---|---|
| `Module` | The PyTorch module of the specified structure. |
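If parsing fails, a `NetParsingError` (documented above) is raised. A sketch of handling such a failure, assuming that an unrecognized layer name is one way to trigger it (`NoSuchLayer` is a deliberately invalid name, and the exact error message will vary):

```python
from evotorch.neuroevolution.net import str_to_net
from evotorch.neuroevolution.net.parser import NetParsingError

try:
    net = str_to_net("Linear(8, 16) >> NoSuchLayer(16, 4)")
except NetParsingError as err:
    print(err)  # e.g. an error pointing at the offending line and column
```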
Source code in evotorch/neuroevolution/net/parser.py
def str_to_net(s: str, **constants) -> nn.Module:
    """
    Read a string representation of a neural net structure,
    and return a `torch.nn.Module` instance out of it.

    Let us imagine that one wants to describe the following
    neural network structure:

    ```python
    from torch import nn
    from evotorch.neuroevolution.net import MultiLayered

    net = MultiLayered(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 4, bias=False), nn.ReLU())
    ```

    By using `str_to_net(...)` one can construct an equivalent
    module via:

    ```python
    from evotorch.neuroevolution.net import str_to_net

    net = str_to_net("Linear(8, 16) >> Tanh() >> Linear(16, 4, bias=False) >> ReLU()")
    ```

    The string can also be multi-line:

    ```python
    net = str_to_net(
        '''
        Linear(8, 16)
        >> Tanh()
        >> Linear(16, 4, bias=False)
        >> ReLU()
        '''
    )
    ```

    One can also define constants for using them in strings:

    ```python
    net = str_to_net(
        '''
        Linear(input_size, hidden_size)
        >> Tanh()
        >> Linear(hidden_size, output_size, bias=False)
        >> ReLU()
        ''',
        input_size=8,
        hidden_size=16,
        output_size=4,
    )
    ```

    In the neural net structure string, when one refers to a module type,
    say, `Linear`, first the name `Linear` is searched for in the namespace
    `evotorch.neuroevolution.net.layers`, and then in the namespace `torch.nn`.
    In the case of `Linear`, the searched name exists in `torch.nn`,
    and therefore, the layer type to be instantiated is accepted as
    `torch.nn.Linear`.
    Instead of `Linear`, if one had used the name, say,
    `StructuredControlNet`, then, the layer type to be instantiated
    would be `evotorch.neuroevolution.net.layers.StructuredControlNet`.

    The namespace `evotorch.neuroevolution.net.layers` contains its own
    implementations for RNN and LSTM. These recurrent layer implementations
    work similarly to their counterparts `torch.nn.RNN` and `torch.nn.LSTM`,
    except that EvoTorch's implementations do not expect the data with extra
    leftmost dimensions for batching and for timesteps. Instead, they expect
    to receive a single input and a single current hidden state, and produce
    a single output and a single new hidden state. These recurrent layer
    implementations of EvoTorch can be used within a neural net structure
    string. Therefore, the following examples are valid:

    ```python
    rnn1 = str_to_net("RNN(4, 8) >> Linear(8, 2)")

    rnn2 = str_to_net(
        '''
        Linear(4, 10)
        >> Tanh()
        >> RNN(input_size=10, hidden_size=24, nonlinearity='tanh')
        >> Linear(24, 2)
        '''
    )

    lstm1 = str_to_net("LSTM(4, 32) >> Linear(32, 2)")
    lstm2 = str_to_net("LSTM(input_size=4, hidden_size=32) >> Linear(32, 2)")
    ```

    **Notes regarding usage with `evotorch.neuroevolution.GymNE`
    or with `evotorch.neuroevolution.VecGymNE`:**

    While instantiating a `GymNE` or a `VecGymNE`, one can specify a neural
    net structure string as the policy. Therefore, while filling the policy
    string for a `GymNE`, all these rules mentioned above apply. Additionally,
    while using `str_to_net(...)` internally, `GymNE` and `VecGymNE` define
    these extra constants:

    `obs_length` (length of the observation vector),

    `act_length` (length of the action vector for continuous-action
    environments, or number of actions for discrete-action
    environments), and

    `obs_shape` (shape of the observation as a tuple, assuming that the
    observation space is of type `gym.spaces.Box`, usable within the string
    like `obs_shape[0]`, `obs_shape[1]`, etc., or simply `obs_shape` to refer
    to the entire tuple).

    Therefore, while instantiating a `GymNE` or a `VecGymNE`, one can define a
    single-hidden-layered policy via this string:

    ```
    "Linear(obs_length, 16) >> Tanh() >> Linear(16, act_length) >> Tanh()"
    ```

    In the policy string above, one might choose to omit the last `Tanh()`, as
    `GymNE` and `VecGymNE` will clip the final output of the policy to conform
    to the action boundaries defined by the target reinforcement learning
    environment, and such a clipping operation might be seen as using an
    activation function similar to hard-tanh anyway.

    Args:
        s: The string which expresses the neural net structure.
    Returns:
        The PyTorch module of the specified structure.
    """
    s = f"(\n{s}\n)"
    return _process_expr(ast.parse(s, mode="eval").body, constants=constants)
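As the last two lines show, the input string is wrapped as `"(\n...\n)"` so that `ast.parse(..., mode="eval")` can handle multi-line structure strings as a single parenthesized expression; this wrapping is also why `NetParsingError._to_string` subtracts 1 from the reported line number. A minimal end-to-end usage, based on the examples above:

```python
import torch
from evotorch.neuroevolution.net import str_to_net

net = str_to_net("Linear(8, 16) >> Tanh() >> Linear(16, 4, bias=False) >> ReLU()")
out = net(torch.randn(8))  # feed a single unbatched input vector
print(out.shape)  # expected: torch.Size([4])
```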