# Parser

Utilities for parsing string representations of neural net policies.
## NetParsingError

Bases: `Exception`

Representation of a parsing error.
### `__init__(message, lineno=None, col_offset=None, original_error=None)`

Initialize the `NetParsingError`.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `message` | `str` | Error message, as string. | required |
| `lineno` | `Optional[int]` | Erroneous line number in the string representation of the neural network structure. | `None` |
| `col_offset` | `Optional[int]` | Erroneous column number in the string representation of the neural network structure. | `None` |
| `original_error` | `Optional[Exception]` | If another error caused this parsing error, that original error can be attached to this `NetParsingError` instance. | `None` |
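As a minimal sketch of catching this error, assuming that `str_to_net(...)` (documented below) raises it for malformed structure strings:

```python
from evotorch.neuroevolution.net import str_to_net
from evotorch.neuroevolution.net.parser import NetParsingError

try:
    net = str_to_net("Linear(8, 16")  # malformed: missing closing parenthesis
except (NetParsingError, SyntaxError) as err:
    # Depending on where the failure is detected, the underlying error may or
    # may not be wrapped into a NetParsingError; both are caught here to be safe.
    print("Parsing failed:", err)
```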
## `str_to_net(s, **constants)`

Read a string representation of a neural net structure, and return a `torch.nn.Module` instance out of it.
Let us imagine that one wants to describe the following neural network structure:

```python
from torch import nn
from evotorch.neuroevolution.net import MultiLayered

net = MultiLayered(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 4, bias=False), nn.ReLU())
```
By using `str_to_net(...)`, one can construct an equivalent module via:

```python
from evotorch.neuroevolution.net import str_to_net

net = str_to_net("Linear(8, 16) >> Tanh() >> Linear(16, 4, bias=False) >> ReLU()")
```
The string can also be multi-line:
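For instance, the single-line structure above can equivalently be written across several lines:

```python
net = str_to_net(
    """
    Linear(8, 16)
    >> Tanh()
    >> Linear(16, 4, bias=False)
    >> ReLU()
    """
)
```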
One can also define constants for using them in strings:

```python
net = str_to_net(
    '''
    Linear(input_size, hidden_size)
    >> Tanh()
    >> Linear(hidden_size, output_size, bias=False)
    >> ReLU()
    ''',
    input_size=8,
    hidden_size=16,
    output_size=4,
)
```
In the neural net structure string, when one refers to a module type, say, `Linear`, the name `Linear` is first searched for in the namespace `evotorch.neuroevolution.net.layers`, and then in the namespace `torch.nn`. In the case of `Linear`, the searched name exists in `torch.nn`, and therefore, the layer type to be instantiated is accepted as `torch.nn.Linear`. Instead of `Linear`, if one had used the name, say, `StructuredControlNet`, then the layer type to be instantiated would be `evotorch.neuroevolution.net.layers.StructuredControlNet`.
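As a minimal sketch of this lookup order:

```python
from evotorch.neuroevolution.net import str_to_net

# `Linear` is not defined in evotorch.neuroevolution.net.layers, so, per the
# lookup order above, the instantiated layer type is torch.nn.Linear.
net = str_to_net("Linear(8, 4)")
print(type(net))  # expected: torch.nn.Linear (an assumption here is that a
                  # single-module string is returned unwrapped)
```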
The namespace `evotorch.neuroevolution.net.layers` contains its own implementations for RNN and LSTM. These recurrent layer implementations work similarly to their counterparts `torch.nn.RNN` and `torch.nn.LSTM`, except that EvoTorch's implementations do not expect the data with extra leftmost dimensions for batching and for timesteps. Instead, they expect to receive a single input and a single current hidden state, and produce a single output and a single new hidden state. These recurrent layer implementations of EvoTorch can be used within a neural net structure string. Therefore, the following examples are valid:
```python
rnn1 = str_to_net("RNN(4, 8) >> Linear(8, 2)")

rnn2 = str_to_net(
    '''
    Linear(4, 10)
    >> Tanh()
    >> RNN(input_size=10, hidden_size=24, nonlinearity='tanh')
    >> Linear(24, 2)
    '''
)

lstm1 = str_to_net("LSTM(4, 32) >> Linear(32, 2)")

lstm2 = str_to_net("LSTM(input_size=4, hidden_size=32) >> Linear(32, 2)")
```
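A minimal sketch of stepping such a recurrent policy manually, under the assumption that the parsed module follows the single-step interface described above (the exact forward signature is an assumption, not confirmed by this page):

```python
import torch
from evotorch.neuroevolution.net import str_to_net

policy = str_to_net("RNN(4, 8) >> Linear(8, 2)")

hidden = None  # assumption: passing None requests a fresh hidden state
for _ in range(10):
    x = torch.randn(4)  # a single observation, with no batch/time dimensions
    out, hidden = policy(x, hidden)  # single output and new hidden state out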
Notes regarding usage with `evotorch.neuroevolution.GymNE` or with `evotorch.neuroevolution.VecGymNE`:

While instantiating a `GymNE` or a `VecGymNE`, one can specify a neural net structure string as the policy. Therefore, while filling the policy string for a `GymNE`, all these rules mentioned above apply. Additionally, while using `str_to_net(...)` internally, `GymNE` and `VecGymNE` define these extra constants:

- `obs_length` (length of the observation vector),
- `act_length` (length of the action vector for continuous-action environments, or number of actions for discrete-action environments), and
- `obs_shape` (shape of the observation as a tuple, assuming that the observation space is of type `gym.spaces.Box`, usable within the string like `obs_shape[0]`, `obs_shape[1]`, etc., or simply `obs_shape` to refer to the entire tuple).
Therefore, while instantiating a `GymNE` or a `VecGymNE`, one can define a single-hidden-layered policy via a string like the one sketched below.
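The original example string is not preserved on this page; the following is a sketch consistent with the surrounding text (the hidden layer size of 16 is an arbitrary choice):

```python
"Linear(obs_length, 16) >> Tanh() >> Linear(16, act_length) >> Tanh()"
```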
In the policy string above, one might choose to omit the last `Tanh()`, as `GymNE` and `VecGymNE` will clip the final output of the policy to conform to the action boundaries defined by the target reinforcement learning environment, and such a clipping operation might be seen as using an activation function similar to hard-tanh anyway.
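As an illustration, here is a minimal sketch of passing such a policy string to `GymNE` (the environment name is a hypothetical choice, and the constructor arguments reflect an assumption about recent EvoTorch versions):

```python
from evotorch.neuroevolution import GymNE

# `obs_length` and `act_length` are filled in by GymNE itself,
# based on the target environment's observation and action spaces.
problem = GymNE(
    env="Pendulum-v1",  # hypothetical environment choice
    network="Linear(obs_length, 16) >> Tanh() >> Linear(16, act_length)",
)
```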
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `s` | `str` | The string which expresses the neural net structure. | required |