misc
Utilities for reading and writing neural network parameters.
count_parameters(net)
Get the number of parameters of the network.
Parameters:

Name | Type | Description | Default
---|---|---|---
`net` | `Module` | The torch module whose parameters will be counted. | required
Returns:

Type | Description
---|---
`int` | The number of parameters, as an integer.
Source code in evotorch/neuroevolution/net/misc.py
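The source listing for `count_parameters` was not captured above. The following is a minimal sketch consistent with the documented behavior, not the verbatim evotorch source: it simply sums the element counts of all parameter tensors.

```python
from torch import nn


def count_parameters(net: nn.Module) -> int:
    # Sketch (assumed implementation): sum the number of elements
    # of every parameter tensor registered on the module.
    return sum(p.numel() for p in net.parameters())


# Example: a Linear(3, 2) layer has 3*2 weights + 2 biases = 8 parameters.
print(count_parameters(nn.Linear(3, 2)))  # -> 8
```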
device_of_module(m, default=None)
Get the device in which the module exists.
This function looks at the first parameter of the module, and returns its device. This function is not meant to be used on modules whose parameters exist on different devices.
Parameters:

Name | Type | Description | Default
---|---|---|---
`m` | `Module` | The module whose device is being queried. | required
`default` | `Union[str, torch.device]` | The fallback device to return if the module has no parameters. If this is left as None, the fallback device is assumed to be "cpu". | `None`
Returns:

Type | Description
---|---
`device` | The device of the module, determined from its first parameter.
Source code in evotorch/neuroevolution/net/misc.py
```python
def device_of_module(m: nn.Module, default: Optional[Union[str, torch.device]] = None) -> torch.device:
    """
    Get the device in which the module exists.

    This function looks at the first parameter of the module, and returns
    its device. This function is not meant to be used on modules whose
    parameters exist on different devices.

    Args:
        m: The module whose device is being queried.
        default: The fallback device to return if the module has no
            parameters. If this is left as None, the fallback device
            is assumed to be "cpu".
    Returns:
        The device of the module, determined from its first parameter.
    """
    if default is None:
        default = torch.device("cpu")
    device = default
    for p in m.parameters():
        device = p.device
        break
    return device
```
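A brief usage sketch; the modules below are arbitrary illustrations:

```python
import torch
from torch import nn

net = nn.Linear(4, 2)          # parameters live on the CPU by default
print(device_of_module(net))   # -> cpu

empty = nn.Sequential()        # a module with no parameters at all
print(device_of_module(empty))                                  # -> cpu (implicit fallback)
print(device_of_module(empty, default=torch.device("cuda:0")))  # -> cuda:0 (explicit fallback)
```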
fill_parameters(net, vector)
Fill the parameters of a torch module (net) from a vector.
No gradient information is kept.
The vector's length must be exactly the same as the number of parameters of the PyTorch module.
Parameters:

Name | Type | Description | Default
---|---|---|---
`net` | `Module` | The torch module whose parameter values will be filled. | required
`vector` | `Tensor` | A 1-D torch tensor which stores the parameter values. | required
Source code in evotorch/neuroevolution/net/misc.py
```python
@torch.no_grad()
def fill_parameters(net: nn.Module, vector: torch.Tensor):
    """Fill the parameters of a torch module (net) from a vector.

    No gradient information is kept.

    The vector's length must be exactly the same as the number
    of parameters of the PyTorch module.

    Args:
        net: The torch module whose parameter values will be filled.
        vector: A 1-D torch tensor which stores the parameter values.
    """
    address = 0
    for p in net.parameters():
        d = p.data.view(-1)
        n = len(d)
        d[:] = torch.as_tensor(vector[address : address + n], device=d.device)
        address += n
    if address != len(vector):
        raise IndexError("The parameter vector is larger than expected")
```
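A short usage sketch, assuming an arbitrary `nn.Linear` layer; it zeroes every parameter of the module:

```python
import torch
from torch import nn

net = nn.Linear(3, 2)                      # 3*2 weights + 2 biases = 8 parameters
flat = torch.zeros(count_parameters(net))  # exactly as many values as parameters
fill_parameters(net, flat)                 # copies the values into the module, in order
assert torch.all(net.weight == 0) and torch.all(net.bias == 0)
# A vector longer than the parameter count raises IndexError.
```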
parameter_vector(net, *, device=None)
Get all the parameters of a torch module (net) into a vector.
No gradient information is kept.
Parameters:

Name | Type | Description | Default
---|---|---|---
`net` | `Module` | The torch module whose parameters will be extracted. | required
`device` | `Union[str, torch.device]` | The device on which the parameter vector will be constructed. If the network has parameters across multiple devices, you can specify this argument so that the concatenation of all the parameters will succeed. | `None`
Returns:

Type | Description
---|---
`Tensor` | The parameters of the module in a 1-D tensor.
Source code in evotorch/neuroevolution/net/misc.py
```python
@torch.no_grad()
def parameter_vector(net: nn.Module, *, device: Optional[Device] = None) -> torch.Tensor:
    """Get all the parameters of a torch module (net) into a vector.

    No gradient information is kept.

    Args:
        net: The torch module whose parameters will be extracted.
        device: The device on which the parameter vector will be constructed.
            If the network has parameters across multiple devices,
            you can specify this argument so that the concatenation of all
            the parameters will succeed.
    Returns:
        The parameters of the module in a 1-D tensor.
    """
    dev_kwarg = {} if device is None else {"device": device}
    all_vectors = []
    for p in net.parameters():
        all_vectors.append(torch.as_tensor(p.data.view(-1), **dev_kwarg))
    return torch.cat(all_vectors)
```
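The functions above compose naturally. A round-trip sketch; the Gaussian perturbation here is just an illustration, not part of the library:

```python
import torch
from torch import nn

net = nn.Linear(3, 2)

# Extract-modify-fill round trip, the typical neuroevolution pattern:
flat = parameter_vector(net)
assert flat.numel() == count_parameters(net)

mutated = flat + 0.01 * torch.randn_like(flat)  # illustrative Gaussian mutation
fill_parameters(net, mutated)
assert torch.allclose(parameter_vector(net), mutated)
```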