Index
This namespace contains various utility functions, classes, and type aliases.
cloning
¶
Clonable
¶
A base class allowing inheriting classes to define how they should be cloned.
Any class inheriting from Clonable gains these behaviors: (i) a new method named `.clone()` becomes available; (ii) `__deepcopy__` and `__copy__` work as aliases for `.clone()`; (iii) a new method, `_get_cloned_state(self, *, memo: dict)`, is declared and needs to be implemented by the inheriting class.
The method `_get_cloned_state(...)` expects a dictionary named `memo`, which maps from the ids of already cloned objects to their clones. If `_get_cloned_state(...)` is to use `deep_clone(...)` or `deepcopy(...)` within itself, this `memo` dictionary can be passed to these functions. The return value of `_get_cloned_state(...)` is a dictionary, which will be used as the `__dict__` of the newly made clone.
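As a minimal sketch of the expected usage (the `Settings` class below is hypothetical, not part of EvoTorch), a subclass only needs to implement `_get_cloned_state(...)`:

from evotorch.tools.cloning import Clonable, deep_clone

class Settings(Clonable):
    # Hypothetical example class, for illustration only
    def __init__(self, name: str, values: list):
        self.name = name
        self.values = values

    def _get_cloned_state(self, *, memo: dict) -> dict:
        # Clone this object's attribute dictionary, forwarding `memo`
        # so that shared and cyclic references are handled consistently
        return deep_clone(self.__dict__, otherwise_deepcopy=True, memo=memo)

s = Settings("example", [1, 2, 3])
s2 = s.clone()  # via the inherited .clone() method
assert s2.values is not s.values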
Source code in evotorch/tools/cloning.py
class Clonable:
    """
    A base class allowing inheriting classes to define how they should be cloned.
    Any class inheriting from Clonable gains these behaviors:
    (i) a new method named `.clone()` becomes available;
    (ii) `__deepcopy__` and `__copy__` work as aliases for `.clone()`;
    (iii) a new method, `_get_cloned_state(self, *, memo: dict)`, is declared
    and needs to be implemented by the inheriting class.
    The method `_get_cloned_state(...)` expects a dictionary named `memo`,
    which maps from the ids of already cloned objects to their clones.
    If `_get_cloned_state(...)` is to use `deep_clone(...)` or `deepcopy(...)`
    within itself, this `memo` dictionary can be passed to these functions.
    The return value of `_get_cloned_state(...)` is a dictionary, which will
    be used as the `__dict__` of the newly made clone.
    """

    def _get_cloned_state(self, *, memo: dict) -> dict:
        raise NotImplementedError

    def clone(self, *, memo: Optional[dict] = None) -> "Clonable":
        """
        Get a clone of this object.

        Args:
            memo: Optionally a dictionary which maps from the ids of the
                already cloned objects to their clones. In most scenarios,
                when this method is called from outside, this can be left
                as None.
        Returns:
            The clone of the object.
        """
        if memo is None:
            memo = {}
        self_id = id(self)
        if self_id in memo:
            return memo[self_id]
        new_object = object.__new__(type(self))
        memo[id(self)] = new_object
        new_object.__dict__.update(self._get_cloned_state(memo=memo))
        return new_object

    def __copy__(self) -> "Clonable":
        return self.clone()

    def __deepcopy__(self, memo: Optional[dict]):
        if memo is None:
            memo = {}
        return self.clone(memo=memo)
clone(self, *, memo=None)
¶
Get a clone of this object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
memo | Optional[dict] | Optionally a dictionary which maps from the ids of the already cloned objects to their clones. In most scenarios, when this method is called from outside, this can be left as None. | None |
Returns:
Type | Description |
---|---|
Clonable | The clone of the object. |
Source code in evotorch/tools/cloning.py
def clone(self, *, memo: Optional[dict] = None) -> "Clonable":
    """
    Get a clone of this object.

    Args:
        memo: Optionally a dictionary which maps from the ids of the
            already cloned objects to their clones. In most scenarios,
            when this method is called from outside, this can be left
            as None.
    Returns:
        The clone of the object.
    """
    if memo is None:
        memo = {}
    self_id = id(self)
    if self_id in memo:
        return memo[self_id]
    new_object = object.__new__(type(self))
    memo[id(self)] = new_object
    new_object.__dict__.update(self._get_cloned_state(memo=memo))
    return new_object
ReadOnlyClonable (Clonable)
¶
Clonability base class for read-only and/or immutable objects.
This is a base class specialized for the immutable containers of EvoTorch. These immutable containers have two behaviors for cloning: one where the read-only attribute is preserved and one where a mutable clone is created.
Upon being copied or deep-copied (using the standard Python functions), the newly made clones are also read-only. However, when copied using the `clone(...)` method, the newly made clone is mutable by default (unless the `clone(...)` method was used with `preserve_read_only=True`). This default behavior of the `clone(...)` method was inspired by the `copy()` method of numpy arrays (the inspiration being that the `.copy()` of a read-only numpy array will not be read-only anymore).
Subclasses of `evotorch.immutable.ImmutableContainer` inherit from `ReadOnlyClonable`.
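For illustration, here is a minimal hypothetical subclass showing the two cloning behaviors; real use cases would normally go through EvoTorch's immutable containers instead:

from evotorch.tools.cloning import ReadOnlyClonable

class FrozenPoint(ReadOnlyClonable):
    # Hypothetical example class, for illustration only
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

    def _get_cloned_state(self, *, memo: dict) -> dict:
        # State used when the read-only behavior is to be preserved
        return {"x": self.x, "y": self.y}

    def _get_mutable_clone(self, *, memo: dict):
        # A plain mutable counterpart; here simply a list
        return [self.x, self.y]

p = FrozenPoint(1.0, 2.0)
as_list = p.clone()                           # mutable clone: [1.0, 2.0]
as_frozen = p.clone(preserve_read_only=True)  # another FrozenPoint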
Source code in evotorch/tools/cloning.py
class ReadOnlyClonable(Clonable):
    """
    Clonability base class for read-only and/or immutable objects.
    This is a base class specialized for the immutable containers of EvoTorch.
    These immutable containers have two behaviors for cloning:
    one where the read-only attribute is preserved and one where a mutable
    clone is created.
    Upon being copied or deep-copied (using the standard Python functions),
    the newly made clones are also read-only. However, when copied using the
    `clone(...)` method, the newly made clone is mutable by default
    (unless the `clone(...)` method was used with `preserve_read_only=True`).
    This default behavior of the `clone(...)` method was inspired by the
    `copy()` method of numpy arrays (the inspiration being that the `.copy()`
    of a read-only numpy array will not be read-only anymore).
    Subclasses of `evotorch.immutable.ImmutableContainer` inherit from
    `ReadOnlyClonable`.
    """

    def _get_mutable_clone(self, *, memo: dict) -> Any:
        raise NotImplementedError

    def clone(self, *, memo: Optional[dict] = None, preserve_read_only: bool = False) -> Any:
        """
        Get a clone of this read-only object.

        Args:
            memo: Optionally a dictionary which maps from the ids of the
                already cloned objects to their clones. In most scenarios,
                when this method is called from outside, this can be left
                as None.
            preserve_read_only: Whether or not to preserve the read-only
                behavior in the clone.
        Returns:
            The clone of the object.
        """
        if memo is None:
            memo = {}
        if preserve_read_only:
            return super().clone(memo=memo)
        else:
            return self._get_mutable_clone(memo=memo)

    def __copy__(self) -> Any:
        return self.clone(preserve_read_only=True)

    def __deepcopy__(self, memo: Optional[dict]) -> Any:
        if memo is None:
            memo = {}
        return self.clone(memo=memo, preserve_read_only=True)
clone(self, *, memo=None, preserve_read_only=False)
¶
Get a clone of this read-only object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
memo | Optional[dict] | Optionally a dictionary which maps from the ids of the already cloned objects to their clones. In most scenarios, when this method is called from outside, this can be left as None. | None |
preserve_read_only | bool | Whether or not to preserve the read-only behavior in the clone. | False |
Returns:
Type | Description |
---|---|
Any | The clone of the object. |
Source code in evotorch/tools/cloning.py
def clone(self, *, memo: Optional[dict] = None, preserve_read_only: bool = False) -> Any:
    """
    Get a clone of this read-only object.

    Args:
        memo: Optionally a dictionary which maps from the ids of the
            already cloned objects to their clones. In most scenarios,
            when this method is called from outside, this can be left
            as None.
        preserve_read_only: Whether or not to preserve the read-only
            behavior in the clone.
    Returns:
        The clone of the object.
    """
    if memo is None:
        memo = {}
    if preserve_read_only:
        return super().clone(memo=memo)
    else:
        return self._get_mutable_clone(memo=memo)
Serializable (Clonable)
¶
Base class allowing the inheriting classes to become Clonable and picklable.
Any class inheriting from `Serializable` becomes `Clonable` (since `Serializable` is a subclass of `Clonable`) and therefore is expected to define its own `_get_cloned_state(...)` (see the documentation of the class `Clonable` for details).
A `Serializable` class gains a behavior for its `__getstate__`. In this already defined and implemented `__getstate__` method, the resulting dictionary of `_get_cloned_state(...)` is used as the state dictionary. Therefore, for `Serializable` objects, the behavior defined in their `_get_cloned_state(...)` methods affects how they are pickled.
Classes inheriting from `Serializable` are `evotorch.Problem`, `evotorch.Solution`, `evotorch.SolutionBatch`, and `evotorch.distributions.Distribution`. In their `_get_cloned_state(...)` implementations, these classes use `deep_clone(...)` on themselves to make sure that their contained PyTorch tensors are copied using the `.clone()` method, ensuring that those tensors are detached from their old storages during the cloning operation. Thanks to being `Serializable`, their contained tensors are detached from their old storages both at the moment of copying/cloning AND at the moment of pickling.
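As a sketch (with a hypothetical `Record` class, defined at module level so that pickle can find it), the single `_get_cloned_state(...)` implementation drives both cloning and pickling:

import pickle
from evotorch.tools.cloning import Serializable, deep_clone

class Record(Serializable):
    # Hypothetical example class, for illustration only
    def __init__(self, data: list):
        self.data = data

    def _get_cloned_state(self, *, memo: dict) -> dict:
        return deep_clone(self.__dict__, otherwise_deepcopy=True, memo=memo)

r = Record([1.0, 2.0])
restored = pickle.loads(pickle.dumps(r))  # __getstate__ uses _get_cloned_state
assert restored.data == r.data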
Source code in evotorch/tools/cloning.py
class Serializable(Clonable):
    """
    Base class allowing the inheriting classes to become Clonable and picklable.
    Any class inheriting from `Serializable` becomes `Clonable` (since
    `Serializable` is a subclass of `Clonable`) and therefore is expected to
    define its own `_get_cloned_state(...)` (see the documentation of the
    class `Clonable` for details).
    A `Serializable` class gains a behavior for its `__getstate__`. In this
    already defined and implemented `__getstate__` method, the resulting
    dictionary of `_get_cloned_state(...)` is used as the state dictionary.
    Therefore, for `Serializable` objects, the behavior defined in their
    `_get_cloned_state(...)` methods affects how they are pickled.
    Classes inheriting from `Serializable` are `evotorch.Problem`,
    `evotorch.Solution`, `evotorch.SolutionBatch`, and
    `evotorch.distributions.Distribution`. In their `_get_cloned_state(...)`
    implementations, these classes use `deep_clone(...)` on themselves to make
    sure that their contained PyTorch tensors are copied using the `.clone()`
    method, ensuring that those tensors are detached from their old storages
    during the cloning operation. Thanks to being `Serializable`, their
    contained tensors are detached from their old storages both at the moment
    of copying/cloning AND at the moment of pickling.
    """

    def __getstate__(self) -> dict:
        memo = {id(self): self}
        return self._get_cloned_state(memo=memo)
deep_clone(x, *, otherwise_deepcopy=False, otherwise_return=False, otherwise_fail=False, memo=None)
¶
A recursive cloning function similar to the standard `deepcopy`.
The difference between `deep_clone(...)` and `deepcopy(...)` is that `deep_clone(...)`, while recursively traversing, will run the `.clone()` method on the PyTorch tensors it encounters, so that the cloned tensors are forcefully detached from their storages (instead of cloning those storages as well).
At the moment of writing this documentation, the current behavior of PyTorch tensors upon being deep-copied is to clone themselves AND their storages. Therefore, if a PyTorch tensor is a slice of a large tensor (which has a large storage), then the large storage will also be deep-copied, and the newly made clone of the tensor will point to a newly made large storage. One might instead prefer to clone tensors in such a way that the newly made tensor points to a newly made storage that contains just enough data for the tensor (with the unused data being dropped). When such a behavior is desired, one can use this `deep_clone(...)` function.
Upon encountering read-only and/or immutable data, this function will NOT modify the read-only behavior. For example, the deep-clone of a ReadOnlyTensor is still a ReadOnlyTensor, and the deep-clone of a read-only numpy array is still a read-only numpy array. Note that this behavior is different than the `clone()` method of a ReadOnlyTensor and the `copy()` method of a numpy array. The reason for this protective behavior is that, since this is a deep-cloning operation, the encountered tensors and/or arrays might be components of the root object, and changing their read-only attributes might affect the integrity of this root object.
The `deep_clone(...)` function needs to know what to do when an object of unrecognized type is encountered. Therefore, the user is expected to set one of these arguments as True (and leave the others as False): `otherwise_deepcopy`, `otherwise_return`, `otherwise_fail`.
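A short usage sketch: cloning a container that holds a slice of a large tensor detaches the clone from the large storage.

import torch
from evotorch.tools.cloning import deep_clone

big = torch.arange(1_000_000, dtype=torch.float32)
view = big[:10]  # a small slice backed by the large storage

cloned = deep_clone({"slice": view, "label": "demo"}, otherwise_fail=True)

# The values are preserved, but the cloned tensor has its own
# compact storage, independent of `big`:
assert torch.equal(cloned["slice"], view)
assert cloned["slice"].data_ptr() != view.data_ptr()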
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Any | The object which will be deep-cloned. This object can be a standard Python container (i.e. list, tuple, dict, set), an instance of Problem, Solution, SolutionBatch, ObjectArray, ImmutableContainer, Clonable, and also any other type of object if either the argument `otherwise_deepcopy` or the argument `otherwise_return` is set as True. | required |
otherwise_deepcopy | bool | Setting this as True means that, when an unrecognized object is encountered, that object will be deep-copied. To handle shared and cyclic-referencing objects, the `deep_clone(...)` function stores its own memo dictionary. When the control is given to the standard `deepcopy(...)` function, the memo dictionary of `deep_clone(...)` will be passed to `deepcopy`. | False |
otherwise_return | bool | Setting this as True means that, when an unrecognized object is encountered, that object itself will be returned (i.e. will be a part of the created clone). | False |
otherwise_fail | bool | Setting this as True means that, when an unrecognized object is encountered, a TypeError will be raised. | False |
memo | Optional[dict] | Optionally a dictionary. In most scenarios, when this function is called from outside, this is expected to be left as None. | None |
Returns:
Type | Description |
---|---|
Any | The newly made clone of the original object. |
Source code in evotorch/tools/cloning.py
def deep_clone(  # noqa: C901
    x: Any,
    *,
    otherwise_deepcopy: bool = False,
    otherwise_return: bool = False,
    otherwise_fail: bool = False,
    memo: Optional[dict] = None,
) -> Any:
    """
    A recursive cloning function similar to the standard `deepcopy`.
    The difference between `deep_clone(...)` and `deepcopy(...)` is that
    `deep_clone(...)`, while recursively traversing, will run the `.clone()`
    method on the PyTorch tensors it encounters, so that the cloned tensors
    are forcefully detached from their storages (instead of cloning those
    storages as well).
    At the moment of writing this documentation, the current behavior of
    PyTorch tensors upon being deep-copied is to clone themselves AND their
    storages. Therefore, if a PyTorch tensor is a slice of a large tensor
    (which has a large storage), then the large storage will also be
    deep-copied, and the newly made clone of the tensor will point to a newly
    made large storage. One might instead prefer to clone tensors in such a
    way that the newly made tensor points to a newly made storage that
    contains just enough data for the tensor (with the unused data being
    dropped). When such a behavior is desired, one can use this
    `deep_clone(...)` function.
    Upon encountering read-only and/or immutable data, this function will
    NOT modify the read-only behavior. For example, the deep-clone of a
    ReadOnlyTensor is still a ReadOnlyTensor, and the deep-clone of a
    read-only numpy array is still a read-only numpy array. Note that this
    behavior is different than the `clone()` method of a ReadOnlyTensor
    and the `copy()` method of a numpy array. The reason for this
    protective behavior is that, since this is a deep-cloning operation,
    the encountered tensors and/or arrays might be components of the root
    object, and changing their read-only attributes might affect the integrity
    of this root object.
    The `deep_clone(...)` function needs to know what to do when an object
    of unrecognized type is encountered. Therefore, the user is expected to
    set one of these arguments as True (and leave the others as False):
    `otherwise_deepcopy`, `otherwise_return`, `otherwise_fail`.

    Args:
        x: The object which will be deep-cloned. This object can be a standard
            Python container (i.e. list, tuple, dict, set), an instance of
            Problem, Solution, SolutionBatch, ObjectArray, ImmutableContainer,
            Clonable, and also any other type of object if either the argument
            `otherwise_deepcopy` or the argument `otherwise_return` is set as
            True.
        otherwise_deepcopy: Setting this as True means that, when an
            unrecognized object is encountered, that object will be
            deep-copied. To handle shared and cyclic-referencing objects,
            the `deep_clone(...)` function stores its own memo dictionary.
            When the control is given to the standard `deepcopy(...)`
            function, the memo dictionary of `deep_clone(...)` will be passed
            to `deepcopy`.
        otherwise_return: Setting this as True means that, when an
            unrecognized object is encountered, that object itself will be
            returned (i.e. will be a part of the created clone).
        otherwise_fail: Setting this as True means that, when an unrecognized
            object is encountered, a TypeError will be raised.
        memo: Optionally a dictionary. In most scenarios, when this function
            is called from outside, this is expected to be left as None.
    Returns:
        The newly made clone of the original object.
    """
    from .objectarray import ObjectArray
    from .readonlytensor import ReadOnlyTensor

    if memo is None:
        # If a memo dictionary was not given, make a new one now.
        memo = {}

    # Get the id of the object being cloned.
    x_id = id(x)

    if x_id in memo:
        # If the id of the object being cloned is already in the memo dictionary, then this object was
        # previously cloned. We just return that clone.
        return memo[x_id]

    # Count how many of the arguments `otherwise_deepcopy`, `otherwise_return`, and `otherwise_fail` were set as
    # True. In this context, we call these arguments the fallback behaviors.
    fallback_behaviors = (otherwise_deepcopy, otherwise_return, otherwise_fail)
    enabled_behavior_count = sum(1 for behavior in fallback_behaviors if behavior)

    if enabled_behavior_count == 0:
        # If none of the fallback behaviors was enabled, then we raise an error.
        raise ValueError(
            "The action to take with objects of unrecognized types is not known because"
            " none of these arguments was set as True: `otherwise_deepcopy`, `otherwise_return`, `otherwise_fail`."
            " Please set one of these arguments as True."
        )
    elif enabled_behavior_count == 1:
        # If one of the fallback behaviors was enabled, then we received our expected input. We do nothing here.
        pass
    else:
        # If the number of enabled fallback behaviors is an unexpected value, then we raise an error.
        raise ValueError(
            f"The following arguments were received, which is conflicting: otherwise_deepcopy={otherwise_deepcopy},"
            f" otherwise_return={otherwise_return}, otherwise_fail={otherwise_fail}."
            f" Please set exactly one of these arguments as True and leave the others as False."
        )

    # This inner function specifies how the deep_clone function should call itself.
    def call_self(obj: Any) -> Any:
        return deep_clone(
            obj,
            otherwise_deepcopy=otherwise_deepcopy,
            otherwise_return=otherwise_return,
            otherwise_fail=otherwise_fail,
            memo=memo,
        )

    # Below, we handle the cloning behaviors case by case.
    if (x is None) or (x is NotImplemented) or (x is Ellipsis):
        result = deepcopy(x)
    elif isinstance(x, (Number, str, bytes, bytearray)):
        result = deepcopy(x, memo=memo)
    elif isinstance(x, np.ndarray):
        result = x.copy()
        result.flags["WRITEABLE"] = x.flags["WRITEABLE"]
    elif isinstance(x, (ObjectArray, ReadOnlyClonable)):
        result = x.clone(preserve_read_only=True, memo=memo)
    elif isinstance(x, ReadOnlyTensor):
        result = x.clone(preserve_read_only=True)
    elif isinstance(x, torch.Tensor):
        result = x.clone()
    elif isinstance(x, Clonable):
        result = x.clone(memo=memo)
    elif isinstance(x, (dict, OrderedDict)):
        result = type(x)()
        memo[x_id] = result
        for k, v in x.items():
            result[call_self(k)] = call_self(v)
    elif isinstance(x, list):
        result = type(x)()
        memo[x_id] = result
        for item in x:
            result.append(call_self(item))
    elif isinstance(x, set):
        result = type(x)()
        memo[x_id] = result
        for item in x:
            result.add(call_self(item))
    elif isinstance(x, tuple):
        result = []
        memo[x_id] = result
        for item in x:
            result.append(call_self(item))
        if hasattr(x, "_fields"):
            result = type(x)(*result)
        else:
            result = type(x)(result)
        memo[x_id] = result
    else:
        # If the object is not recognized, we use the fallback behavior.
        if otherwise_deepcopy:
            result = deepcopy(x, memo=memo)
        elif otherwise_return:
            result = x
        elif otherwise_fail:
            raise TypeError(f"Do not know how to clone {repr(x)} (of type {type(x)}).")
        else:
            raise RuntimeError("The function `deep_clone` reached an unexpected state. This might be a bug.")

    if (x_id not in memo) and (result is not x):
        # If the newly made clone is still not in the memo dictionary AND the "clone" is not just a reference to
        # the original object, we make sure that it is in the memo dictionary.
        memo[x_id] = result

    # Finally, the result is returned.
    return result
hook
¶
This module contains the Hook class, which is used for event handling and for defining additional behaviors for the class instances that own the Hook.
Hook (MutableSequence)
¶
A Hook stores a list of callable objects to be called for handling certain events. A Hook itself is callable, which invokes the callables stored in its list. If the callables stored by the Hook return list-like objects or dict-like objects, their returned results are accumulated, and then those accumulated results are finally returned by the Hook.
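A minimal sketch of the accumulation behavior (the callables below are hypothetical):

from evotorch.tools.hook import Hook

def report_size(batch):
    return {"batch_size": len(batch)}

def report_note(batch):
    return {"note": "processed"}

hook = Hook([report_size, report_note])
# Dict-like results are merged into a single dictionary:
print(hook([1, 2, 3]))  # {'batch_size': 3, 'note': 'processed'}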
Source code in evotorch/tools/hook.py
class Hook(MutableSequence):
    """
    A Hook stores a list of callable objects to be called for handling
    certain events. A Hook itself is callable, which invokes the callables
    stored in its list. If the callables stored by the Hook return list-like
    objects or dict-like objects, their returned results are accumulated,
    and then those accumulated results are finally returned by the Hook.
    """

    def __init__(
        self,
        callables: Optional[Iterable[Callable]] = None,
        *,
        args: Optional[Iterable] = None,
        kwargs: Optional[Mapping] = None,
    ):
        """
        Initialize the Hook.

        Args:
            callables: A sequence of callables to be stored by the Hook.
            args: Positional arguments which, when the Hook is called,
                are to be passed to every callable stored by the Hook.
                Please note that these positional arguments will be passed
                as the leftmost arguments, and, the other positional
                arguments passed via the `__call__(...)` method of the
                Hook will be added to the right of these arguments.
            kwargs: Keyword arguments which, when the Hook is called,
                are to be passed to every callable stored by the Hook.
                Please note that these keyword arguments could be overridden
                by the keyword arguments passed via the `__call__(...)`
                method of the Hook.
        """
        self._funcs: list = [] if callables is None else list(callables)
        self._args: list = [] if args is None else list(args)
        self._kwargs: dict = {} if kwargs is None else dict(kwargs)

    def __call__(self, *args: Any, **kwargs: Any) -> Optional[Union[dict, list]]:
        """
        Call every callable object stored by the Hook.
        The results of the stored callable objects (which can be dict-like
        or list-like objects) are accumulated and finally returned.

        Args:
            args: Additional positional arguments to be passed to the stored
                callables.
            kwargs: Additional keyword arguments to be passed to the stored
                callables.
        """
        all_args = []
        all_args.extend(self._args)
        all_args.extend(args)

        all_kwargs = {}
        all_kwargs.update(self._kwargs)
        all_kwargs.update(kwargs)

        result: Optional[Union[dict, list]] = None

        for f in self._funcs:
            tmp = f(*all_args, **all_kwargs)
            if tmp is not None:
                if isinstance(tmp, Mapping):
                    if result is None:
                        result = dict(tmp)
                    elif isinstance(result, list):
                        raise TypeError(
                            f"The function {f} returned a dict-like object."
                            f" However, previous function(s) in this hook had returned list-like object(s)."
                            f" Such incompatible results cannot be accumulated."
                        )
                    elif isinstance(result, dict):
                        result.update(tmp)
                    else:
                        raise RuntimeError
                elif isinstance(tmp, Iterable):
                    if result is None:
                        result = list(tmp)
                    elif isinstance(result, list):
                        result.extend(tmp)
                    elif isinstance(result, dict):
                        raise TypeError(
                            f"The function {f} returned a list-like object."
                            f" However, previous function(s) in this hook had returned dict-like object(s)."
                            f" Such incompatible results cannot be accumulated."
                        )
                    else:
                        raise RuntimeError
                else:
                    raise TypeError(
                        f"Expected the function {f} to return None, or a dict-like object, or a list-like object."
                        f" However, the function returned an object of type {repr(type(tmp))}."
                    )

        return result

    def accumulate_dict(self, *args: Any, **kwargs: Any) -> Optional[Union[dict, list]]:
        result = self(*args, **kwargs)
        if result is None:
            return {}
        elif isinstance(result, Mapping):
            return result
        else:
            raise TypeError(
                f"Expected the functions in this hook to accumulate"
                f" dictionary-like objects. Instead, accumulated"
                f" an object of type {type(result)}."
                f" Hint: are the functions registered in this hook"
                f" returning non-dictionary iterables?"
            )

    def accumulate_sequence(self, *args: Any, **kwargs: Any) -> Optional[Union[dict, list]]:
        result = self(*args, **kwargs)
        if result is None:
            return []
        elif isinstance(result, Mapping):
            raise TypeError(
                f"Expected the functions in this hook to accumulate"
                f" sequences (that are NOT dictionaries). Instead, accumulated"
                f" a dict-like object of type {type(result)}."
                f" Hint: are the functions registered in this hook"
                f" returning objects with Mapping interface?"
            )
        else:
            return result

    def _to_string(self) -> str:
        init_args = [repr(self._funcs)]
        if len(self._args) > 0:
            init_args.append(f"args={self._args}")
        if len(self._kwargs) > 0:
            init_args.append(f"kwargs={self._kwargs}")
        s_init_args = ", ".join(init_args)
        return f"{type(self).__name__}({s_init_args})"

    def __repr__(self) -> str:
        return self._to_string()

    def __str__(self) -> str:
        return self._to_string()

    def __getitem__(self, i: Union[int, slice]) -> Union[Callable, "Hook"]:
        if isinstance(i, slice):
            return Hook(self._funcs[i], args=self._args, kwargs=self._kwargs)
        else:
            return self._funcs[i]

    def __setitem__(self, i: Union[int, slice], x: Iterable[Callable]):
        self._funcs[i] = x

    def __delitem__(self, i: Union[int, slice]):
        del self._funcs[i]

    def insert(self, i: int, x: Callable):
        self._funcs.insert(i, x)

    def __len__(self) -> int:
        return len(self._funcs)

    @property
    def args(self) -> list:
        """Positional arguments that will be passed to the stored callables"""
        return self._args

    @property
    def kwargs(self) -> dict:
        """Keyword arguments that will be passed to the stored callables"""
        return self._kwargs
args: list
property
readonly
¶
Positional arguments that will be passed to the stored callables
kwargs: dict
property
readonly
¶
Keyword arguments that will be passed to the stored callables
__call__(self, *args, **kwargs)
special
¶
Call every callable object stored by the Hook. The results of the stored callable objects (which can be dict-like or list-like objects) are accumulated and finally returned.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
args | Any | Additional positional arguments to be passed to the stored callables. | () |
kwargs | Any | Additional keyword arguments to be passed to the stored callables. | {} |
Source code in evotorch/tools/hook.py
def __call__(self, *args: Any, **kwargs: Any) -> Optional[Union[dict, list]]:
    """
    Call every callable object stored by the Hook.
    The results of the stored callable objects (which can be dict-like
    or list-like objects) are accumulated and finally returned.

    Args:
        args: Additional positional arguments to be passed to the stored
            callables.
        kwargs: Additional keyword arguments to be passed to the stored
            callables.
    """
    all_args = []
    all_args.extend(self._args)
    all_args.extend(args)

    all_kwargs = {}
    all_kwargs.update(self._kwargs)
    all_kwargs.update(kwargs)

    result: Optional[Union[dict, list]] = None

    for f in self._funcs:
        tmp = f(*all_args, **all_kwargs)
        if tmp is not None:
            if isinstance(tmp, Mapping):
                if result is None:
                    result = dict(tmp)
                elif isinstance(result, list):
                    raise TypeError(
                        f"The function {f} returned a dict-like object."
                        f" However, previous function(s) in this hook had returned list-like object(s)."
                        f" Such incompatible results cannot be accumulated."
                    )
                elif isinstance(result, dict):
                    result.update(tmp)
                else:
                    raise RuntimeError
            elif isinstance(tmp, Iterable):
                if result is None:
                    result = list(tmp)
                elif isinstance(result, list):
                    result.extend(tmp)
                elif isinstance(result, dict):
                    raise TypeError(
                        f"The function {f} returned a list-like object."
                        f" However, previous function(s) in this hook had returned dict-like object(s)."
                        f" Such incompatible results cannot be accumulated."
                    )
                else:
                    raise RuntimeError
            else:
                raise TypeError(
                    f"Expected the function {f} to return None, or a dict-like object, or a list-like object."
                    f" However, the function returned an object of type {repr(type(tmp))}."
                )

    return result
__init__(self, callables=None, *, args=None, kwargs=None)
special
¶
Initialize the Hook.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
callables | Optional[Iterable[Callable]] | A sequence of callables to be stored by the Hook. | None |
args | Optional[Iterable] | Positional arguments which, when the Hook is called, are to be passed to every callable stored by the Hook. Please note that these positional arguments will be passed as the leftmost arguments, and, the other positional arguments passed via the `__call__(...)` method of the Hook will be added to the right of these arguments. | None |
kwargs | Optional[collections.abc.Mapping] | Keyword arguments which, when the Hook is called, are to be passed to every callable stored by the Hook. Please note that these keyword arguments could be overridden by the keyword arguments passed via the `__call__(...)` method of the Hook. | None |
Source code in evotorch/tools/hook.py
def __init__(
    self,
    callables: Optional[Iterable[Callable]] = None,
    *,
    args: Optional[Iterable] = None,
    kwargs: Optional[Mapping] = None,
):
    """
    Initialize the Hook.

    Args:
        callables: A sequence of callables to be stored by the Hook.
        args: Positional arguments which, when the Hook is called,
            are to be passed to every callable stored by the Hook.
            Please note that these positional arguments will be passed
            as the leftmost arguments, and, the other positional
            arguments passed via the `__call__(...)` method of the
            Hook will be added to the right of these arguments.
        kwargs: Keyword arguments which, when the Hook is called,
            are to be passed to every callable stored by the Hook.
            Please note that these keyword arguments could be overridden
            by the keyword arguments passed via the `__call__(...)`
            method of the Hook.
    """
    self._funcs: list = [] if callables is None else list(callables)
    self._args: list = [] if args is None else list(args)
    self._kwargs: dict = {} if kwargs is None else dict(kwargs)
insert(self, i, x)
¶
misc
¶
Miscellaneous utility functions
DTypeAndDevice (tuple)
¶
ErroneousResult
¶
Representation of a caught error being returned as a result.
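A small usage sketch: `ErroneousResult.call(...)` turns a raised exception into an inspectable (and falsy) return value.

from evotorch.tools.misc import ErroneousResult

result = ErroneousResult.call(int, "not a number")  # int(...) raises ValueError
if isinstance(result, ErroneousResult):
    print("The call failed with:", result.error)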
Source code in evotorch/tools/misc.py
class ErroneousResult:
    """
    Representation of a caught error being returned as a result.
    """

    def __init__(self, error: Exception):
        self.error = error

    def _to_string(self) -> str:
        return f"<{type(self).__name__}, error: {self.error}>"

    def __str__(self) -> str:
        return self._to_string()

    def __repr__(self) -> str:
        return self._to_string()

    def __bool__(self) -> bool:
        return False

    @staticmethod
    def call(f, *args, **kwargs) -> Any:
        """
        Call a function with the given arguments.
        If the function raises an error, wrap the error in an ErroneousResult
        object, and return that ErroneousResult object instead.

        Returns:
            The result of the function if there was no error,
            or an ErroneousResult if there was an error.
        """
        try:
            result = f(*args, **kwargs)
        except Exception as ex:
            result = ErroneousResult(ex)
        return result
call(f, *args, **kwargs)
staticmethod
¶
Call a function with the given arguments. If the function raises an error, wrap the error in an ErroneousResult object, and return that ErroneousResult object instead.
Returns:
Type | Description |
---|---|
Any | The result of the function if there was no error, or an ErroneousResult if there was an error. |
Source code in evotorch/tools/misc.py
@staticmethod
def call(f, *args, **kwargs) -> Any:
    """
    Call a function with the given arguments.
    If the function raises an error, wrap the error in an ErroneousResult
    object, and return that ErroneousResult object instead.

    Returns:
        The result of the function if there was no error,
        or an ErroneousResult if there was an error.
    """
    try:
        result = f(*args, **kwargs)
    except Exception as ex:
        result = ErroneousResult(ex)
    return result
as_tensor(x, *, dtype=None, device=None)
¶
Get the tensor counterpart of the given object `x`.
This function can be used to convert native Python objects to tensors:

    my_tensor = as_tensor([1.0, 2.0, 3.0], dtype="float32")

One can also use this function to convert an existing tensor to another dtype:

    my_new_tensor = as_tensor(my_tensor, dtype="float16")

This function can also be used for moving a tensor from one device to another:

    my_gpu_tensor = as_tensor(my_tensor, device="cuda:0")

This function can also create ObjectArray instances when dtype is given as `object` or `Any` or "object" or "O":

    my_objects = as_tensor([1, {"a": 3}], dtype=object)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Any | Any object to be converted to a tensor. | required |
dtype | Union[str, torch.dtype, numpy.dtype, Type] | Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32) or, for creating an `ObjectArray`, "object" (as string) or `object` or `Any`. If `dtype` is not specified, the default behavior of `torch.as_tensor(...)` will be used, that is, dtype will be inferred from `x`. | None |
device | Union[str, torch.device] | The device in which the resulting tensor will be stored. | None |
Returns:
Type | Description |
---|---|
Iterable | The tensor counterpart of the given object `x`. |
Source code in evotorch/tools/misc.py
def as_tensor(x: Any, *, dtype: Optional[DType] = None, device: Optional[Device] = None) -> Iterable:
    """
    Get the tensor counterpart of the given object `x`.

    This function can be used to convert native Python objects to tensors:

        my_tensor = as_tensor([1.0, 2.0, 3.0], dtype="float32")

    One can also use this function to convert an existing tensor to another
    dtype:

        my_new_tensor = as_tensor(my_tensor, dtype="float16")

    This function can also be used for moving a tensor from one device to
    another:

        my_gpu_tensor = as_tensor(my_tensor, device="cuda:0")

    This function can also create ObjectArray instances when dtype is
    given as `object` or `Any` or "object" or "O":

        my_objects = as_tensor([1, {"a": 3}], dtype=object)

    Args:
        x: Any object to be converted to a tensor.
        dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
            (e.g. torch.float32) or, for creating an `ObjectArray`,
            "object" (as string) or `object` or `Any`.
            If `dtype` is not specified, the default behavior of
            `torch.as_tensor(...)` will be used, that is, dtype will be
            inferred from `x`.
        device: The device in which the resulting tensor will be stored.
    Returns:
        The tensor counterpart of the given object `x`.
    """
    from .objectarray import ObjectArray

    if (dtype is None) and isinstance(x, (torch.Tensor, ObjectArray)):
        if (device is None) or (str(device) == "cpu"):
            return x
        else:
            raise ValueError(
                f"An ObjectArray cannot be moved into a device other than 'cpu'."
                f" The received device is: {device}."
            )
    elif is_dtype_object(dtype):
        if (device is not None) and (str(device) != "cpu"):
            raise ValueError(
                f"An ObjectArray cannot be created on a device other than 'cpu'."
                f" The received device is: {device}."
            )
        if isinstance(x, ObjectArray):
            return x
        else:
            x = list(x)
            n = len(x)
            result = ObjectArray(n)
            result[:] = x
            return result
    else:
        dtype = to_torch_dtype(dtype)
        return torch.as_tensor(x, dtype=dtype, device=device)
cast_tensors_in_container(container, *, dtype=None, device=None, memo=None)
¶
Cast and/or transfer all the tensors in a Python container.
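For example, a nested container can be cast in one call (a sketch; the container layout below is arbitrary):

import torch
from evotorch.tools.misc import cast_tensors_in_container

nested = {"weights": torch.ones(3), "meta": {"bias": torch.zeros(2), "name": "demo"}}
as_f64 = cast_tensors_in_container(nested, dtype="float64")
assert as_f64["meta"]["bias"].dtype == torch.float64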
Parameters:
Name | Type | Description | Default |
---|---|---|---|
container | Any | The Python container holding the tensors to be cast and/or transferred. | required |
dtype | Union[str, torch.dtype, numpy.dtype, Type] | If given as a dtype and not as None, then all the PyTorch tensors in the container will be cast to this dtype. | None |
device | Union[str, torch.device] | If given as a device and not as None, then all the PyTorch tensors in the container will be copied to this device. | None |
memo | Optional[dict] | Optionally a memo dictionary to handle shared objects and circular references. In most scenarios, when calling this function from outside, this is expected as None. | None |
Returns:
Type | Description |
---|---|
Any | A new copy of the original container in which the tensors have the desired dtype and/or device. |
Source code in evotorch/tools/misc.py
def cast_tensors_in_container(
    container: Any,
    *,
    dtype: Optional[DType] = None,
    device: Optional[Device] = None,
    memo: Optional[dict] = None,
) -> Any:
    """
    Cast and/or transfer all the tensors in a Python container.

    Args:
        dtype: If given as a dtype and not as None, then all the PyTorch
            tensors in the container will be cast to this dtype.
        device: If given as a device and not as None, then all the PyTorch
            tensors in the container will be copied to this device.
        memo: Optionally a memo dictionary to handle shared objects and
            circular references. In most scenarios, when calling this
            function from outside, this is expected as None.
    Returns:
        A new copy of the original container in which the tensors have the
        desired dtype and/or device.
    """
    if memo is None:
        memo = {}

    container_id = id(container)
    if container_id in memo:
        return memo[container_id]

    cast_kwargs = {}
    if dtype is not None:
        cast_kwargs["dtype"] = to_torch_dtype(dtype)
    if device is not None:
        cast_kwargs["device"] = device

    def call_self(sub_container: Any) -> Any:
        return cast_tensors_in_container(sub_container, dtype=dtype, device=device, memo=memo)

    if isinstance(container, torch.Tensor):
        result = torch.as_tensor(container, **cast_kwargs)
        memo[container_id] = result
    elif (container is None) or isinstance(container, (Number, str, bytes, bool)):
        result = container
    elif isinstance(container, set):
        result = set()
        memo[container_id] = result
        for x in container:
            result.add(call_self(x))
    elif isinstance(container, Mapping):
        result = {}
        memo[container_id] = result
        for k, v in container.items():
            result[k] = call_self(v)
    elif isinstance(container, tuple):
        result = []
        memo[container_id] = result
        for x in container:
            result.append(call_self(x))
        if hasattr(container, "_fields"):
            result = type(container)(*result)
        else:
            result = type(container)(result)
        memo[container_id] = result
    elif isinstance(container, Iterable):
        result = []
        memo[container_id] = result
        for x in container:
            result.append(call_self(x))
    else:
        raise TypeError(f"Encountered an object of unrecognized type: {type(container)}")

    return result
clip_tensor(x, lb=None, ub=None, ensure_copy=True)
¶
Clip the values of a tensor with respect to the given bounds.
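For example:

import torch
from evotorch.tools.misc import clip_tensor

x = torch.tensor([-2.0, 0.5, 3.0])
print(clip_tensor(x, lb=0.0, ub=1.0))  # tensor([0.0000, 0.5000, 1.0000])

# With no bounds and ensure_copy=False, the original tensor is returned:
assert clip_tensor(x, ensure_copy=False) is x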
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Tensor | The PyTorch tensor whose values will be clipped. | required |
lb | Union[float, Iterable] | Lower bounds, as a PyTorch tensor. Can be None if there are no lower bounds. | None |
ub | Union[float, Iterable] | Upper bounds, as a PyTorch tensor. Can be None if there are no upper bounds. | None |
ensure_copy | bool | If `ensure_copy` is True, the result will be a clipped copy of the original tensor. If `ensure_copy` is False, and both `lb` and `ub` are None, then there is nothing to do, so, the result will be the original tensor itself, not a copy of it. | True |
Returns:
Type | Description |
---|---|
Tensor | The clipped tensor. |
Source code in evotorch/tools/misc.py
@torch.no_grad()
def clip_tensor(
    x: torch.Tensor,
    lb: Optional[Union[float, Iterable]] = None,
    ub: Optional[Union[float, Iterable]] = None,
    ensure_copy: bool = True,
) -> torch.Tensor:
    """
    Clip the values of a tensor with respect to the given bounds.

    Args:
        x: The PyTorch tensor whose values will be clipped.
        lb: Lower bounds, as a PyTorch tensor.
            Can be None if there are no lower bounds.
        ub: Upper bounds, as a PyTorch tensor.
            Can be None if there are no upper bounds.
        ensure_copy: If `ensure_copy` is True, the result will be
            a clipped copy of the original tensor.
            If `ensure_copy` is False, and both `lb` and `ub`
            are None, then there is nothing to do, so, the result
            will be the original tensor itself, not a copy of it.
    Returns:
        The clipped tensor.
    """
    result = x

    if lb is not None:
        lb = torch.as_tensor(lb, dtype=x.dtype, device=x.device)
        result = torch.max(result, lb)

    if ub is not None:
        ub = torch.as_tensor(ub, dtype=x.dtype, device=x.device)
        result = torch.min(result, ub)

    if ensure_copy and result is x:
        result = x.clone()

    return result
clone(x, *, memo=None)
¶
Get a deep copy of the given object.
The cloning is done in no_grad mode.
When this function is used on read-only containers (e.g. ReadOnlyTensor, ImmutableContainer, etc.), the created clones preserve their read-only behaviors. For creating a mutable clone of an immutable object, use their `clone()` method instead.
Returns:
Type | Description |
---|---|
Any | The deep copy of the given object. |
Source code in evotorch/tools/misc.py
@torch.no_grad()
def clone(x: Any, *, memo: Optional[dict] = None) -> Any:
    """
    Get a deep copy of the given object.

    The cloning is done in no_grad mode.

    When this function is used on read-only containers (e.g. ReadOnlyTensor,
    ImmutableContainer, etc.), the created clones preserve their read-only
    behaviors. For creating a mutable clone of an immutable object,
    use their `clone()` method instead.

    Returns:
        The deep copy of the given object.
    """
    from .cloning import deep_clone

    if memo is None:
        memo = {}

    return deep_clone(x, otherwise_deepcopy=True, memo=memo)
device_of(x)
¶
Get the device of the given object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Any | The object whose device is being queried. The object can be a PyTorch tensor, or a PyTorch module (in which case the device of the first parameter tensor will be returned), or an ObjectArray (in which case the returned device will be the cpu device), or any object with the attribute `device`. | required |
Returns:
Type | Description |
---|---|
Union[str, torch.device] | The device of the given object. |
Source code in evotorch/tools/misc.py
def device_of(x: Any) -> Device:
    """
    Get the device of the given object.

    Args:
        x: The object whose device is being queried.
            The object can be a PyTorch tensor, or a PyTorch module
            (in which case the device of the first parameter tensor
            will be returned), or an ObjectArray (in which case
            the returned device will be the cpu device), or any object
            with the attribute `device`.
    Returns:
        The device of the given object.
    """
    if isinstance(x, nn.Module):
        result = None
        for param in x.parameters():
            result = param.device
            break
        if result is None:
            raise ValueError(f"Cannot determine the device of the module {x}")
        return result
    else:
        return x.device
device_of_container(container, *, visited=None, visiting=None)
¶
Get the device of the given container.
It is assumed that the given container stores PyTorch tensors from which the device information will be extracted. If the container contains only basic types like int, float, string, bool, or None, or if the container is empty, then the returned device will be None. If the container contains unrecognized objects, an error will be raised.
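For example (a sketch with an arbitrary nested container):

import torch
from evotorch.tools.misc import device_of_container

container = {"a": torch.zeros(3), "b": [torch.ones(2), 42, "note"]}
print(device_of_container(container))  # cpu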
Parameters:
Name | Type | Description | Default |
---|---|---|---|
container | Any | A sequence or a dictionary of objects from which the device information will be extracted. | required |
visited | Optional[dict] | Optionally a dictionary which stores the (sub)containers which are already visited. In most cases, when this function is called from outside, this is expected as None. | None |
visiting | Optional[set] | Optionally a set which stores the (sub)containers which are being visited. This set is used to prevent recursion errors while handling circular references. In most cases, when this function is called from outside, this argument is expected as None. | None |
Returns:
Type | Description |
---|---|
Optional[torch.device] | The device if available, None otherwise. |
Source code in evotorch/tools/misc.py
def device_of_container(
    container: Any, *, visited: Optional[dict] = None, visiting: Optional[set] = None
) -> Optional[torch.device]:
    """
    Get the device of the given container.

    It is assumed that the given container stores PyTorch tensors from
    which the device information will be extracted.
    If the container contains only basic types like int, float, string,
    bool, or None, or if the container is empty, then the returned device
    will be None.
    If the container contains unrecognized objects, an error will be
    raised.

    Args:
        container: A sequence or a dictionary of objects from which the
            device information will be extracted.
        visited: Optionally a dictionary which stores the (sub)containers
            which are already visited. In most cases, when this function
            is called from outside, this is expected as None.
        visiting: Optionally a set which stores the (sub)containers
            which are being visited. This set is used to prevent recursion
            errors while handling circular references. In most cases,
            when this function is called from outside, this argument is
            expected as None.
    Returns:
        The device if available, None otherwise.
    """
    container_id = id(container)

    if visited is None:
        visited = {}

    if container_id in visited:
        return visited[container_id]

    if visiting is None:
        visiting = set()

    if container_id in visiting:
        return None

    class result:
        device: Optional[torch.device] = None

        @classmethod
        def update(cls, new_device: Optional[torch.device]):
            if new_device is not None:
                if cls.device is None:
                    cls.device = new_device
                else:
                    if new_device != cls.device:
                        raise ValueError(f"Encountered tensors whose `device`s mismatch: {new_device}, {cls.device}")

    def call_self(sub_container):
        return device_of_container(sub_container, visited=visited, visiting=visiting)

    if isinstance(container, torch.Tensor):
        result.update(container.device)
    elif (container is None) or isinstance(container, (Number, str, bytes, bool)):
        pass
    elif isinstance(container, Mapping):
        visiting.add(container_id)
        try:
            for _, v in container.items():
                result.update(call_self(v))
        finally:
            visiting.remove(container_id)
    elif isinstance(container, Iterable):
        visiting.add(container_id)
        try:
            for v in container:
                result.update(call_self(v))
        finally:
            visiting.remove(container_id)
    else:
        raise TypeError(f"Encountered an object of unrecognized type: {type(container)}")

    visited[container_id] = result.device
    return result.device
dtype_of(x)
¶
Get the dtype of the given object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Any | The object whose dtype is being queried. The object can be a PyTorch tensor, or a PyTorch module (in which case the dtype of the first parameter tensor will be returned), or an ObjectArray (in which case the returned dtype will be `object`), or any object with the attribute `dtype`. | required |
Returns:
Type | Description |
---|---|
Union[str, torch.dtype, numpy.dtype, Type] | The dtype of the given object. |
Source code in evotorch/tools/misc.py
def dtype_of(x: Any) -> DType:
    """
    Get the dtype of the given object.

    Args:
        x: The object whose dtype is being queried.
            The object can be a PyTorch tensor, or a PyTorch module
            (in which case the dtype of the first parameter tensor
            will be returned), or an ObjectArray (in which case
            the returned dtype will be `object`), or any object with
            the attribute `dtype`.
    Returns:
        The dtype of the given object.
    """
    if isinstance(x, nn.Module):
        result = None
        for param in x.parameters():
            result = param.dtype
            break
        if result is None:
            raise ValueError(f"Cannot determine the dtype of the module {x}")
        return result
    else:
        return x.dtype
dtype_of_container(container, *, visited=None, visiting=None)
¶
Get the dtype of the given container.
It is assumed that the given container stores PyTorch tensors from which the dtype information will be extracted. If the container contains only basic types like int, float, string, bool, or None, or if the container is empty, then the returned dtype will be None. If the container contains unrecognized objects, an error will be raised.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
container | Any | A sequence or a dictionary of objects from which the dtype information will be extracted. | required |
visited | Optional[dict] | Optionally a dictionary which stores the (sub)containers which are already visited. In most cases, when this function is called from outside, this is expected as None. | None |
visiting | Optional[set] | Optionally a set which stores the (sub)containers which are being visited. This set is used to prevent recursion errors while handling circular references. In most cases, when this function is called from outside, this argument is expected as None. | None |
Returns:
Type | Description |
---|---|
Optional[torch.dtype] | The dtype if available, None otherwise. |
Source code in evotorch/tools/misc.py
def dtype_of_container(
    container: Any, *, visited: Optional[dict] = None, visiting: Optional[set] = None
) -> Optional[torch.dtype]:
    """
    Get the dtype of the given container.

    It is assumed that the given container stores PyTorch tensors from
    which the dtype information will be extracted.
    If the container contains only basic types like int, float, string,
    bool, or None, or if the container is empty, then the returned dtype
    will be None.
    If the container contains unrecognized objects, an error will be
    raised.

    Args:
        container: A sequence or a dictionary of objects from which the
            dtype information will be extracted.
        visited: Optionally a dictionary which stores the (sub)containers
            which are already visited. In most cases, when this function
            is called from outside, this is expected as None.
        visiting: Optionally a set which stores the (sub)containers
            which are being visited. This set is used to prevent recursion
            errors while handling circular references. In most cases,
            when this function is called from outside, this argument is
            expected as None.
    Returns:
        The dtype if available, None otherwise.
    """
    container_id = id(container)

    if visited is None:
        visited = {}

    if container_id in visited:
        return visited[container_id]

    if visiting is None:
        visiting = set()

    if container_id in visiting:
        return None

    class result:
        dtype: Optional[torch.dtype] = None

        @classmethod
        def update(cls, new_dtype: Optional[torch.dtype]):
            if new_dtype is not None:
                if cls.dtype is None:
                    cls.dtype = new_dtype
                else:
                    if new_dtype != cls.dtype:
                        raise ValueError(f"Encountered tensors whose `dtype`s mismatch: {new_dtype}, {cls.dtype}")

    def call_self(sub_container):
        return dtype_of_container(sub_container, visited=visited, visiting=visiting)

    if isinstance(container, torch.Tensor):
        result.update(container.dtype)
    elif (container is None) or isinstance(container, (Number, str, bytes, bool)):
        pass
    elif isinstance(container, Mapping):
        visiting.add(container_id)
        try:
            for _, v in container.items():
                result.update(call_self(v))
        finally:
            visiting.remove(container_id)
    elif isinstance(container, Iterable):
        visiting.add(container_id)
        try:
            for v in container:
                result.update(call_self(v))
        finally:
            visiting.remove(container_id)
    else:
        raise TypeError(f"Encountered an object of unrecognized type: {type(container)}")

    visited[container_id] = result.dtype
    return result.dtype
empty_tensor_like(source, *, shape=None, length=None, dtype=None, device=None)
¶
Make an empty tensor with attributes taken from a source tensor.
The source tensor can be a PyTorch tensor, or an ObjectArray.
Unlike `torch.empty_like(...)`, this function allows one to redefine the shape and/or length of the new empty tensor.
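For example:

import torch
from evotorch.tools.misc import empty_tensor_like

source = torch.zeros(10, 5)
a = empty_tensor_like(source)                       # shape (10, 5), same dtype and device
b = empty_tensor_like(source, length=20)            # shape (20, 5)
c = empty_tensor_like(source, dtype=torch.float16)  # shape (10, 5), float16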
Parameters:
Name | Type | Description | Default |
---|---|---|---|
source | Any | The source tensor whose shape, dtype, and device will be used by default for the new empty tensor. | required |
shape | Union[tuple, int] | If given as None (which is the default), then the shape of the source tensor will be used for the new empty tensor. If given as a tuple or a `torch.Size` instance, then the new empty tensor will be in this given shape instead. This argument cannot be used together with `length`. | None |
length | Optional[int] | If given as None (which is the default), then the length of the new empty tensor will be equal to the length of the source tensor (where length here means the size of the outermost dimension, i.e., what is returned by `len(...)`). If given as an integer, the length of the empty tensor will be this given length instead. This argument cannot be used together with `shape`. | None |
dtype | Union[str, torch.dtype, numpy.dtype, Type] | If given as None, the dtype of the new empty tensor will be the dtype of the source tensor. If given as a `torch.dtype` instance, then the dtype of the tensor will be this given dtype instead. | None |
device | Union[str, torch.device] | If given as None, the device of the new empty tensor will be the device of the source tensor. If given as a `torch.device` instance, then the device of the tensor will be this given device instead. | None |
Returns:
Type | Description |
---|---|
Any | The new empty tensor. |
Source code in evotorch/tools/misc.py
def empty_tensor_like(
    source: Any,
    *,
    shape: Optional[Union[tuple, int]] = None,
    length: Optional[int] = None,
    dtype: Optional[DType] = None,
    device: Optional[Device] = None,
) -> Any:
    """
    Make an empty tensor with attributes taken from a source tensor.

    The source tensor can be a PyTorch tensor, or an ObjectArray.

    Unlike `torch.empty_like(...)`, this function allows one to redefine the
    shape and/or length of the new empty tensor.

    Args:
        source: The source tensor whose shape, dtype, and device will be used
            by default for the new empty tensor.
        shape: If given as None (which is the default), then the shape of the
            source tensor will be used for the new empty tensor.
            If given as a tuple or a `torch.Size` instance, then the new empty
            tensor will be in this given shape instead.
            This argument cannot be used together with `length`.
        length: If given as None (which is the default), then the length of
            the new empty tensor will be equal to the length of the source
            tensor (where length here means the size of the outermost
            dimension, i.e., what is returned by `len(...)`).
            If given as an integer, the length of the empty tensor will be
            this given length instead.
            This argument cannot be used together with `shape`.
        dtype: If given as None, the dtype of the new empty tensor will be
            the dtype of the source tensor.
            If given as a `torch.dtype` instance, then the dtype of the
            tensor will be this given dtype instead.
        device: If given as None, the device of the new empty tensor will be
            the device of the source tensor.
            If given as a `torch.device` instance, then the device of the
            tensor will be this given device instead.
    Returns:
        The new empty tensor.
    """
    from .objectarray import ObjectArray

    if isinstance(source, ObjectArray):
        if length is not None and shape is None:
            n = int(length)
        elif shape is not None and length is None:
            if isinstance(shape, Iterable):
                if len(shape) != 1:
                    raise ValueError(
                        f"An ObjectArray must always be 1-dimensional."
                        f" Therefore, this given shape is incompatible: {shape}"
                    )
                n = int(shape[0])
        elif length is None and shape is None:
            n = len(source)
        else:
            raise ValueError("`length` and `shape` cannot be used together")

        if device is not None:
            if str(device) != "cpu":
                raise ValueError(
                    f"An ObjectArray can only be allocated on cpu. However, the specified `device` is: {device}."
                )

        if dtype is not None:
            if not is_dtype_object(dtype):
                raise ValueError(
                    f"The dtype of an ObjectArray can only be `object`. However, the specified `dtype` is: {dtype}."
                )

        return ObjectArray(n)
    elif isinstance(source, torch.Tensor):
        if length is not None:
            if shape is not None:
                raise ValueError("`length` and `shape` cannot be used together")
            if source.ndim == 0:
                raise ValueError("`length` can only be used when the source tensor is at least 1-dimensional")
            newshape = [int(length)]
            newshape.extend(source.shape[1:])
            shape = tuple(newshape)

        if not ((dtype is None) or isinstance(dtype, torch.dtype)):
            dtype = to_torch_dtype(dtype)

        return torch.empty(
            source.shape if shape is None else shape,
            dtype=(source.dtype if dtype is None else dtype),
            device=(source.device if device is None else device),
        )
    else:
        raise TypeError(f"The source tensor is of an unrecognized type: {type(source)}")
ensure_ray()
¶
Ensure that the ray parallelization engine is initialized. If ray is already initialized, this function does nothing.
ensure_tensor_length_and_dtype(t, length, dtype, about=None, *, allow_scalar=False, device=None)
¶
Return the given sequence as a tensor while also confirming its length, dtype, and device. If the given object is already a tensor conforming to the desired length, dtype, and device, the object will be returned as it is (there will be no copying).
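For example:

import torch
from evotorch.tools.misc import ensure_tensor_length_and_dtype

v = ensure_tensor_length_and_dtype([1, 2, 3], length=3, dtype="float32")

# A scalar is broadcast to the desired length when allow_scalar is False:
w = ensure_tensor_length_and_dtype(0.5, length=4, dtype="float32")
print(w)  # tensor([0.5000, 0.5000, 0.5000, 0.5000])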
Parameters:
Name | Type | Description | Default |
---|---|---|---|
t | Any | The tensor, or a sequence which is convertible to a tensor. | required |
length | int | The length to which the tensor is expected to conform. | required |
dtype | Union[str, torch.dtype, numpy.dtype, Type] | The dtype to which the tensor is expected to conform. | required |
about | Optional[str] | The prefix for the error message. Can be left as None. | None |
allow_scalar | bool | Whether or not to accept scalars in addition to vectors of the desired length. If `allow_scalar` is False, then scalars will be converted to sequences of the desired length. The sequence will contain the same scalar, repeated. If `allow_scalar` is True, then the scalar itself will be converted to a PyTorch scalar, and then will be returned. | False |
device | Union[str, torch.device] | The device in which the sequence is to be stored. If the given sequence is on a different device than the desired device, a copy on the correct device will be made. If device is None, the default behavior of `torch.tensor(...)` will be used, that is: if `t` is already a tensor, the result will be on the same device, otherwise, the result will be on the cpu. | None |
Returns:
Type | Description |
---|---|
Any | The sequence whose correctness in terms of length, dtype, and device is ensured. |
Exceptions:
Type | Description |
---|---|
ValueError | if there is a length mismatch. |
Source code in evotorch/tools/misc.py
@torch.no_grad()
def ensure_tensor_length_and_dtype(
t: Any,
length: int,
dtype: DType,
about: Optional[str] = None,
*,
allow_scalar: bool = False,
device: Optional[Device] = None,
) -> Any:
"""
Return the given sequence as a tensor while also confirming its
length, dtype, and device.
If the given object is already a tensor conforming to the desired
length, dtype, and device, the object will be returned as it is
(there will be no copying).
Args:
t: The tensor, or a sequence which is convertible to a tensor.
length: The length to which the tensor is expected to conform.
dtype: The dtype to which the tensor is expected to conform.
about: The prefix for the error message. Can be left as None.
allow_scalar: Whether or not to accept scalars in addition
to vector of the desired length.
If `allow_scalar` is False, then scalars will be converted
to sequences of the desired length. The sequence will contain
the same scalar, repeated.
If `allow_scalar` is True, then the scalar itself will be
converted to a PyTorch scalar, and then will be returned.
device: The device in which the sequence is to be stored.
If the given sequence is on a different device than the
desired device, a copy on the correct device will be made.
If device is None, the default behavior of `torch.tensor(...)`
will be used, that is: if `t` is already a tensor, the result
will be on the same device, otherwise, the result will be on
the cpu.
Returns:
The sequence whose correctness in terms of length, dtype, and
device is ensured.
Raises:
ValueError: if there is a length mismatch.
"""
device_args = {}
if device is not None:
device_args["device"] = device
t = as_tensor(t, dtype=dtype, **device_args)
if t.ndim == 0:
if allow_scalar:
return t
else:
return t.repeat(length)
else:
if t.ndim != 1 or len(t) != length:
if about is not None:
err_prefix = about + ": "
else:
err_prefix = ""
raise ValueError(
f"{err_prefix}Expected a 1-dimensional tensor of length {length}, but got a tensor with shape: {t.shape}"
)
return t
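A small sketch of the behaviors described above, under the assumption that the function is importable from `evotorch.tools`:
```python
import torch
from evotorch.tools import ensure_tensor_length_and_dtype

# A scalar is repeated to fill the desired length
# (allow_scalar=False is the default):
t = ensure_tensor_length_and_dtype(3.0, 4, "float32")
print(t)  # tensor([3., 3., 3., 3.])

# With allow_scalar=True, the scalar is kept as a 0-dimensional tensor:
s = ensure_tensor_length_and_dtype(3.0, 4, "float32", allow_scalar=True)
print(s.ndim)  # 0

# A length mismatch raises ValueError:
try:
    ensure_tensor_length_and_dtype([1.0, 2.0], 4, "float32", about="demo")
except ValueError as err:
    print(err)  # demo: Expected a 1-dimensional tensor of length 4, ...
```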
expect_none(msg_prefix, **kwargs)
¶
Expect the values associated with the given keyword arguments to be None. If not, raise error.
Parameters:
Name | Type | Description | Default
---|---|---|---
`msg_prefix` | `str` | Prefix of the error message. | required
`kwargs` | | Keyword arguments whose values are expected to be None. | {}
Exceptions:
Type | Description
---|---
`ValueError` | if at least one of the keyword arguments has a value other than None.
Source code in evotorch/tools/misc.py
def expect_none(msg_prefix: str, **kwargs):
"""
Expect the values associated with the given keyword arguments
to be None. If not, raise error.
Args:
msg_prefix: Prefix of the error message.
kwargs: Keyword arguments whose values are expected to be None.
Raises:
ValueError: if at least one of the keyword arguments has a value
other than None.
"""
for k, v in kwargs.items():
if v is not None:
raise ValueError(f"{msg_prefix}: expected `{k}` as None, however, it was found to be {repr(v)}")
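A brief sketch of how this helper can be used; `resize` below is a hypothetical function, not part of the library:
```python
from evotorch.tools import expect_none

def resize(x, *, out=None, dtype=None):
    # Hypothetical validation: when `out` is given, `dtype` must be omitted.
    if out is not None:
        expect_none("While using an `out` tensor", dtype=dtype)
    ...

expect_none("demo", a=None)  # passes silently
expect_none("demo", b=42)
# ValueError: demo: expected `b` as None, however, it was found to be 42
```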
is_bool(x)
¶
Return True if `x` represents a bool.
Parameters:
Name | Type | Description | Default
---|---|---|---
`x` | `Any` | An object whose type is being queried. | required
Returns:
Type | Description
---|---
`bool` | True if `x` is a bool; False otherwise.
Source code in evotorch/tools/misc.py
def is_bool(x: Any) -> bool:
"""
Return True if `x` represents a bool.
Args:
x: An object whose type is being queried.
Returns:
True if `x` is a bool; False otherwise.
"""
if isinstance(x, (bool, np.bool_)):
return True
elif isinstance(x, (torch.Tensor, np.ndarray)):
if x.ndim > 0:
return False
else:
return is_dtype_bool(x.dtype)
else:
return False
is_bool_vector(x)
¶
Return True if `x` is a vector consisting of bools.
Parameters:
Name | Type | Description | Default
---|---|---|---
`x` | `Any` | An object whose elements' types are to be queried. | required
Returns:
Type | Description
---|---
`bool` | True if the elements of `x` are bools; False otherwise.
Source code in evotorch/tools/misc.py
def is_bool_vector(x: Any) -> bool:
"""
Return True if `x` is a vector consisting of bools.
Args:
x: An object whose elements' types are to be queried.
Returns:
True if the elements of `x` are bools; False otherwise.
"""
if isinstance(x, (torch.Tensor, np.ndarray)):
if x.ndim != 1:
return False
else:
return is_dtype_bool(x.dtype)
elif isinstance(x, Iterable):
for item in x:
if not is_bool(item):
return False
return True
else:
return False
is_dtype_bool(t)
¶
Return True if the given dtype is a bool type.
Parameters:
Name | Type | Description | Default
---|---|---|---
`t` | `Union[str, torch.dtype, numpy.dtype, Type]` | The dtype, which can be a dtype string, a numpy dtype, or a PyTorch dtype. | required
Returns:
Type | Description
---|---
`bool` | True if `t` is a bool type; False otherwise.
Source code in evotorch/tools/misc.py
is_dtype_float(t)
¶
Return True if the given dtype is a float type.
Parameters:
Name | Type | Description | Default
---|---|---|---
`t` | `Union[str, torch.dtype, numpy.dtype, Type]` | The dtype, which can be a dtype string, a numpy dtype, or a PyTorch dtype. | required
Returns:
Type | Description
---|---
`bool` | True if `t` is a float type; False otherwise.
Source code in evotorch/tools/misc.py
is_dtype_integer(t)
¶
Return True if the given dtype is an integer type.
Parameters:
Name | Type | Description | Default
---|---|---|---
`t` | `Union[str, torch.dtype, numpy.dtype, Type]` | The dtype, which can be a dtype string, a numpy dtype, or a PyTorch dtype. | required
Returns:
Type | Description
---|---
`bool` | True if `t` is an integer type; False otherwise.
Source code in evotorch/tools/misc.py
def is_dtype_integer(t: DType) -> bool:
"""
Return True if the given dtype is an integer type.
Args:
t: The dtype, which can be a dtype string, a numpy dtype,
or a PyTorch dtype.
Returns:
True if t is an integer type; False otherwise.
"""
t: np.dtype = to_numpy_dtype(t)
return t.kind.startswith("u") or t.kind.startswith("i")
is_dtype_object(dtype)
¶
Return True if the given dtype is `object` or `Any`.
Returns:
Type | Description
---|---
`bool` | True if the given dtype is `object` or `Any`; False otherwise.
Source code in evotorch/tools/misc.py
def is_dtype_object(dtype: DType) -> bool:
"""
Return True if the given dtype is `object` or `Any`.
Returns:
True if the given dtype is `object` or `Any`; False otherwise.
"""
if isinstance(dtype, str):
return dtype in ("object", "Any", "O")
elif dtype is object or dtype is Any:
return True
else:
return False
is_dtype_real(t)
¶
Return True if the given dtype represents real numbers (i.e. if dtype is an integer type or is a float type).
Parameters:
Name | Type | Description | Default
---|---|---|---
`t` | `Union[str, torch.dtype, numpy.dtype, Type]` | The dtype, which can be a dtype string, a numpy dtype, or a PyTorch dtype. | required
Returns:
Type | Description
---|---
`bool` | True if `t` represents a real numbers type; False otherwise.
Source code in evotorch/tools/misc.py
def is_dtype_real(t: DType) -> bool:
"""
Return True if the given dtype represents real numbers
(i.e. if dtype is an integer type or is a float type).
Args:
t: The dtype, which can be a dtype string, a numpy dtype,
or a PyTorch dtype.
Returns:
True if t represents a real numbers type; False otherwise.
"""
return is_dtype_float(t) or is_dtype_integer(t)
is_integer(x)
¶
Return True if `x` is an integer.
Note that this function does NOT consider booleans as integers.
Parameters:
Name | Type | Description | Default
---|---|---|---
`x` | `Any` | An object whose type is being queried. | required
Returns:
Type | Description
---|---
`bool` | True if `x` is an integer; False otherwise.
Source code in evotorch/tools/misc.py
def is_integer(x: Any) -> bool:
"""
Return True if `x` is an integer.
Note that this function does NOT consider booleans as integers.
Args:
x: An object whose type is being queried.
Returns:
True if `x` is an integer; False otherwise.
"""
if is_bool(x):
return False
elif isinstance(x, Integral):
return True
elif isinstance(x, (torch.Tensor, np.ndarray)):
if x.ndim > 0:
return False
else:
return is_dtype_integer(x.dtype)
else:
return False
is_integer_vector(x)
¶
Return True if `x` is a vector consisting of integers.
Parameters:
Name | Type | Description | Default
---|---|---|---
`x` | `Any` | An object whose elements' types are to be queried. | required
Returns:
Type | Description
---|---
`bool` | True if the elements of `x` are integers; False otherwise.
Source code in evotorch/tools/misc.py
def is_integer_vector(x: Any) -> bool:
"""
Return True if `x` is a vector consisting of integers.
Args:
x: An object whose elements' types are to be queried.
Returns:
True if the elements of `x` are integers; False otherwise.
"""
if isinstance(x, (torch.Tensor, np.ndarray)):
if x.ndim != 1:
return False
else:
return is_dtype_integer(x.dtype)
elif isinstance(x, Iterable):
for item in x:
if not is_integer(item):
return False
return True
else:
return False
is_real(x)
¶
Return True if `x` is a real number.
Note that this function does NOT consider booleans as real numbers.
Parameters:
Name | Type | Description | Default
---|---|---|---
`x` | `Any` | An object whose type is being queried. | required
Returns:
Type | Description
---|---
`bool` | True if `x` is a real number; False otherwise.
Source code in evotorch/tools/misc.py
def is_real(x: Any) -> bool:
"""
Return True if `x` is a real number.
Note that this function does NOT consider booleans as real numbers.
Args:
x: An object whose type is being queried.
Returns:
True if `x` is a real number; False otherwise.
"""
if is_bool(x):
return False
elif isinstance(x, Real):
return True
elif isinstance(x, (torch.Tensor, np.ndarray)):
if x.ndim > 0:
return False
else:
return is_dtype_real(x.dtype)
else:
return False
is_real_vector(x)
¶
Return True if `x` is a vector consisting of real numbers.
Parameters:
Name | Type | Description | Default
---|---|---|---
`x` | `Any` | An object whose elements' types are to be queried. | required
Returns:
Type | Description
---|---
`bool` | True if the elements of `x` are real numbers; False otherwise.
Source code in evotorch/tools/misc.py
def is_real_vector(x: Any) -> bool:
"""
Return True if `x` is a vector consisting of real numbers.
Args:
x: An object whose elements' types are to be queried.
Returns:
True if the elements of `x` are real numbers; False otherwise.
"""
if isinstance(x, (torch.Tensor, np.ndarray)):
if x.ndim != 1:
return False
else:
return is_dtype_real(x.dtype)
elif isinstance(x, Iterable):
for item in x:
if not is_real(item):
return False
return True
else:
return False
is_sequence(x)
¶
Return True if `x` is a sequence.
Note that this function considers `str` and `bytes` as scalars, not as sequences.
Parameters:
Name | Type | Description | Default
---|---|---|---
`x` | `Any` | The object whose sequential nature is being queried. | required
Returns:
Type | Description
---|---
`bool` | True if `x` is a sequence; False otherwise.
Source code in evotorch/tools/misc.py
def is_sequence(x: Any) -> bool:
"""
Return True if `x` is a sequence.
Note that this function considers `str` and `bytes` as scalars,
not as sequences.
Args:
x: The object whose sequential nature is being queried.
Returns:
True if `x` is a sequence; False otherwise.
"""
if isinstance(x, (str, bytes)):
return False
elif isinstance(x, (np.ndarray, torch.Tensor)):
return x.ndim > 0
elif isinstance(x, Iterable):
return True
else:
return False
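A small sketch of how this family of type predicates behaves, assuming they are importable from `evotorch.tools`:
```python
import numpy as np
import torch
from evotorch.tools import is_bool, is_integer, is_integer_vector, is_real, is_sequence

print(is_bool(torch.tensor(False)))         # True  (0-dimensional bool tensor)
print(is_integer(True))                     # False (bools do not count as integers)
print(is_real(3.5))                         # True
print(is_integer_vector(np.array([1, 2])))  # True  (1-dimensional integer array)
print(is_integer_vector(np.eye(2)))         # False (2-dimensional, and float dtype)
print(is_sequence("abc"))                   # False (str is treated as a scalar)
print(is_sequence([1, 2, 3]))               # True
```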
is_tensor_on_cpu(tensor)
¶
Return True if the given tensor is stored on the cpu; False otherwise.
make_I(size=None, *, out=None, dtype=None, device=None)
¶
Make a new identity matrix (I), or change an existing tensor into one.
The following example creates a 3x3 identity matrix:
identity_matrix = make_I(3, dtype="float32")
The following example changes an already existing square matrix such that its values will store an identity matrix:
make_I(out=existing_tensor)
Parameters:
Name | Type | Description | Default
---|---|---|---
`size` | `Optional[int]` | A single integer or a tuple containing a single integer, where the integer specifies the length of the target square matrix. In this context, "length" means both rowwise length and columnwise length, since the target is a square matrix. Note that, if the user wishes to fill an existing tensor with identity values, then `size` is expected to be left as None. | None
`out` | `Optional[torch.Tensor]` | Optionally, the existing tensor whose values will be changed so that they represent an identity matrix. If an `out` tensor is given, then `size` is expected as None. | None
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32). If `dtype` is not specified, the default choice of `torch.empty(...)` is used, that is, `torch.float32`. If an `out` tensor is specified, then `dtype` is expected as None. | None
`device` | `Union[str, torch.device]` | The device in which the new tensor will be stored. If not specified, "cpu" will be used. If an `out` tensor is specified, then `device` is expected as None. | None
Returns:
Type | Description
---|---
`Tensor` | The created or modified tensor after placing the I matrix values.
Source code in evotorch/tools/misc.py
def make_I(
size: Optional[int] = None,
*,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
) -> torch.Tensor:
"""
Make a new identity matrix (I), or change an existing tensor into one.
The following example creates a 3x3 identity matrix:
identity_matrix = make_I(3, dtype="float32")
The following example changes an already existing square matrix such that
its values will store an identity matrix:
make_I(out=existing_tensor)
Args:
size: A single integer or a tuple containing a single integer,
where the integer specifies the length of the target square
matrix. In this context, "length" means both rowwise length
and columnwise length, since the target is a square matrix.
Note that, if the user wishes to fill an existing tensor with
identity values, then `size` is expected to be left as None.
out: Optionally, the existing tensor whose values will be changed
so that they represent an identity matrix.
If an `out` tensor is given, then `size` is expected as None.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified, the default choice of
`torch.empty(...)` is used, that is, `torch.float32`.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new tensor will be stored.
If not specified, "cpu" will be used.
If an `out` tensor is specified, then `device` is expected
as None.
Returns:
The created or modified tensor after placing the I matrix values
"""
if size is None:
if out is None:
raise ValueError(
"When the `size` argument is missing, the function `make_I(...)` expects an `out` tensor."
" However, the `out` argument was received as None."
)
size = tuple()
else:
if isinstance(size, tuple):
if len(size) == 1:
size = size[0]
else:
raise ValueError(
f"When the `size` argument is given as a tuple,"
f" the function `make_I(...)` expects this tuple to contain exactly one element."
f" The received tuple is {size}."
)
n = int(size)
size = (n, n)
out = _out_tensor(*size, out=out, dtype=dtype, device=device)
out.zero_()
out.fill_diagonal_(1)
return out
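A short sketch combining the two usage modes described above:
```python
import torch
from evotorch.tools import make_I

I3 = make_I(3, dtype="float32")  # a new 3x3 identity matrix

buf = torch.rand(4, 4)
make_I(out=buf)                  # buf now stores a 4x4 identity matrix in-place
```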
make_empty(*size, dtype=None, device=None)
¶
Make an empty tensor.
Parameters:
Name | Type | Description | Default
---|---|---|---
`size` | `Union[int, torch.Size]` | Shape of the empty tensor to be created, expected as multiple positional arguments of integers, or as a single positional argument containing a tuple of integers. Note that when the user wishes to create an `ObjectArray` (i.e. when `dtype` is given as `object`), then the size is expected as a single integer, or as a single-element tuple containing an integer (because `ObjectArray` can only be one-dimensional). | ()
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32) or, for creating an `ObjectArray`, "object" (as string) or `object` or `Any`. If `dtype` is not specified, the default choice of `torch.empty(...)` is used, that is, `torch.float32`. | None
`device` | `Union[str, torch.device]` | The device in which the new empty tensor will be stored. If not specified, "cpu" will be used. | None
Returns:
Type | Description
---|---
`Iterable` | The new empty tensor, which can be a PyTorch tensor or an `ObjectArray`.
Source code in evotorch/tools/misc.py
def make_empty(
*size: Size,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
) -> Iterable:
"""
Make an empty tensor.
Args:
size: Shape of the empty tensor to be created.
expected as multiple positional arguments of integers,
or as a single positional argument containing a tuple of
integers.
Note that when the user wishes to create an `ObjectArray`
(i.e. when `dtype` is given as `object`), then the size
is expected as a single integer, or as a single-element
tuple containing an integer (because `ObjectArray` can only
be one-dimensional).
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32) or, for creating an `ObjectArray`,
"object" (as string) or `object` or `Any`.
If `dtype` is not specified, the default choice of
`torch.empty(...)` is used, that is, `torch.float32`.
device: The device in which the new empty tensor will be stored.
If not specified, "cpu" will be used.
Returns:
The new empty tensor, which can be a PyTorch tensor or an
`ObjectArray`.
"""
from .objectarray import ObjectArray
if (dtype is not None) and is_dtype_object(dtype):
if (device is None) or (str(device) == "cpu"):
if len(size) == 1:
size = size[0]
return ObjectArray(size)
else:
raise ValueError(
f"Invalid device for ObjectArray: {repr(device)}. Note: an ObjectArray can only be stored on 'cpu'."
)
else:
kwargs = {}
if dtype is not None:
kwargs["dtype"] = to_torch_dtype(dtype)
if device is not None:
kwargs["device"] = device
return torch.empty(*size, **kwargs)
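A short sketch of both branches (a PyTorch tensor vs. an `ObjectArray`):
```python
from evotorch.tools import make_empty

x = make_empty(3, 5, dtype="float32")  # an uninitialized 3x5 float32 tensor
print(x.shape)                         # torch.Size([3, 5])

objs = make_empty(4, dtype=object)     # a 1-dimensional ObjectArray of length 4
objs[0] = {"any": "object"}            # an ObjectArray stores arbitrary Python objects
```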
make_gaussian(*size, center=None, stdev=None, symmetric=False, out=None, dtype=None, device=None, generator=None)
¶
Make a new or existing tensor filled by Gaussian distributed values. This function can work only with float dtypes.
Parameters:
Name | Type | Description | Default
---|---|---|---
`size` | `Union[int, torch.Size]` | Size of the new tensor to be filled with Gaussian distributed values. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple, the tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor instead, then no positional argument is expected. | ()
`center` | `Union[float, Iterable[float], torch.Tensor]` | Center point (i.e. mean) of the Gaussian distribution. Can be a scalar, or a tensor. If not specified, the center point will be taken as 0. Note that, if one specifies `center`, then `stdev` is also expected to be explicitly specified. | None
`stdev` | `Union[float, Iterable[float], torch.Tensor]` | Standard deviation for the Gaussian distributed values. Can be a scalar, or a tensor. If not specified, the standard deviation will be taken as 1. Note that, if one specifies `stdev`, then `center` is also expected to be explicitly specified. | None
`symmetric` | `bool` | Whether or not the values should be sampled in a symmetric (i.e. antithetic) manner. The default is False. | False
`out` | `Optional[torch.Tensor]` | Optionally, the tensor to be filled by Gaussian distributed values. If an `out` tensor is given, then no `size` argument is expected. | None
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32). If `dtype` is not specified, the default choice of `torch.empty(...)` is used, that is, `torch.float32`. If an `out` tensor is specified, then `dtype` is expected as None. | None
`device` | `Union[str, torch.device]` | The device in which the new tensor will be stored. If not specified, "cpu" will be used. If an `out` tensor is specified, then `device` is expected as None. | None
`generator` | `Any` | Pseudo-random number generator to be used when sampling the values. Can be a `torch.Generator`, or an object with a `generator` attribute (such as `Problem`). If left as None, the global generator of PyTorch will be used. | None
Returns:
Type | Description
---|---
`Tensor` | The created or modified tensor after placing the Gaussian distributed values.
Source code in evotorch/tools/misc.py
def make_gaussian(
*size: Size,
center: Optional[RealOrVector] = None,
stdev: Optional[RealOrVector] = None,
symmetric: bool = False,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
generator: Any = None,
) -> torch.Tensor:
"""
Make a new or existing tensor filled by Gaussian distributed values.
This function can work only with float dtypes.
Args:
size: Size of the new tensor to be filled with Gaussian distributed
values. This can be given as multiple positional arguments, each
such positional argument being an integer, or as a single
positional argument of a tuple, the tuple containing multiple
integers. Note that, if the user wishes to fill an existing
tensor instead, then no positional argument is expected.
center: Center point (i.e. mean) of the Gaussian distribution.
Can be a scalar, or a tensor.
If not specified, the center point will be taken as 0.
Note that, if one specifies `center`, then `stdev` is also
expected to be explicitly specified.
stdev: Standard deviation for the Gaussian distributed values.
Can be a scalar, or a tensor.
If not specified, the standard deviation will be taken as 1.
Note that, if one specifies `stdev`, then `center` is also
expected to be explicitly specified.
symmetric: Whether or not the values should be sampled in a
symmetric (i.e. antithetic) manner.
The default is False.
out: Optionally, the tensor to be filled by Gaussian distributed
values. If an `out` tensor is given, then no `size` argument is
expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified, the default choice of
`torch.empty(...)` is used, that is, `torch.float32`.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new tensor will be stored.
If not specified, "cpu" will be used.
If an `out` tensor is specified, then `device` is expected
as None.
generator: Pseudo-random number generator to be used when sampling
the values. Can be a `torch.Generator`, or an object with
a `generator` attribute (such as `Problem`).
If left as None, the global generator of PyTorch will be used.
Returns:
The created or modified tensor after placing the Gaussian
distributed values.
"""
scalar_requested = _scalar_requested(*size)
if scalar_requested:
size = (1,)
out = _out_tensor(*size, out=out, dtype=dtype, device=device)
gen_kwargs = _generator_kwargs(generator)
if symmetric:
leftmost_dim = out.shape[0]
if (leftmost_dim % 2) != 0:
raise ValueError(
f"Symmetric sampling cannot be done if the leftmost dimension of the target tensor is odd."
f" The shape of the target tensor is: {repr(out.shape)}."
)
out[0::2, ...].normal_(**gen_kwargs)
out[1::2, ...] = out[0::2, ...]
out[1::2, ...] *= -1
else:
out.normal_(**gen_kwargs)
if (center is None) and (stdev is None):
pass # do nothing
elif (center is not None) and (stdev is not None):
stdev = torch.as_tensor(stdev, dtype=out.dtype, device=out.device)
out *= stdev
center = torch.as_tensor(center, dtype=out.dtype, device=out.device)
out += center
else:
raise ValueError(
f"Please either specify none of `stdev` and `center`, or both of them."
f" Currently, `center` is {center}"
f" and `stdev` is {stdev}."
)
if scalar_requested:
out = out[0]
return out
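A small sketch of the sampling modes described above; note that symmetric sampling requires an even leftmost dimension:
```python
import torch
from evotorch.tools import make_gaussian

torch.manual_seed(0)

a = make_gaussian(4, 3)                          # standard normal, shape (4, 3)
b = make_gaussian(4, 3, center=10.0, stdev=2.0)  # mean 10, stdev 2

# Antithetic sampling: rows come in (+x, -x) pairs, so the leftmost
# dimension must be even:
c = make_gaussian(4, 3, symmetric=True)
print(torch.allclose(c[0], -c[1]))  # True
```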
make_nan(*size, out=None, dtype=None, device=None)
¶
Make a new tensor filled with NaN, or fill an existing tensor with NaN.
The following example creates a float32 tensor filled with NaN values, of shape (3, 5):
nan_values = make_nan(3, 5, dtype="float32")
The following example fills an existing tensor with NaNs.
make_nan(out=existing_tensor)
Parameters:
Name | Type | Description | Default
---|---|---|---
`size` | `Union[int, torch.Size]` | Size of the new tensor to be filled with NaNs. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple, the tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor with NaN values, then no positional argument is expected. | ()
`out` | `Optional[torch.Tensor]` | Optionally, the tensor to be filled by NaN values. If an `out` tensor is given, then no `size` argument is expected. | None
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32). If `dtype` is not specified, the default choice of `torch.empty(...)` is used, that is, `torch.float32`. If an `out` tensor is specified, then `dtype` is expected as None. | None
`device` | `Union[str, torch.device]` | The device in which the new tensor will be stored. If not specified, "cpu" will be used. If an `out` tensor is specified, then `device` is expected as None. | None
Returns:
Type | Description
---|---
`Tensor` | The created or modified tensor after placing NaN values.
Source code in evotorch/tools/misc.py
def make_nan(
*size: Size,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
) -> torch.Tensor:
"""
Make a new tensor filled with NaN, or fill an existing tensor with NaN.
The following example creates a float32 tensor filled with NaN values,
of shape (3, 5):
nan_values = make_nan(3, 5, dtype="float32")
The following example fills an existing tensor with NaNs.
make_nan(out=existing_tensor)
Args:
size: Size of the new tensor to be filled with NaNs.
This can be given as multiple positional arguments, each such
positional argument being an integer, or as a single positional
argument of a tuple, the tuple containing multiple integers.
Note that, if the user wishes to fill an existing tensor with
NaN values, then no positional argument is expected.
out: Optionally, the tensor to be filled by NaN values.
If an `out` tensor is given, then no `size` argument is expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified, the default choice of
`torch.empty(...)` is used, that is, `torch.float32`.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new tensor will be stored.
If not specified, "cpu" will be used.
If an `out` tensor is specified, then `device` is expected
as None.
Returns:
The created or modified tensor after placing NaN values.
"""
if _scalar_requested(*size):
return _scalar_tensor(float("nan"), out=out, dtype=dtype, device=device)
else:
out = _out_tensor(*size, out=out, dtype=dtype, device=device)
out[:] = float("nan")
return out
make_ones(*size, out=None, dtype=None, device=None)
¶
Make a new tensor filled with 1, or fill an existing tensor with 1.
The following example creates a float32 tensor filled with 1 values, of shape (3, 5):
ones_values = make_ones(3, 5, dtype="float32")
The following example fills an existing tensor with 1s:
make_ones(out=existing_tensor)
Parameters:
Name | Type | Description | Default
---|---|---|---
`size` | `Union[int, torch.Size]` | Size of the new tensor to be filled with 1. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple, the tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor with 1 values, then no positional argument is expected. | ()
`out` | `Optional[torch.Tensor]` | Optionally, the tensor to be filled by 1 values. If an `out` tensor is given, then no `size` argument is expected. | None
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32). If `dtype` is not specified, the default choice of `torch.empty(...)` is used, that is, `torch.float32`. If an `out` tensor is specified, then `dtype` is expected as None. | None
`device` | `Union[str, torch.device]` | The device in which the new tensor will be stored. If not specified, "cpu" will be used. If an `out` tensor is specified, then `device` is expected as None. | None
Returns:
Type | Description
---|---
`Tensor` | The created or modified tensor after placing 1 values.
Source code in evotorch/tools/misc.py
def make_ones(
*size: Size,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
) -> torch.Tensor:
"""
Make a new tensor filled with 1, or fill an existing tensor with 1.
The following example creates a float32 tensor filled with 1 values,
of shape (3, 5):
ones_values = make_ones(3, 5, dtype="float32")
The following example fills an existing tensor with 1s:
make_ones(out=existing_tensor)
Args:
size: Size of the new tensor to be filled with 1.
This can be given as multiple positional arguments, each such
positional argument being an integer, or as a single positional
argument of a tuple, the tuple containing multiple integers.
Note that, if the user wishes to fill an existing tensor with
1 values, then no positional argument is expected.
out: Optionally, the tensor to be filled by 1 values.
If an `out` tensor is given, then no `size` argument is expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified, the default choice of
`torch.empty(...)` is used, that is, `torch.float32`.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new tensor will be stored.
If not specified, "cpu" will be used.
If an `out` tensor is specified, then `device` is expected
as None.
Returns:
The created or modified tensor after placing 1 values.
"""
if _scalar_requested(*size):
return _scalar_tensor(1, out=out, dtype=dtype, device=device)
else:
out = _out_tensor(*size, out=out, dtype=dtype, device=device)
out[:] = 1
return out
make_randint(*size, n, out=None, dtype=None, device=None, generator=None)
¶
Make a new or existing tensor filled by random integers.
The integers are uniformly distributed within `[0 ... n-1]`. This function can be used with integer or float dtypes.
Parameters:
Name | Type | Description | Default
---|---|---|---
`size` | `Union[int, torch.Size]` | Size of the new tensor to be filled with uniformly distributed values. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple, the tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor instead, then no positional argument is expected. | ()
`n` | `Union[int, float, torch.Tensor]` | Number of choice(s) for integer sampling. The lowest possible value will be 0, and the highest possible value will be n - 1. `n` can be a scalar, or a tensor. | required
`out` | `Optional[torch.Tensor]` | Optionally, the tensor to be filled by the random integers. If an `out` tensor is given, then no `size` argument is expected. | None
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "int64") or a PyTorch dtype (e.g. torch.int64). If `dtype` is not specified, torch.int64 will be used. | None
`device` | `Union[str, torch.device]` | The device in which the new tensor will be stored. If not specified, "cpu" will be used. If an `out` tensor is specified, then `device` is expected as None. | None
`generator` | `Any` | Pseudo-random number generator to be used when sampling the values. Can be a `torch.Generator`, or an object with a `generator` attribute (such as `Problem`). If left as None, the global generator of PyTorch will be used. | None
Returns:
Type | Description
---|---
`Tensor` | The created or modified tensor after placing the uniformly distributed values.
Source code in evotorch/tools/misc.py
def make_randint(
*size: Size,
n: Union[int, float, torch.Tensor],
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
generator: Any = None,
) -> torch.Tensor:
"""
Make a new or existing tensor filled by random integers.
The integers are uniformly distributed within `[0 ... n-1]`.
This function can be used with integer or float dtypes.
Args:
size: Size of the new tensor to be filled with uniformly distributed
values. This can be given as multiple positional arguments, each
such positional argument being an integer, or as a single
positional argument of a tuple, the tuple containing multiple
integers. Note that, if the user wishes to fill an existing
tensor instead, then no positional argument is expected.
n: Number of choice(s) for integer sampling.
The lowest possible value will be 0, and the highest possible
value will be n - 1.
`n` can be a scalar, or a tensor.
out: Optionally, the tensor to be filled by the random integers.
If an `out` tensor is given, then no `size` argument is
expected.
dtype: Optionally a string (e.g. "int64") or a PyTorch dtype
(e.g. torch.int64).
If `dtype` is not specified, torch.int64 will be used.
device: The device in which the new tensor will be stored.
If not specified, "cpu" will be used.
If an `out` tensor is specified, then `device` is expected
as None.
generator: Pseudo-random number generator to be used when sampling
the values. Can be a `torch.Generator`, or an object with
a `generator` attribute (such as `Problem`).
If left as None, the global generator of PyTorch will be used.
Returns:
The created or modified tensor after placing the uniformly
distributed values.
"""
scalar_requested = _scalar_requested(*size)
if scalar_requested:
size = (1,)
if (dtype is None) and (out is None):
dtype = torch.int64
out = _out_tensor(*size, out=out, dtype=dtype, device=device)
gen_kwargs = _generator_kwargs(generator)
out.random_(**gen_kwargs)
out %= n
if scalar_requested:
out = out[0]
return out
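A brief usage sketch:
```python
import torch
from evotorch.tools import make_randint

x = make_randint(10, n=5)  # 10 integers drawn uniformly from {0, 1, 2, 3, 4}
print(x.dtype)             # torch.int64, the default when no dtype/out is given

buf = torch.empty(10, dtype=torch.int64)
make_randint(n=5, out=buf)  # fill an existing tensor in-place
```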
make_tensor(data, *, dtype=None, device=None, read_only=False)
¶
Make a new tensor.
This function can be used to create PyTorch tensors, or ObjectArray instances with or without read-only behavior.
The following example creates a 2-dimensional PyTorch tensor:
my_tensor = make_tensor(
[[1, 2], [3, 4]],
dtype="float32", # alternatively, torch.float32
device="cpu",
)
The following example creates an ObjectArray from a list that contains arbitrary data:
my_obj_tensor = make_tensor(["a_string", (1, 2)], dtype=object)
Parameters:
Name | Type | Description | Default
---|---|---|---
`data` | `Any` | The data to be converted to a tensor. If one wishes to create a PyTorch tensor, this can be anything that can be stored by a PyTorch tensor. If one wishes to create an `ObjectArray` and therefore passes `dtype=object`, then the provided `data` is expected as an `Iterable`. | required
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32"), or a PyTorch dtype (e.g. torch.float32), or `object` or "object" (as a string) or `Any` if one wishes to create an `ObjectArray`. If `dtype` is not specified, it will be assumed that the user wishes to create a PyTorch tensor (not an `ObjectArray`) and then `dtype` will be inferred from the provided `data` (according to the default behavior of PyTorch). | None
`device` | `Union[str, torch.device]` | The device in which the tensor will be stored. If `device` is not specified, it will be understood from the given `data` (according to the default behavior of PyTorch). | None
`read_only` | `bool` | Whether or not the created tensor will be read-only. By default, this is False. | False
Returns:
Type | Description
---|---
`Iterable` | A PyTorch tensor or an ObjectArray.
Source code in evotorch/tools/misc.py
def make_tensor(
data: Any,
*,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
read_only: bool = False,
) -> Iterable:
"""
Make a new tensor.
This function can be used to create PyTorch tensors, or ObjectArray
instances with or without read-only behavior.
The following example creates a 2-dimensional PyTorch tensor:
my_tensor = make_tensor(
[[1, 2], [3, 4]],
dtype="float32", # alternatively, torch.float32
device="cpu",
)
The following example creates an ObjectArray from a list that contains
arbitrary data:
my_obj_tensor = make_tensor(["a_string", (1, 2)], dtype=object)
Args:
data: The data to be converted to a tensor.
If one wishes to create a PyTorch tensor, this can be anything
that can be stored by a PyTorch tensor.
If one wishes to create an `ObjectArray` and therefore passes
`dtype=object`, then the provided `data` is expected as an
`Iterable`.
dtype: Optionally a string (e.g. "float32"), or a PyTorch dtype
(e.g. torch.float32), or `object` or "object" (as a string)
or `Any` if one wishes to create an `ObjectArray`.
If `dtype` is not specified, it will be assumed that the user
wishes to create a PyTorch tensor (not an `ObjectArray`) and
then `dtype` will be inferred from the provided `data`
(according to the default behavior of PyTorch).
device: The device in which the tensor will be stored.
If `device` is not specified, it will be understood from the
given `data` (according to the default behavior of PyTorch).
read_only: Whether or not the created tensor will be read-only.
By default, this is False.
Returns:
A PyTorch tensor or an ObjectArray.
"""
from .objectarray import ObjectArray
from .readonlytensor import as_read_only_tensor
if (dtype is not None) and is_dtype_object(dtype):
if not hasattr(data, "__len__"):
data = list(data)
n = len(data)
result = ObjectArray(n)
result[:] = data
else:
kwargs = {}
if dtype is not None:
kwargs["dtype"] = to_torch_dtype(dtype)
if device is not None:
kwargs["device"] = device
result = torch.tensor(data, **kwargs)
if read_only:
result = as_read_only_tensor(result)
return result
make_uniform(*size, lb=None, ub=None, out=None, dtype=None, device=None, generator=None)
¶
Make a new or existing tensor filled by uniformly distributed values. Both lower and upper bounds are inclusive. This function can work with both float and int dtypes.
Parameters:
Name | Type | Description | Default
---|---|---|---
`size` | `Union[int, torch.Size]` | Size of the new tensor to be filled with uniformly distributed values. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple, the tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor instead, then no positional argument is expected. | ()
`lb` | `Union[float, Iterable[float], torch.Tensor]` | Lower bound for the uniformly distributed values. Can be a scalar, or a tensor. If not specified, the lower bound will be taken as 0. Note that, if one specifies `lb`, then `ub` is also expected to be explicitly specified. | None
`ub` | `Union[float, Iterable[float], torch.Tensor]` | Upper bound for the uniformly distributed values. Can be a scalar, or a tensor. If not specified, the upper bound will be taken as 1. Note that, if one specifies `ub`, then `lb` is also expected to be explicitly specified. | None
`out` | `Optional[torch.Tensor]` | Optionally, the tensor to be filled by uniformly distributed values. If an `out` tensor is given, then no `size` argument is expected. | None
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32). If `dtype` is not specified, the default choice of `torch.empty(...)` is used, that is, `torch.float32`. If an `out` tensor is specified, then `dtype` is expected as None. | None
`device` | `Union[str, torch.device]` | The device in which the new tensor will be stored. If not specified, "cpu" will be used. If an `out` tensor is specified, then `device` is expected as None. | None
`generator` | `Any` | Pseudo-random number generator to be used when sampling the values. Can be a `torch.Generator`, or an object with a `generator` attribute (such as `Problem`). If left as None, the global generator of PyTorch will be used. | None
Returns:
Type | Description
---|---
`Tensor` | The created or modified tensor after placing the uniformly distributed values.
Source code in evotorch/tools/misc.py
def make_uniform(
*size: Size,
lb: Optional[RealOrVector] = None,
ub: Optional[RealOrVector] = None,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
generator: Any = None,
) -> torch.Tensor:
"""
Make a new or existing tensor filled by uniformly distributed values.
Both lower and upper bounds are inclusive.
This function can work with both float and int dtypes.
Args:
size: Size of the new tensor to be filled with uniformly distributed
values. This can be given as multiple positional arguments, each
such positional argument being an integer, or as a single
positional argument of a tuple, the tuple containing multiple
integers. Note that, if the user wishes to fill an existing
tensor instead, then no positional argument is expected.
lb: Lower bound for the uniformly distributed values.
Can be a scalar, or a tensor.
If not specified, the lower bound will be taken as 0.
Note that, if one specifies `lb`, then `ub` is also expected to
be explicitly specified.
ub: Upper bound for the uniformly distributed values.
Can be a scalar, or a tensor.
If not specified, the upper bound will be taken as 1.
Note that, if one specifies `ub`, then `lb` is also expected to
be explicitly specified.
out: Optionally, the tensor to be filled by uniformly distributed
values. If an `out` tensor is given, then no `size` argument is
expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified, the default choice of
`torch.empty(...)` is used, that is, `torch.float32`.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new tensor will be stored.
If not specified, "cpu" will be used.
If an `out` tensor is specified, then `device` is expected
as None.
generator: Pseudo-random number generator to be used when sampling
the values. Can be a `torch.Generator`, or an object with
a `generator` attribute (such as `Problem`).
If left as None, the global generator of PyTorch will be used.
Returns:
The created or modified tensor after placing the uniformly
distributed values.
"""
scalar_requested = _scalar_requested(*size)
if scalar_requested:
size = (1,)
def _invalid_bound_args():
raise ValueError(
f"Expected both `lb` and `ub` as None, or both `lb` and `ub` as not None."
f" It appears that one of them is None, while the other is not."
f" lb: {repr(lb)}."
f" ub: {repr(ub)}."
)
out = _out_tensor(*size, out=out, dtype=dtype, device=device)
gen_kwargs = _generator_kwargs(generator)
def _cast_bounds():
nonlocal lb, ub
lb = torch.as_tensor(lb, dtype=out.dtype, device=out.device)
ub = torch.as_tensor(ub, dtype=out.dtype, device=out.device)
if out.dtype == torch.bool:
out.random_(**gen_kwargs)
if (lb is None) and (ub is None):
pass # nothing to do
elif (lb is not None) and (ub is not None):
_cast_bounds()
lb_shape_matches = lb.shape == out.shape
ub_shape_matches = ub.shape == out.shape
if (not lb_shape_matches) or (not ub_shape_matches):
all_false = torch.zeros_like(out)
if not lb_shape_matches:
lb = lb | all_false
if not ub_shape_matches:
ub = ub | all_false
mask_for_always_false = (~lb) & (~ub)
mask_for_always_true = lb & ub
out[mask_for_always_false] = False
out[mask_for_always_true] = True
else:
_invalid_bound_args()
elif out.dtype in (torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64):
out.random_(**gen_kwargs)
if (lb is None) and (ub is None):
out %= 2
elif (lb is not None) and (ub is not None):
_cast_bounds()
diff = (ub - lb) + 1
out -= lb
out %= diff
out += lb
else:
_invalid_bound_args()
else:
out.uniform_(**gen_kwargs)
if (lb is None) and (ub is None):
pass # nothing to do
elif (lb is not None) and (ub is not None):
_cast_bounds()
diff = ub - lb
out *= diff
out += lb
else:
_invalid_bound_args()
if scalar_requested:
out = out[0]
return out
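A brief sketch, illustrating that for integer dtypes both bounds are inclusive:
```python
import torch
from evotorch.tools import make_uniform

x = make_uniform(5, lb=-1.0, ub=1.0)  # floats drawn from [-1, 1]

# With an integer dtype, both bounds are inclusive:
y = make_uniform(100, lb=0, ub=9, dtype="int64")
print(int(y.min()) >= 0 and int(y.max()) <= 9)  # True
```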
make_zeros(*size, out=None, dtype=None, device=None)
¶
Make a new tensor filled with 0, or fill an existing tensor with 0.
The following example creates a float32 tensor filled with 0 values, of shape (3, 5):
zero_values = make_zeros(3, 5, dtype="float32")
The following example fills an existing tensor with 0s:
make_zeros(out=existing_tensor)
Parameters:
Name | Type | Description | Default
---|---|---|---
`size` | `Union[int, torch.Size]` | Size of the new tensor to be filled with 0. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple, the tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor with 0 values, then no positional argument is expected. | ()
`out` | `Optional[torch.Tensor]` | Optionally, the tensor to be filled by 0 values. If an `out` tensor is given, then no `size` argument is expected. | None
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32). If `dtype` is not specified, the default choice of `torch.empty(...)` is used, that is, `torch.float32`. If an `out` tensor is specified, then `dtype` is expected as None. | None
`device` | `Union[str, torch.device]` | The device in which the new tensor will be stored. If not specified, "cpu" will be used. If an `out` tensor is specified, then `device` is expected as None. | None
Returns:
Type | Description
---|---
`Tensor` | The created or modified tensor after placing 0 values.
Source code in evotorch/tools/misc.py
def make_zeros(
*size: Size,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
) -> torch.Tensor:
"""
Make a new tensor filled with 0, or fill an existing tensor with 0.
The following example creates a float32 tensor filled with 0 values,
of shape (3, 5):
zero_values = make_zeros(3, 5, dtype="float32")
The following example fills an existing tensor with 0s:
make_zeros(out=existing_tensor)
Args:
size: Size of the new tensor to be filled with 0.
This can be given as multiple positional arguments, each such
positional argument being an integer, or as a single positional
argument of a tuple, the tuple containing multiple integers.
Note that, if the user wishes to fill an existing tensor with
0 values, then no positional argument is expected.
out: Optionally, the tensor to be filled by 0 values.
If an `out` tensor is given, then no `size` argument is expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified, the default choice of
`torch.empty(...)` is used, that is, `torch.float32`.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new tensor will be stored.
If not specified, "cpu" will be used.
If an `out` tensor is specified, then `device` is expected
as None.
Returns:
The created or modified tensor after placing 0 values.
"""
if _scalar_requested(*size):
return _scalar_tensor(0, out=out, dtype=dtype, device=device)
else:
out = _out_tensor(*size, out=out, dtype=dtype, device=device)
out.zero_()
return out
message_from(sender, message)
¶
Prepend the sender object's name and id to a string message.
Let us imagine that we have a class named `Example`:
from evotorch.tools import message_from
class Example:
    def say_hello(self):
        print(message_from(self, "Hello!"))
Let us now instantiate this class and use its `say_hello` method:
ex = Example()
ex.say_hello()
The output becomes something like this:
Instance of `Example` (id:...) -- Hello!
Parameters:
Name | Type | Description | Default
---|---|---|---
`sender` | `object` | The object which produces the message. | required
`message` | `Any` | The message, as something that can be converted to string. | required
Returns:
Type | Description
---|---
`str` | The new message string, with the details regarding the sender object inserted to the beginning.
Source code in evotorch/tools/misc.py
def message_from(sender: object, message: Any) -> str:
"""
Prepend the sender object's name and id to a string message.
Let us imagine that we have a class named `Example`:
```python
from evotorch.tools import message_from
class Example:
def say_hello(self):
print(message_from(self, "Hello!"))
```
Let us now instantiate this class and use its `say_hello` method:
```python
ex = Example()
ex.say_hello()
```
The output becomes something like this:
```
Instance of `Example` (id:...) -- Hello!
```
Args:
sender: The object which produces the message
message: The message, as something that can be converted to string
Returns:
The new message string, with the details regarding the sender object
inserted to the beginning.
"""
sender_type = type(sender).__name__
sender_id = id(sender)
return f"Instance of `{sender_type}` (id:{sender_id}) -- {message}"
modify_tensor(original, target, lb=None, ub=None, max_change=None, in_place=False)
¶
Return the modified version of the original tensor, with bounds checking.
Parameters:
Name | Type | Description | Default
---|---|---|---
`original` | `Tensor` | The original tensor. | required
`target` | `Tensor` | The target tensor which contains the values to replace the old ones in the original tensor. | required
`lb` | `Union[float, torch.Tensor]` | The lower bound(s), as a scalar or as a tensor. Values below these bounds are clipped in the resulting tensor. None means -inf. | None
`ub` | `Union[float, torch.Tensor]` | The upper bound(s), as a scalar or as a tensor. Values above these bounds are clipped in the resulting tensor. None means +inf. | None
`max_change` | `Union[float, torch.Tensor]` | The ratio of allowed change. In more detail, when given as a real number r, modifications are allowed only within `[original-(r*abs(original)) ... original+(r*abs(original))]`. Modifications beyond this interval are clipped. This argument can also be left as None if no such limitation is needed. | None
`in_place` | `bool` | Provide this as True if you wish the modification to be done within the original tensor. The default value of this argument is False, which means the original tensor is not changed, and its modified version is returned as an independent copy. | False
Returns:
Type | Description
---|---
`Tensor` | The modified tensor.
Source code in evotorch/tools/misc.py
@torch.no_grad()
def modify_tensor(
original: torch.Tensor,
target: torch.Tensor,
lb: Optional[Union[float, torch.Tensor]] = None,
ub: Optional[Union[float, torch.Tensor]] = None,
max_change: Optional[Union[float, torch.Tensor]] = None,
in_place: bool = False,
) -> torch.Tensor:
"""Return the modified version of the original tensor, with bounds checking.
Args:
original: The original tensor.
target: The target tensor which contains the values to replace the
old ones in the original tensor.
lb: The lower bound(s), as a scalar or as a tensor.
Values below these bounds are clipped in the resulting tensor.
None means -inf.
ub: The upper bound(s), as a scalar or as a tensor.
Values above these bounds are clipped in the resulting tensor.
None means +inf.
max_change: The ratio of allowed change.
In more detail, when given as a real number r,
modifications are allowed only within
``[original-(r*abs(original)) ... original+(r*abs(original))]``.
Modifications beyond this interval are clipped.
This argument can also be left as None if no such limitation
is needed.
in_place: Provide this as True if you wish the modification to be
done within the original tensor. The default value of this
argument is False, which means, the original tensor is not
changed, and its modified version is returned as an independent
copy.
Returns:
The modified tensor.
"""
if (lb is None) and (ub is None) and (max_change is None):
# If there is no restriction regarding how the tensor
# should be modified (no lb, no ub, no max_change),
# then we simply use the target values
# themselves for modifying the tensor.
if in_place:
original[:] = target
return original
else:
return target
else:
# If there are some restriction regarding how the tensor
# should be modified, then we turn to the following
# operations
def convert_to_tensor(x, tensorname: str):
if isinstance(x, torch.Tensor):
converted = x
else:
converted = torch.as_tensor(x, dtype=original.dtype, device=original.device)
if converted.ndim == 0 or converted.shape == original.shape:
return converted
else:
raise IndexError(
f"Argument {tensorname}: shape mismatch."
f" Shape of the original tensor: {original.shape}."
f" Shape of {tensorname}: {converted.shape}."
)
if lb is None:
# If lb is None, then it should be taken as -inf
lb = convert_to_tensor(float("-inf"), "lb")
else:
lb = convert_to_tensor(lb, "lb")
if ub is None:
# If ub is None, then it should be taken as +inf
ub = convert_to_tensor(float("inf"), "ub")
else:
ub = convert_to_tensor(ub, "ub")
if max_change is not None:
# If max_change is provided as something other than None,
# then we update the lb and ub so that they are tight
# enough to satisfy the max_change restriction.
max_change = convert_to_tensor(max_change, "max_change")
allowed_amounts = torch.abs(original) * max_change
allowed_lb = original - allowed_amounts
allowed_ub = original + allowed_amounts
lb = torch.max(lb, allowed_lb)
ub = torch.min(ub, allowed_ub)
## If in_place is given as True, the clipping (that we are about
## to perform), should be in-place.
# more_config = {}
# if in_place:
# more_config['out'] = original
#
## Return the clipped version of the target values
# return torch.clamp(target, lb, ub, **more_config)
result = torch.max(target, lb)
result = torch.min(result, ub)
if in_place:
original[:] = result
return original
else:
return result
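A small worked example of the `max_change` restriction described above:
```python
import torch
from evotorch.tools import modify_tensor

original = torch.tensor([10.0, -10.0, 100.0])
target = torch.tensor([50.0, -50.0, 90.0])

# Allow each value to move by at most 100% of its own magnitude:
result = modify_tensor(original, target, max_change=1.0)
print(result)  # tensor([ 20., -20.,  90.])
```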
numpy_copy(x, dtype=None)
¶
Return a numpy copy of the given iterable.
The newly created numpy array will be mutable, even if the original iterable object is read-only.
Parameters:
Name | Type | Description | Default
---|---|---|---
`x` | `Iterable` | Any Iterable whose numpy copy will be returned. | required
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | The desired dtype. Can be given as a numpy dtype, as a torch dtype, or a native dtype (e.g. int, float), or as a string (e.g. "float32"). If left as None, dtype will be determined according to the data contained by the original iterable object. | None
Returns:
Type | Description
---|---
`ndarray` | The numpy copy of the original iterable object.
Source code in evotorch/tools/misc.py
def numpy_copy(x: Iterable, dtype: Optional[DType] = None) -> np.ndarray:
"""
Return a numpy copy of the given iterable.
The newly created numpy array will be mutable, even if the
original iterable object is read-only.
Args:
x: Any Iterable whose numpy copy will be returned.
dtype: The desired dtype. Can be given as a numpy dtype,
as a torch dtype, or a native dtype (e.g. int, float),
or as a string (e.g. "float32").
If left as None, dtype will be determined according
to the data contained by the original iterable object.
Returns:
The numpy copy of the original iterable object.
"""
from .objectarray import ObjectArray
needs_casting = dtype is not None
if isinstance(x, ObjectArray):
result = x.numpy()
elif isinstance(x, torch.Tensor):
result = x.cpu().clone().numpy()
elif isinstance(x, np.ndarray):
result = x.copy()
else:
needs_casting = False
result = np.array(x, dtype=dtype)
if needs_casting:
result = result.astype(dtype)
return result
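A brief sketch showing that the returned copy is decoupled from the original:
```python
import torch
from evotorch.tools import numpy_copy

t = torch.tensor([1.0, 2.0, 3.0])
a = numpy_copy(t, dtype="float32")
a[0] = 99.0         # mutating the copy...
print(float(t[0]))  # ...leaves the original untouched: 1.0
```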
pass_info_if_needed(f, info)
¶
Pass additional arguments into a callable. The info dictionary is unpacked and passed as additional keyword arguments only if the callable is decorated with the pass_info decorator.
Parameters:
Name | Type | Description | Default
---|---|---|---
`f` | `Callable` | The callable to be called. | required
`info` | `Dict[str, Any]` | The info to be passed to the callable. | required
Returns:
Type | Description
---|---
`Callable` | The callable with extra arguments.
Exceptions:
Type | Description
---|---
`TypeError` | If the callable is decorated with the pass_info decorator, but its signature does not match the expected signature.
Source code in evotorch/tools/misc.py
def pass_info_if_needed(f: Callable, info: Dict[str, Any]) -> Callable:
"""
Pass additional arguments into a callable, the info dictionary is unpacked
and passed as additional keyword arguments only if the policy is decorated
with the [pass_info][evotorch.decorators.pass_info] decorator.
Args:
f (Callable): The callable to be called.
info (Dict[str, Any]): The info to be passed to the callable.
Returns:
Callable: The callable with extra arguments
Raises:
TypeError: If the callable is decorated with the [pass_info][evotorch.decorators.pass_info] decorator,
but its signature does not match the expected signature.
"""
if hasattr(f, "__evotorch_pass_info__"):
try:
sig = inspect.signature(f)
sig.bind_partial(**info)
except TypeError:
raise TypeError(
"Callable {f} is decorated with @pass_info, but it doesn't expect some of the extra arguments "
f"({', '.join(info.keys())}). Hint: maybe you forgot to add **kwargs to the function signature?"
)
except Exception:
pass
return functools.partial(f, **info)
else:
return f
set_default_logger_config(logger_name='evotorch', logger_level=20, show_process=True, show_lineno=False, override=False)
¶
Configure the "EvoTorch" Python logger to print to the console with default format.
The logger will be configured to print all messages with level INFO or lower to stdout and all messages with level WARNING or higher to stderr.
The default format is:
[2022-11-23 22:28:47] INFO <75935> evotorch: This is a log message
{asctime} {level} {process} {logger_name}: {message}
The format can be slightly customized by passing `show_process=False` to hide the process ID, or `show_lineno=True` to show the filename and line number of the log message instead of the logger name.
This function should be called before any other logging is performed, otherwise the default configuration will not be applied. If the logger is already configured, this function will do nothing unless `override=True` is passed, in which case the logger will be reconfigured.
Parameters:
Name | Type | Description | Default
---|---|---|---
`logger_name` | `str` | Name of the logger to configure. | 'evotorch'
`logger_level` | `int` | Level of the logger to configure. | 20
`show_process` | `bool` | Whether to show the process ID in the log message. | True
`show_lineno` | `bool` | Whether to show the filename and line number in the log message, instead of just the name of the logger. | False
`override` | `bool` | Whether to override the logger configuration if it has already been configured. | False
Source code in evotorch/tools/misc.py
def set_default_logger_config(
logger_name: str = "evotorch",
logger_level: int = logging.INFO,
show_process: bool = True,
show_lineno: bool = False,
override: bool = False,
):
"""
Configure the "EvoTorch" Python logger to print to the console with default format.
The logger will be configured to print all messages with level INFO or lower to stdout and all
messages with level WARNING or higher to stderr.
The default format is:
```
[2022-11-23 22:28:47] INFO <75935> evotorch: This is a log message
{asctime} {level} {process} {logger_name}: {message}
```
The format can be slightly customized by passing `show_process=False` to hide Process ID or `show_lineno=True` to
show the filename and line number of the log message instead of the Logger Name.
This function should be called before any other logging is performed, otherwise the default configuration will
not be applied. If the logger is already configured, this function will do nothing unless `override=True` is passed,
in which case the logger will be reconfigured.
Args:
logger_name: Name of the logger to configure.
logger_level: Level of the logger to configure.
show_process: Whether to show the process name in the log message.
show_lineno: Whether to show the filename with the line number in the log message or just the name of the logger.
override: Whether to override the logger configuration if it has already been configured.
"""
logger = logging.getLogger(logger_name)
if not override and logger.hasHandlers():
# warn user that the logger is already configured
logger.warning(
"The logger is already configured. "
"The default configuration will not be applied. "
"Call `set_default_logger_config` with `override=True` to override the current configuration."
)
return
elif override:
# remove all handlers
for handler in logger.handlers:
logger.removeHandler(handler)
logger.setLevel(logger_level)
logger.propagate = False
formatter = logging.Formatter(
"[{asctime}] "
+ "{levelname:<8s} "
+ ("<{process:5d}> " if show_process else "")
+ ("{filename}:{lineno}: " if show_lineno else "{name}: ")
+ "{message}",
datefmt="%Y-%m-%d %H:%M:%S",
style="{",
)
_stdout_handler = logging.StreamHandler(sys.stdout)
_stdout_handler.addFilter(lambda log_record: log_record.levelno < logging.WARNING)
_stdout_handler.setFormatter(formatter)
logger.addHandler(_stdout_handler)
_stderr_handler = logging.StreamHandler(sys.stderr)
_stderr_handler.addFilter(lambda log_record: log_record.levelno >= logging.WARNING)
_stderr_handler.setFormatter(formatter)
logger.addHandler(_stderr_handler)
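A brief usage sketch; per the notes above, this should run before any other logging:
```python
import logging
from evotorch.tools import set_default_logger_config

set_default_logger_config(show_lineno=True)  # call once, early in the program

logger = logging.getLogger("evotorch")
logger.info("Search is starting")   # level < WARNING, goes to stdout
logger.warning("Something is off")  # level >= WARNING, goes to stderr
```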
split_workload(workload, num_actors)
¶
Split a workload among actors.
By "workload" what is meant is the total amount of a work, this amount being expressed by an integer. For example, if the "work" is the evaluation of a population, the "workload" would usually be the population size.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
workload | int | Total amount of work, as an integer. | required |
num_actors | int | Number of actors (i.e. remote workers) among which the workload will be distributed. | required |
Returns:
Type | Description |
---|---|
list | A list of integers. The i-th item of the returned list expresses the suggested workload for the i-th actor. |
Source code in evotorch/tools/misc.py
def split_workload(workload: int, num_actors: int) -> list:
"""
Split a workload among actors.
By "workload" what is meant is the total amount of a work,
this amount being expressed by an integer.
For example, if the "work" is the evaluation of a population,
the "workload" would usually be the population size.
Args:
workload: Total amount of work, as an integer.
num_actors: Number of actors (i.e. remote workers) among
which the workload will be distributed.
Returns:
A list of integers. The i-th item of the returned list
expresses the suggested workload for the i-th actor.
"""
base_workload = workload // num_actors
extra_workload = workload % num_actors
result = [base_workload] * num_actors
for i in range(extra_workload):
result[i] += 1
return result
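For instance (a minimal sketch; the function is defined in `evotorch.tools.misc`):
```python
from evotorch.tools.misc import split_workload

# 10 units of work shared among 3 actors: the remainder (1 unit)
# goes to the first actor.
assert split_workload(10, 3) == [4, 3, 3]

# An evenly divisible workload is split equally:
assert split_workload(9, 3) == [3, 3, 3]
```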
stdev_from_radius(radius, solution_length)
¶
Get elementwise standard deviation from a given radius.
Sometimes, for a distribution-based search algorithm, the user might
choose to configure the initial coverage area of the search distribution
not via standard deviation, but via a radius value, as was done in the
study of Toklu et al. (2020).
This function takes the desired radius value and the solution length of
the problem at hand, and returns the elementwise standard deviation value.
Let us name this returned standard deviation value as `s`.
When a new Gaussian distribution is constructed such that its initial
standard deviation is `[s, s, s, ...]` (the length of this vector being
equal to the solution length), this constructed distribution's radius
corresponds with the desired radius.
Here, the "radius" of a Gaussian distribution is defined as the norm
of the standard deviation vector. In the case of a standard normal
distribution, this radius formulation serves as a simplified approximation
to `E[||Normal(0, I)||]` (for which a closer approximation is used in
the study of Hansen & Ostermeier (2001)).
Reference:
Toklu, N.E., Liskowski, P., Srivastava, R.K. (2020).
ClipUp: A Simple and Powerful Optimizer
for Distribution-based Policy Evolution.
Parallel Problem Solving from Nature (PPSN 2020).
Nikolaus Hansen, Andreas Ostermeier (2001).
Completely Derandomized Self-Adaptation in Evolution Strategies.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
radius | float | The radius whose elementwise standard deviation counterpart will be returned. | required |
solution_length | int | Length of a solution for the problem at hand. | required |
Returns:
Type | Description |
---|---|
float | An elementwise standard deviation value `s`, such that a Gaussian distribution constructed with the standard deviation `[s, s, s, ...]` has the desired radius. |
Source code in evotorch/tools/misc.py
def stdev_from_radius(radius: float, solution_length: int) -> float:
"""
Get elementwise standard deviation from a given radius.
Sometimes, for a distribution-based search algorithm, the user might
choose to configure the initial coverage area of the search distribution
not via standard deviation, but via a radius value, as was done in the
study of Toklu et al. (2020).
This function takes the desired radius value and the solution length of
the problem at hand, and returns the elementwise standard deviation value.
Let us name this returned standard deviation value as `s`.
When a new Gaussian distribution is constructed such that its initial
standard deviation is `[s, s, s, ...]` (the length of this vector being
equal to the solution length), this constructed distribution's radius
corresponds with the desired radius.
Here, the "radius" of a Gaussian distribution is defined as the norm
of the standard deviation vector. In the case of a standard normal
distribution, this radius formulation serves as a simplified approximation
to `E[||Normal(0, I)||]` (for which a closer approximation is used in
the study of Hansen & Ostermeier (2001)).
Reference:
Toklu, N.E., Liskowski, P., Srivastava, R.K. (2020).
ClipUp: A Simple and Powerful Optimizer
for Distribution-based Policy Evolution.
Parallel Problem Solving from Nature (PPSN 2020).
Nikolaus Hansen, Andreas Ostermeier (2001).
Completely Derandomized Self-Adaptation in Evolution Strategies.
Args:
radius: The radius whose elementwise standard deviation counterpart
will be returned.
solution_length: Length of a solution for the problem at hand.
Returns:
An elementwise standard deviation value `s`, such that a Gaussian
distribution constructed with the standard deviation `[s, s, s, ...]`
has the desired radius.
"""
radius = float(radius)
solution_length = int(solution_length)
return math.sqrt((radius**2) / solution_length)
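A small numerical check of the relationship described above (a sketch; the function lives in `evotorch.tools.misc`):
```python
import math

from evotorch.tools.misc import stdev_from_radius

solution_length = 100
s = stdev_from_radius(4.5, solution_length)  # -> 0.45

# The norm of the stdev vector [s, s, ..., s] recovers the radius:
norm = math.sqrt(solution_length * (s**2))
assert abs(norm - 4.5) < 1e-9
```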
storage_ptr(x)
¶
Get the pointer to the underlying storage of a tensor or of an ObjectArray.
Calling `storage_ptr(x)` is equivalent to `x.untyped_storage().data_ptr()`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Iterable | A regular PyTorch tensor, or a ReadOnlyTensor, or an ObjectArray. | required |
Returns:
Type | Description |
---|---|
int | The address of the underlying storage. |
Source code in evotorch/tools/misc.py
def storage_ptr(x: Iterable) -> int:
"""
Get the pointer to the underlying storage of a tensor or of an ObjectArray.
Calling `storage_ptr(x)` is equivalent to `x.untyped_storage().data_ptr()`.
Args:
x: A regular PyTorch tensor, or a ReadOnlyTensor, or an ObjectArray.
Returns:
The address of the underlying storage.
"""
return _storage_ptr(x)
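For example, with plain PyTorch tensors (a sketch; `storage_ptr` is defined in `evotorch.tools.misc`):
```python
import torch

from evotorch.tools.misc import storage_ptr

a = torch.arange(10)
b = a[2:7]     # basic slicing: b is a view sharing memory with a
c = a.clone()  # an independent copy

assert storage_ptr(a) == storage_ptr(b)
assert storage_ptr(a) != storage_ptr(c)
```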
to_numpy_dtype(dtype)
¶
Convert the given string or the given PyTorch dtype to a numpy dtype. If the argument is already a numpy dtype, then the argument is returned as it is.
Returns:
Type | Description |
---|---|
dtype | The dtype, converted to a numpy dtype. |
Source code in evotorch/tools/misc.py
def to_numpy_dtype(dtype: DType) -> np.dtype:
"""
Convert the given string or the given PyTorch dtype to a numpy dtype.
If the argument is already a numpy dtype, then the argument is returned
as it is.
Returns:
The dtype, converted to a numpy dtype.
"""
if isinstance(dtype, torch.dtype):
return torch.tensor([], dtype=dtype).numpy().dtype
elif is_dtype_object(dtype):
return np.dtype(object)
elif isinstance(dtype, np.dtype):
return dtype
else:
return np.dtype(dtype)
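A few illustrative conversions (a sketch; the function is in `evotorch.tools.misc`):
```python
import numpy as np
import torch

from evotorch.tools.misc import to_numpy_dtype

assert to_numpy_dtype(torch.float32) == np.dtype("float32")
assert to_numpy_dtype("int64") == np.dtype("int64")
assert to_numpy_dtype(np.dtype("bool")) == np.dtype("bool")  # returned as-is
```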
to_stdev_init(*, solution_length, stdev_init=None, radius_init=None)
¶
Ask for both standard deviation and radius, return the standard deviation.
It is very common among the distribution-based search algorithms to ask for both standard deviation and for radius for initializing the coverage area of the search distribution. During their initialization phases, these algorithms must check which one the user provided (radius or standard deviation), and return the result as the standard deviation so that a Gaussian distribution can easily be constructed.
This function serves as a helper function for such search algorithms by performing these actions:
- If the user provided a standard deviation and not a radius, then this provided standard deviation is simply returned.
- If the user provided a radius and not a standard deviation, then this provided radius is converted to its standard deviation counterpart, and then returned.
- If both standard deviation and radius are missing, or they are both given at the same time, then an error is raised.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
solution_length | int | Length of a solution for the problem at hand. | required |
stdev_init | Union[float, Iterable[float], torch.Tensor] | Standard deviation. If one wishes to provide a radius instead, then `stdev_init` is expected as None. | None |
radius_init | Union[float, Iterable[float], torch.Tensor] | Radius. If one wishes to provide a standard deviation instead, then `radius_init` is expected as None. | None |
Returns:
Type | Description |
---|---|
Union[float, Iterable[float], torch.Tensor] | The standard deviation for the search distribution to be constructed. |
Source code in evotorch/tools/misc.py
def to_stdev_init(
*,
solution_length: int,
stdev_init: Optional[RealOrVector] = None,
radius_init: Optional[RealOrVector] = None,
) -> RealOrVector:
"""
Ask for both standard deviation and radius, return the standard deviation.
It is very common among the distribution-based search algorithms to ask
for both standard deviation and for radius for initializing the coverage
area of the search distribution. During their initialization phases,
these algorithms must check which one the user provided (radius or
standard deviation), and return the result as the standard deviation
so that a Gaussian distribution can easily be constructed.
This function serves as a helper function for such search algorithms
by performing these actions:
- If the user provided a standard deviation and not a radius, then this
provided standard deviation is simply returned.
- If the user provided a radius and not a standard deviation, then this
provided radius is converted to its standard deviation counterpart,
and then returned.
- If both standard deviation and radius are missing, or they are both
given at the same time, then an error is raised.
Args:
solution_length: Length of a solution for the problem at hand.
stdev_init: Standard deviation. If one wishes to provide a radius
instead, then `stdev_init` is expected as None.
radius_init: Radius. If one wishes to provide a standard deviation
instead, then `radius_init` is expected as None.
Returns:
The standard deviation for the search distribution to be constructed.
"""
if (stdev_init is not None) and (radius_init is None):
return stdev_init
elif (stdev_init is None) and (radius_init is not None):
return stdev_from_radius(radius_init, solution_length)
elif (stdev_init is None) and (radius_init is None):
raise ValueError(
"Received both `stdev_init` and `radius_init` as None."
" Please provide a value either for `stdev_init` or for `radius_init`."
)
else:
raise ValueError(
"Found both `stdev_init` and `radius_init` with values other than None."
" Please provide a value either for `stdev_init` or for `radius_init`, but not for both."
)
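The three behaviors listed above can be seen in this sketch (the function is in `evotorch.tools.misc`):
```python
from evotorch.tools.misc import to_stdev_init

# A radius is converted to its elementwise stdev counterpart:
s = to_stdev_init(solution_length=100, radius_init=4.5)  # -> 0.45

# A stdev is returned as it is:
s = to_stdev_init(solution_length=100, stdev_init=0.45)  # -> 0.45

# Passing both arguments (or neither) raises a ValueError.
```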
to_torch_dtype(dtype)
¶
Convert the given string or the given numpy dtype to a PyTorch dtype. If the argument is already a PyTorch dtype, then the argument is returned as it is.
Returns:
Type | Description |
---|---|
dtype | The dtype, converted to a PyTorch dtype. |
Source code in evotorch/tools/misc.py
def to_torch_dtype(dtype: DType) -> torch.dtype:
"""
Convert the given string or the given numpy dtype to a PyTorch dtype.
If the argument is already a PyTorch dtype, then the argument is returned
as it is.
Returns:
The dtype, converted to a PyTorch dtype.
"""
if isinstance(dtype, str) and hasattr(torch, dtype):
attrib_within_torch = getattr(torch, dtype)
else:
attrib_within_torch = None
if isinstance(attrib_within_torch, torch.dtype):
return attrib_within_torch
elif isinstance(dtype, torch.dtype):
return dtype
elif dtype is Any or dtype is object:
raise TypeError(f"Cannot make a numeric tensor with dtype {repr(dtype)}")
else:
return torch.from_numpy(np.array([], dtype=dtype)).dtype
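Illustrative conversions in the other direction (a sketch; the function is in `evotorch.tools.misc`):
```python
import numpy as np
import torch

from evotorch.tools.misc import to_torch_dtype

assert to_torch_dtype("float32") is torch.float32
assert to_torch_dtype(np.dtype("int64")) is torch.int64
assert to_torch_dtype(torch.bool) is torch.bool  # returned as-is
```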
objectarray
¶
This module contains the ObjectArray class, which is an array-like data structure with an interface similar to PyTorch tensors, but with an ability to store arbitrary type of data (not just numbers).
ObjectArray (Sequence, RecursivePrintable)
¶
An object container with an interface similar to PyTorch tensors.
It is strictly one-dimensional, and supports advanced indexing and slicing operations supported by PyTorch tensors.
An ObjectArray can store `None` values, strings, numbers, booleans,
lists, sets, dictionaries, PyTorch tensors, and numpy arrays.
When a container (such as a list, a dictionary, or a set) is placed into an ObjectArray, an immutable clone of this container is first created, and then this newly created immutable clone gets stored within the ObjectArray. This behavior is to prevent accidental modification of the stored data.
When a numeric array (such as a PyTorch tensor or a numpy array with a
numeric dtype) is placed into an ObjectArray, the target ObjectArray
first checks if the numeric array is read-only. If the numeric array
is indeed read-only, then the array is put into the ObjectArray as it
is. If the array is not read-only, then a read-only clone of the
original numeric array is first created, and then this clone gets
stored by the ObjectArray. This behavior has the following implications:
(i) even when an ObjectArray is shared by multiple components of the
program, the risk of accidental modification of the stored data through
this shared ObjectArray is significantly reduced as the stored numeric
arrays are read-only;
(ii) although not recommended, one could still forcefully modify the
numeric arrays stored by an ObjectArray by explicitly casting them as
mutable arrays
(in the case of a numpy array, one could forcefully set the WRITEABLE
flag, and, in the case of a ReadOnlyTensor, one could forcefully cast it
as a regular PyTorch tensor);
(iii) if an already read-only array `x` is placed into an ObjectArray,
but `x` shares its memory with a mutable array `y`, then the contents
of the ObjectArray can be affected by modifying `y`.
The implication (ii) is demonstrated as follows:
```python
objs = ObjectArray(1)  # a single-element ObjectArray
# Place a numpy array into objs:
objs[0] = np.array([1, 2, 3], dtype=float)
# At this point, objs[0] is a read-only numpy array.
# objs[0] *= 2  # <- Not allowed
# Possible but NOT recommended:
objs[0].flags["WRITEABLE"] = True
objs[0] *= 2
```
The implication (iii) is demonstrated as follows:
```python
objs = ObjectArray(1)  # a single-element ObjectArray
# Make a new mutable numpy array
y = np.array([1, 2, 3], dtype=float)
# Make a read-only view to y:
x = y[:]
x.flags["WRITEABLE"] = False
# Place x into objs.
objs[0] = x
# At this point, objs[0] is a read-only numpy array.
# objs[0] *= 2  # <- Not allowed
# During the operation of setting its 0-th item, the ObjectArray
# `objs` did not clone `x` because `x` was already read-only.
# However, the contents of `x` could actually be modified because
# `x` shares its memory with the mutable array `y`.
# Possible but NOT recommended:
y *= 2  # This affects both x and objs!
```
When a numpy array of dtype object is placed into an ObjectArray, a read-only ObjectArray copy of the original array will first be created, and then, this newly created ObjectArray will be stored by the outer ObjectArray.
An ObjectArray itself has a read-only mode, so that, in addition to its stored data, the ObjectArray itself can be protected against undesired modifications.
An interesting feature of PyTorch: if one slices a tensor A and the result is a new tensor B, and if B is sharing storage memory with A, then A.untyped_storage().data_ptr() and B.untyped_storage().data_ptr() will return the same pointer. This means that one can compare the storage pointers of A and B and see whether or not the two are sharing memory. ObjectArray was designed to have this exact behavior, so that one can understand if two ObjectArray instances are sharing memory. Note that NumPy does NOT have such a behavior. In more detail, a NumPy array C and a NumPy array D could report different pointers even when D was created via a basic slicing operation on C.
Source code in evotorch/tools/objectarray.py
class ObjectArray(Sequence, RecursivePrintable):
"""
An object container with an interface similar to PyTorch tensors.
It is strictly one-dimensional, and supports advanced indexing and
slicing operations supported by PyTorch tensors.
An ObjectArray can store `None` values, strings, numbers, booleans,
lists, sets, dictionaries, PyTorch tensors, and numpy arrays.
When a container (such as a list, a dictionary, or a set) is placed into an
ObjectArray, an immutable clone of this container is first created, and
then this newly created immutable clone gets stored within the
ObjectArray. This behavior is to prevent accidental modification of the
stored data.
When a numeric array (such as a PyTorch tensor or a numpy array with a
numeric dtype) is placed into an ObjectArray, the target ObjectArray
first checks if the numeric array is read-only. If the numeric array
is indeed read-only, then the array is put into the ObjectArray as it
is. If the array is not read-only, then a read-only clone of the
original numeric array is first created, and then this clone gets
stored by the ObjectArray. This behavior has the following implications:
(i) even when an ObjectArray is shared by multiple components of the
program, the risk of accidental modification of the stored data through
this shared ObjectArray is significantly reduced as the stored numeric
arrays are read-only;
(ii) although not recommended, one could still forcefully modify the
numeric arrays stored by an ObjectArray by explicitly casting them as
mutable arrays
(in the case of a numpy array, one could forcefully set the WRITEABLE
flag, and, in the case of a ReadOnlyTensor, one could forcefully cast it
as a regular PyTorch tensor);
(iii) if an already read-only array `x` is placed into an ObjectArray,
but `x` shares its memory with a mutable array `y`, then the contents
of the ObjectArray can be affected by modifying `y`.
The implication (ii) is demonstrated as follows:
```python
objs = ObjectArray(1) # a single-element ObjectArray
# Place a numpy array into objs:
objs[0] = np.array([1, 2, 3], dtype=float)
# At this point, objs[0] is a read-only numpy array.
# objs[0] *= 2 # <- Not allowed
# Possible but NOT recommended:
objs[0].flags["WRITEABLE"] = True
objs[0] *= 2
```
The implication (iii) is demonstrated as follows:
```python
objs = ObjectArray(1) # a single-element ObjectArray
# Make a new mutable numpy array
y = np.array([1, 2, 3], dtype=float)
# Make a read-only view to y:
x = y[:]
x.flags["WRITEABLE"] = False
# Place x into objs.
objs[0] = x
# At this point, objs[0] is a read-only numpy array.
# objs[0] *= 2 # <- Not allowed
# During the operation of setting its 0-th item, the ObjectArray
# `objs` did not clone `x` because `x` was already read-only.
# However, the contents of `x` could actually be modified because
# `x` shares its memory with the mutable array `y`.
# Possible but NOT recommended:
y *= 2 # This affects both x and objs!
```
When a numpy array of dtype object is placed into an ObjectArray,
a read-only ObjectArray copy of the original array will first be
created, and then, this newly created ObjectArray will be stored
by the outer ObjectArray.
An ObjectArray itself has a read-only mode, so that, in addition to its
stored data, the ObjectArray itself can be protected against undesired
modifications.
An interesting feature of PyTorch: if one slices a tensor A and the
result is a new tensor B, and if B is sharing storage memory with A,
then A.untyped_storage().data_ptr() and B.untyped_storage().data_ptr()
will return the same pointer. This means, one can compare the storage
pointers of A and B and see whether or not the two are sharing memory.
ObjectArray was designed to have this exact behavior, so that one
can understand if two ObjectArray instances are sharing memory.
Note that NumPy does NOT have such a behavior. In more details,
a NumPy array C and a NumPy array D could report different pointers
even when D was created via a basic slicing operation on C.
"""
def __init__(
self,
size: Optional[Size] = None,
*,
slice_of: Optional[tuple] = None,
):
"""
`__init__(...)`: Instantiate a new ObjectArray.
Args:
size: Length of the ObjectArray. If this argument is present and
is an integer `n`, then the resulting ObjectArray will be
of length `n`, and will be filled with `None` values.
This argument cannot be used together with the keyword
argument `slice_of`.
slice_of: Optionally a tuple in the form
`(original_object_tensor, slice_info)`.
When this argument is present, then the resulting ObjectArray
will be a slice of the given `original_object_tensor` (which
is expected as an ObjectArray instance). `slice_info` is
either a `slice` instance, or a sequence of integers.
The resulting ObjectArray might be a view of
`original_object_tensor` (i.e. it might share its memory with
`original_object_tensor`).
This keyword argument cannot be used together with the
argument `size`.
"""
if size is not None and slice_of is not None:
raise ValueError("Expected either `size` argument or `slice_of` argument, but got both.")
elif size is None and slice_of is None:
raise ValueError("Expected either `size` argument or `slice_of` argument, but got none.")
elif size is not None:
if not is_sequence(size):
length = size
elif isinstance(size, (np.ndarray, torch.Tensor)) and (size.ndim > 1):
raise ValueError(f"Invalid size: {size}")
else:
[length] = size
length = int(length)
self._indices = torch.arange(length, dtype=torch.int64)
self._objects = [None] * length
elif slice_of is not None:
source: ObjectArray
source, slicing = slice_of
if not isinstance(source, ObjectArray):
raise TypeError(
f"`slice_of`: The first element was expected as an ObjectArray."
f" But it is of type {repr(type(source))}"
)
if isinstance(slicing, tuple) or is_integer(slicing):
raise TypeError(f"Invalid slice: {slicing}")
self._indices = source._indices[slicing]
self._objects = source._objects
if storage_ptr(self._indices) != storage_ptr(source._indices):
self._objects = clone(self._objects)
self._device = torch.device("cpu")
self._read_only = False
@property
def shape(self) -> Size:
"""Shape of the ObjectArray, as a PyTorch Size tuple."""
return self._indices.shape
def size(self) -> Size:
"""
Get the size of the ObjectArray, as a PyTorch Size tuple.
Returns:
The size (i.e. the shape) of the ObjectArray.
"""
return self._indices.size()
@property
def ndim(self) -> int:
"""
Number of dimensions handled by the ObjectArray.
This is equivalent to getting the length of the size tuple.
"""
return self._indices.ndim
def dim(self) -> int:
"""
Get the number of dimensions handled by the ObjectArray.
This is equivalent to getting the length of the size tuple.
Returns:
The number of dimensions, as an integer.
"""
return self._indices.dim()
def numel(self) -> int:
"""
Number of elements stored by the ObjectArray.
Returns:
The number of elements, as an integer.
"""
return self._indices.numel()
def repeat(self, *sizes) -> "ObjectArray":
"""
Repeat the contents of this ObjectArray.
For example, if we have an ObjectArray `objs` which stores
`["hello", "world"]`, the following line:
objs.repeat(3)
will result in an ObjectArray which stores:
`["hello", "world", "hello", "world", "hello", "world"]`
Args:
sizes: Although this argument is named `sizes` to be compatible
with PyTorch, what is expected here is a single positional
argument, as a single integer, or as a single-element
tuple.
The given integer (which can be the argument itself, or
the integer within the given single-element tuple),
specifies how many times the stored sequence will be
repeated.
Returns:
A new ObjectArray which repeats the original one's values
"""
if len(sizes) != 1:
type_name = type(self).__name__
raise ValueError(
f"The `repeat(...)` method of {type_name} expects exactly one positional argument."
f" This is because {type_name} supports only 1-dimensional storage."
f" The received positional arguments are: {sizes}."
)
if isinstance(sizes, tuple):
if len(sizes) == 1:
sizes = sizes[0]
else:
type_name = type(self).__name__
raise ValueError(
f"The `repeat(...)` method of {type_name} can accept a size tuple with only one element."
f" This is because {type_name} supports only 1-dimensional storage."
f" The received size tuple is: {sizes}."
)
num_repetitions = int(sizes[0])
self_length = len(self)
result = ObjectArray(num_repetitions * self_length)
source_index = 0
for result_index in range(len(result)):
result[result_index] = self[source_index]
source_index = (source_index + 1) % self_length
return result
@property
def device(self) -> Device:
"""
The device which stores the elements of the ObjectArray.
In the case of ObjectArray, this property always returns
the CPU device.
Returns:
The CPU device, as a torch.device object.
"""
return self._device
@property
def dtype(self) -> DType:
"""
The dtype of the elements stored by the ObjectArray.
In the case of ObjectArray, the dtype is always `object`.
"""
return object
def __getitem__(self, i: Any) -> Any:
if is_integer(i):
index = int(self._indices[i])
return self._objects[index]
else:
indices = self._indices[i]
same_ptr = storage_ptr(indices) == storage_ptr(self._indices)
result = ObjectArray(len(indices))
if same_ptr:
result._indices[:] = indices
result._objects = self._objects
else:
result._objects = []
for index in indices:
result._objects.append(self._objects[int(index)])
result._read_only = self._read_only
return result
def __setitem__(self, i: Any, x: Any):
self.set_item(i, x)
def set_item(self, i: Any, x: Any, *, memo: Optional[dict] = None):
"""
Set the i-th item of the ObjectArray as x.
Args:
i: An index or a slice.
x: The object that will be put into the ObjectArray.
memo: Optionally a dictionary which maps from the ids of the
already placed objects to their clones within ObjectArray.
In most scenarios, when this method is called from outside,
this can be left as None.
"""
from .immutable import as_immutable
if memo is None:
memo = {}
memo[id(self)] = self
if self._read_only:
raise ValueError("This ObjectArray is read-only, therefore, modification is not allowed.")
if is_integer(i):
index = int(self._indices[i])
self._objects[index] = as_immutable(x, memo=memo)
else:
indices = self._indices[i]
if not isinstance(x, Iterable):
raise TypeError(f"Expected an iterable, but got {repr(x)}")
if indices.ndim != 1:
raise ValueError(
"Received indices that would change the dimensionality of the ObjectArray."
" However, an ObjectArray can only be 1-dimensional."
)
slice_refers_to_whole_array = (len(indices) == len(self._indices)) and torch.all(indices == self._indices)
if slice_refers_to_whole_array:
memo[id(x)] = self
if not hasattr(x, "__len__"):
x = list(x)
if len(x) != len(indices):
raise TypeError(
f"The slicing operation refers to {len(indices)} elements."
f" However, the given objects sequence has {len(x)} elements."
)
for q, obj in enumerate(x):
index = int(indices[q])
self._objects[index] = as_immutable(obj, memo=memo)
def __len__(self) -> int:
return len(self._indices)
def __iter__(self):
for i in range(len(self)):
yield self[i]
def clone(self, *, preserve_read_only: bool = False, memo: Optional[dict] = None) -> Iterable:
"""
Get a deep copy of the ObjectArray.
Args:
preserve_read_only: Whether or not to preserve the read-only
attribute. Note that the default value is False, which
means that the newly made clone will NOT be read-only
even if the original ObjectArray is.
memo: Optionally a dictionary which maps from the ids of the
already cloned objects to their clones.
In most scenarios, when this method is called from outside,
this can be left as None.
Returns:
The clone of the original ObjectArray.
"""
from .cloning import deep_clone
if memo is None:
memo = {}
self_id = id(self)
if self_id in memo:
return memo[self_id]
if not preserve_read_only:
return self.numpy(memo=memo)
else:
result = ObjectArray(len(self))
memo[self_id] = result
for i, item in enumerate(self):
result[i] = deep_clone(item, otherwise_deepcopy=True, memo=memo)
return result
def __copy__(self) -> "ObjectArray":
return self.clone(preserve_read_only=True)
def __deepcopy__(self, memo: Optional[dict]) -> "ObjectArray":
if memo is None:
memo = {}
return self.clone(preserve_read_only=True, memo=memo)
def __setstate__(self, state: dict):
self.__dict__.update(state)
# After pickling and unpickling, numpy arrays become mutable.
# Since we are dealing with immutable containers here, we need to forcefully make all numpy arrays read-only.
for v in self:
if isinstance(v, np.ndarray):
v.flags["WRITEABLE"] = False
# def __getstate__(self) -> dict:
# from .cloning import deep_clone
# self_id = id(self)
# memo = {self_id: self}
# cloned_dict = deep_clone(self.__dict__, otherwise_deepcopy=True, memo=memo)
# return cloned_dict
def get_read_only_view(self) -> "ObjectArray":
"""
Get a read-only view of this ObjectArray.
"""
result = self[:]
result._read_only = True
return result
@property
def is_read_only(self) -> bool:
"""
True if this ObjectArray is read-only; False otherwise.
"""
return self._read_only
def storage(self) -> ObjectArrayStorage:
return ObjectArrayStorage(self)
def untyped_storage(self) -> ObjectArrayStorage:
return ObjectArrayStorage(self)
def numpy(self, *, memo: Optional[dict] = None) -> np.ndarray:
"""
Convert this ObjectArray to a numpy array.
The resulting numpy array will have its dtype set as `object`.
This new array itself and its contents will be mutable (those
mutable objects being the copies of their immutable sources).
Returns:
The numpy counterpart of this ObjectArray.
"""
from .immutable import mutable_copy
if memo is None:
memo = {}
n = len(self)
result = np.empty(n, dtype=object)
memo[id(self)] = result
for i, item in enumerate(self):
result[i] = mutable_copy(item, memo=memo)
return result
@staticmethod
def from_numpy(ndarray: np.ndarray) -> "ObjectArray":
"""
Convert a numpy array of dtype `object` to an `ObjectArray`.
Args:
ndarray: The numpy array that will be converted to `ObjectArray`.
Returns:
The ObjectArray counterpart of the given numpy array.
"""
if isinstance(ndarray, np.ndarray):
if ndarray.dtype == np.dtype(object):
n = len(ndarray)
result = ObjectArray(n)
for i, element in enumerate(ndarray):
result[i] = element
return result
else:
raise ValueError(
f"The dtype of the given array was expected as `object`."
f" However, the dtype was encountered as {ndarray.dtype}."
)
else:
raise TypeError(f"Expected a `numpy.ndarray` instance, but received an object of type {type(ndarray)}.")
device: Union[str, torch.device]
property
readonly
¶
The device which stores the elements of the ObjectArray. In the case of ObjectArray, this property always returns the CPU device.
Returns:
Type | Description |
---|---|
Union[str, torch.device] | The CPU device, as a torch.device object. |
dtype: Union[str, torch.dtype, numpy.dtype, Type]
property
readonly
¶
The dtype of the elements stored by the ObjectArray. In the case of ObjectArray, the dtype is always `object`.
is_read_only: bool
property
readonly
¶
True if this ObjectArray is read-only; False otherwise.
ndim: int
property
readonly
¶
Number of dimensions handled by the ObjectArray. This is equivalent to getting the length of the size tuple.
shape: Union[int, torch.Size]
property
readonly
¶
Shape of the ObjectArray, as a PyTorch Size tuple.
__init__(self, size=None, *, slice_of=None)
special
¶
`__init__(...)`: Instantiate a new ObjectArray.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
size | Union[int, torch.Size] | Length of the ObjectArray. If this argument is present and is an integer `n`, then the resulting ObjectArray will be of length `n`, and will be filled with `None` values. This argument cannot be used together with the keyword argument `slice_of`. | None |
slice_of | Optional[tuple] | Optionally a tuple in the form `(original_object_tensor, slice_info)`. When this argument is present, then the resulting ObjectArray will be a slice of the given `original_object_tensor` (which is expected as an ObjectArray instance). `slice_info` is either a `slice` instance, or a sequence of integers. The resulting ObjectArray might be a view of `original_object_tensor` (i.e. it might share its memory with `original_object_tensor`). This keyword argument cannot be used together with the argument `size`. | None |
Source code in evotorch/tools/objectarray.py
def __init__(
self,
size: Optional[Size] = None,
*,
slice_of: Optional[tuple] = None,
):
"""
`__init__(...)`: Instantiate a new ObjectArray.
Args:
size: Length of the ObjectArray. If this argument is present and
is an integer `n`, then the resulting ObjectArray will be
of length `n`, and will be filled with `None` values.
This argument cannot be used together with the keyword
argument `slice_of`.
slice_of: Optionally a tuple in the form
`(original_object_tensor, slice_info)`.
When this argument is present, then the resulting ObjectArray
will be a slice of the given `original_object_tensor` (which
is expected as an ObjectArray instance). `slice_info` is
either a `slice` instance, or a sequence of integers.
The resulting ObjectArray might be a view of
`original_object_tensor` (i.e. it might share its memory with
`original_object_tensor`).
This keyword argument cannot be used together with the
argument `size`.
"""
if size is not None and slice_of is not None:
raise ValueError("Expected either `size` argument or `slice_of` argument, but got both.")
elif size is None and slice_of is None:
raise ValueError("Expected either `size` argument or `slice_of` argument, but got none.")
elif size is not None:
if not is_sequence(size):
length = size
elif isinstance(size, (np.ndarray, torch.Tensor)) and (size.ndim > 1):
raise ValueError(f"Invalid size: {size}")
else:
[length] = size
length = int(length)
self._indices = torch.arange(length, dtype=torch.int64)
self._objects = [None] * length
elif slice_of is not None:
source: ObjectArray
source, slicing = slice_of
if not isinstance(source, ObjectArray):
raise TypeError(
f"`slice_of`: The first element was expected as an ObjectArray."
f" But it is of type {repr(type(source))}"
)
if isinstance(slicing, tuple) or is_integer(slicing):
raise TypeError(f"Invalid slice: {slicing}")
self._indices = source._indices[slicing]
self._objects = source._objects
if storage_ptr(self._indices) != storage_ptr(source._indices):
self._objects = clone(self._objects)
self._device = torch.device("cpu")
self._read_only = False
clone(self, *, preserve_read_only=False, memo=None)
¶
Get a deep copy of the ObjectArray.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
preserve_read_only | bool | Whether or not to preserve the read-only attribute. Note that the default value is False, which means that the newly made clone will NOT be read-only even if the original ObjectArray is. | False |
memo | Optional[dict] | Optionally a dictionary which maps from the ids of the already cloned objects to their clones. In most scenarios, when this method is called from outside, this can be left as None. | None |
Returns:
Type | Description |
---|---|
Iterable | The clone of the original ObjectArray. |
Source code in evotorch/tools/objectarray.py
def clone(self, *, preserve_read_only: bool = False, memo: Optional[dict] = None) -> Iterable:
"""
Get a deep copy of the ObjectArray.
Args:
preserve_read_only: Whether or not to preserve the read-only
attribute. Note that the default value is False, which
means that the newly made clone will NOT be read-only
even if the original ObjectArray is.
memo: Optionally a dictionary which maps from the ids of the
already cloned objects to their clones.
In most scenarios, when this method is called from outside,
this can be left as None.
Returns:
The clone of the original ObjectArray.
"""
from .cloning import deep_clone
if memo is None:
memo = {}
self_id = id(self)
if self_id in memo:
return memo[self_id]
if not preserve_read_only:
return self.numpy(memo=memo)
else:
result = ObjectArray(len(self))
memo[self_id] = result
for i, item in enumerate(self):
result[i] = deep_clone(item, otherwise_deepcopy=True, memo=memo)
return result
dim(self)
¶
Get the number of dimensions handled by the ObjectArray. This is equivalent to getting the length of the size tuple.
Returns:
Type | Description |
---|---|
int | The number of dimensions, as an integer. |
from_numpy(ndarray)
staticmethod
¶
Convert a numpy array of dtype `object` to an `ObjectArray`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ndarray | ndarray | The numpy array that will be converted to `ObjectArray`. | required |
Returns:
Type | Description |
---|---|
ObjectArray | The ObjectArray counterpart of the given numpy array. |
Source code in evotorch/tools/objectarray.py
@staticmethod
def from_numpy(ndarray: np.ndarray) -> "ObjectArray":
"""
Convert a numpy array of dtype `object` to an `ObjectArray`.
Args:
ndarray: The numpy array that will be converted to `ObjectArray`.
Returns:
The ObjectArray counterpart of the given numpy array.
"""
if isinstance(ndarray, np.ndarray):
if ndarray.dtype == np.dtype(object):
n = len(ndarray)
result = ObjectArray(n)
for i, element in enumerate(ndarray):
result[i] = element
return result
else:
raise ValueError(
f"The dtype of the given array was expected as `object`."
f" However, the dtype was encountered as {ndarray.dtype}."
)
else:
raise TypeError(f"Expected a `numpy.ndarray` instance, but received an object of type {type(ndarray)}.")
get_read_only_view(self)
¶
Get a read-only view of this ObjectArray.
numel(self)
¶
Number of elements stored by the ObjectArray.
Returns:
Type | Description |
---|---|
int | The number of elements, as an integer. |
numpy(self, *, memo=None)
¶
Convert this ObjectArray to a numpy array.
The resulting numpy array will have its dtype set as `object`. This new array itself and its contents will be mutable (those mutable objects being the copies of their immutable sources).
Returns:
Type | Description |
---|---|
ndarray | The numpy counterpart of this ObjectArray. |
Source code in evotorch/tools/objectarray.py
def numpy(self, *, memo: Optional[dict] = None) -> np.ndarray:
"""
Convert this ObjectArray to a numpy array.
The resulting numpy array will have its dtype set as `object`.
This new array itself and its contents will be mutable (those
mutable objects being the copies of their immutable sources).
Returns:
The numpy counterpart of this ObjectArray.
"""
from .immutable import mutable_copy
if memo is None:
memo = {}
n = len(self)
result = np.empty(n, dtype=object)
memo[id(self)] = result
for i, item in enumerate(self):
result[i] = mutable_copy(item, memo=memo)
return result
repeat(self, *sizes)
¶
Repeat the contents of this ObjectArray.
For example, if we have an ObjectArray `objs` which stores `["hello", "world"]`, the following line:
`objs.repeat(3)`
will result in an ObjectArray which stores:
`["hello", "world", "hello", "world", "hello", "world"]`
Parameters:
Name | Type | Description | Default |
---|---|---|---|
sizes | | Although this argument is named `sizes` to be compatible with PyTorch, what is expected here is a single positional argument: either a single integer, or a single-element tuple containing an integer. The given integer specifies how many times the stored sequence will be repeated. | () |
Returns:
Type | Description |
---|---|
ObjectArray | A new ObjectArray which repeats the original one's values. |
Source code in evotorch/tools/objectarray.py
def repeat(self, *sizes) -> "ObjectArray":
"""
Repeat the contents of this ObjectArray.
For example, if we have an ObjectArray `objs` which stores
`["hello", "world"]`, the following line:
objs.repeat(3)
will result in an ObjectArray which stores:
`["hello", "world", "hello", "world", "hello", "world"]`
Args:
sizes: Although this argument is named `sizes` to be compatible
with PyTorch, what is expected here is a single positional
argument, as a single integer, or as a single-element
tuple.
The given integer (which can be the argument itself, or
the integer within the given single-element tuple),
specifies how many times the stored sequence will be
repeated.
Returns:
A new ObjectArray which repeats the original one's values
"""
if len(sizes) != 1:
type_name = type(self).__name__
raise ValueError(
f"The `repeat(...)` method of {type_name} expects exactly one positional argument."
f" This is because {type_name} supports only 1-dimensional storage."
f" The received positional arguments are: {sizes}."
)
if isinstance(sizes, tuple):
if len(sizes) == 1:
sizes = sizes[0]
else:
type_name = type(self).__name__
raise ValueError(
f"The `repeat(...)` method of {type_name} can accept a size tuple with only one element."
f" This is because {type_name} supports only 1-dimensional storage."
f" The received size tuple is: {sizes}."
)
num_repetitions = int(sizes[0])
self_length = len(self)
result = ObjectArray(num_repetitions * self_length)
source_index = 0
for result_index in range(len(result)):
result[result_index] = self[source_index]
source_index = (source_index + 1) % self_length
return result
set_item(self, i, x, *, memo=None)
¶
Set the i-th item of the ObjectArray as x.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
i | Any | An index or a slice. | required |
x | Any | The object that will be put into the ObjectArray. | required |
memo | Optional[dict] | Optionally a dictionary which maps from the ids of the already placed objects to their clones within ObjectArray. In most scenarios, when this method is called from outside, this can be left as None. | None |
Source code in evotorch/tools/objectarray.py
def set_item(self, i: Any, x: Any, *, memo: Optional[dict] = None):
"""
Set the i-th item of the ObjectArray as x.
Args:
i: An index or a slice.
x: The object that will be put into the ObjectArray.
memo: Optionally a dictionary which maps from the ids of the
already placed objects to their clones within ObjectArray.
In most scenarios, when this method is called from outside,
this can be left as None.
"""
from .immutable import as_immutable
if memo is None:
memo = {}
memo[id(self)] = self
if self._read_only:
raise ValueError("This ObjectArray is read-only, therefore, modification is not allowed.")
if is_integer(i):
index = int(self._indices[i])
self._objects[index] = as_immutable(x, memo=memo)
else:
indices = self._indices[i]
if not isinstance(x, Iterable):
raise TypeError(f"Expected an iterable, but got {repr(x)}")
if indices.ndim != 1:
raise ValueError(
"Received indices that would change the dimensionality of the ObjectArray."
" However, an ObjectArray can only be 1-dimensional."
)
slice_refers_to_whole_array = (len(indices) == len(self._indices)) and torch.all(indices == self._indices)
if slice_refers_to_whole_array:
memo[id(x)] = self
if not hasattr(x, "__len__"):
x = list(x)
if len(x) != len(indices):
raise TypeError(
f"The slicing operation refers to {len(indices)} elements."
f" However, the given objects sequence has {len(x)} elements."
)
for q, obj in enumerate(x):
index = int(indices[q])
self._objects[index] = as_immutable(obj, memo=memo)
size(self)
¶
Get the size of the ObjectArray, as a PyTorch Size tuple.
Returns:
Type | Description |
---|---|
Union[int, torch.Size] | The size (i.e. the shape) of the ObjectArray. |
ranking
¶
This module contains ranking functions which work with PyTorch tensors.
centered(fitnesses, *, higher_is_better=True)
¶
Apply linearly spaced 0-centered ranking on a PyTorch tensor. The lowest weight is -0.5, and the highest weight is 0.5. This is the same ranking method that was used in:
Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever (2017).
Evolution Strategies as a Scalable Alternative to Reinforcement Learning
Parameters:
Name | Type | Description | Default |
---|---|---|---|
fitnesses | Tensor | A PyTorch tensor which contains real numbers which we want to rank. | required |
higher_is_better | bool | Whether or not the higher values will be assigned higher ranks. Changing this to False means that lower values are interpreted as better, and therefore lower values will have higher ranks. | True |
Returns:
Type | Description |
---|---|
Tensor | The ranks, in the same device, with the same dtype as the original tensor. |
Source code in evotorch/tools/ranking.py
def centered(fitnesses: torch.Tensor, *, higher_is_better: bool = True) -> torch.Tensor:
"""
Apply linearly spaced 0-centered ranking on a PyTorch tensor.
The lowest weight is -0.5, and the highest weight is 0.5.
This is the same ranking method that was used in:
Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever (2017).
Evolution Strategies as a Scalable Alternative to Reinforcement Learning
Args:
fitnesses: A PyTorch tensor which contains real numbers which we want
to rank.
higher_is_better: Whether or not the higher values will be assigned
higher ranks. Changing this to False means that lower values
are interpreted as better, and therefore lower values will have
higher ranks.
Returns:
The ranks, in the same device, with the same dtype with the original
tensor.
"""
device = fitnesses.device
dtype = fitnesses.dtype
with torch.no_grad():
x = fitnesses.reshape(-1)
n = len(x)
indices = x.argsort(descending=(not higher_is_better))
weights = (torch.arange(n, dtype=dtype, device=device) / (n - 1)) - 0.5
ranks = torch.empty_like(x)
ranks[indices] = weights
return ranks.reshape(*(fitnesses.shape))
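A quick sketch of the resulting weights (the function is in `evotorch.tools.ranking`):
```python
import torch

from evotorch.tools.ranking import centered

fitnesses = torch.tensor([10.0, 30.0, 20.0])

# Evenly spaced ranks in [-0.5, 0.5]; the best fitness gets 0.5:
print(centered(fitnesses))
# tensor([-0.5000,  0.5000,  0.0000])

print(centered(fitnesses, higher_is_better=False))
# tensor([ 0.5000, -0.5000,  0.0000])
```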
linear(fitnesses, *, higher_is_better=True)
¶
Apply linearly spaced ranking on a PyTorch tensor. The lowest weight is 0, and the highest weight is 1.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
fitnesses | Tensor | A PyTorch tensor which contains real numbers which we want to rank. | required |
higher_is_better | bool | Whether or not the higher values will be assigned higher ranks. Changing this to False means that lower values are interpreted as better, and therefore lower values will have higher ranks. | True |
Returns:
Type | Description |
---|---|
Tensor | The ranks, in the same device, with the same dtype as the original tensor. |
Source code in evotorch/tools/ranking.py
def linear(fitnesses: torch.Tensor, *, higher_is_better: bool = True) -> torch.Tensor:
"""
Apply linearly spaced ranking on a PyTorch tensor.
The lowest weight is 0, and the highest weight is 1.
Args:
fitnesses: A PyTorch tensor which contains real numbers which we want
to rank.
higher_is_better: Whether or not the higher values will be assigned
higher ranks. Changing this to False means that lower values
are interpreted as better, and therefore lower values will have
higher ranks.
Returns:
The ranks, in the same device, with the same dtype with the original
tensor.
"""
device = fitnesses.device
dtype = fitnesses.dtype
with torch.no_grad():
x = fitnesses.reshape(-1)
n = len(x)
indices = x.argsort(descending=(not higher_is_better))
weights = torch.arange(n, dtype=dtype, device=device) / (n - 1)
ranks = torch.empty_like(x)
ranks[indices] = weights
return ranks.reshape(*(fitnesses.shape))
nes(fitnesses, *, higher_is_better=True)
¶
Apply the ranking mechanism proposed in:
Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J., & Schmidhuber, J. (2014).
Natural evolution strategies. The Journal of Machine Learning Research, 15(1), 949-980.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
fitnesses | Tensor | A PyTorch tensor which contains real numbers which we want to rank. | required |
higher_is_better | bool | Whether or not the higher values will be assigned higher ranks. Changing this to False means that lower values are interpreted as better, and therefore lower values will have higher ranks. | True |
Returns:
Type | Description |
---|---|
Tensor | The ranks, in the same device, with the same dtype as the original tensor. |
Source code in evotorch/tools/ranking.py
def nes(fitnesses: torch.Tensor, *, higher_is_better: bool = True) -> torch.Tensor:
"""
Apply the ranking mechanism proposed in:
Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J., & Schmidhuber, J. (2014).
Natural evolution strategies. The Journal of Machine Learning Research, 15(1), 949-980.
Args:
fitnesses: A PyTorch tensor which contains real numbers which we want
to rank.
higher_is_better: Whether or not the higher values will be assigned
higher ranks. Changing this to False means that lower values
are interpreted as better, and therefore lower values will have
higher ranks.
Returns:
The ranks, in the same device, with the same dtype with the original
tensor.
"""
device = fitnesses.device
dtype = fitnesses.dtype
with torch.no_grad():
x = fitnesses.reshape(-1)
n = len(x)
incr_indices = torch.arange(n, dtype=dtype, device=device)
N = torch.tensor(n, dtype=dtype, device=device)
weights = torch.max(
torch.tensor(0, dtype=dtype, device=device), torch.log((N / 2.0) + 1.0) - torch.log(N - incr_indices)
)
indices = torch.argsort(x, descending=(not higher_is_better))
ranks = torch.empty(n, dtype=indices.dtype, device=device)
ranks[indices] = torch.arange(n, dtype=indices.dtype, device=device)
utils = weights[ranks]
utils /= torch.sum(utils)
utils -= 1 / N
return utils.reshape(*(fitnesses.shape))
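As a sketch of the resulting utilities (the function is in `evotorch.tools.ranking`):
```python
import torch

from evotorch.tools.ranking import nes

fitnesses = torch.tensor([1.0, 2.0, 3.0, 4.0])
u = nes(fitnesses)

# The utilities are normalized and then shifted by 1/N,
# so they sum to (approximately) zero:
assert abs(float(u.sum())) < 1e-6

# The best solution receives the largest utility:
assert int(torch.argmax(u)) == 3
```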
normalized(fitnesses, *, higher_is_better=True)
¶
Normalize the fitnesses and return the result as ranks.
The normalization is done in such a way that the mean becomes 0.0 and the standard deviation becomes 1.0.
According to the value of `higher_is_better`, it will be ensured that
better solutions will have numerically higher rank.
In more detail, if `higher_is_better` is set as False, then the
fitnesses will be multiplied by -1.0 in addition to being subject
to normalization.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
fitnesses | Tensor | A PyTorch tensor which contains real numbers which we want to rank. | required |
higher_is_better | bool | Whether or not the higher values will be assigned higher ranks. Changing this to False means that lower values are interpreted as better, and therefore lower values will have higher ranks. | True |
Returns:
Type | Description |
---|---|
Tensor | The ranks, in the same device, with the same dtype as the original tensor. |
Source code in evotorch/tools/ranking.py
def normalized(fitnesses: torch.Tensor, *, higher_is_better: bool = True) -> torch.Tensor:
"""
Normalize the fitnesses and return the result as ranks.
The normalization is done in such a way that the mean becomes 0.0 and
the standard deviation becomes 1.0.
According to the value of `higher_is_better`, it will be ensured that
better solutions will have numerically higher rank.
In more details, if `higher_is_better` is set as False, then the
fitnesses will be multiplied by -1.0 in addition to being subject
to normalization.
Args:
fitnesses: A PyTorch tensor which contains real numbers which we want
to rank.
higher_is_better: Whether or not the higher values will be assigned
higher ranks. Changing this to False means that lower values
are interpreted as better, and therefore lower values will have
higher ranks.
Returns:
The ranks, in the same device, with the same dtype with the original
tensor.
"""
if not higher_is_better:
fitnesses = -fitnesses
fitness_mean = torch.mean(fitnesses)
fitness_stdev = torch.std(fitnesses)
fitnesses = fitnesses - fitness_mean
fitnesses = fitnesses / fitness_stdev
return fitnesses
rank(fitnesses, ranking_method, *, higher_is_better)
¶
Get the ranks of the given sequence of numbers.
Better solutions will have numerically higher ranks.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
fitnesses | Iterable[float] | A sequence of numbers to be ranked. | required |
ranking_method | str | The ranking method to be used. Can be "centered", which means 0-centered linear ranking from -0.5 to 0.5. Can be "linear", which means a linear ranking from 0 to 1. Can be "nes", which means the ranking method used by Natural Evolution Strategies. Can be "normalized", which means that the ranks will be the normalized counterparts of the fitnesses. Can be "raw", which means that the fitnesses themselves (or, if `higher_is_better` is False, their inverted counterparts, inversion meaning the operation of multiplying by -1 in this context) will be the ranks. | required |
higher_is_better | bool | Whether or not the higher values will be assigned higher ranks. Changing this to False means that lower values are interpreted as better, and therefore lower values will have higher ranks. | required |
Source code in evotorch/tools/ranking.py
def rank(fitnesses: Iterable[float], ranking_method: str, *, higher_is_better: bool):
"""
Get the ranks of the given sequence of numbers.
Better solutions will have numerically higher ranks.
Args:
fitnesses: A sequence of numbers to be ranked.
ranking_method: The ranking method to be used.
Can be "centered", which means 0-centered linear ranking
from -0.5 to 0.5.
Can be "linear", which means a linear ranking from 0 to 1.
Can be "nes", which means the ranking method used by
Natural Evolution Strategies.
Can be "normalized", which means that the ranks will be
the normalized counterparts of the fitnesses.
Can be "raw", which means that the fitnesses themselves
(or, if `higher_is_better` is False, their inverted
counterparts, inversion meaning the operation of
multiplying by -1 in this context) will be the ranks.
higher_is_better: Whether or not the higher values will be assigned
higher ranks. Changing this to False means that lower values
are interpreted as better, and therefore lower values will have
higher ranks.
"""
fitnesses = torch.as_tensor(fitnesses)
rank_func = rankers[ranking_method]
return rank_func(fitnesses, higher_is_better=higher_is_better)
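For instance, dispatching to the 0-centered ranking for a minimization setting (a sketch; `rank` is in `evotorch.tools.ranking`):
```python
from evotorch.tools.ranking import rank

fitnesses = [3.0, 1.0, 2.0]

# Lower fitness is better here, so 1.0 receives the highest rank:
ranks = rank(fitnesses, "centered", higher_is_better=False)
print(ranks)
# tensor([-0.5000,  0.5000,  0.0000])
```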
raw(fitnesses, *, higher_is_better=True)
¶
Return the fitnesses themselves as ranks.
If `higher_is_better` is given as False, then the fitnesses will first
be multiplied by -1 and then the result will be returned as ranks.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
fitnesses | Tensor | A PyTorch tensor which contains real numbers which we want to rank. | required |
higher_is_better | bool | Whether or not the higher values will be assigned higher ranks. Changing this to False means that lower values are interpreted as better, and therefore lower values will have higher ranks. | True |
Returns:
Type | Description |
---|---|
Tensor | The ranks, in the same device, with the same dtype as the original tensor. |
Source code in evotorch/tools/ranking.py
def raw(fitnesses: torch.Tensor, *, higher_is_better: bool = True) -> torch.Tensor:
"""
Return the fitnesses themselves as ranks.
If `higher_is_better` is given as False, then the fitnesses will first
be multiplied by -1 and then the result will be returned as ranks.
Args:
fitnesses: A PyTorch tensor which contains real numbers which we want
to rank.
higher_is_better: Whether or not the higher values will be assigned
higher ranks. Changing this to False means that lower values
are interpreted as better, and therefore lower values will have
higher ranks.
Returns:
The ranks, in the same device, with the same dtype with the original
tensor.
"""
if not higher_is_better:
fitnesses = -fitnesses
return fitnesses
readonlytensor
¶
ReadOnlyTensor (Tensor)
¶
A special type of tensor which is read-only.
This is a subclass of `torch.Tensor` which explicitly disallows
operations that would cause in-place modifications.
Since ReadOnlyTensor is a subclass of `torch.Tensor`, most
non-destructive PyTorch operations on this tensor are supported.
Cloning a ReadOnlyTensor using the `clone()` method or Python's
`deepcopy(...)` function results in a regular PyTorch tensor.
Reshaping or slicing operations might return a ReadOnlyTensor if the
result ends up being a view of the original ReadOnlyTensor; otherwise,
the returned tensor is a regular `torch.Tensor`.
Source code in evotorch/tools/readonlytensor.py
class ReadOnlyTensor(torch.Tensor):
"""
A special type of tensor which is read-only.
This is a subclass of `torch.Tensor` which explicitly disallows
operations that would cause in-place modifications.
Since ReadOnlyTensor is a subclass of `torch.Tensor`, most
non-destructive PyTorch operations on this tensor are supported.
Cloning a ReadOnlyTensor using the `clone()` method or Python's
`deepcopy(...)` function results in a regular PyTorch tensor.
Reshaping or slicing operations might return a ReadOnlyTensor if the
result ends up being a view of the original ReadOnlyTensor; otherwise,
the returned tensor is a regular `torch.Tensor`.
"""
def __getattribute__(self, attribute_name: str) -> Any:
if (
isinstance(attribute_name, str)
and attribute_name.endswith("_")
and (not ((attribute_name.startswith("__")) and (attribute_name.endswith("__"))))
):
raise AttributeError(
f"A ReadOnlyTensor explicitly disables all members whose names end with '_'."
f" Cannot access member {repr(attribute_name)}."
)
else:
return super().__getattribute__(attribute_name)
def __cannot_modify(self, *ignore, **ignore_too):
raise TypeError("The contents of a ReadOnlyTensor cannot be modified")
__setitem__ = __cannot_modify
__iadd__ = __cannot_modify
__iand__ = __cannot_modify
__idiv__ = __cannot_modify
__ifloordiv__ = __cannot_modify
__ilshift__ = __cannot_modify
__imatmul__ = __cannot_modify
__imod__ = __cannot_modify
__imul__ = __cannot_modify
__ior__ = __cannot_modify
__ipow__ = __cannot_modify
__irshift__ = __cannot_modify
__isub__ = __cannot_modify
__itruediv__ = __cannot_modify
__ixor__ = __cannot_modify
if _torch_older_than_1_12:
# Define __str__ and __repr__ for when using PyTorch 1.11 or older.
# With PyTorch 1.12, overriding __str__ and __repr__ are not necessary.
def __to_string(self) -> str:
s = super().__repr__()
if "\n" not in s:
return f"ReadOnlyTensor({super().__repr__()})"
else:
indenter = " " * 4
s = (indenter + s.replace("\n", "\n" + indenter)).rstrip()
return f"ReadOnlyTensor(\n{s}\n)"
__str__ = __to_string
__repr__ = __to_string
def clone(self, *, preserve_read_only: bool = False) -> torch.Tensor:
result = super().clone()
if not preserve_read_only:
result = result.as_subclass(torch.Tensor)
return result
def __mutable_if_independent(self, other: torch.Tensor) -> torch.Tensor:
from .misc import storage_ptr
self_ptr = storage_ptr(self)
other_ptr = storage_ptr(other)
if self_ptr != other_ptr:
other = other.as_subclass(torch.Tensor)
return other
def __getitem__(self, index_or_slice) -> torch.Tensor:
result = super().__getitem__(index_or_slice)
return self.__mutable_if_independent(result)
def reshape(self, *args, **kwargs) -> torch.Tensor:
result = super().reshape(*args, **kwargs)
return self.__mutable_if_independent(result)
def numpy(self) -> np.ndarray:
arr: np.ndarray = torch.Tensor.numpy(self)
arr.flags["WRITEABLE"] = False
return arr
def __array__(self, *args, **kwargs) -> np.ndarray:
arr: np.ndarray = super().__array__(*args, **kwargs)
arr.flags["WRITEABLE"] = False
return arr
def __copy__(self):
return self.clone(preserve_read_only=True)
def __deepcopy__(self, memo):
return self.clone(preserve_read_only=True)
@classmethod
def __torch_function__(cls, func: Callable, types: Iterable, args: tuple = (), kwargs: Optional[Mapping] = None):
if (kwargs is not None) and ("out" in kwargs):
if isinstance(kwargs["out"], ReadOnlyTensor):
raise TypeError(
f"The `out` keyword argument passed to {func} is a ReadOnlyTensor."
f" A ReadOnlyTensor explicitly fails when referenced via the `out` keyword argument of any torch"
f" function."
f" This restriction is for making sure that the torch operations which could normally do in-place"
f" modifications do not operate on ReadOnlyTensor instances."
)
return super().__torch_function__(func, types, args, kwargs)
__torch_function__(func, types, args=(), kwargs=None)
classmethod
special
¶
This __torch_function__ implementation wraps subclasses such that methods called on subclasses return a subclass instance instead of a torch.Tensor instance.
One corollary to this is that you need coverage for torch.Tensor methods if implementing __torch_function__ for subclasses.
We recommend always calling super().__torch_function__ as the base case when doing the above.
While not mandatory, we recommend making __torch_function__ a classmethod.
Source code in evotorch/tools/readonlytensor.py
@classmethod
def __torch_function__(cls, func: Callable, types: Iterable, args: tuple = (), kwargs: Optional[Mapping] = None):
if (kwargs is not None) and ("out" in kwargs):
if isinstance(kwargs["out"], ReadOnlyTensor):
raise TypeError(
f"The `out` keyword argument passed to {func} is a ReadOnlyTensor."
f" A ReadOnlyTensor explicitly fails when referenced via the `out` keyword argument of any torch"
f" function."
f" This restriction is for making sure that the torch operations which could normally do in-place"
f" modifications do not operate on ReadOnlyTensor instances."
)
return super().__torch_function__(func, types, args, kwargs)
clone(self, *, preserve_read_only=False)
¶
clone(*, memory_format=torch.preserve_format) -> Tensor
See torch.clone
numpy(self)
¶
numpy(*, force=False) -> numpy.ndarray
Returns the tensor as a NumPy ndarray.
If force is False (the default), the conversion is performed only if the tensor is on the CPU, does not require grad, does not have its conjugate bit set, and is a dtype and layout that NumPy supports. The returned ndarray and the tensor will share their storage, so changes to the tensor will be reflected in the ndarray and vice versa.
If force is True, this is equivalent to calling t.detach().cpu().resolve_conj().resolve_neg().numpy(). If the tensor isn't on the CPU or the conjugate or negative bit is set, the tensor won't share its storage with the returned ndarray. Setting force to True can be a useful shorthand.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
force | bool | if True, the ndarray may be a copy of the tensor instead of always sharing the memory; defaults to False. | required |
reshape(self, *args, **kwargs)
¶
reshape(*shape) -> Tensor
Returns a tensor with the same data and number of elements as self but with the specified shape. This method returns a view if shape is compatible with the current shape. See torch.Tensor.view on when it is possible to return a view.
See torch.reshape
Parameters:
Name | Type | Description | Default |
---|---|---|---|
shape | tuple of ints or int... | the desired shape | required |
as_read_only_tensor(x, *, dtype=None, device=None)
¶
Convert the given object to a ReadOnlyTensor.
The provided object can be a scalar, an Iterable of numeric data, or an ObjectArray.
This function can be thought of as the read-only counterpart of PyTorch's torch.as_tensor(...) function.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Any | The object to be converted to a ReadOnlyTensor. | required |
dtype | Optional[torch.dtype] | The dtype of the new ReadOnlyTensor (e.g. torch.float32). If this argument is not specified, dtype will be inferred from x. For example, if x is a PyTorch tensor or a numpy array, its existing dtype will be kept. | None |
device | Union[str, torch.device] | The device in which the ReadOnlyTensor will be stored (e.g. "cpu"). If this argument is not specified, the device which is storing the original x will be re-used. | None |
Returns:
Type | Description |
---|---|
Iterable | The read-only counterpart of the provided object. |
Source code in evotorch/tools/readonlytensor.py
def as_read_only_tensor(
x: Any, *, dtype: Optional[torch.dtype] = None, device: Optional[Union[str, torch.device]] = None
) -> Iterable:
"""
Convert the given object to a ReadOnlyTensor.
The provided object can be a scalar, or an Iterable of numeric data,
or an ObjectArray.
    This function can be thought of as the read-only counterpart of PyTorch's
    `torch.as_tensor(...)` function.
Args:
x: The object to be converted to a ReadOnlyTensor.
dtype: The dtype of the new ReadOnlyTensor (e.g. torch.float32).
If this argument is not specified, dtype will be inferred from `x`.
For example, if `x` is a PyTorch tensor or a numpy array, its
existing dtype will be kept.
device: The device in which the ReadOnlyTensor will be stored
(e.g. "cpu").
If this argument is not specified, the device which is storing
the original `x` will be re-used.
Returns:
The read-only counterpart of the provided object.
"""
from .objectarray import ObjectArray
kwargs = _device_and_dtype_kwargs(dtype=dtype, device=device)
if isinstance(x, ObjectArray):
if len(kwargs) != 0:
raise ValueError(
f"read_only_tensor(...): when making a read-only tensor from an ObjectArray,"
f" the arguments `dtype` and `device` were not expected."
f" However, the received keyword arguments are: {kwargs}."
)
return x.get_read_only_view()
else:
return torch.as_tensor(x, **kwargs).as_subclass(ReadOnlyTensor)
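As a quick usage sketch (assuming the import path matches the module shown above; the values are hypothetical):
```python
import torch

from evotorch.tools.readonlytensor import as_read_only_tensor

base = torch.arange(5, dtype=torch.float32)
ro = as_read_only_tensor(base)  # shares storage with `base`, like torch.as_tensor

# ro[0] = 42.0                  # would raise TypeError
writable = ro.clone()           # a regular, writable torch.Tensor copy
```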
read_only_tensor(x, *, dtype=None, device=None)
¶
Make a ReadOnlyTensor from the given object.
The provided object can be a scalar, an Iterable of numeric data, or an ObjectArray.
This function can be thought of as the read-only counterpart of PyTorch's torch.tensor(...) function.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Any | The object from which the new ReadOnlyTensor will be made. | required |
dtype | Optional[torch.dtype] | The dtype of the new ReadOnlyTensor (e.g. torch.float32). | None |
device | Union[str, torch.device] | The device in which the ReadOnlyTensor will be stored (e.g. "cpu"). | None |
Returns:
Type | Description |
---|---|
Iterable | The new read-only tensor. |
Source code in evotorch/tools/readonlytensor.py
def read_only_tensor(
x: Any, *, dtype: Optional[torch.dtype] = None, device: Optional[Union[str, torch.device]] = None
) -> Iterable:
"""
Make a ReadOnlyTensor from the given object.
The provided object can be a scalar, or an Iterable of numeric data,
or an ObjectArray.
    This function can be thought of as the read-only counterpart of PyTorch's
    `torch.tensor(...)` function.
Args:
x: The object from which the new ReadOnlyTensor will be made.
dtype: The dtype of the new ReadOnlyTensor (e.g. torch.float32).
device: The device in which the ReadOnlyTensor will be stored
(e.g. "cpu").
Returns:
The new read-only tensor.
"""
from .objectarray import ObjectArray
kwargs = _device_and_dtype_kwargs(dtype=dtype, device=device)
if isinstance(x, ObjectArray):
if len(kwargs) != 0:
raise ValueError(
f"read_only_tensor(...): when making a read-only tensor from an ObjectArray,"
f" the arguments `dtype` and `device` were not expected."
f" However, the received keyword arguments are: {kwargs}."
)
return x.get_read_only_view()
else:
return torch.as_tensor(x, **kwargs).as_subclass(ReadOnlyTensor)
recursiveprintable
¶
RecursivePrintable
¶
A base class for making a class printable.
This base class considers custom container types which can recursively contain themselves (even in a cyclic manner). Classes inheriting from RecursivePrintable will gain a new ready-to-use method named to_string(...). This to_string(...) method, upon being called, checks whether the object is an Iterable or a Mapping, and prints the representation accordingly, with a recursion limit to avoid RecursionError. The methods __str__(...) and __repr__(...) are also defined as aliases of this to_string method.
Source code in evotorch/tools/recursiveprintable.py
class RecursivePrintable:
"""
A base class for making a class printable.
This base class considers custom container types which can recursively
contain themselves (even in a cyclic manner). Classes inheriting from
`RecursivePrintable` will gain a new ready-to-use method named
`to_string(...)`. This `to_string(...)` method, upon being called,
checks if the current class is an Iterable or a Mapping, and prints
the representation accordingly, with a recursion limit to avoid
`RecursionError`. The methods `__str__(...)` and `__repr__(...)`
are also defined as aliases of this `to_string` method.
"""
def to_string(self, *, max_depth: int = 10) -> str:
if max_depth <= 0:
return "<...>"
def item_repr(x: Any) -> str:
if isinstance(x, RecursivePrintable):
return x.to_string(max_depth=(max_depth - 1))
else:
return repr(x)
result = []
def puts(*x: Any):
for item_of_x in x:
result.append(str(item_of_x))
clsname = type(self).__name__
first_one = True
if isinstance(self, Mapping):
puts(clsname, "({")
for k, v in self.items():
if first_one:
first_one = False
else:
puts(", ")
puts(item_repr(k), ": ", item_repr(v))
puts("})")
elif isinstance(self, Iterable):
puts(clsname, "([")
for v in self:
if first_one:
first_one = False
else:
puts(", ")
puts(item_repr(v))
puts("])")
else:
raise NotImplementedError
return "".join(result)
def __str__(self) -> str:
return self.to_string()
def __repr__(self) -> str:
return self.to_string()
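As a usage sketch, here is a hypothetical Mapping subclass that inherits RecursivePrintable and thereby gains a readable representation for free (the class name and data are illustrative only):
```python
from collections.abc import Mapping

from evotorch.tools.recursiveprintable import RecursivePrintable


class PrintableDict(RecursivePrintable, Mapping):
    # A hypothetical dict-like container; only the Mapping protocol is needed.
    def __init__(self, data: dict):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)


print(PrintableDict({"a": 1, "b": 2}))  # -> PrintableDict({'a': 1, 'b': 2})
```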
structures
¶
This namespace contains data structures whose underlying storages are contiguous and therefore vectorization-friendly.
CBag (Structure)
¶
An integer bag from which one can do sampling without replacement.
Let us imagine that we wish to create a bag whose maximum length (i.e. whose maximum number of contained elements) is 5. For this, we can do:
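```python
bag = CBag(max_length=5)
```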
which gives us an empty bag (i.e. a bag in which all pre-allocated slots are empty):
_________________________________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
Given that the maximum length for this bag is 5, the default set of acceptable values for this bag is 0, 1, 2, 3, 4. Let us put three values into our bag:
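```python
bag.push_(torch.tensor(1))
bag.push_(torch.tensor(3))
bag.push_(torch.tensor(4))
```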
After these push operations, our bag can be visualized like this:
_________________________________________________
| | | | | |
| 1 | 3 | 4 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
Let us now sample an element from this bag:
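```python
sampled1 = bag.pop_()
```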
Because this is the first time we are sampling from this bag, the elements will be first shuffled. Let us assume that the shuffling resulted in:
_________________________________________________
| | | | | |
| 3 | 1 | 4 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
Given this shuffled state, our call to pop_(...) will pop the leftmost element (3 in this case). Therefore, the value of sampled1 will be 3 (as a scalar PyTorch tensor), and the state of the bag after the pop operation will be:
_________________________________________________
| | | | | |
| 1 | 4 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
Let us keep sampling until the bag is empty:
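```python
sampled2 = bag.pop_()
sampled3 = bag.pop_()
```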
The value of sampled2 becomes 1, and the value of sampled3 becomes 4.
This class can also represent a contiguous batch of bags. As an example, let us create 4 bags, each of length 5:
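```python
bag_batch = CBag(batch_size=4, max_length=5)
```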
After this instantiation, bag_batch can be visualized like this:
__[ batch item 0 ]_______________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
We can add values to our batch like this:
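```python
bag_batch.push_(torch.tensor([3, 2, 3, 1]))
bag_batch.push_(torch.tensor([3, 1, 1, 4]))
```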
which would result in:
__[ batch item 0 ]_______________________________
| | | | | |
| 3 | 3 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| 2 | 1 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| 3 | 1 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| 1 | 4 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
We can also add values only to some of the bags within the batch:
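```python
bag_batch.push_(
    torch.tensor([0, 2, 1, 0]),
    where=torch.tensor([True, True, False, False]),
)
```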
which would result in:
__[ batch item 0 ]_______________________________
| | | | | |
| 3 | 3 | 0 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| 2 | 1 | 2 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| 3 | 1 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| 1 | 4 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
Notice that the batch items 2 and 3 were not affected, because their corresponding values in the where tensor were given as False.
Let us now assume that we wish to obtain a sample from each bag. We can do:
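```python
sample_batch1 = bag_batch.pop_()
```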
Since this is the first sampling operation on this bag batch, each bag will first be shuffled. Let us assume that the shuffling resulted in:
__[ batch item 0 ]_______________________________
| | | | | |
| 0 | 3 | 3 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| 1 | 2 | 2 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| 3 | 1 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| 4 | 1 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
Given this shuffled state, the pop operation takes the leftmost element from each bag. Therefore, the value of sample_batch1 becomes a 1-dimensional tensor containing [0, 1, 3, 4]. Once the pop operation is completed, the state of the batch of bags becomes:
__[ batch item 0 ]_______________________________
| | | | | |
| 3 | 3 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| 2 | 2 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| 1 | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| 1 | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
Now, if we wish to pop only from some of the bags, we can do:
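```python
sample_batch2 = bag_batch.pop_(
    where=torch.tensor([True, False, True, False]),
)
```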
which makes the value of sample_batch2 a 1-dimensional tensor containing [3, 2, 1, 1] (the leftmost element for each bag). The state of our batch of bags will become:
__[ batch item 0 ]_______________________________
| | | | | |
| 3 | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| 2 | 2 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| 1 | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
Notice that the batch items 1 and 3 were not modified, because their corresponding values in the where argument were given as False.
Source code in evotorch/tools/structures.py
class CBag(Structure):
"""
An integer bag from which one can do sampling without replacement.
Let us imagine that we wish to create a bag whose maximum length (i.e.
whose maximum number of contained elements) is 5. For this, we can do:
```python
bag = CBag(max_length=5)
```
which gives us an empty bag (i.e. a bag in which all pre-allocated slots
are empty):
```
_________________________________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
```
Given that the maximum length for this bag is 5, the default set of
acceptable values for this bag is 0, 1, 2, 3, 4. Let us put three values
into our bag:
```
bag.push_(torch.tensor(1))
bag.push_(torch.tensor(3))
bag.push_(torch.tensor(4))
```
After these push operations, our bag can be visualized like this:
```
_________________________________________________
| | | | | |
| 1 | 3 | 4 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
```
Let us now sample an element from this bag:
```python
sampled1 = bag.pop_()
```
Because this is the first time we are sampling from this bag, the elements
will be first shuffled. Let us assume that the shuffling resulted in:
```
_________________________________________________
| | | | | |
| 3 | 1 | 4 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
```
    Given this shuffled state, our call to `pop_(...)` will pop the leftmost
element (3 in this case). Therefore, the value of `sampled1` will be 3
(as a scalar PyTorch tensor), and the state of the bag after the pop
operation will be:
```
_________________________________________________
| | | | | |
| 1 | 4 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
```
Let us keep sampling until the bag is empty:
```python
sampled2 = bag.pop_()
sampled3 = bag.pop_()
```
The value of `sampled2` becomes 1, and the value of `sampled3` becomes 4.
This class can also represent a contiguous batch of bags. As an example,
let us create 4 bags, each of length 5:
```python
bag_batch = CBag(batch_size=4, max_length=5)
```
After this instantiation, `bag_batch` can be visualized like this:
```
__[ batch item 0 ]_______________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
```
We can add values to our batch like this:
```python
bag_batch.push_(torch.tensor([3, 2, 3, 1]))
bag_batch.push_(torch.tensor([3, 1, 1, 4]))
```
which would result in:
```
__[ batch item 0 ]_______________________________
| | | | | |
| 3 | 3 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| 2 | 1 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| 3 | 1 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| 1 | 4 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
```
We can also add values only to some of the bags within the batch:
    ```python
    bag_batch.push_(
        torch.tensor([0, 2, 1, 0]),
        where=torch.tensor([True, True, False, False]),
    )
    ```
which would result in:
```
__[ batch item 0 ]_______________________________
| | | | | |
| 3 | 3 | 0 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| 2 | 1 | 2 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| 3 | 1 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| 1 | 4 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
```
Notice that the batch items 2 and 3 were not affected, because their
corresponding values in the `where` tensor were given as False.
Let us now assume that we wish to obtain a sample from each bag. We can do:
```python
sample_batch1 = bag_batch.pop_()
```
Since this is the first sampling operation on this bag batch, each bag
will first be shuffled. Let us assume that the shuffling resulted in:
```
__[ batch item 0 ]_______________________________
| | | | | |
| 0 | 3 | 3 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| 1 | 2 | 2 | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| 3 | 1 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| 4 | 1 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
```
Given this shuffled state, the pop operation takes the leftmost element
from each bag. Therefore, the value of `sample_batch1` becomes a
1-dimensional tensor containing `[0, 1, 3, 4]`. Once the pop operation
is completed, the state of the batch of bags becomes:
```
__[ batch item 0 ]_______________________________
| | | | | |
| 3 | 3 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| 2 | 2 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| 1 | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| 1 | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
```
Now, if we wish to pop only from some of the bags, we can do:
```python
sample_batch2 = bag_batch.pop_(
where=torch.tensor([True, False, True, False]),
)
```
which makes the value of `sample_batch2` a 1-dimensional tensor containing
`[3, 2, 1, 1]` (the leftmost element for each bag). The state of our batch
of bags will become:
```
__[ batch item 0 ]_______________________________
| | | | | |
| 3 | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 1 ]_______________________________
| | | | | |
| 2 | 2 | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 2 ]_______________________________
| | | | | |
| <empty> | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
__[ batch item 3 ]_______________________________
| | | | | |
| 1 | <empty> | <empty> | <empty> | <empty> |
|_________|_________|_________|_________|_________|
```
Notice that the batch items 1 and 3 were not modified, because their
corresponding values in the `where` argument were given as False.
"""
def __init__(
self,
*,
max_length: int,
value_range: Optional[tuple] = None,
batch_size: Optional[Union[int, tuple, list]] = None,
batch_shape: Optional[Union[int, tuple, list]] = None,
generator: Any = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
verify: bool = True,
):
"""
Initialize the CBag.
Args:
max_length: Maximum length (i.e. maximum capacity for storing
elements).
value_range: Optionally expected as a tuple of integers in the
form `(a, b)` where `a` is the lower bound and `b` is the
exclusive upper bound for the range of acceptable integer
values. If this argument is omitted, the range will be
`(0, n)` where `n` is `max_length`.
batch_size: Optionally an integer or a size tuple, for when
one wishes to create not just a single bag, but a batch
of bags.
batch_shape: Alias for the argument `batch_size`.
generator: Optionally an instance of `torch.Generator` or any
object with an attribute (or a property) named `generator`
(in which case it will be expected that this attribute will
provide the actual `torch.Generator` instance). If this
argument is provided, then the shuffling operation will use
this generator. Otherwise, the global generator of PyTorch
will be used.
dtype: dtype for the values contained by the bag(s).
By default, the dtype is `torch.int64`.
device: The device on which the bag(s) will be stored.
By default, the device is `torch.device("cpu")`.
verify: Whether or not to do explicit checks for the correctness
of the operations (against popping from an empty bag or
pushing into a full bag). By default, this is True.
If you are sure that such errors will not occur, you might
turn this to False for getting a performance gain.
"""
if dtype is None:
dtype = torch.int64
else:
dtype = to_torch_dtype(dtype)
if dtype not in (torch.int16, torch.int32, torch.int64):
raise RuntimeError(
f"CBag currently supports only torch.int16, torch.int32, and torch.int64."
f" This dtype is not supported: {repr(dtype)}."
)
self._gen_kwargs = {}
if generator is not None:
if isinstance(generator, torch.Generator):
self._gen_kwargs["generator"] = generator
else:
generator = generator.generator
if generator is not None:
self._gen_kwargs["generator"] = generator
max_length = int(max_length)
self._data = CList(
max_length=max_length,
batch_size=batch_size,
batch_shape=batch_shape,
dtype=dtype,
device=device,
verify=verify,
)
if value_range is None:
a = 0
b = max_length
else:
a, b = value_range
self._low_item = int(a)
self._high_item = int(b) # upper bound is exclusive
self._choice_count = self._high_item - self._low_item
self._bignum = self._choice_count + 1
if self._low_item < 1:
self._shift = 1 - self._low_item
else:
self._shift = 0
self._empty = self._low_item - 1
self._data.data[:] = self._empty
self._sampling_phase: bool = False
def push_(self, value: Numbers, where: Optional[Numbers] = None):
"""
Push new value(s) into the bag(s).
Args:
value: The value(s) to be pushed into the bag(s).
where: Optionally a boolean tensor. If this is given, then only
the bags with their corresponding boolean flags set as True
will be affected.
"""
if self._sampling_phase:
raise RuntimeError("Cannot put a new element into the CBag after calling `sample_(...)`")
self._data.push_(value, where)
def _shuffle(self):
dtype = self._data.dtype
device = self._data.device
nrows, ncols = self._data.data.shape
try:
gaussian_noise = torch.randn(nrows, ncols, dtype=torch.float32, device=device, **(self._gen_kwargs))
noise = gaussian_noise.argsort().to(dtype=dtype) * self._bignum
self._data.data[:] += torch.where(
self._data.data != self._empty, self._shift + noise, torch.tensor(0, dtype=dtype, device=device)
)
self._data.data[:] = self._data.data.sort(dim=-1, descending=True, stable=False).values
finally:
self._data.data[:] %= self._bignum
self._data.data[:] -= self._shift
def pop_(self, where: Optional[Numbers] = None) -> torch.Tensor:
"""
Sample value(s) from the bag(s).
Upon being called for the first time, this method will cause the
contained elements to be shuffled.
Args:
where: Optionally a boolean tensor. If this is given, then only
the bags with their corresponding boolean flags set as True
will be affected.
"""
if not self._sampling_phase:
self._shuffle()
self._sampling_phase = True
return self._data.pop_(where)
def clear(self):
"""
Clear the bag(s).
"""
self._data.data[:] = self._empty
self._data.clear()
self._sampling_phase = False
@property
def length(self) -> torch.Tensor:
"""
The length(s) of the bag(s)
"""
return self._data.length
@property
def data(self) -> torch.Tensor:
"""
The underlying data tensor
"""
return self._data.data
data: Tensor
property
readonly
¶
The underlying data tensor
length: Tensor
property
readonly
¶
The length(s) of the bag(s)
__init__(self, *, max_length, value_range=None, batch_size=None, batch_shape=None, generator=None, dtype=None, device=None, verify=True)
special
¶
Initialize the CBag.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
max_length | int | Maximum length (i.e. maximum capacity for storing elements). | required |
value_range | Optional[tuple] | Optionally expected as a tuple of integers in the form (a, b) where a is the lower bound and b is the exclusive upper bound for the range of acceptable integer values. If this argument is omitted, the range will be (0, n) where n is max_length. | None |
batch_size | Union[int, tuple, list] | Optionally an integer or a size tuple, for when one wishes to create not just a single bag, but a batch of bags. | None |
batch_shape | Union[int, tuple, list] | Alias for the argument batch_size. | None |
generator | Any | Optionally an instance of torch.Generator or any object with an attribute (or a property) named generator (in which case it will be expected that this attribute will provide the actual torch.Generator instance). If this argument is provided, then the shuffling operation will use this generator. Otherwise, the global generator of PyTorch will be used. | None |
dtype | Union[str, torch.dtype, numpy.dtype, Type] | dtype for the values contained by the bag(s). By default, the dtype is torch.int64. | None |
device | Union[str, torch.device] | The device on which the bag(s) will be stored. By default, the device is torch.device("cpu"). | None |
verify | bool | Whether or not to do explicit checks for the correctness of the operations (against popping from an empty bag or pushing into a full bag). By default, this is True. If you are sure that such errors will not occur, you might turn this to False for getting a performance gain. | True |
Source code in evotorch/tools/structures.py
def __init__(
self,
*,
max_length: int,
value_range: Optional[tuple] = None,
batch_size: Optional[Union[int, tuple, list]] = None,
batch_shape: Optional[Union[int, tuple, list]] = None,
generator: Any = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
verify: bool = True,
):
"""
Initialize the CBag.
Args:
max_length: Maximum length (i.e. maximum capacity for storing
elements).
value_range: Optionally expected as a tuple of integers in the
form `(a, b)` where `a` is the lower bound and `b` is the
exclusive upper bound for the range of acceptable integer
values. If this argument is omitted, the range will be
`(0, n)` where `n` is `max_length`.
batch_size: Optionally an integer or a size tuple, for when
one wishes to create not just a single bag, but a batch
of bags.
batch_shape: Alias for the argument `batch_size`.
generator: Optionally an instance of `torch.Generator` or any
object with an attribute (or a property) named `generator`
(in which case it will be expected that this attribute will
provide the actual `torch.Generator` instance). If this
argument is provided, then the shuffling operation will use
this generator. Otherwise, the global generator of PyTorch
will be used.
dtype: dtype for the values contained by the bag(s).
By default, the dtype is `torch.int64`.
device: The device on which the bag(s) will be stored.
By default, the device is `torch.device("cpu")`.
verify: Whether or not to do explicit checks for the correctness
of the operations (against popping from an empty bag or
pushing into a full bag). By default, this is True.
If you are sure that such errors will not occur, you might
turn this to False for getting a performance gain.
"""
if dtype is None:
dtype = torch.int64
else:
dtype = to_torch_dtype(dtype)
if dtype not in (torch.int16, torch.int32, torch.int64):
raise RuntimeError(
f"CBag currently supports only torch.int16, torch.int32, and torch.int64."
f" This dtype is not supported: {repr(dtype)}."
)
self._gen_kwargs = {}
if generator is not None:
if isinstance(generator, torch.Generator):
self._gen_kwargs["generator"] = generator
else:
generator = generator.generator
if generator is not None:
self._gen_kwargs["generator"] = generator
max_length = int(max_length)
self._data = CList(
max_length=max_length,
batch_size=batch_size,
batch_shape=batch_shape,
dtype=dtype,
device=device,
verify=verify,
)
if value_range is None:
a = 0
b = max_length
else:
a, b = value_range
self._low_item = int(a)
self._high_item = int(b) # upper bound is exclusive
self._choice_count = self._high_item - self._low_item
self._bignum = self._choice_count + 1
if self._low_item < 1:
self._shift = 1 - self._low_item
else:
self._shift = 0
self._empty = self._low_item - 1
self._data.data[:] = self._empty
self._sampling_phase: bool = False
clear(self)
¶
Clear the bag(s).
pop_(self, where=None)
¶
Sample value(s) from the bag(s).
Upon being called for the first time, this method will cause the contained elements to be shuffled.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
where | Union[numbers.Number, Iterable[numbers.Number]] | Optionally a boolean tensor. If this is given, then only the bags with their corresponding boolean flags set as True will be affected. | None |
Source code in evotorch/tools/structures.py
def pop_(self, where: Optional[Numbers] = None) -> torch.Tensor:
"""
Sample value(s) from the bag(s).
Upon being called for the first time, this method will cause the
contained elements to be shuffled.
Args:
where: Optionally a boolean tensor. If this is given, then only
the bags with their corresponding boolean flags set as True
will be affected.
"""
if not self._sampling_phase:
self._shuffle()
self._sampling_phase = True
return self._data.pop_(where)
push_(self, value, where=None)
¶
Push new value(s) into the bag(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
value | Union[numbers.Number, Iterable[numbers.Number]] | The value(s) to be pushed into the bag(s). | required |
where | Union[numbers.Number, Iterable[numbers.Number]] | Optionally a boolean tensor. If this is given, then only the bags with their corresponding boolean flags set as True will be affected. | None |
Source code in evotorch/tools/structures.py
def push_(self, value: Numbers, where: Optional[Numbers] = None):
"""
Push new value(s) into the bag(s).
Args:
value: The value(s) to be pushed into the bag(s).
where: Optionally a boolean tensor. If this is given, then only
the bags with their corresponding boolean flags set as True
will be affected.
"""
if self._sampling_phase:
raise RuntimeError("Cannot put a new element into the CBag after calling `sample_(...)`")
self._data.push_(value, where)
CDict (Structure)
¶
Representation of a batchable dictionary.
This structure is very similar to a CMemory, but with the additional behavior of separately keeping track of which keys exist and which keys do not exist.
Let us consider an example where we have 5 keys, and each key is associated with a tensor of length 7. Such a dictionary could be allocated like this:
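```python
dictnry = CDict(7, num_keys=5)
```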
Our allocated dictionary can be visualized as follows:
_______________________________________
| key 0 -> ( missing ) |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
Let us now sample a Gaussian noise and put it into the 0-th slot:
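```python
dictnry[0] = torch.randn(7)  # or: dictnry[torch.tensor(0)] = torch.randn(7)
```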
which results in:
_________________________________________
| key 0 -> [ Gaussian noise of length 7 ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_________________________________________|
Let us now consider another example where we deal with not a single dictionary but with a dictionary batch. For the sake of this example, let us say that our desired batch size is 3. The allocation of such a batch would be as follows:
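```python
dict_batch = CDict(7, num_keys=5, batch_size=3)
```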
Our dictionary batch can be visualized like this:
__[ batch item 0 ]_____________________
| key 0 -> ( missing ) |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> ( missing ) |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> ( missing ) |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
If we wish to set the 0-th element of each batch item, we could do:
```python
dict_batch[0] = torch.tensor(
    [
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
        [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0],
    ],
)
```
and the result would be:
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
Continuing from the same example, if we wish to set the slot with key 1 in the 0th batch item, slot with key 2 in the 1st batch item, and slot with key 3 in the 2nd batch item, all in one go, we could do:
```python
# Longer version: dict_batch[torch.tensor([1, 2, 3])] = ...
dict_batch[[1, 2, 3]] = torch.tensor(
    [
        [5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0],
        [6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0],
        [7.0, 7.0, 7.0, 7.0, 7.0, 7.0, 7.0],
    ],
)
```
Our updated dictionary batch would then look like this:
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ] |
| key 1 -> [ 5. 5. 5. 5. 5. 5. 5. ] |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ] |
| key 1 -> ( missing ) |
| key 2 -> [ 6. 6. 6. 6. 6. 6. 6. ] |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> [ 7. 7. 7. 7. 7. 7. 7. ] |
| key 4 -> ( missing ) |
|_______________________________________|
Conditional modifications via boolean masks are also supported.
For example, the following update on our dict_batch:
```python
dict_batch.set_(
    [4, 3, 1],
    torch.tensor(
        [
            [8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0],
            [9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0],
            [10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0],
        ]
    ),
    where=[True, True, False],  # or: where=torch.tensor([True,True,False]),
)
```
would result in:
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ] |
| key 1 -> [ 5. 5. 5. 5. 5. 5. 5. ] |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> [ 8. 8. 8. 8. 8. 8. 8. ] |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ] |
| key 1 -> ( missing ) |
| key 2 -> [ 6. 6. 6. 6. 6. 6. 6. ] |
| key 3 -> [ 9. 9. 9. 9. 9. 9. 9. ] |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> [ 7. 7. 7. 7. 7. 7. 7. ] |
| key 4 -> ( missing ) |
|_______________________________________|
Please notice above that the slot with key 1 of the batch item 2 was not modified because its corresponding mask value was given as False.
After all these modifications, querying whether or not an element with key 0 exists would give us the following output:
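```text
>>> dict_batch.contains(0)
torch.tensor([True, True, True], dtype=torch.bool)
```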
which means that, for each dictionary within the batch, an element with key 0 exists. The same query for the key 3 would give us:
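```text
>>> dict_batch.contains(3)
torch.tensor([False, True, True], dtype=torch.bool)
```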
which means that the 0-th dictionary within the batch does not have an element with key 3, but the dictionaries 1 and 2 do have their elements with that key.
Source code in evotorch/tools/structures.py
class CDict(Structure):
"""
Representation of a batchable dictionary.
This structure is very similar to a `CMemory`, but with the additional
behavior of separately keeping track of which keys exist and which keys
do not exist.
Let us consider an example where we have 5 keys, and each key is associated
with a tensor of length 7. Such a dictionary could be allocated like this:
```python
dictnry = CDict(7, num_keys=5)
```
Our allocated dictionary can be visualized as follows:
```text
_______________________________________
| key 0 -> ( missing ) |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
```
Let us now sample a Gaussian noise and put it into the 0-th slot:
```python
dictnry[0] = torch.randn(7) # or: dictnry[torch.tensor(0)] = torch.randn(7)
```
which results in:
```text
_________________________________________
| key 0 -> [ Gaussian noise of length 7 ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_________________________________________|
```
Let us now consider another example where we deal with not a single
dictionary but with a dictionary batch. For the sake of this example, let
us say that our desired batch size is 3. The allocation of such a batch
would be as follows:
```python
dict_batch = CDict(7, num_keys=5, batch_size=3)
```
Our dictionary batch can be visualized like this:
```text
__[ batch item 0 ]_____________________
| key 0 -> ( missing ) |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> ( missing ) |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> ( missing ) |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
```
If we wish to set the 0-th element of each batch item, we could do:
```python
dict_batch[0] = torch.tensor(
[
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0],
],
)
```
and the result would be:
```text
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
```
Continuing from the same example, if we wish to set the slot with key 1
in the 0th batch item, slot with key 2 in the 1st batch item, and
slot with key 3 in the 2nd batch item, all in one go, we could do:
```python
# Longer version: dict_batch[torch.tensor([1, 2, 3])] = ...
dict_batch[[1, 2, 3]] = torch.tensor(
[
[5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0],
[6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0],
[7.0, 7.0, 7.0, 7.0, 7.0, 7.0, 7.0],
],
)
```
Our updated dictionary batch would then look like this:
```text
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ] |
| key 1 -> [ 5. 5. 5. 5. 5. 5. 5. ] |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ] |
| key 1 -> ( missing ) |
| key 2 -> [ 6. 6. 6. 6. 6. 6. 6. ] |
| key 3 -> ( missing ) |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> [ 7. 7. 7. 7. 7. 7. 7. ] |
| key 4 -> ( missing ) |
|_______________________________________|
```
    Conditional modifications via boolean masks are also supported.
For example, the following update on our `dict_batch`:
```python
dict_batch.set_(
[4, 3, 1],
torch.tensor(
[
[8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0],
[9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0],
[10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0],
]
),
where=[True, True, False], # or: where=torch.tensor([True,True,False]),
)
```
would result in:
```text
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ] |
| key 1 -> [ 5. 5. 5. 5. 5. 5. 5. ] |
| key 2 -> ( missing ) |
| key 3 -> ( missing ) |
| key 4 -> [ 8. 8. 8. 8. 8. 8. 8. ] |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ] |
| key 1 -> ( missing ) |
| key 2 -> [ 6. 6. 6. 6. 6. 6. 6. ] |
| key 3 -> [ 9. 9. 9. 9. 9. 9. 9. ] |
| key 4 -> ( missing ) |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ] |
| key 1 -> ( missing ) |
| key 2 -> ( missing ) |
| key 3 -> [ 7. 7. 7. 7. 7. 7. 7. ] |
| key 4 -> ( missing ) |
|_______________________________________|
```
Please notice above that the slot with key 1 of the batch item 2 was not
modified because its corresponding mask value was given as False.
    After all these modifications, querying whether or not an element with
    key 0 exists would give us the following output:
```text
>>> dict_batch.contains(0)
torch.tensor([True, True, True], dtype=torch.bool)
```
which means that, for each dictionary within the batch, an element with
key 0 exists. The same query for the key 3 would give us:
```text
>>> dict_batch.contains(3)
torch.tensor([False, True, True], dtype=torch.bool)
```
which means that the 0-th dictionary within the batch does not have an
element with key 3, but the dictionaries 1 and 2 do have their elements
with that key.
"""
def __init__(
self,
*size: Union[int, tuple, list],
num_keys: Union[int, tuple, list],
key_offset: Optional[Union[int, tuple, list]] = None,
batch_size: Optional[Union[int, tuple, list]] = None,
batch_shape: Optional[Union[int, tuple, list]] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
verify: bool = True,
):
"""
`__init__(...)`: Initialize the CDict.
Args:
size: Size of a tensor associated with a key, expected as an
integer, or as multiple positional arguments (each positional
argument being an integer), or as a tuple of integers.
num_keys: How many keys (and therefore how many slots) can the
dictionary have. If given as an integer `n`, then there will be
`n` slots available in the dictionary, and to access a slot one
will need to use an integer key `k` (where, by default, the
minimum acceptable `k` is 0 and the maximum acceptable `k` is
`n-1`). If given as a tuple of integers, then the number of slots
available in the dictionary will be computed as the product of
all the integers in the tuple, and a key will be expected as a
tuple. For example, when `num_keys` is `(3, 5)`, there will be
15 slots available in the dictionary (where, by default, the
minimum acceptable key will be `(0, 0)` and the maximum
                acceptable key will be `(2, 4)`).
key_offset: Optionally can be used to shift the integer values of
the keys. For example, if `num_keys` is 10, then, by default,
the minimum key is 0 and the maximum key is 9. But together
with `num_keys=10`, if `key_offset` is given as 1, then the
minimum key will be 1 and the maximum key will be 10.
This argument can also be used together with a tuple-valued
`num_keys`. For example, with `num_keys` set as `(3, 5)`,
if `key_offset` is given as 1, then the minimum key value
will be `(1, 1)` (instead of `(0, 0)`) and the maximum key
value will be `(3, 5)` (instead of `(2, 4)`).
Also, with a tuple-valued `num_keys`, `key_offset` can be
given as a tuple, to shift the key values differently for each
item in the tuple.
batch_size: If given as None, then this dictionary will not be
batched. If given as an integer `n`, then this object will
represent a contiguous batch containing `n` dictionary blocks.
If given as a tuple `(size0, size1, ...)`, then this object
                will represent a contiguous batch of dictionaries, the shape of this
batch being determined by the given tuple.
batch_shape: Alias for the argument `batch_size`.
dtype: The `dtype` of the values stored by this CDict.
device: The device on which the dictionary will be allocated.
verify: If True, then explicit checks will be done to verify
that there are no indexing errors. Can be set as False for
performance.
"""
self._data = CMemory(
*size,
num_keys=num_keys,
key_offset=key_offset,
batch_size=batch_size,
batch_shape=batch_shape,
dtype=dtype,
device=device,
verify=verify,
)
self._exist = CMemory(
num_keys=num_keys,
key_offset=key_offset,
batch_size=batch_size,
batch_shape=batch_shape,
dtype=torch.bool,
device=device,
verify=verify,
)
def get(self, key: Numbers, default: Optional[Numbers] = None) -> torch.Tensor:
"""
Get the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
default: Optionally can be specified as the fallback value for when
the element(s) with the given key(s) do not exist.
Returns:
The value(s) associated with the given key(s).
"""
if default is None:
return self._data[key]
else:
exist = self._exist[key]
default = self._get_value(default)
return do_where(exist, self._data[key], default)
def set_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Set the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The new value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self._data.set_(key, value, where)
self._exist.set_(key, True, where)
def add_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Add value(s) onto the existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the
keys. In other words, if element(s) with `key` do not exist, then
they will still be flagged as non-existent after this operation.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be added onto the existing value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self._data.add_(key, value, where)
def subtract_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Subtract value(s) from existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the
keys. In other words, if element(s) with `key` do not exist, then
they will still be flagged as non-existent after this operation.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be subtracted from existing value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self._data.subtract_(key, value, where)
def divide_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Divide the existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the
keys. In other words, if element(s) with `key` do not exist, then
they will still be flagged as non-existent after this operation.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be used as divisor(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self._data.divide_(key, value, where)
def multiply_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Multiply the existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the
keys. In other words, if element(s) with `key` do not exist, then
they will still be flagged as non-existent after this operation.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be used as the multiplier(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self._data.multiply_(key, value, where)
def contains(self, key: Numbers) -> torch.Tensor:
"""
Query whether or not the element(s) with the given key(s) exist.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
Returns:
A boolean tensor indicating whether or not the element(s) with the
specified key(s) exist.
"""
return self._exist[key]
def __getitem__(self, key: Numbers) -> torch.Tensor:
"""
Get the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
Returns:
The value(s) associated with the given key(s).
"""
return self.get(key)
def __setitem__(self, key: Numbers, value: Numbers):
"""
Set the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The new value(s).
"""
self.set_(key, value)
def clear(self, where: Optional[torch.Tensor] = None):
"""
Clear the dictionaries.
In the context of this data structure, to "clear" means to set the
status for each key to non-existent.
Args:
where: Optionally a boolean tensor, specifying which dictionaries
within the batch should be cleared. If this argument is omitted
(i.e. left as None), then all dictionaries will be cleared.
"""
if where is None:
self._exist.data[:] = False
else:
where = self._get_where(where)
all_false = torch.tensor(False, dtype=torch.bool, device=self._exist.device).expand(self._exist.shape)
self._exist.data[:] = do_where(where, all_false, self._exist.data[:])
@property
def data(self) -> torch.Tensor:
"""
The entire value tensor
"""
return self._data.data
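As a quick usage sketch (a minimal example, assuming CDict is importable from evotorch.tools.structures), the get(...) method allows a fallback value for missing keys:
```python
import torch

from evotorch.tools.structures import CDict

dictnry = CDict(7, num_keys=5)
dictnry[0] = torch.ones(7)

present = dictnry.get(0)                           # -> tensor of ones
fallback = dictnry.get(3, default=torch.zeros(7))  # key 3 is missing -> zeros
```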
data: Tensor
property
readonly
¶
The entire value tensor
__getitem__(self, key)
special
¶
Get the value(s) associated with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | Union[numbers.Number, Iterable[numbers.Number]] | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the batch_shape). | required |
Returns:
Type | Description |
---|---|
Tensor | The value(s) associated with the given key(s). |
Source code in evotorch/tools/structures.py
def __getitem__(self, key: Numbers) -> torch.Tensor:
"""
Get the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
Returns:
The value(s) associated with the given key(s).
"""
return self.get(key)
__init__(self, *size, num_keys, key_offset=None, batch_size=None, batch_shape=None, dtype=None, device=None, verify=True)
special
¶
__init__(...): Initialize the CDict.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
size | Union[int, tuple, list] | Size of a tensor associated with a key, expected as an integer, or as multiple positional arguments (each positional argument being an integer), or as a tuple of integers. | () |
num_keys | Union[int, tuple, list] | How many keys (and therefore how many slots) can the dictionary have. If given as an integer n, then there will be n slots available in the dictionary, and to access a slot one will need to use an integer key k (where, by default, the minimum acceptable k is 0 and the maximum acceptable k is n-1). If given as a tuple of integers, then the number of slots available in the dictionary will be computed as the product of all the integers in the tuple, and a key will be expected as a tuple. For example, when num_keys is (3, 5), there will be 15 slots available in the dictionary (where, by default, the minimum acceptable key will be (0, 0) and the maximum acceptable key will be (2, 4)). | required |
key_offset | Union[int, tuple, list] | Optionally can be used to shift the integer values of the keys. For example, if num_keys is 10, then, by default, the minimum key is 0 and the maximum key is 9. But together with num_keys=10, if key_offset is given as 1, then the minimum key will be 1 and the maximum key will be 10. This argument can also be used together with a tuple-valued num_keys. For example, with num_keys set as (3, 5), if key_offset is given as 1, then the minimum key value will be (1, 1) (instead of (0, 0)) and the maximum key value will be (3, 5) (instead of (2, 4)). Also, with a tuple-valued num_keys, key_offset can be given as a tuple, to shift the key values differently for each item in the tuple. | None |
batch_size | Union[int, tuple, list] | If given as None, then this dictionary will not be batched. If given as an integer n, then this object will represent a contiguous batch containing n dictionary blocks. If given as a tuple (size0, size1, ...), then this object will represent a contiguous batch of dictionaries, the shape of this batch being determined by the given tuple. | None |
batch_shape | Union[int, tuple, list] | Alias for the argument batch_size. | None |
dtype | Union[str, torch.dtype, numpy.dtype, Type] | The dtype of the values stored by this CDict. | None |
device | Union[str, torch.device] | The device on which the dictionary will be allocated. | None |
verify | bool | If True, then explicit checks will be done to verify that there are no indexing errors. Can be set as False for performance. | True |
Source code in evotorch/tools/structures.py
def __init__(
self,
*size: Union[int, tuple, list],
num_keys: Union[int, tuple, list],
key_offset: Optional[Union[int, tuple, list]] = None,
batch_size: Optional[Union[int, tuple, list]] = None,
batch_shape: Optional[Union[int, tuple, list]] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
verify: bool = True,
):
"""
`__init__(...)`: Initialize the CDict.
Args:
size: Size of a tensor associated with a key, expected as an
integer, or as multiple positional arguments (each positional
argument being an integer), or as a tuple of integers.
num_keys: How many keys (and therefore how many slots) can the
dictionary have. If given as an integer `n`, then there will be
`n` slots available in the dictionary, and to access a slot one
will need to use an integer key `k` (where, by default, the
minimum acceptable `k` is 0 and the maximum acceptable `k` is
`n-1`). If given as a tuple of integers, then the number of slots
available in the dictionary will be computed as the product of
all the integers in the tuple, and a key will be expected as a
tuple. For example, when `num_keys` is `(3, 5)`, there will be
15 slots available in the dictionary (where, by default, the
minimum acceptable key will be `(0, 0)` and the maximum
acceptable key will be `(2, 4)`).
key_offset: Optionally can be used to shift the integer values of
the keys. For example, if `num_keys` is 10, then, by default,
the minimum key is 0 and the maximum key is 9. But together
with `num_keys=10`, if `key_offset` is given as 1, then the
minimum key will be 1 and the maximum key will be 10.
This argument can also be used together with a tuple-valued
`num_keys`. For example, with `num_keys` set as `(3, 5)`,
if `key_offset` is given as 1, then the minimum key value
will be `(1, 1)` (instead of `(0, 0)`) and the maximum key
value will be `(3, 5)` (instead of `(2, 4)`).
Also, with a tuple-valued `num_keys`, `key_offset` can be
given as a tuple, to shift the key values differently for each
item in the tuple.
batch_size: If given as None, then this dictionary will not be
batched. If given as an integer `n`, then this object will
represent a contiguous batch containing `n` dictionary blocks.
If given as a tuple `(size0, size1, ...)`, then this object
will represent a contiguous batch of dictionary, shape of this
batch being determined by the given tuple.
batch_shape: Alias for the argument `batch_size`.
fill_with: Optionally a numeric value using which the values will
be initialized. If no initialization is needed, then this
argument can be left as None.
dtype: The `dtype` of the values stored by this CDict.
device: The device on which the dictionary will be allocated.
verify: If True, then explicit checks will be done to verify
that there are no indexing errors. Can be set as False for
performance.
"""
self._data = CMemory(
*size,
num_keys=num_keys,
key_offset=key_offset,
batch_size=batch_size,
batch_shape=batch_shape,
dtype=dtype,
device=device,
verify=verify,
)
self._exist = CMemory(
num_keys=num_keys,
key_offset=key_offset,
batch_size=batch_size,
batch_shape=batch_shape,
dtype=torch.bool,
device=device,
verify=verify,
)
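For illustration, here is a minimal usage sketch of the constructor and of the basic operations documented in this section (the variable names and sizes are ours, not part of the API):
```python
import torch
from evotorch.tools.structures import CDict

# A non-batched dictionary: 5 integer keys (0..4), each key associated
# with a tensor of length 7.
d = CDict(7, num_keys=5, dtype=torch.float32)

d[2] = torch.ones(7)  # set_ via __setitem__; key 2 is now flagged as existing
print(d.contains(2))  # tensor(True)
print(d.contains(0))  # tensor(False) -- key 0 was never set
```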
__setitem__(self, key, value)
special
¶
Set the value(s) associated with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The new value(s). | required |
Source code in evotorch/tools/structures.py
def __setitem__(self, key: Numbers, value: Numbers):
    """
    Set the value(s) associated with the given key(s).
    Args:
        key: A single key, or multiple keys (where the leftmost dimension
            of the given keys conform with the `batch_shape`).
        value: The new value(s).
    """
    self.set_(key, value)
add_(self, key, value, where=None)
¶
Add value(s) onto the existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the keys. In other words, if element(s) with `key` do not exist, then they will still be flagged as non-existent after this operation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The value(s) that will be added onto the existing value(s). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then modifications will happen only on the memory slots whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def add_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Add value(s) onto the existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the
keys. In other words, if element(s) with `key` do not exist, then
they will still be flagged as non-existent after this operation.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be added onto the existing value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self._data.add_(key, value, where)
clear(self, where=None)
¶
Clear the dictionaries.
In the context of this data structure, to "clear" means to set the status for each key to non-existent.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`where` | `Optional[torch.Tensor]` | Optionally a boolean tensor, specifying which dictionaries within the batch should be cleared. If this argument is omitted (i.e. left as None), then all dictionaries will be cleared. | None |
Source code in evotorch/tools/structures.py
def clear(self, where: Optional[torch.Tensor] = None):
"""
Clear the dictionaries.
In the context of this data structure, to "clear" means to set the
status for each key to non-existent.
Args:
where: Optionally a boolean tensor, specifying which dictionaries
within the batch should be cleared. If this argument is omitted
(i.e. left as None), then all dictionaries will be cleared.
"""
if where is None:
self._exist.data[:] = False
else:
where = self._get_where(where)
all_false = torch.tensor(False, dtype=torch.bool, device=self._exist.device).expand(self._exist.shape)
self._exist.data[:] = do_where(where, all_false, self._exist.data[:])
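As a hedged sketch of the `where` argument described above (batch size and values are ours):
```python
import torch
from evotorch.tools.structures import CDict

batch = CDict(3, num_keys=4, batch_size=2)
batch.set_(torch.tensor([0, 0]), torch.zeros(2, 3))  # set key 0 in both dicts

# Clear only the first dictionary of the batch:
batch.clear(where=torch.tensor([True, False]))
print(batch.contains(torch.tensor([0, 0])))  # tensor([False,  True])
```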
contains(self, key)
¶
Query whether or not the element(s) with the given key(s) exist.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
Returns:
Type | Description |
---|---|
`Tensor` | A boolean tensor indicating whether or not the element(s) with the specified key(s) exist. |
Source code in evotorch/tools/structures.py
def contains(self, key: Numbers) -> torch.Tensor:
"""
Query whether or not the element(s) with the given key(s) exist.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
Returns:
A boolean tensor indicating whether or not the element(s) with the
specified key(s) exist.
"""
return self._exist[key]
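To make the batched key semantics concrete, a small sketch (the shapes and names are ours):
```python
import torch
from evotorch.tools.structures import CDict

batch = CDict(2, num_keys=6, batch_size=3)
batch.set_(torch.tensor([1, 2, 3]), torch.randn(3, 2))  # one key per batch item

# Existence is reported per batch item:
print(batch.contains(torch.tensor([1, 0, 3])))  # tensor([ True, False,  True])
```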
divide_(self, key, value, where=None)
¶
Divide the existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the keys. In other words, if element(s) with `key` do not exist, then they will still be flagged as non-existent after this operation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The value(s) that will be used as divisor(s). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then modifications will happen only on the memory slots whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def divide_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Divide the existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the
keys. In other words, if element(s) with `key` do not exist, then
they will still be flagged as non-existent after this operation.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be used as divisor(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self._data.divide_(key, value, where)
get(self, key, default=None)
¶
Get the value(s) associated with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
`default` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally can be specified as the fallback value for when the element(s) with the given key(s) do not exist. | None |
Returns:
Type | Description |
---|---|
`Tensor` | The value(s) associated with the given key(s). |
Source code in evotorch/tools/structures.py
def get(self, key: Numbers, default: Optional[Numbers] = None) -> torch.Tensor:
"""
Get the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
default: Optionally can be specified as the fallback value for when
the element(s) with the given key(s) do not exist.
Returns:
The value(s) associated with the given key(s).
"""
if default is None:
return self._data[key]
else:
exist = self._exist[key]
default = self._get_value(default)
return do_where(exist, self._data[key], default)
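A short sketch of the fallback behavior (the values are ours):
```python
import torch
from evotorch.tools.structures import CDict

d = CDict(2, num_keys=3)
d[1] = torch.tensor([4.0, 5.0])

print(d.get(1, default=-1.0))  # tensor([4., 5.])
print(d.get(0, default=-1.0))  # tensor([-1., -1.]) -- key 0 does not exist
```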
multiply_(self, key, value, where=None)
¶
Multiply the existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the keys. In other words, if element(s) with `key` do not exist, then they will still be flagged as non-existent after this operation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The value(s) that will be used as the multiplier(s). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then modifications will happen only on the memory slots whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def multiply_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Multiply the existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the
keys. In other words, if element(s) with `key` do not exist, then
they will still be flagged as non-existent after this operation.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be used as the multiplier(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self._data.multiply_(key, value, where)
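The note above (in-place arithmetic leaves the existence flags untouched) can be seen in a small sketch like this (names are ours):
```python
import torch
from evotorch.tools.structures import CDict

d = CDict(3, num_keys=4)
d.multiply_(0, 2.0)   # in-place arithmetic on the slot with key 0...
print(d.contains(0))  # tensor(False) -- ...but key 0 is still non-existent
```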
set_(self, key, value, where=None)
¶
Set the value(s) associated with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The new value(s). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then modifications will happen only on the memory slots whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def set_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Set the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The new value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self._data.set_(key, value, where)
self._exist.set_(key, True, where)
subtract_(self, key, value, where=None)
¶
Subtract value(s) from existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the keys. In other words, if element(s) with `key` do not exist, then they will still be flagged as non-existent after this operation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The value(s) that will be subtracted from existing value(s). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then modifications will happen only on the memory slots whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def subtract_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Subtract value(s) from existing values of slots with the given key(s).
Note that this operation does not change the existence flags of the
keys. In other words, if element(s) with `key` do not exist, then
they will still be flagged as non-existent after this operation.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be subtracted from existing value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self._data.subtract_(key, value, where)
CList (Structure)
¶
Representation of a batchable, contiguous, variable-length list structure.
This CList structure works with a pre-allocated contiguous block of memory with a separately stored length. In the batched case, each batch item has its own length.
This structure supports negative indexing (meaning that -1 refers to the last item, -2 refers to the second last item, etc.).
Let us imagine that we need a list where each element has a shape `(3,)`, and our maximum length is 5.
Such a list could be instantiated via:
lst = CList(3, max_length=5)
In its initial state, the list is empty, which can be visualized like:
_______________________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | <unused> | <unused> | <unused> | <unused> | <unused> |
|________|__________|__________|__________|__________|__________|
We can add elements into our list like this:
lst.append_(torch.tensor([1.0, 2.0, 3.0]))
lst.append_(torch.tensor([4.0, 5.0, 6.0]))
After these two push operations, our list looks like this:
__________________________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [1. 2. 3.] | [4. 5. 6] | <unused> | <unused> | <unused> |
|________|____________|___________|__________|__________|__________|
Here, `lst[0]` returns `[1. 2. 3.]` and `lst[1]` returns `[4. 5. 6.]`.
A `CList` also supports negative indices, allowing `lst[-1]` to return `[4. 5. 6.]` (the last element) and `lst[-2]` to return `[1. 2. 3.]` (the second last element).
One can also create a batch of lists. Let us imagine that we wish to create a batch of lists such that the batch size is 4, length of an element is 3, and the maximum length is 5. Such a batch can be created as follows:
list_batch = CList(3, max_length=5, batch_size=4)
Our batch can be visualized like this:
__[ batch item 0 ]_____________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | <unused> | <unused> | <unused> | <unused> | <unused> |
|________|__________|__________|__________|__________|__________|
__[ batch item 1 ]_____________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | <unused> | <unused> | <unused> | <unused> | <unused> |
|________|__________|__________|__________|__________|__________|
__[ batch item 2 ]_____________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | <unused> | <unused> | <unused> | <unused> | <unused> |
|________|__________|__________|__________|__________|__________|
__[ batch item 3 ]_____________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | <unused> | <unused> | <unused> | <unused> | <unused> |
|________|__________|__________|__________|__________|__________|
Let us now add `[1. 1. 1.]` to the batch item 0, `[2. 2. 2.]` to the batch item 1, and so on:
list_batch.append_(
torch.tensor(
[
[1.0, 1.0, 1.0],
[2.0, 2.0, 2.0],
[3.0, 3.0, 3.0],
[4.0, 4.0, 4.0],
]
)
)
After these operations, `list_batch` looks like this:
__[ batch item 0 ]_______________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [1. 1. 1.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|__________|__________|__________|__________|
__[ batch item 1 ]_______________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [2. 2. 2.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|__________|__________|__________|__________|
__[ batch item 2 ]_______________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [3. 3. 3.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|__________|__________|__________|__________|
__[ batch item 3 ]_______________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [4. 4. 4.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|__________|__________|__________|__________|
We can also use a boolean mask to add to only some of the lists within the batch:
list_batch.append_(
torch.tensor(
[
[5.0, 5.0, 5.0],
[6.0, 6.0, 6.0],
[7.0, 7.0, 7.0],
[8.0, 8.0, 8.0],
]
),
where=torch.tensor([True, False, False, True]),
)
which would update our batch of lists like this:
__[ batch item 0 ]_________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [1. 1. 1.] | [5. 5. 5.] | <unused> | <unused> | <unused> |
|________|____________|____________|__________|__________|__________|
__[ batch item 1 ]_________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [2. 2. 2.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|____________|__________|__________|__________|
__[ batch item 2 ]_________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [3. 3. 3.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|____________|__________|__________|__________|
__[ batch item 3 ]_________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [4. 4. 4.] | [8. 8. 8.] | <unused> | <unused> | <unused> |
|________|____________|____________|__________|__________|__________|
Please notice above how the batch items 1 and 2 were not modified because their corresponding boolean values in the `where` tensor were given as `False`.
After all these modifications we would get the following results:
>>> list_batch[0]
torch.tensor(
[[1. 1. 1.],
[2. 2. 2.],
[3. 3. 3.],
[4. 4. 4.]]
)
>>> list_batch[[1, 0, 0, 1]]
torch.tensor(
[[5. 5. 5.],
[2. 2. 2.],
[3. 3. 3.],
[8. 8. 8.]]
)
>>> list_batch[[-1, -1, -1, -1]]
torch.tensor(
[[5. 5. 5.],
[2. 2. 2.],
[3. 3. 3.],
[8. 8. 8.]]
)
Note that this CList structure also supports the ability to insert to the beginning, or to remove from the beginning. These operations internally shift the addresses for the beginning of the data within the underlying memory, and therefore, they are not any more costly than adding to or removing from the end of the list.
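Since insertion and removal at the beginning are as cheap as at the end, a CList can also be used like a deque. A hedged sketch of this (variable names are ours):
```python
import torch
from evotorch.tools.structures import CList

lst = CList(3, max_length=5)
lst.append_(torch.tensor([1.0, 2.0, 3.0]))
lst.appendleft_(torch.tensor([0.0, 0.0, 0.0]))  # shifts the begin pointer; O(1)

print(lst[0])          # tensor([0., 0., 0.])
print(lst.popleft_())  # removes and returns the leftmost element
print(lst.length)      # tensor(1)
```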
Source code in evotorch/tools/structures.py
class CList(Structure):
"""
Representation of a batchable, contiguous, variable-length list structure.
This CList structure works with a pre-allocated contiguous block of memory
with a separately stored length. In the batched case, each batch item
has its own length.
This structure supports negative indexing (meaning that -1 refers to the
last item, -2 refers to the second last item, etc.).
Let us imagine that we need a list where each element has a shape `(3,)`,
and our maximum length is 5.
Such a list could be instantiated via:
```python
lst = CList(3, max_length=5)
```
In its initial state, the list is empty, which can be visualized like:
```text
_______________________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | <unused> | <unused> | <unused> | <unused> | <unused> |
|________|__________|__________|__________|__________|__________|
```
We can add elements into our list like this:
```python
lst.append_(torch.tensor([1.0, 2.0, 3.0]))
lst.append_(torch.tensor([4.0, 5.0, 6.0]))
```
After these two push operations, our list looks like this:
```text
__________________________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [1. 2. 3.] | [4. 5. 6] | <unused> | <unused> | <unused> |
|________|____________|___________|__________|__________|__________|
```
Here, `lst[0]` returns `[1. 2. 3.]` and `lst[1]` returns `[4. 5. 6.]`.
A `CList` also supports negative indices, allowing `lst[-1]` to return
`[4. 5. 6.]` (the last element) and `lst[-2]` to return `[1. 2. 3.]`
(the second last element).
One can also create a batch of lists. Let us imagine that we wish to
create a batch of lists such that the batch size is 4, length of an
element is 3, and the maximum length is 5. Such a batch can be created
as follows:
```python
list_batch = CList(3, max_length=5, batch_size=4)
```
Our batch can be visualized like this:
```text
__[ batch item 0 ]_____________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | <unused> | <unused> | <unused> | <unused> | <unused> |
|________|__________|__________|__________|__________|__________|
__[ batch item 1 ]_____________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | <unused> | <unused> | <unused> | <unused> | <unused> |
|________|__________|__________|__________|__________|__________|
__[ batch item 2 ]_____________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | <unused> | <unused> | <unused> | <unused> | <unused> |
|________|__________|__________|__________|__________|__________|
__[ batch item 3 ]_____________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | <unused> | <unused> | <unused> | <unused> | <unused> |
|________|__________|__________|__________|__________|__________|
```
Let us now add `[1. 1. 1.]` to the batch item 0, `[2. 2. 2.]` to the batch
item 1, and so on:
```python
list_batch.append_(
torch.tensor(
[
[1.0, 1.0, 1.0],
[2.0, 2.0, 2.0],
[3.0, 3.0, 3.0],
[4.0, 4.0, 4.0],
]
)
)
```
After these operations, `list_batch` looks like this:
```text
__[ batch item 0 ]_______________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [1. 1. 1.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|__________|__________|__________|__________|
__[ batch item 1 ]_______________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [2. 2. 2.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|__________|__________|__________|__________|
__[ batch item 2 ]_______________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [3. 3. 3.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|__________|__________|__________|__________|
__[ batch item 3 ]_______________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [4. 4. 4.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|__________|__________|__________|__________|
```
We can also use a boolean mask to add to only some of the lists within
the batch:
```python
list_batch.append_(
torch.tensor(
[
[5.0, 5.0, 5.0],
[6.0, 6.0, 6.0],
[7.0, 7.0, 7.0],
[8.0, 8.0, 8.0],
]
),
where=torch.tensor([True, False, False, True]),
)
```
which would update our batch of lists like this:
```text
__[ batch item 0 ]_________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [1. 1. 1.] | [5. 5. 5.] | <unused> | <unused> | <unused> |
|________|____________|____________|__________|__________|__________|
__[ batch item 1 ]_________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [2. 2. 2.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|____________|__________|__________|__________|
__[ batch item 2 ]_________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [3. 3. 3.] | <unused> | <unused> | <unused> | <unused> |
|________|____________|____________|__________|__________|__________|
__[ batch item 3 ]_________________________________________________
| index | 0 | 1 | 2 | 3 | 4 |
| values | [4. 4. 4.] | [8. 8. 8.] | <unused> | <unused> | <unused> |
|________|____________|____________|__________|__________|__________|
```
Please notice above how the batch items 1 and 2 were not modified because
their corresponding boolean values in the `where` tensor were given as
`False`.
After all these modifications we would get the following results:
```text
>>> list_batch[0]
torch.tensor(
[[1. 1. 1.],
[2. 2. 2.],
[3. 3. 3.],
[4. 4. 4.]]
)
>>> list_batch[[1, 0, 0, 1]]
torch.tensor(
[[5. 5. 5.],
[2. 2. 2.],
[3. 3. 3.],
[8. 8. 8.]]
)
>>> list_batch[[-1, -1, -1, -1]]
torch.tensor(
[[5. 5. 5.],
[2. 2. 2.],
[3. 3. 3.],
[8. 8. 8.]]
)
```
Note that this CList structure also supports the ability to insert to the
beginning, or to remove from the beginning. These operations internally
shift the addresses for the beginning of the data within the underlying
memory, and therefore, they are not any more costly than adding to or
removing from the end of the list.
"""
def __init__(
self,
*size: Union[int, list, tuple],
max_length: int,
batch_size: Optional[Union[int, tuple, list]] = None,
batch_shape: Optional[Union[int, tuple, list]] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
verify: bool = True,
):
self._verify = bool(verify)
self._max_length = int(max_length)
self._data = CMemory(
*size,
num_keys=self._max_length,
batch_size=batch_size,
batch_shape=batch_shape,
dtype=dtype,
device=device,
verify=False,
)
self._begin, self._end = [
CMemory(
num_keys=1,
batch_size=batch_size,
batch_shape=batch_shape,
dtype=torch.int64,
device=device,
verify=False,
fill_with=-1,
)
for _ in range(2)
]
if "float" in str(self._data.dtype):
self._pop_fallback = float("nan")
else:
self._pop_fallback = 0
if self._begin.batch_ndim == 0:
self._all_zeros = torch.tensor(0, dtype=torch.int64, device=self._begin.device)
else:
self._all_zeros = torch.zeros(1, dtype=torch.int64, device=self._begin.device).expand(
self._begin.batch_shape
)
def _is_empty(self) -> torch.Tensor:
# return (self._begin[self._all_zeros] == -1) & (self._end[self._all_zeros] == -1)
return self._begin[self._all_zeros] == -1
def _has_one_element(self) -> torch.Tensor:
begin = self._begin[self._all_zeros]
end = self._end[self._all_zeros]
return (begin == end) & (begin >= 0)
def _is_full(self) -> torch.Tensor:
begin = self._begin[self._all_zeros]
end = self._end[self._all_zeros]
return ((end - begin) % self._max_length) == (self._max_length - 1)
@staticmethod
def _considering_where(other_mask: torch.Tensor, where: Optional[torch.Tensor]) -> torch.Tensor:
return other_mask if where is None else other_mask & where
def _get_info_for_adding_element(self, where: Optional[torch.Tensor]) -> _InfoForAddingElement:
is_empty = self._is_empty()
is_full = self._is_full()
to_be_declared_non_empty = self._considering_where(is_empty, where)
if self._verify:
invalid_move = self._considering_where(is_full, where)
if torch.any(invalid_move):
raise IndexError("Some of the queues are full, and therefore elements cannot be added to them")
valid_move = self._considering_where((~is_empty) & (~is_full), where)
return _InfoForAddingElement(valid_move=valid_move, to_be_declared_non_empty=to_be_declared_non_empty)
def _get_info_for_removing_element(self, where: Optional[torch.Tensor]) -> _InfoForRemovingElement:
is_empty = self._is_empty()
has_one_element = self._has_one_element()
if self._verify:
invalid_move = self._considering_where(is_empty, where)
if torch.any(invalid_move):
raise IndexError(
"Some of the queues are already empty, and therefore elements cannot be removed from them"
)
to_be_declared_empty = self._considering_where(has_one_element, where)
valid_move = self._considering_where((~is_empty) & (~has_one_element), where)
return _InfoForRemovingElement(valid_move=valid_move, to_be_declared_empty=to_be_declared_empty)
def _move_begin_forward(self, where: Optional[torch.Tensor]):
valid_move, to_be_declared_empty = self._get_info_for_removing_element(where)
self._begin.set_(self._all_zeros, -1, where=to_be_declared_empty)
self._end.set_(self._all_zeros, -1, where=to_be_declared_empty)
self._begin.add_circular_(self._all_zeros, 1, self._max_length, where=valid_move)
def _move_end_forward(self, where: Optional[torch.Tensor]):
valid_move, to_be_declared_non_empty = self._get_info_for_adding_element(where)
self._begin.set_(self._all_zeros, 0, where=to_be_declared_non_empty)
self._end.set_(self._all_zeros, 0, where=to_be_declared_non_empty)
self._end.add_circular_(self._all_zeros, 1, self._max_length, where=valid_move)
def _move_begin_backward(self, where: Optional[torch.Tensor]):
valid_move, to_be_declared_non_empty = self._get_info_for_adding_element(where)
self._begin.set_(self._all_zeros, 0, where=to_be_declared_non_empty)
self._end.set_(self._all_zeros, 0, where=to_be_declared_non_empty)
self._begin.add_circular_(self._all_zeros, -1, self._max_length, where=valid_move)
def _move_end_backward(self, where: Optional[torch.Tensor]):
valid_move, to_be_declared_empty = self._get_info_for_removing_element(where)
self._begin.set_(self._all_zeros, -1, where=to_be_declared_empty)
self._end.set_(self._all_zeros, -1, where=to_be_declared_empty)
self._end.add_circular_(self._all_zeros, -1, self._max_length, where=valid_move)
def _get_key(self, key: Numbers) -> torch.Tensor:
key = torch.as_tensor(key, dtype=torch.int64, device=self._data.device)
batch_shape = self._data.batch_shape
if key.shape != batch_shape:
if key.ndim == 0:
key = key.expand(self._data.batch_shape)
else:
raise ValueError(
f"Expected the keys of shape {batch_shape}, but received them in this shape: {key.shape}"
)
return key
def _is_underlying_key_valid(self, underlying_key: torch.Tensor) -> torch.Tensor:
within_valid_range = (underlying_key >= 0) & (underlying_key < self._max_length)
begin = self._begin[self._all_zeros]
end = self._end[self._all_zeros]
empty = self._is_empty()
non_empty = ~empty
larger_end = non_empty & (end > begin)
smaller_end = non_empty & (end < begin)
same_begin_end = (begin == end) & (~empty)
valid = within_valid_range & (
(same_begin_end & (underlying_key == begin))
| (larger_end & (underlying_key >= begin) & (underlying_key <= end))
| (smaller_end & ((underlying_key <= end) | (underlying_key >= begin)))
)
return valid
def _mod_underlying_key(self, underlying_key: torch.Tensor, *, verify: Optional[bool] = None) -> torch.Tensor:
verify = self._verify if verify is None else verify
if self._verify:
where_negative = underlying_key < 0
where_too_large = underlying_key >= self._max_length
underlying_key = underlying_key.clone()
underlying_key[where_negative] += self._max_length
underlying_key[where_too_large] -= self._max_length
else:
underlying_key = underlying_key % self._max_length
return underlying_key
def _get_underlying_key(
self,
key: Numbers,
*,
verify: Optional[bool] = None,
return_validity: bool = False,
where: Optional[torch.Tensor] = None,
) -> Union[torch.Tensor, tuple]:
if where is not None:
where = self._get_where(where)
verify = self._verify if verify is None else verify
key = self._get_key(key)
underlying_key_for_pos_index = self._begin[self._all_zeros] + key
underlying_key_for_neg_index = self._end[self._all_zeros] + key + 1
underlying_key = torch.where(key >= 0, underlying_key_for_pos_index, underlying_key_for_neg_index)
underlying_key = self._mod_underlying_key(underlying_key, verify=verify)
if verify or return_validity:
valid = self._is_underlying_key_valid(underlying_key)
else:
valid = None
if verify:
okay = valid if where is None else valid | (~where)
if not torch.all(okay):
raise IndexError("Encountered invalid index/indices")
if return_validity:
return underlying_key, valid
else:
return underlying_key
def get(self, key: Numbers, default: Optional[Numbers] = None) -> torch.Tensor:
"""
Get the value(s) from the specified element(s).
Args:
key: The index/indices pointing to the element(s) whose value(s)
is/are queried.
default: Default value(s) to be returned for when the specified
index/indices are invalid and/or out of range.
Returns:
The value(s) stored by the element(s).
"""
if default is None:
underlying_key = self._get_underlying_key(key)
return self._data[underlying_key]
else:
default = self._data._get_value(default)
underlying_key, valid_key = self._get_underlying_key(key, verify=False, return_validity=True)
return do_where(valid_key, self._data[underlying_key % self._max_length], default)
def __getitem__(self, key: Numbers) -> torch.Tensor:
"""
Get the value(s) from the specified element(s).
Args:
key: The index/indices pointing to the element(s) whose value(s)
is/are queried.
Returns:
The value(s) stored by the element(s).
"""
return self.get(key)
def _apply_modification_method(
self, method_name: str, key: Numbers, value: Numbers, where: Optional[Numbers] = None
):
underlying_key = self._get_underlying_key(key, where=where)
getattr(self._data, method_name)(underlying_key, value, where)
def set_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Set the element(s) addressed to by the given key(s).
Args:
key: The index/indices tensor.
value: The new value(s).
where: Optionally a boolean mask. When provided, only the elements
whose corresponding mask value(s) is/are True will be subject
to modification.
"""
self._apply_modification_method("set_", key, value, where)
def __setitem__(self, key: Numbers, value: Numbers):
"""
Set the element(s) addressed to by the given key(s).
Args:
key: The index/indices tensor.
value: The new value(s).
"""
self._apply_modification_method("set_", key, value)
def add_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Add to the element(s) addressed to by the given key(s).
Please note that the word "add" is used in the arithmetic sense
(i.e. in the sense of performing addition). For putting a new
element into this list, please see the method `append_(...)`.
Args:
key: The index/indices tensor.
value: The value(s) that will be added onto the existing
element(s).
where: Optionally a boolean mask. When provided, only the elements
whose corresponding mask value(s) is/are True will be subject
to modification.
"""
self._apply_modification_method("add_", key, value, where)
def subtract_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Subtract from the element(s) addressed to by the given key(s).
Args:
key: The index/indices tensor.
value: The value(s) that will be subtracted from the existing
element(s).
where: Optionally a boolean mask. When provided, only the elements
whose corresponding mask value(s) is/are True will be subject
to modification.
"""
self._apply_modification_method("subtract_", key, value, where)
def multiply_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Multiply the element(s) addressed to by the given key(s).
Args:
key: The index/indices tensor.
value: The value(s) that will be used as the multiplier(s) on the
existing element(s).
where: Optionally a boolean mask. When provided, only the elements
whose corresponding mask value(s) is/are True will be subject
to modification.
"""
self._apply_modification_method("multiply_", key, value, where)
def divide_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Divide the element(s) addressed to by the given key(s).
Args:
key: The index/indices tensor.
value: The value(s) that will be used as the divisor(s) on the
existing element(s).
where: Optionally a boolean mask. When provided, only the elements
whose corresponding mask value(s) is/are True will be subject
to modification.
"""
self._apply_modification_method("divide_", key, value, where)
def append_(self, value: Numbers, where: Optional[Numbers] = None):
"""
Add new item(s) to the end(s) of the list(s).
The length(s) of the updated list(s) will increase by 1.
Args:
value: The element that will be added to the list.
In the non-batched case, this element is expected as a tensor
whose shape matches `value_shape`.
In the batched case, this value is expected as a batch of
elements with extra leftmost dimensions (those extra leftmost
dimensions being expressed by `batch_shape`).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then additions will happen only
on the lists whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
self._move_end_forward(where)
self.set_(-1, value, where=where)
def push_(self, value: Numbers, where: Optional[Numbers] = None):
"""
Alias for the method `append_(...)`.
We provide this alternative name so that users who wish to use this
CList structure like a stack will be able to use familiar terminology.
"""
return self.append_(value, where=where)
def appendleft_(self, value: Numbers, where: Optional[Numbers] = None):
"""
Add new item(s) to the beginning point(s) of the list(s).
The length(s) of the updated list(s) will increase by 1.
Args:
value: The element that will be added to the list.
In the non-batched case, this element is expected as a tensor
whose shape matches `value_shape`.
In the batched case, this value is expected as a batch of
elements with extra leftmost dimensions (those extra leftmost
dimensions being expressed by `batch_shape`).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then additions will happen only
on the lists whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
self._move_begin_backward(where)
self.set_(0, value, where=where)
def pop_(self, where: Optional[Numbers] = None):
"""
Pop the last item(s) from the ending point(s) of the list(s).
The length(s) of the updated list(s) will decrease by 1.
Args:
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then the pop operations will happen
only on the lists whose corresponding mask values are True.
Returns:
The popped item(s).
"""
where = None if where is None else self._get_where(where)
result = self.get(-1, default=self._pop_fallback)
self._move_end_backward(where)
return result
def popleft_(self, where: Optional[Numbers] = None):
"""
Pop the first item(s) from the beginning point(s) of the list(s).
The length(s) of the updated list(s) will decrease by 1.
Args:
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then the pop operations will happen
only on the lists whose corresponding mask values are True.
Returns:
The popped item(s).
"""
where = None if where is None else self._get_where(where)
result = self.get(0, default=self._pop_fallback)
self._move_begin_forward(where)
return result
def clear(self, where: Optional[torch.Tensor] = None):
"""
Clear the list(s).
In the context of this data structure, to "clear" means to reduce their
lengths to 0.
Args:
where: Optionally a boolean tensor, specifying which lists within
the batch will be cleared. If this argument is omitted (i.e.
left as None), then all of the lists will be cleared.
"""
if where is None:
self._begin.data[:] = -1
self._end.data[:] = -1
else:
where = self._get_where(where)
all_minus_ones = torch.tensor(-1, dtype=torch.int64, device=self._begin.device).expand(self._begin.shape)
self._begin.data[:] = do_where(where, all_minus_ones, self._begin.data)
self._end.data[:] = do_where(where, all_minus_ones, self._end.data)
@property
def data(self) -> torch.Tensor:
"""
The underlying tensor which stores all the data
"""
return self._data.data
@property
def length(self) -> torch.Tensor:
"""
The length(s) of the list(s)
"""
is_empty = self._is_empty()
is_full = self._is_full()
result = ((self._end[self._all_zeros] - self._begin[self._all_zeros]) % self._max_length) + 1
result[is_empty] = 0
result[is_full] = self._max_length
return result
@property
def max_length(self) -> int:
"""
Maximum length for the list(s)
"""
return self._max_length
data: Tensor
property
readonly
¶
The underlying tensor which stores all the data
length: Tensor
property
readonly
¶
The length(s) of the list(s)
max_length: int
property
readonly
¶
Maximum length for the list(s)
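A small sketch of these properties on a fresh batch (the sizes are ours):
```python
from evotorch.tools.structures import CList

lst = CList(2, max_length=4, batch_size=3)
print(lst.length)      # tensor([0, 0, 0]) -- every list in the batch starts empty
print(lst.max_length)  # 4
```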
__getitem__(self, key)
special
¶
Get the value(s) from the specified element(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | The index/indices pointing to the element(s) whose value(s) is/are queried. | required |
Returns:
Type | Description |
---|---|
`Tensor` | The value(s) stored by the element(s). |
Source code in evotorch/tools/structures.py
def __getitem__(self, key: Numbers) -> torch.Tensor:
    """
    Get the value(s) from the specified element(s).
    Args:
        key: The index/indices pointing to the element(s) whose value(s)
            is/are queried.
    Returns:
        The value(s) stored by the element(s).
    """
    return self.get(key)
__setitem__(self, key, value)
special
¶
Set the element(s) addressed to by the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | The index/indices tensor. | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The new value(s). | required |
add_(self, key, value, where=None)
¶
Add to the element(s) addressed to by the given key(s).
Please note that the word "add" is used in the arithmetic sense (i.e. in the sense of performing addition). For putting a new element into this list, please see the method `append_(...)`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | The index/indices tensor. | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The value(s) that will be added onto the existing element(s). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask. When provided, only the elements whose corresponding mask value(s) is/are True will be subject to modification. | None |
Source code in evotorch/tools/structures.py
def add_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Add to the element(s) addressed to by the given key(s).
Please note that the word "add" is used in the arithmetic sense
(i.e. in the sense of performing addition). For putting a new
element into this list, please see the method `append_(...)`.
Args:
key: The index/indices tensor.
value: The value(s) that will be added onto the existing
element(s).
where: Optionally a boolean mask. When provided, only the elements
whose corresponding mask value(s) is/are True will be subject
to modification.
"""
self._apply_modification_method("add_", key, value, where)
append_(self, value, where=None)
¶
Add new item(s) to the end(s) of the list(s).
The length(s) of the updated list(s) will increase by 1.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The element that will be added to the list. In the non-batched case, this element is expected as a tensor whose shape matches `value_shape`. In the batched case, this value is expected as a batch of elements with extra leftmost dimensions (those extra leftmost dimensions being expressed by `batch_shape`). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then additions will happen only on the lists whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def append_(self, value: Numbers, where: Optional[Numbers] = None):
"""
Add new item(s) to the end(s) of the list(s).
The length(s) of the updated list(s) will increase by 1.
Args:
value: The element that will be added to the list.
In the non-batched case, this element is expected as a tensor
whose shape matches `value_shape`.
In the batched case, this value is expected as a batch of
elements with extra leftmost dimensions (those extra leftmost
dimensions being expressed by `batch_shape`).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then additions will happen only
on the lists whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
self._move_end_forward(where)
self.set_(-1, value, where=where)
appendleft_(self, value, where=None)
¶
Add new item(s) to the beginning point(s) of the list(s).
The length(s) of the updated list(s) will increase by 1.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The element that will be added to the list. In the non-batched case, this element is expected as a tensor whose shape matches `value_shape`. In the batched case, this value is expected as a batch of elements with extra leftmost dimensions (those extra leftmost dimensions being expressed by `batch_shape`). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then additions will happen only on the lists whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def appendleft_(self, value: Numbers, where: Optional[Numbers] = None):
"""
Add new item(s) to the beginning point(s) of the list(s).
The length(s) of the updated list(s) will increase by 1.
Args:
value: The element that will be added to the list.
In the non-batched case, this element is expected as a tensor
whose shape matches `value_shape`.
In the batched case, this value is expected as a batch of
elements with extra leftmost dimensions (those extra leftmost
dimensions being expressed by `batch_shape`).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then additions will happen only
on the lists whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
self._move_begin_backward(where)
self.set_(0, value, where=where)
clear(self, where=None)
¶
Clear the list(s).
In the context of this data structure, to "clear" means to reduce their lengths to 0.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`where` | `Optional[torch.Tensor]` | Optionally a boolean tensor, specifying which lists within the batch will be cleared. If this argument is omitted (i.e. left as None), then all of the lists will be cleared. | None |
Source code in evotorch/tools/structures.py
def clear(self, where: Optional[torch.Tensor] = None):
"""
Clear the list(s).
In the context of this data structure, to "clear" means to reduce their
lengths to 0.
Args:
where: Optionally a boolean tensor, specifying which lists within
the batch will be cleared. If this argument is omitted (i.e.
left as None), then all of the lists will be cleared.
"""
if where is None:
self._begin.data[:] = -1
self._end.data[:] = -1
else:
where = self._get_where(where)
all_minus_ones = torch.tensor(-1, dtype=torch.int64, device=self._begin.device).expand(self._begin.shape)
self._begin.data[:] = do_where(where, all_minus_ones, self._begin.data)
self._end.data[:] = do_where(where, all_minus_ones, self._end.data)
divide_(self, key, value, where=None)
¶
Divide the element(s) addressed to by the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | The index/indices tensor. | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The value(s) that will be used as the divisor(s) on the existing element(s). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask. When provided, only the elements whose corresponding mask value(s) is/are True will be subject to modification. | None |
Source code in evotorch/tools/structures.py
def divide_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Divide the element(s) addressed to by the given key(s).
Args:
key: The index/indices tensor.
value: The value(s) that will be used as the divisor(s) on the
existing element(s).
where: Optionally a boolean mask. When provided, only the elements
whose corresponding mask value(s) is/are True will be subject
to modification.
"""
self._apply_modification_method("divide_", key, value, where)
get(self, key, default=None)
¶
Get the value(s) from the specified element(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | The index/indices pointing to the element(s) whose value(s) is/are queried. | required |
`default` | `Union[numbers.Number, Iterable[numbers.Number]]` | Default value(s) to be returned for when the specified index/indices are invalid and/or out of range. | None |
Returns:
Type | Description |
---|---|
`Tensor` | The value(s) stored by the element(s). |
Source code in evotorch/tools/structures.py
def get(self, key: Numbers, default: Optional[Numbers] = None) -> torch.Tensor:
"""
Get the value(s) from the specified element(s).
Args:
key: The index/indices pointing to the element(s) whose value(s)
is/are queried.
default: Default value(s) to be returned for when the specified
index/indices are invalid and/or out of range.
Returns:
The value(s) stored by the element(s).
"""
if default is None:
underlying_key = self._get_underlying_key(key)
return self._data[underlying_key]
else:
default = self._data._get_value(default)
underlying_key, valid_key = self._get_underlying_key(key, verify=False, return_validity=True)
return do_where(valid_key, self._data[underlying_key % self._max_length], default)
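A hedged sketch of reading past the current length without raising an error (names and values are ours):
```python
import torch
from evotorch.tools.structures import CList

lst = CList(3, max_length=5)
lst.append_(torch.tensor([1.0, 2.0, 3.0]))

# Index 1 is out of range (the list holds one element), so the default
# is returned instead of an IndexError being raised:
print(lst.get(1, default=0.0))  # tensor([0., 0., 0.])
print(lst.get(0, default=0.0))  # tensor([1., 2., 3.])
```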
multiply_(self, key, value, where=None)
¶
Multiply the element(s) addressed to by the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | The index/indices tensor. | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The value(s) that will be used as the multiplier(s) on the existing element(s). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask. When provided, only the elements whose corresponding mask value(s) is/are True will be subject to modification. | None |
Source code in evotorch/tools/structures.py
def multiply_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Multiply the element(s) addressed to by the given key(s).
Args:
key: The index/indices tensor.
value: The value(s) that will be used as the multiplier(s) on the
existing element(s).
where: Optionally a boolean mask. When provided, only the elements
whose corresponding mask value(s) is/are True will be subject
to modification.
"""
self._apply_modification_method("multiply_", key, value, where)
pop_(self, where=None)
¶
Pop the last item(s) from the ending point(s) of the list(s).
The length(s) of the updated list(s) will decrease by 1.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then the pop operations will happen only on the lists whose corresponding mask values are True. | None |
Returns:
Type | Description |
---|---|
| The popped item(s). |
Source code in evotorch/tools/structures.py
def pop_(self, where: Optional[Numbers] = None):
"""
Pop the last item(s) from the ending point(s) of the list(s).
The length(s) of the updated list(s) will decrease by 1.
Args:
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then the pop operations will happen
only on the lists whose corresponding mask values are True.
Returns:
The popped item(s).
"""
where = None if where is None else self._get_where(where)
result = self.get(-1, default=self._pop_fallback)
self._move_end_backward(where)
return result
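A sketch of masked popping on a batch (shapes and values are ours). Note that, as the source above shows, the returned tensor is computed for every batch item, while only the masked-in lists are actually shortened:
```python
import torch
from evotorch.tools.structures import CList

batch = CList(1, max_length=3, batch_size=2)
batch.append_(torch.tensor([[1.0], [2.0]]))

popped = batch.pop_(where=torch.tensor([False, True]))
print(popped)        # last element of each list: tensor([[1.], [2.]])
print(batch.length)  # tensor([1, 0]) -- only the second list was shortened
```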
popleft_(self, where=None)
¶
Pop the first item(s) from the beginning point(s) of the list(s).
The length(s) of the updated list(s) will decrease by 1.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then the pop operations will happen only on the lists whose corresponding mask values are True. | None |
Returns:
Type | Description |
---|---|
| The popped item(s). |
Source code in evotorch/tools/structures.py
def popleft_(self, where: Optional[Numbers] = None):
"""
Pop the first item(s) from the beginning point(s) of the list(s).
The length(s) of the updated list(s) will decrease by 1.
Args:
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then the pop operations will happen
only on the lists whose corresponding mask values are True.
Returns:
The popped item(s).
"""
where = None if where is None else self._get_where(where)
result = self.get(0, default=self._pop_fallback)
self._move_begin_forward(where)
return result
push_(self, value, where=None)
¶
Alias for the method `append_(...)`.
We provide this alternative name so that users who wish to use this CList structure like a stack will be able to use familiar terminology.
Source code in evotorch/tools/structures.py
def push_(self, value: Numbers, where: Optional[Numbers] = None):
    """
    Alias for the method `append_(...)`.
    We provide this alternative name so that users who wish to use this
    CList structure like a stack will be able to use familiar terminology.
    """
    return self.append_(value, where=where)
set_(self, key, value, where=None)
¶
Set the element(s) addressed to by the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | The index/indices tensor. | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The new value(s). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask. When provided, only the elements whose corresponding mask value(s) is/are True will be subject to modification. | None |
Source code in evotorch/tools/structures.py
def set_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Set the element(s) addressed to by the given key(s).
Args:
key: The index/indices tensor.
value: The new value(s).
where: Optionally a boolean mask. When provided, only the elements
whose corresponding mask value(s) is/are True will be subject
to modification.
"""
self._apply_modification_method("set_", key, value, where)
subtract_(self, key, value, where=None)
¶
Subtract from the element(s) addressed to by the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`key` | `Union[numbers.Number, Iterable[numbers.Number]]` | The index/indices tensor. | required |
`value` | `Union[numbers.Number, Iterable[numbers.Number]]` | The value(s) that will be subtracted from the existing element(s). | required |
`where` | `Union[numbers.Number, Iterable[numbers.Number]]` | Optionally a boolean mask. When provided, only the elements whose corresponding mask value(s) is/are True will be subject to modification. | None |
Source code in evotorch/tools/structures.py
def subtract_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Subtract from the element(s) addressed to by the given key(s).
Args:
key: The index/indices tensor.
value: The value(s) that will be subtracted from the existing
element(s).
where: Optionally a boolean mask. When provided, only the elements
whose corresponding mask value(s) is/are True will be subject
to modification.
"""
self._apply_modification_method("subtract_", key, value, where)
CMemory
¶
Representation of a batchable contiguous memory.
This container can be seen as a batchable primitive dictionary where the keys are allowed either as integers or as tuples of integers. Please also note that a memory block for each key is already allocated, meaning that, unlike a Python dictionary, each key already exists and is associated with a tensor.
Let us consider an example where we have 5 keys, and each key is associated with a tensor of length 7. Such a memory could be allocated like this:
memory = CMemory(7, num_keys=5)
Our allocated memory can be visualized as follows:
```text
 _______________________________________
| key 0 -> [ empty tensor of length 7 ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
```
Let us now sample a Gaussian noise and put it into the 0-th slot:
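```python
memory[0] = torch.randn(7)  # or: memory[torch.tensor(0)] = torch.randn(7)
```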
which results in:
```text
 _________________________________________
| key 0 -> [ Gaussian noise of length 7 ] |
| key 1 -> [ empty tensor of length 7 ]   |
| key 2 -> [ empty tensor of length 7 ]   |
| key 3 -> [ empty tensor of length 7 ]   |
| key 4 -> [ empty tensor of length 7 ]   |
|_________________________________________|
```
Let us now consider another example where we deal with not a single CMemory, but with a CMemory batch. For the sake of this example, let us say that our desired batch size is 3. The allocation of such a batch would be as follows:
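```python
memory_batch = CMemory(7, num_keys=5, batch_size=3)
```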
Our memory batch can be visualized like this:
```text
__[ batch item 0 ]_____________________
| key 0 -> [ empty tensor of length 7 ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ empty tensor of length 7 ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ empty tensor of length 7 ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
```
If we wish to set the 0-th element of each batch item, we could do:
```python
memory_batch[0] = torch.tensor(
    [
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
        [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0],
    ],
)
```
and the result would be:
```text
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ]     |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ]     |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ]     |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
```
Continuing from the same example, if we wish to set the slot with key 1 in the 0th batch item, slot with key 2 in the 1st batch item, and slot with key 3 in the 2nd batch item, all in one go, we could do:
```python
# Longer version: memory_batch[torch.tensor([1, 2, 3])] = ...
memory_batch[[1, 2, 3]] = torch.tensor(
    [
        [5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0],
        [6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0],
        [7.0, 7.0, 7.0, 7.0, 7.0, 7.0, 7.0],
    ],
)
```
Our updated memory batch would then look like this:
```text
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ]     |
| key 1 -> [ 5. 5. 5. 5. 5. 5. 5. ]     |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ]     |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ 6. 6. 6. 6. 6. 6. 6. ]     |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ]     |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ 7. 7. 7. 7. 7. 7. 7. ]     |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
```
Conditional modifications via boolean masks are also supported.
For example, the following update on our `memory_batch`:
```python
memory_batch.set_(
    [4, 3, 1],
    torch.tensor(
        [
            [8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0],
            [9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0],
            [10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0],
        ]
    ),
    where=[True, True, False],  # or: where=torch.tensor([True,True,False]),
)
```
would result in:
```text
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ]     |
| key 1 -> [ 5. 5. 5. 5. 5. 5. 5. ]     |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ 8. 8. 8. 8. 8. 8. 8. ]     |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ]     |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ 6. 6. 6. 6. 6. 6. 6. ]     |
| key 3 -> [ 9. 9. 9. 9. 9. 9. 9. ]     |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ]     |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ 7. 7. 7. 7. 7. 7. 7. ]     |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
```
Please notice above that the slot with key 1 of the batch item 2 was not modified because its corresponding mask value was given as False.
Source code in evotorch/tools/structures.py
class CMemory:
"""
Representation of a batchable contiguous memory.
This container can be seen as a batchable primitive dictionary where the
keys are allowed either as integers or as tuples of integers. Please also
    note that a memory block for each key is already allocated, meaning that
unlike a dictionary of Python, each key already exists and is associated
with a tensor.
Let us consider an example where we have 5 keys, and each key is associated
with a tensor of length 7. Such a memory could be allocated like this:
```python
memory = CMemory(7, num_keys=5)
```
Our allocated memory can be visualized as follows:
```text
_______________________________________
| key 0 -> [ empty tensor of length 7 ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
```
Let us now sample a Gaussian noise and put it into the 0-th slot:
```python
memory[0] = torch.randn(7) # or: memory[torch.tensor(0)] = torch.randn(7)
```
which results in:
```text
_________________________________________
| key 0 -> [ Gaussian noise of length 7 ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_________________________________________|
```
Let us now consider another example where we deal with not a single CMemory,
but with a CMemory batch. For the sake of this example, let us say that our
desired batch size is 3. The allocation of such a batch would be as
follows:
```python
memory_batch = CMemory(7, num_keys=5, batch_size=3)
```
Our memory batch can be visualized like this:
```text
__[ batch item 0 ]_____________________
| key 0 -> [ empty tensor of length 7 ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ empty tensor of length 7 ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ empty tensor of length 7 ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
```
If we wish to set the 0-th element of each batch item, we could do:
```python
memory_batch[0] = torch.tensor(
[
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0],
],
)
```
and the result would be:
```text
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
```
Continuing from the same example, if we wish to set the slot with key 1
in the 0th batch item, slot with key 2 in the 1st batch item, and
slot with key 3 in the 2nd batch item, all in one go, we could do:
```python
# Longer version: memory_batch[torch.tensor([1, 2, 3])] = ...
memory_batch[[1, 2, 3]] = torch.tensor(
[
[5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0],
[6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0],
[7.0, 7.0, 7.0, 7.0, 7.0, 7.0, 7.0],
],
)
```
Our updated memory batch would then look like this:
```text
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ] |
| key 1 -> [ 5. 5. 5. 5. 5. 5. 5. ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ 6. 6. 6. 6. 6. 6. 6. ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ 7. 7. 7. 7. 7. 7. 7. ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
```
    Conditional modifications via boolean masks are also supported.
For example, the following update on our `memory_batch`:
```python
memory_batch.set_(
[4, 3, 1],
torch.tensor(
[
[8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0],
[9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0],
[10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0],
]
),
where=[True, True, False], # or: where=torch.tensor([True,True,False]),
)
```
would result in:
```text
__[ batch item 0 ]_____________________
| key 0 -> [ 0. 0. 0. 0. 0. 0. 0. ] |
| key 1 -> [ 5. 5. 5. 5. 5. 5. 5. ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ empty tensor of length 7 ] |
| key 4 -> [ 8. 8. 8. 8. 8. 8. 8. ] |
|_______________________________________|
__[ batch item 1 ]_____________________
| key 0 -> [ 1. 1. 1. 1. 1. 1. 1. ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ 6. 6. 6. 6. 6. 6. 6. ] |
| key 3 -> [ 9. 9. 9. 9. 9. 9. 9. ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
__[ batch item 2 ]_____________________
| key 0 -> [ 2. 2. 2. 2. 2. 2. 2. ] |
| key 1 -> [ empty tensor of length 7 ] |
| key 2 -> [ empty tensor of length 7 ] |
| key 3 -> [ 7. 7. 7. 7. 7. 7. 7. ] |
| key 4 -> [ empty tensor of length 7 ] |
|_______________________________________|
```
Please notice above that the slot with key 1 of the batch item 2 was not
modified because its corresponding mask value was given as False.
"""
def __init__(
self,
*size: Union[int, tuple, list],
num_keys: Union[int, tuple, list],
key_offset: Optional[Union[int, tuple, list]] = None,
batch_size: Optional[Union[int, tuple, list]] = None,
batch_shape: Optional[Union[int, tuple, list]] = None,
fill_with: Optional[Numbers] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
verify: bool = True,
):
"""
`__init__(...)`: Initialize the CMemory.
Args:
size: Size of a tensor associated with a key, expected as an
integer, or as multiple positional arguments (each positional
argument being an integer), or as a tuple of integers.
num_keys: How many keys (and therefore how many slots) will the
memory have. If given as an integer `n`, then there will be `n`
slots in the memory, and to access a slot one will need to use
an integer key `k` (where, by default, the minimum acceptable
`k` is 0 and the maximum acceptable `k` is `n-1`).
If given as a tuple of integers, then the number of slots in
the memory will be computed as the product of all the integers
in the tuple, and a key will be expected as a tuple.
For example, when `num_keys` is `(3, 5)`, there will be 15
slots in the memory (where, by default, the minimum acceptable
key will be `(0, 0)` and the maximum acceptable key will be
            `(2, 4)`).
key_offset: Optionally can be used to shift the integer values of
the keys. For example, if `num_keys` is 10, then, by default,
the minimum key is 0 and the maximum key is 9. But together
with `num_keys=10`, if `key_offset` is given as 1, then the
minimum key will be 1 and the maximum key will be 10.
This argument can also be used together with a tuple-valued
`num_keys`. For example, with `num_keys` set as `(3, 5)`,
if `key_offset` is given as 1, then the minimum key value
will be `(1, 1)` (instead of `(0, 0)`) and the maximum key
value will be `(3, 5)` (instead of `(2, 4)`).
Also, with a tuple-valued `num_keys`, `key_offset` can be
given as a tuple, to shift the key values differently for each
item in the tuple.
batch_size: If given as None, then this memory will not be batched.
If given as an integer `n`, then this object will represent
a contiguous batch containing `n` memory blocks.
If given as a tuple `(size0, size1, ...)`, then this object
will represent a contiguous batch of memory, shape of this
batch being determined by the given tuple.
batch_shape: Alias for the argument `batch_size`.
fill_with: Optionally a numeric value using which the values will
be initialized. If no initialization is needed, then this
argument can be left as None.
dtype: The `dtype` of the memory tensor.
device: The device on which the memory will be allocated.
verify: If True, then explicit checks will be done to verify
that there are no indexing errors. Can be set as False for
performance.
"""
self._dtype = torch.float32 if dtype is None else to_torch_dtype(dtype)
self._device = torch.device("cpu") if device is None else torch.device(device)
self._verify = bool(verify)
if isinstance(num_keys, (list, tuple)):
if len(num_keys) < 2:
raise RuntimeError(
f"When expressed via a list or a tuple, the length of `num_keys` must be at least 2."
f" However, the encountered `num_keys` is {repr(num_keys)}, whose length is {len(num_keys)}."
)
self._multi_key = True
self._num_keys = tuple((int(n) for n in num_keys))
self._internal_key_shape = torch.Size(self._num_keys)
else:
self._multi_key = False
self._num_keys = int(num_keys)
self._internal_key_shape = torch.Size([self._num_keys])
self._internal_key_ndim = len(self._internal_key_shape)
if key_offset is None:
self._key_offset = None
else:
if self._multi_key:
if isinstance(key_offset, (list, tuple)):
key_offset = [int(n) for n in key_offset]
if len(key_offset) != len(self._num_keys):
raise RuntimeError("The length of `key_offset` does not match the length of `num_keys`")
else:
key_offset = [int(key_offset) for _ in range(len(self._num_keys))]
self._key_offset = torch.as_tensor(key_offset, dtype=torch.int64, device=self._device)
else:
if isinstance(key_offset, (list, tuple)):
raise RuntimeError("`key_offset` cannot be a sequence of integers when `num_keys` is a scalar")
else:
self._key_offset = torch.as_tensor(int(key_offset), dtype=torch.int64, device=self._device)
if self._verify:
if self._multi_key:
self._min_key = torch.zeros(len(self._num_keys), dtype=torch.int64, device=self._device)
self._max_key = torch.tensor(list(self._num_keys), dtype=torch.int64, device=self._device) - 1
else:
self._min_key = torch.tensor(0, dtype=torch.int64, device=self._device)
self._max_key = torch.tensor(self._num_keys - 1, dtype=torch.int64, device=self._device)
if self._key_offset is not None:
self._min_key += self._key_offset
self._max_key += self._key_offset
else:
self._min_key = None
self._max_key = None
nsize = len(size)
if nsize == 0:
self._value_shape = torch.Size([])
elif nsize == 1:
if isinstance(size[0], (tuple, list)):
self._value_shape = torch.Size((int(n) for n in size[0]))
else:
self._value_shape = torch.Size([int(size[0])])
else:
self._value_shape = torch.Size((int(n) for n in size))
self._value_ndim = len(self._value_shape)
if (batch_size is None) and (batch_shape is None):
batch_size = None
elif (batch_size is not None) and (batch_shape is None):
pass
elif (batch_size is None) and (batch_shape is not None):
batch_size = batch_shape
else:
raise RuntimeError(
"Encountered both `batch_shape` and `batch_size` at the same time."
" None of them or one of them can be accepted, but not both of them at the same time."
)
if batch_size is None:
self._batch_shape = torch.Size([])
elif isinstance(batch_size, (tuple, list)):
self._batch_shape = torch.Size((int(n) for n in batch_size))
else:
self._batch_shape = torch.Size([int(batch_size)])
self._batch_ndim = len(self._batch_shape)
self._for_all_batches = tuple(
(
torch.arange(self._batch_shape[i], dtype=torch.int64, device=self._device)
for i in range(self._batch_ndim)
)
)
self._data = torch.empty(
self._batch_shape + self._internal_key_shape + self._value_shape,
dtype=(self._dtype),
device=(self._device),
)
if fill_with is not None:
self._data[:] = fill_with
@property
def _is_dtype_bool(self) -> bool:
return self._data.dtype is torch.bool
def _key_must_be_valid(self, key: torch.Tensor) -> torch.Tensor:
lb_satisfied = key >= self._min_key
ub_satisfied = key <= self._max_key
all_satisfied = lb_satisfied & ub_satisfied
if not torch.all(all_satisfied):
raise KeyError("Encountered invalid key(s)")
def _get_key(self, key: Numbers, where: Optional[torch.Tensor] = None) -> torch.Tensor:
key = torch.as_tensor(key, dtype=torch.int64, device=self._data.device)
expected_shape = self.batch_shape + self.key_shape
if key.shape == expected_shape:
result = key
elif key.shape == self.key_shape:
result = key.expand(expected_shape)
else:
raise RuntimeError(f"The key tensor has an incompatible shape: {key.shape}")
if where is not None:
min_key = (
torch.tensor(0, dtype=torch.int64, device=self._data.device) if self._min_key is None else self._min_key
)
            result = do_where(where, result, min_key.expand(expected_shape))
        if self._verify:
            self._key_must_be_valid(result)
        return result
def _get_value(self, value: Numbers) -> torch.Tensor:
value = torch.as_tensor(value, dtype=self._data.dtype, device=self._data.device)
expected_shape = self.batch_shape + self.value_shape
if value.shape == expected_shape:
return value
elif (value.ndim == 0) or (value.shape == self.value_shape):
return value.expand(expected_shape)
else:
raise RuntimeError(f"The value tensor has an incompatible shape: {value.shape}")
def _get_where(self, where: Numbers) -> torch.Tensor:
where = torch.as_tensor(where, dtype=torch.bool, device=self._data.device)
if where.shape != self.batch_shape:
raise RuntimeError(
f"The boolean mask `where` has an incompatible shape: {where.shape}."
f" Acceptable shape is: {self.batch_shape}"
)
return where
def prepare_key_tensor(self, key: Numbers) -> torch.Tensor:
"""
Return the tensor-counterpart of a key.
Args:
key: A key which can be a sequence of integers or a PyTorch tensor
with an integer dtype.
The shape of the given key must conform with the `key_shape`
of this memory object.
To address to a different key in each batch item, the shape of
the given key can also have extra leftmost dimensions expressed
by `batch_shape`.
Returns:
A copy of the key that is converted to PyTorch tensor.
"""
return self._get_key(key)
def prepare_value_tensor(self, value: Numbers) -> torch.Tensor:
"""
Return the tensor-counterpart of a value.
Args:
value: A value that can be a numeric sequence or a PyTorch tensor.
The shape of the given value must conform with the
`value_shape` of this memory object.
To express a different value for each batch item, the shape of
the given value can also have extra leftmost dimensions
                expressed by `batch_shape`.
Returns:
A copy of the given value(s), converted to PyTorch tensor.
"""
return self._get_value(value)
def prepare_where_tensor(self, where: Numbers) -> torch.Tensor:
"""
Return the tensor-counterpart of a boolean mask.
Args:
where: A boolean mask expressed as a sequence of bools or as a
boolean PyTorch tensor.
The shape of the given mask must conform with the batch shape
that is expressed by the property `batch_shape`.
Returns:
A copy of the boolean mask, converted to PyTorch tensor.
"""
return self._get_where(where)
def _get_address(self, key: Numbers, where: Optional[torch.Tensor] = None) -> tuple:
key = self._get_key(key, where=where)
if self._key_offset is not None:
key = key - self._key_offset
if self._multi_key:
keys = tuple((key[..., j] for j in range(self._internal_key_ndim)))
return self._for_all_batches + keys
else:
return self._for_all_batches + (key,)
def get(self, key: Numbers) -> torch.Tensor:
"""
Get the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
Returns:
The value(s) associated with the given key(s).
"""
address = self._get_address(key)
return self._data[address]
def set_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Set the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The new value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
address = self._get_address(key, where=where)
value = self._get_value(value)
if where is None:
self._data[address] = value
else:
old_value = self._data[address]
new_value = value
self._data[address] = do_where(where, new_value, old_value)
def add_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Add value(s) onto the existing values of slots with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be added onto the existing value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
address = self._get_address(key, where=where)
value = self._get_value(value)
if where is None:
if self._is_dtype_bool:
self._data[address] |= value
else:
self._data[address] += value
else:
if self._is_dtype_bool:
mask_shape = self._batch_shape + tuple((1 for _ in range(self._value_ndim)))
self._data[address] |= value & where.reshape(mask_shape)
else:
self._data[address] += do_where(where, value, torch.tensor(0, dtype=value.dtype, device=value.device))
def add_circular_(self, key: Numbers, value: Numbers, mod: Numbers, where: Optional[Numbers] = None):
"""
Increase the values of the specified slots in a circular manner.
This operation combines the add and modulo operations.
Circularly adding `value` onto `x` with a modulo `mod` means:
`x = (x + value) % mod`.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be added onto the existing value(s).
mod: After the raw adding operation, the modulos according to this
`mod` argument will be computed and placed.
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
address = self._get_address(key, where=where)
value = self._get_value(value)
mod = self._get_value(mod)
if self._is_dtype_bool:
raise ValueError("Circular addition is not supported for dtype `torch.bool`")
if where is None:
self._data[address] = (self._data[address] + value) % mod
else:
old_value = self._data[address]
new_value = (old_value + value) % mod
self._data[address] = do_where(where, new_value, old_value)
def multiply_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Multiply the existing values of slots with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be used as the multiplier(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
address = self._get_address(key, where=where)
value = self._get_value(value)
if where is None:
if self._is_dtype_bool:
self._data[address] &= value
else:
                self._data[address] *= value
else:
if self._is_dtype_bool:
self._data[address] &= do_where(
where, value, torch.tensor(True, dtype=value.dtype, device=value.device)
)
else:
self._data[address] *= do_where(where, value, torch.tensor(1, dtype=value.dtype, device=value.device))
def subtract_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Subtract value(s) from existing values of slots with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be subtracted from existing value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self.add_(key, -value, where)
def divide_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Divide the existing values of slots with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be used as divisor(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self.multiply_(key, 1 / value, where)
def __getitem__(self, key: Numbers) -> torch.Tensor:
"""
Get the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
Returns:
The value(s) associated with the given key(s).
"""
return self.get(key)
def __setitem__(self, key: Numbers, value: Numbers):
"""
Set the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The new value(s).
"""
self.set_(key, value)
@property
def data(self) -> torch.Tensor:
"""
The entire value tensor
"""
return self._data
@property
def key_shape(self) -> torch.Size:
"""
Shape of a key
"""
return torch.Size([self._internal_key_ndim]) if self._multi_key else torch.Size([])
@property
def key_ndim(self) -> int:
"""
Number of dimensions of a key
"""
return 1 if self._multi_key else 0
@property
def batch_shape(self) -> torch.Size:
"""
Batch size of this memory object
"""
return self._batch_shape
@property
def batch_ndim(self) -> int:
"""
Number of dimensions expressed by `batch_shape`
"""
return self._batch_ndim
@property
def is_batched(self) -> bool:
"""
True if this CMemory object is batched; False otherwise.
"""
return self._batch_ndim > 0
@property
def value_shape(self) -> torch.Size:
"""
Tensor shape of a single value
"""
return self._value_shape
@property
def value_ndim(self) -> int:
"""
Number of dimensions expressed by `value_shape`
"""
return self._value_ndim
@property
def dtype(self) -> torch.dtype:
"""
`dtype` of the value tensor
"""
return self._data.dtype
@property
def device(self) -> torch.device:
"""
The device on which this memory object lives
"""
return self._data.device
batch_ndim: int
property
readonly
¶
Number of dimensions expressed by batch_shape
batch_shape: Size
property
readonly
¶
Batch size of this memory object
data: Tensor
property
readonly
¶
The entire value tensor
device: device
property
readonly
¶
The device on which this memory object lives
dtype: dtype
property
readonly
¶
`dtype` of the value tensor
is_batched: bool
property
readonly
¶
True if this CMemory object is batched; False otherwise.
key_ndim: int
property
readonly
¶
Number of dimensions of a key
key_shape: Size
property
readonly
¶
Shape of a key
value_ndim: int
property
readonly
¶
Number of dimensions expressed by value_shape
value_shape: Size
property
readonly
¶
Tensor shape of a single value
__getitem__(self, key)
special
¶
Get the value(s) associated with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | Union[numbers.Number, Iterable[numbers.Number]] | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
Returns:
Type | Description |
---|---|
Tensor | The value(s) associated with the given key(s). |
Source code in evotorch/tools/structures.py
def __getitem__(self, key: Numbers) -> torch.Tensor:
"""
Get the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
Returns:
The value(s) associated with the given key(s).
"""
return self.get(key)
__init__(self, *size, num_keys, key_offset=None, batch_size=None, batch_shape=None, fill_with=None, dtype=None, device=None, verify=True)
special
¶
`__init__(...)`: Initialize the CMemory.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
size | Union[int, tuple, list] | Size of a tensor associated with a key, expected as an integer, or as multiple positional arguments (each positional argument being an integer), or as a tuple of integers. | () |
num_keys | Union[int, tuple, list] | How many keys (and therefore how many slots) will the memory have. If given as an integer `n`, then there will be `n` slots in the memory, and to access a slot one will need to use an integer key `k` (where, by default, the minimum acceptable `k` is 0 and the maximum acceptable `k` is `n-1`). If given as a tuple of integers, then the number of slots in the memory will be computed as the product of all the integers in the tuple, and a key will be expected as a tuple. For example, when `num_keys` is `(3, 5)`, there will be 15 slots in the memory (where, by default, the minimum acceptable key will be `(0, 0)` and the maximum acceptable key will be `(2, 4)`). | required |
key_offset | Union[int, tuple, list] | Optionally can be used to shift the integer values of the keys. For example, if `num_keys` is 10, then, by default, the minimum key is 0 and the maximum key is 9. But together with `num_keys=10`, if `key_offset` is given as 1, then the minimum key will be 1 and the maximum key will be 10. This argument can also be used together with a tuple-valued `num_keys`. For example, with `num_keys` set as `(3, 5)`, if `key_offset` is given as 1, then the minimum key value will be `(1, 1)` (instead of `(0, 0)`) and the maximum key value will be `(3, 5)` (instead of `(2, 4)`). Also, with a tuple-valued `num_keys`, `key_offset` can be given as a tuple, to shift the key values differently for each item in the tuple. | None |
batch_size | Union[int, tuple, list] | If given as None, then this memory will not be batched. If given as an integer `n`, then this object will represent a contiguous batch containing `n` memory blocks. If given as a tuple `(size0, size1, ...)`, then this object will represent a contiguous batch of memory, the shape of this batch being determined by the given tuple. | None |
batch_shape | Union[int, tuple, list] | Alias for the argument `batch_size`. | None |
fill_with | Union[numbers.Number, Iterable[numbers.Number]] | Optionally a numeric value using which the values will be initialized. If no initialization is needed, then this argument can be left as None. | None |
dtype | Union[str, torch.dtype, numpy.dtype, Type] | The `dtype` of the memory tensor. | None |
device | Union[str, torch.device] | The device on which the memory will be allocated. | None |
verify | bool | If True, then explicit checks will be done to verify that there are no indexing errors. Can be set as False for performance. | True |
Source code in evotorch/tools/structures.py
def __init__(
self,
*size: Union[int, tuple, list],
num_keys: Union[int, tuple, list],
key_offset: Optional[Union[int, tuple, list]] = None,
batch_size: Optional[Union[int, tuple, list]] = None,
batch_shape: Optional[Union[int, tuple, list]] = None,
fill_with: Optional[Numbers] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
verify: bool = True,
):
"""
`__init__(...)`: Initialize the CMemory.
Args:
size: Size of a tensor associated with a key, expected as an
integer, or as multiple positional arguments (each positional
argument being an integer), or as a tuple of integers.
num_keys: How many keys (and therefore how many slots) will the
memory have. If given as an integer `n`, then there will be `n`
slots in the memory, and to access a slot one will need to use
an integer key `k` (where, by default, the minimum acceptable
`k` is 0 and the maximum acceptable `k` is `n-1`).
If given as a tuple of integers, then the number of slots in
the memory will be computed as the product of all the integers
in the tuple, and a key will be expected as a tuple.
For example, when `num_keys` is `(3, 5)`, there will be 15
slots in the memory (where, by default, the minimum acceptable
key will be `(0, 0)` and the maximum acceptable key will be
        `(2, 4)`).
key_offset: Optionally can be used to shift the integer values of
the keys. For example, if `num_keys` is 10, then, by default,
the minimum key is 0 and the maximum key is 9. But together
with `num_keys=10`, if `key_offset` is given as 1, then the
minimum key will be 1 and the maximum key will be 10.
This argument can also be used together with a tuple-valued
`num_keys`. For example, with `num_keys` set as `(3, 5)`,
if `key_offset` is given as 1, then the minimum key value
will be `(1, 1)` (instead of `(0, 0)`) and the maximum key
value will be `(3, 5)` (instead of `(2, 4)`).
Also, with a tuple-valued `num_keys`, `key_offset` can be
given as a tuple, to shift the key values differently for each
item in the tuple.
batch_size: If given as None, then this memory will not be batched.
If given as an integer `n`, then this object will represent
a contiguous batch containing `n` memory blocks.
If given as a tuple `(size0, size1, ...)`, then this object
will represent a contiguous batch of memory, shape of this
batch being determined by the given tuple.
batch_shape: Alias for the argument `batch_size`.
fill_with: Optionally a numeric value using which the values will
be initialized. If no initialization is needed, then this
argument can be left as None.
dtype: The `dtype` of the memory tensor.
device: The device on which the memory will be allocated.
verify: If True, then explicit checks will be done to verify
that there are no indexing errors. Can be set as False for
performance.
"""
self._dtype = torch.float32 if dtype is None else to_torch_dtype(dtype)
self._device = torch.device("cpu") if device is None else torch.device(device)
self._verify = bool(verify)
if isinstance(num_keys, (list, tuple)):
if len(num_keys) < 2:
raise RuntimeError(
f"When expressed via a list or a tuple, the length of `num_keys` must be at least 2."
f" However, the encountered `num_keys` is {repr(num_keys)}, whose length is {len(num_keys)}."
)
self._multi_key = True
self._num_keys = tuple((int(n) for n in num_keys))
self._internal_key_shape = torch.Size(self._num_keys)
else:
self._multi_key = False
self._num_keys = int(num_keys)
self._internal_key_shape = torch.Size([self._num_keys])
self._internal_key_ndim = len(self._internal_key_shape)
if key_offset is None:
self._key_offset = None
else:
if self._multi_key:
if isinstance(key_offset, (list, tuple)):
key_offset = [int(n) for n in key_offset]
if len(key_offset) != len(self._num_keys):
raise RuntimeError("The length of `key_offset` does not match the length of `num_keys`")
else:
key_offset = [int(key_offset) for _ in range(len(self._num_keys))]
self._key_offset = torch.as_tensor(key_offset, dtype=torch.int64, device=self._device)
else:
if isinstance(key_offset, (list, tuple)):
raise RuntimeError("`key_offset` cannot be a sequence of integers when `num_keys` is a scalar")
else:
self._key_offset = torch.as_tensor(int(key_offset), dtype=torch.int64, device=self._device)
if self._verify:
if self._multi_key:
self._min_key = torch.zeros(len(self._num_keys), dtype=torch.int64, device=self._device)
self._max_key = torch.tensor(list(self._num_keys), dtype=torch.int64, device=self._device) - 1
else:
self._min_key = torch.tensor(0, dtype=torch.int64, device=self._device)
self._max_key = torch.tensor(self._num_keys - 1, dtype=torch.int64, device=self._device)
if self._key_offset is not None:
self._min_key += self._key_offset
self._max_key += self._key_offset
else:
self._min_key = None
self._max_key = None
nsize = len(size)
if nsize == 0:
self._value_shape = torch.Size([])
elif nsize == 1:
if isinstance(size[0], (tuple, list)):
self._value_shape = torch.Size((int(n) for n in size[0]))
else:
self._value_shape = torch.Size([int(size[0])])
else:
self._value_shape = torch.Size((int(n) for n in size))
self._value_ndim = len(self._value_shape)
if (batch_size is None) and (batch_shape is None):
batch_size = None
elif (batch_size is not None) and (batch_shape is None):
pass
elif (batch_size is None) and (batch_shape is not None):
batch_size = batch_shape
else:
raise RuntimeError(
"Encountered both `batch_shape` and `batch_size` at the same time."
" None of them or one of them can be accepted, but not both of them at the same time."
)
if batch_size is None:
self._batch_shape = torch.Size([])
elif isinstance(batch_size, (tuple, list)):
self._batch_shape = torch.Size((int(n) for n in batch_size))
else:
self._batch_shape = torch.Size([int(batch_size)])
self._batch_ndim = len(self._batch_shape)
self._for_all_batches = tuple(
(
torch.arange(self._batch_shape[i], dtype=torch.int64, device=self._device)
for i in range(self._batch_ndim)
)
)
self._data = torch.empty(
self._batch_shape + self._internal_key_shape + self._value_shape,
dtype=(self._dtype),
device=(self._device),
)
if fill_with is not None:
self._data[:] = fill_with
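To make the behavior of `num_keys` and `key_offset` concrete, here is a small usage sketch based on the descriptions above (the printed shape follows from the stated semantics; names are illustrative only):
```python
import torch
from evotorch.tools.structures import CMemory

# A memory whose keys are tuples: num_keys=(3, 5) means 15 slots,
# addressed by keys from (0, 0) to (2, 4) by default.
multi = CMemory(7, num_keys=(3, 5))

# With key_offset=1, the same 15 slots are addressed by keys
# from (1, 1) to (3, 5) instead.
shifted = CMemory(7, num_keys=(3, 5), key_offset=1)

# A scalar num_keys with fill_with gives zero-initialized slots 0..9.
plain = CMemory(7, num_keys=10, fill_with=0.0)
print(plain.data.shape)  # torch.Size([10, 7])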
__setitem__(self, key, value)
special
¶
Set the value(s) associated with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | Union[numbers.Number, Iterable[numbers.Number]] | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
value | Union[numbers.Number, Iterable[numbers.Number]] | The new value(s). | required |
Source code in evotorch/tools/structures.py
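def __setitem__(self, key: Numbers, value: Numbers):
    """
    Set the value(s) associated with the given key(s).
    Args:
        key: A single key, or multiple keys (where the leftmost dimension
            of the given keys conform with the `batch_shape`).
        value: The new value(s).
    """
    self.set_(key, value)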
add_(self, key, value, where=None)
¶
Add value(s) onto the existing values of slots with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | Union[numbers.Number, Iterable[numbers.Number]] | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
value | Union[numbers.Number, Iterable[numbers.Number]] | The value(s) that will be added onto the existing value(s). | required |
where | Union[numbers.Number, Iterable[numbers.Number]] | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then modifications will happen only on the memory slots whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def add_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Add value(s) onto the existing values of slots with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be added onto the existing value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
address = self._get_address(key, where=where)
value = self._get_value(value)
if where is None:
if self._is_dtype_bool:
self._data[address] |= value
else:
self._data[address] += value
else:
if self._is_dtype_bool:
mask_shape = self._batch_shape + tuple((1 for _ in range(self._value_ndim)))
self._data[address] |= value & where.reshape(mask_shape)
else:
self._data[address] += do_where(where, value, torch.tensor(0, dtype=value.dtype, device=value.device))
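As a small illustrative sketch (variable names here are hypothetical), a masked in-place addition on a batched memory looks like this:
```python
import torch
from evotorch.tools.structures import CMemory

mem = CMemory(7, num_keys=5, batch_size=3, fill_with=0.0)

# Add 1.0 to the slot with key 2, but only for batch items 0 and 2.
mem.add_(2, 1.0, where=[True, False, True])
print(mem.get(2))  # rows 0 and 2 become ones; row 1 stays zeros
```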
add_circular_(self, key, value, mod, where=None)
¶
Increase the values of the specified slots in a circular manner.
This operation combines the add and modulo operations.
Circularly adding `value` onto `x` with a modulo `mod` means: `x = (x + value) % mod`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | Union[numbers.Number, Iterable[numbers.Number]] | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
value | Union[numbers.Number, Iterable[numbers.Number]] | The value(s) that will be added onto the existing value(s). | required |
mod | Union[numbers.Number, Iterable[numbers.Number]] | After the raw adding operation, the modulos according to this `mod` argument will be computed and placed. | required |
where | Union[numbers.Number, Iterable[numbers.Number]] | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then modifications will happen only on the memory slots whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def add_circular_(self, key: Numbers, value: Numbers, mod: Numbers, where: Optional[Numbers] = None):
"""
Increase the values of the specified slots in a circular manner.
This operation combines the add and modulo operations.
Circularly adding `value` onto `x` with a modulo `mod` means:
`x = (x + value) % mod`.
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be added onto the existing value(s).
mod: After the raw adding operation, the modulos according to this
`mod` argument will be computed and placed.
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
address = self._get_address(key, where=where)
value = self._get_value(value)
mod = self._get_value(mod)
if self._is_dtype_bool:
raise ValueError("Circular addition is not supported for dtype `torch.bool`")
if where is None:
self._data[address] = (self._data[address] + value) % mod
else:
old_value = self._data[address]
new_value = (old_value + value) % mod
self._data[address] = do_where(where, new_value, old_value)
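Circular addition is handy, for instance, for advancing ring-buffer pointers stored in a `CMemory`. A brief sketch under the semantics described above (names are illustrative only):
```python
import torch
from evotorch.tools.structures import CMemory

# One integer pointer per key, wrapping around a buffer of size 4.
ptr = CMemory(num_keys=3, dtype=torch.int64, fill_with=3)

# (3 + 2) % 4 == 1
ptr.add_circular_(0, 2, mod=4)
print(ptr.get(0))  # tensor(1)
```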
divide_(self, key, value, where=None)
¶
Divide the existing values of slots with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | Union[numbers.Number, Iterable[numbers.Number]] | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
value | Union[numbers.Number, Iterable[numbers.Number]] | The value(s) that will be used as divisor(s). | required |
where | Union[numbers.Number, Iterable[numbers.Number]] | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then modifications will happen only on the memory slots whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def divide_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Divide the existing values of slots with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be used as divisor(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self.multiply_(key, 1 / value, where)
get(self, key)
¶
Get the value(s) associated with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | Union[numbers.Number, Iterable[numbers.Number]] | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
Returns:
Type | Description |
---|---|
Tensor | The value(s) associated with the given key(s). |
Source code in evotorch/tools/structures.py
def get(self, key: Numbers) -> torch.Tensor:
"""
Get the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
Returns:
The value(s) associated with the given key(s).
"""
address = self._get_address(key)
return self._data[address]
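Continuing the batched example from the class description above, `get(...)` (equivalently, the `[]` operator) can read a different key per batch item in one call; a brief sketch:
```python
# Reads key 1 of batch item 0, key 2 of item 1, key 3 of item 2.
values = memory_batch.get([1, 2, 3])  # same as memory_batch[[1, 2, 3]]
print(values.shape)  # torch.Size([3, 7])
```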
multiply_(self, key, value, where=None)
¶
Multiply the existing values of slots with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | Union[numbers.Number, Iterable[numbers.Number]] | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
value | Union[numbers.Number, Iterable[numbers.Number]] | The value(s) that will be used as the multiplier(s). | required |
where | Union[numbers.Number, Iterable[numbers.Number]] | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then modifications will happen only on the memory slots whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def multiply_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Multiply the existing values of slots with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be used as the multiplier(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
address = self._get_address(key, where=where)
value = self._get_value(value)
if where is None:
if self._is_dtype_bool:
self._data[address] &= value
else:
            self._data[address] *= value
else:
if self._is_dtype_bool:
self._data[address] &= do_where(
where, value, torch.tensor(True, dtype=value.dtype, device=value.device)
)
else:
self._data[address] *= do_where(where, value, torch.tensor(1, dtype=value.dtype, device=value.device))
prepare_key_tensor(self, key)
¶
Return the tensor-counterpart of a key.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | Union[numbers.Number, Iterable[numbers.Number]] | A key which can be a sequence of integers or a PyTorch tensor with an integer dtype. The shape of the given key must conform with the `key_shape` of this memory object. To address to a different key in each batch item, the shape of the given key can also have extra leftmost dimensions expressed by `batch_shape`. | required |
Returns:
Type | Description |
---|---|
Tensor | A copy of the key that is converted to PyTorch tensor. |
Source code in evotorch/tools/structures.py
def prepare_key_tensor(self, key: Numbers) -> torch.Tensor:
"""
Return the tensor-counterpart of a key.
Args:
key: A key which can be a sequence of integers or a PyTorch tensor
with an integer dtype.
The shape of the given key must conform with the `key_shape`
of this memory object.
To address to a different key in each batch item, the shape of
the given key can also have extra leftmost dimensions expressed
by `batch_shape`.
Returns:
A copy of the key that is converted to PyTorch tensor.
"""
return self._get_key(key)
prepare_value_tensor(self, value)
¶
Return the tensor-counterpart of a value.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
value | Union[numbers.Number, Iterable[numbers.Number]] | A value that can be a numeric sequence or a PyTorch tensor. The shape of the given value must conform with the `value_shape` of this memory object. To express a different value for each batch item, the shape of the given value can also have extra leftmost dimensions expressed by `batch_shape`. | required |
Returns:
Type | Description |
---|---|
Tensor | A copy of the given value(s), converted to PyTorch tensor. |
Source code in evotorch/tools/structures.py
def prepare_value_tensor(self, value: Numbers) -> torch.Tensor:
"""
Return the tensor-counterpart of a value.
Args:
value: A value that can be a numeric sequence or a PyTorch tensor.
The shape of the given value must conform with the
`value_shape` of this memory object.
To express a different value for each batch item, the shape of
the given value can also have extra leftmost dimensions
            expressed by `batch_shape`.
Returns:
A copy of the given value(s), converted to PyTorch tensor.
"""
return self._get_value(value)
prepare_where_tensor(self, where)
¶
Return the tensor-counterpart of a boolean mask.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
where | Union[numbers.Number, Iterable[numbers.Number]] | A boolean mask expressed as a sequence of bools or as a boolean PyTorch tensor. The shape of the given mask must conform with the batch shape that is expressed by the property `batch_shape`. | required |
Returns:
Type | Description |
---|---|
Tensor | A copy of the boolean mask, converted to PyTorch tensor. |
Source code in evotorch/tools/structures.py
def prepare_where_tensor(self, where: Numbers) -> torch.Tensor:
"""
Return the tensor-counterpart of a boolean mask.
Args:
where: A boolean mask expressed as a sequence of bools or as a
boolean PyTorch tensor.
The shape of the given mask must conform with the batch shape
that is expressed by the property `batch_shape`.
Returns:
A copy of the boolean mask, converted to PyTorch tensor.
"""
return self._get_where(where)
set_(self, key, value, where=None)
¶
Set the value(s) associated with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | Union[numbers.Number, Iterable[numbers.Number]] | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
value | Union[numbers.Number, Iterable[numbers.Number]] | The new value(s). | required |
where | Union[numbers.Number, Iterable[numbers.Number]] | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then modifications will happen only on the memory slots whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def set_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Set the value(s) associated with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The new value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
where = None if where is None else self._get_where(where)
address = self._get_address(key, where=where)
value = self._get_value(value)
if where is None:
self._data[address] = value
else:
old_value = self._data[address]
new_value = value
self._data[address] = do_where(where, new_value, old_value)
subtract_(self, key, value, where=None)
¶
Subtract value(s) from existing values of slots with the given key(s).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | Union[numbers.Number, Iterable[numbers.Number]] | A single key, or multiple keys (where the leftmost dimension of the given keys conform with the `batch_shape`). | required |
value | Union[numbers.Number, Iterable[numbers.Number]] | The value(s) that will be subtracted from existing value(s). | required |
where | Union[numbers.Number, Iterable[numbers.Number]] | Optionally a boolean mask whose shape matches `batch_shape`. If a `where` mask is given, then modifications will happen only on the memory slots whose corresponding mask values are True. | None |
Source code in evotorch/tools/structures.py
def subtract_(self, key: Numbers, value: Numbers, where: Optional[Numbers] = None):
"""
Subtract value(s) from existing values of slots with the given key(s).
Args:
key: A single key, or multiple keys (where the leftmost dimension
of the given keys conform with the `batch_shape`).
value: The value(s) that will be subtracted from existing value(s).
where: Optionally a boolean mask whose shape matches `batch_shape`.
If a `where` mask is given, then modifications will happen only
on the memory slots whose corresponding mask values are True.
"""
self.add_(key, -value, where)
Structure
¶
A mixin class for vectorized structures.
This mixin class assumes that the inheriting structure has a protected attribute `_data` which is either a `CMemory` object or another `Structure`. With this assumption, this mixin class provides certain methods and properties to bring a unified interface for all vectorized structures provided in this namespace.
Source code in evotorch/tools/structures.py
class Structure:
"""
A mixin class for vectorized structures.
This mixin class assumes that the inheriting structure has a protected
attribute `_data` which is either a `CMemory` object or another
`Structure`. With this assumption, this mixin class provides certain
methods and properties to bring a unified interface for all vectorized
structures provided in this namespace.
"""
_data: Union[CMemory, "Structure"]
@property
def value_shape(self) -> torch.Size:
"""
Shape of a single value
"""
return self._data.value_shape
@property
def value_ndim(self) -> int:
"""
Number of dimensions expressed by `value_shape`
"""
return self._data.value_ndim
@property
def batch_shape(self) -> torch.Size:
"""
Batch size of this structure
"""
return self._data.batch_shape
@property
def batch_ndim(self) -> int:
"""
Number of dimensions expressed by `batch_shape`
"""
return self._data.batch_ndim
@property
def is_batched(self) -> bool:
"""
True if this structure is batched; False otherwise.
"""
        return self.batch_ndim > 0
@property
def dtype(self) -> torch.dtype:
"""
`dtype` of the values
"""
return self._data.dtype
@property
def device(self) -> torch.device:
"""
The device on which this structure lives
"""
return self._data.device
def prepare_value_tensor(self, value: Numbers) -> torch.Tensor:
"""
Return the tensor-counterpart of a value.
Args:
value: A value that can be a numeric sequence or a PyTorch tensor.
The shape of the given value must conform with the
`value_shape` of this memory object.
To express a different value for each batch item, the shape of
the given value can also have extra leftmost dimensions
                expressed by `batch_shape`.
Returns:
A copy of the given value(s), converted to PyTorch tensor.
"""
return self._data.prepare_value_tensor(value)
def prepare_where_tensor(self, where: Numbers) -> torch.Tensor:
"""
Return the tensor-counterpart of a boolean mask.
Args:
where: A boolean mask expressed as a sequence of bools or as a
boolean PyTorch tensor.
The shape of the given mask must conform with the batch shape
that is expressed by the property `batch_shape`.
Returns:
A copy of the boolean mask, converted to PyTorch tensor.
"""
return self._data.prepare_where_tensor(where)
def _get_value(self, value: Numbers) -> torch.Tensor:
return self._data.prepare_value_tensor(value)
def _get_where(self, where: Numbers) -> torch.Tensor:
return self._data.prepare_where_tensor(where)
def __contains__(self, x: Any) -> torch.Tensor:
raise TypeError("This structure does not support the `in` operator")
batch_ndim: int
property
readonly
¶
Number of dimensions expressed by batch_shape
batch_shape: Size
property
readonly
¶
Batch size of this structure
device: device
property
readonly
¶
The device on which this structure lives
dtype: dtype
property
readonly
¶
`dtype` of the values
is_batched: bool
property
readonly
¶
True if this structure is batched; False otherwise.
value_ndim: int
property
readonly
¶
Number of dimensions expressed by value_shape
value_shape: Size
property
readonly
¶
Shape of a single value
prepare_value_tensor(self, value)
¶
Return the tensor-counterpart of a value.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
value | Union[numbers.Number, Iterable[numbers.Number]] | A value that can be a numeric sequence or a PyTorch tensor. The shape of the given value must conform with the `value_shape` of this memory object. To express a different value for each batch item, the shape of the given value can also have extra leftmost dimensions expressed by `batch_shape`. | required |
Returns:
Type | Description |
---|---|
Tensor | A copy of the given value(s), converted to PyTorch tensor. |
Source code in evotorch/tools/structures.py
def prepare_value_tensor(self, value: Numbers) -> torch.Tensor:
"""
Return the tensor-counterpart of a value.
Args:
value: A value that can be a numeric sequence or a PyTorch tensor.
The shape of the given value must conform with the
`value_shape` of this memory object.
To express a different value for each batch item, the shape of
the given value can also have extra leftmost dimensions
            expressed by `batch_shape`.
Returns:
A copy of the given value(s), converted to PyTorch tensor.
"""
return self._data.prepare_value_tensor(value)
prepare_where_tensor(self, where)
¶
Return the tensor-counterpart of a boolean mask.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
where | Union[numbers.Number, Iterable[numbers.Number]] | A boolean mask expressed as a sequence of bools or as a boolean PyTorch tensor. The shape of the given mask must conform with the batch shape that is expressed by the property `batch_shape`. | required |
Returns:
Type | Description |
---|---|
Tensor | A copy of the boolean mask, converted to PyTorch tensor. |
Source code in evotorch/tools/structures.py
def prepare_where_tensor(self, where: Numbers) -> torch.Tensor:
"""
Return the tensor-counterpart of a boolean mask.
Args:
where: A boolean mask expressed as a sequence of bools or as a
boolean PyTorch tensor.
The shape of the given mask must conform with the batch shape
that is expressed by the property `batch_shape`.
Returns:
A copy of the boolean mask, converted to PyTorch tensor.
"""
return self._data.prepare_where_tensor(where)
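To make the shape contract of `prepare_value_tensor(...)` and `prepare_where_tensor(...)` concrete, here is a minimal sketch in plain PyTorch. The concrete structure class and its constructor are outside this excerpt, so only the shape relationships are illustrated, and the shapes themselves are invented:
import torch

batch_shape = torch.Size([4])   # hypothetical batch shape of the structure
value_shape = torch.Size([3])   # hypothetical shape of a single value

# A value conforming to `value_shape` applies uniformly to the batch:
single_value = torch.zeros(value_shape)

# Extra leftmost (batch) dimensions express one value per batch item:
per_item_values = torch.zeros(batch_shape + value_shape)  # shape: (4, 3)

# A `where` mask conforms to `batch_shape`, selecting batch items:
where = torch.tensor([True, False, True, False])
assert where.shape == batch_shape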
tensormaker
¶
Base classes with various utilities for creating tensors.
TensorMakerMixin
¶
Source code in evotorch/tools/tensormaker.py
class TensorMakerMixin:
def __get_dtype_and_device_kwargs(
self,
*,
dtype: Optional[DType],
device: Optional[Device],
use_eval_dtype: bool,
out: Optional[Iterable],
) -> dict:
result = {}
if out is None:
if dtype is None:
if use_eval_dtype:
if hasattr(self, "eval_dtype"):
result["dtype"] = self.eval_dtype
else:
raise AttributeError(
f"Received `use_eval_dtype` as {repr(use_eval_dtype)}, which represents boolean truth."
f" However, evaluation dtype cannot be determined, because this object does not have"
f" an attribute named `eval_dtype`."
)
else:
result["dtype"] = self.dtype
else:
if use_eval_dtype:
raise ValueError(
f"Received both a `dtype` argument ({repr(dtype)}) and `use_eval_dtype` as True."
f" These arguments are conflicting."
f" Please either provide a `dtype`, or leave `dtype` as None and pass `use_eval_dtype=True`."
)
else:
result["dtype"] = dtype
if device is None:
result["device"] = self.device
else:
result["device"] = device
return result
def __get_size_args(self, *size: Size, num_solutions: Optional[int], out: Optional[Iterable]) -> tuple:
if out is None:
nsize = len(size)
if (nsize == 0) and (num_solutions is None):
return tuple()
elif (nsize >= 1) and (num_solutions is None):
return size
elif (nsize == 0) and (num_solutions is not None):
if hasattr(self, "solution_length"):
num_solutions = int(num_solutions)
if self.solution_length is None:
return (num_solutions,)
else:
return (num_solutions, self.solution_length)
else:
raise AttributeError(
f"Received `num_solutions` as {repr(num_solutions)}."
f" However, to determine the target tensor's size via `num_solutions`, this object"
f" needs to have an attribute named `solution_length`, which seems to be missing."
)
else:
raise ValueError(
f"Encountered both `size` arguments ({repr(size)})"
f" and `num_solutions` keyword argument (num_solutions={repr(num_solutions)})."
f" Specifying both `size` and `num_solutions` is not valid."
)
else:
return tuple()
def __get_generator_kwargs(self, *, generator: Any) -> dict:
result = {}
if generator is None:
if hasattr(self, "generator"):
result["generator"] = self.generator
else:
result["generator"] = generator
return result
def __get_all_args_for_maker(
self,
*size: Size,
num_solutions: Optional[int],
out: Optional[Iterable],
dtype: Optional[DType],
device: Optional[Device],
use_eval_dtype: bool,
) -> tuple:
args = self.__get_size_args(*size, num_solutions=num_solutions, out=out)
kwargs = self.__get_dtype_and_device_kwargs(dtype=dtype, device=device, use_eval_dtype=use_eval_dtype, out=out)
if out is not None:
kwargs["out"] = out
return args, kwargs
def __get_all_args_for_random_maker(
self,
*size: Size,
num_solutions: Optional[int],
out: Optional[Iterable],
dtype: Optional[DType],
device: Optional[Device],
use_eval_dtype: bool,
generator: Any,
):
args = self.__get_size_args(*size, num_solutions=num_solutions, out=out)
kwargs = {}
kwargs.update(
self.__get_dtype_and_device_kwargs(dtype=dtype, device=device, use_eval_dtype=use_eval_dtype, out=out)
)
kwargs.update(self.__get_generator_kwargs(generator=generator))
if out is not None:
kwargs["out"] = out
return args, kwargs
def make_tensor(
self,
data: Any,
*,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
read_only: bool = False,
) -> Iterable:
"""
Make a new tensor.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor are determined by this method's parent object.
Args:
data: The data to be converted to a tensor.
If one wishes to create a PyTorch tensor, this can be anything
that can be stored by a PyTorch tensor.
If one wishes to create an `ObjectArray` and therefore passes
`dtype=object`, then the provided `data` is expected as an
`Iterable`.
dtype: Optionally a string (e.g. "float32"), or a PyTorch dtype
(e.g. torch.float32), or `object` or "object" (as a string)
or `Any` if one wishes to create an `ObjectArray`.
If `dtype` is not specified it will be assumed that the user
wishes to create a tensor using the dtype of this method's
parent object.
device: The device in which the tensor will be stored.
If `device` is not specified, it will be assumed that the user
wishes to create a tensor on the device of this method's
parent object.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
read_only: Whether or not the created tensor will be read-only.
By default, this is False.
Returns:
A PyTorch tensor or an ObjectArray.
"""
kwargs = self.__get_dtype_and_device_kwargs(dtype=dtype, device=device, use_eval_dtype=use_eval_dtype, out=None)
return misc.make_tensor(data, read_only=read_only, **kwargs)
def make_empty(
self,
*size: Size,
num_solutions: Optional[int] = None,
out: Optional[Iterable] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> Iterable:
"""
Make an empty tensor.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor are determined by this method's parent object.
Args:
size: Shape of the empty tensor to be created.
expected as multiple positional arguments of integers,
or as a single positional argument containing a tuple of
integers.
Note that when the user wishes to create an `ObjectArray`
(i.e. when `dtype` is given as `object`), then the size
is expected as a single integer, or as a single-element
tuple containing an integer (because `ObjectArray` can only
be one-dimensional).
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32) or, for creating an `ObjectArray`,
"object" (as string) or `object` or `Any`.
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The new empty tensor, which can be a PyTorch tensor or an
`ObjectArray`.
"""
args, kwargs = self.__get_all_args_for_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
)
return misc.make_empty(*args, **kwargs)
def make_zeros(
self,
*size: Size,
num_solutions: Optional[int] = None,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> torch.Tensor:
"""
Make a new tensor filled with 0, or fill an existing tensor with 0.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor are determined by this method's parent object.
Args:
size: Size of the new tensor to be filled with 0.
This can be given as multiple positional arguments, each such
positional argument being an integer, or as a single positional
argument of a tuple, the tuple containing multiple integers.
Note that, if the user wishes to fill an existing tensor with
0 values, then no positional argument is expected.
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
out: Optionally, the tensor to be filled by 0 values.
If an `out` tensor is given, then no `size` argument is expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The created or modified tensor after placing 0 values.
"""
args, kwargs = self.__get_all_args_for_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
)
return misc.make_zeros(*args, **kwargs)
def make_ones(
self,
*size: Size,
num_solutions: Optional[int] = None,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> torch.Tensor:
"""
Make a new tensor filled with 1, or fill an existing tensor with 1.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor are determined by this method's parent object.
Args:
size: Size of the new tensor to be filled with 1.
This can be given as multiple positional arguments, each such
positional argument being an integer, or as a single positional
argument of a tuple, the tuple containing multiple integers.
Note that, if the user wishes to fill an existing tensor with
1 values, then no positional argument is expected.
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
out: Optionally, the tensor to be filled by 1 values.
If an `out` tensor is given, then no `size` argument is expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The created or modified tensor after placing 1 values.
"""
args, kwargs = self.__get_all_args_for_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
)
return misc.make_ones(*args, **kwargs)
def make_nan(
self,
*size: Size,
num_solutions: Optional[int] = None,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> torch.Tensor:
"""
Make a new tensor filled with NaN values, or fill an existing tensor
with NaN values.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor are determined by this method's parent object.
Args:
size: Size of the new tensor to be filled with NaN.
This can be given as multiple positional arguments, each such
positional argument being an integer, or as a single positional
argument of a tuple, the tuple containing multiple integers.
Note that, if the user wishes to fill an existing tensor with
NaN values, then no positional argument is expected.
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
out: Optionally, the tensor to be filled by NaN values.
If an `out` tensor is given, then no `size` argument is expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The created or modified tensor after placing NaN values.
"""
args, kwargs = self.__get_all_args_for_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
)
return misc.make_nan(*args, **kwargs)
def make_I(
self,
size: Optional[Union[int, tuple]] = None,
*,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> torch.Tensor:
"""
Make a new identity matrix (I), or change an existing tensor so that
it expresses the identity matrix.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor are determined by this method's parent object.
Args:
size: A single integer or a tuple containing a single integer,
where the integer specifies the length of the target square
matrix. In this context, "length" means both rowwise length
and columnwise length, since the target is a square matrix.
Note that, if the user wishes to fill an existing tensor with
identity values, then `size` is expected to be left as None.
out: Optionally, the existing tensor whose values will be changed
so that they represent an identity matrix.
If an `out` tensor is given, then `size` is expected as None.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The created or modified tensor after placing the I matrix values
"""
if size is None:
if out is None:
if hasattr(self, "solution_length"):
size_args = (self.solution_length,)
else:
raise AttributeError(
"The method `.make_I(...)` was used without any `size`"
" arguments."
" When the `size` argument is missing, the default"
" behavior of this method is to create an identity matrix"
" of size (n, n), n being the length of a solution."
" However, the parent object of this method does not have"
" an attribute name `solution_length`."
)
else:
size_args = tuple()
elif isinstance(size, tuple):
if len(size) != 1:
raise ValueError(
f"When the size argument is given as a tuple, the method `make_I(...)` expects the tuple to have"
f" only one element. The given tuple is {size}."
)
size_args = size
else:
size_args = (int(size),)
args, kwargs = self.__get_all_args_for_maker(
*size_args,
num_solutions=None,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
)
return misc.make_I(*args, **kwargs)
def make_uniform(
self,
*size: Size,
num_solutions: Optional[int] = None,
lb: Optional[RealOrVector] = None,
ub: Optional[RealOrVector] = None,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
generator: Any = None,
) -> torch.Tensor:
"""
Make a new or existing tensor filled by uniformly distributed values.
Both lower and upper bounds are inclusive.
This function can work with both float and int dtypes.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor are determined by this method's parent object.
Args:
size: Size of the new tensor to be filled with uniformly distributed
values. This can be given as multiple positional arguments, each
such positional argument being an integer, or as a single
positional argument of a tuple, the tuple containing multiple
integers. Note that, if the user wishes to fill an existing
tensor instead, then no positional argument is expected.
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
lb: Lower bound for the uniformly distributed values.
Can be a scalar, or a tensor.
If not specified, the lower bound will be taken as 0.
Note that, if one specifies `lb`, then `ub` is also expected to
be explicitly specified.
ub: Upper bound for the uniformly distributed values.
Can be a scalar, or a tensor.
If not specified, the upper bound will be taken as 1.
Note that, if one specifies `ub`, then `lb` is also expected to
be explicitly specified.
out: Optionally, the tensor to be filled by uniformly distributed
values. If an `out` tensor is given, then no `size` argument is
expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
generator: Pseudo-random generator to be used when sampling
the values. Can be a `torch.Generator` or any object with
a `generator` attribute (e.g. a Problem object).
If not given, then this method's parent object will be
analyzed whether or not it has its own generator.
If it does, that generator will be used.
If not, the global generator of PyTorch will be used.
Returns:
The created or modified tensor after placing the uniformly
distributed values.
"""
args, kwargs = self.__get_all_args_for_random_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
generator=generator,
)
return misc.make_uniform(*args, lb=lb, ub=ub, **kwargs)
def make_gaussian(
self,
*size: Size,
num_solutions: Optional[int] = None,
center: Optional[RealOrVector] = None,
stdev: Optional[RealOrVector] = None,
symmetric: bool = False,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
generator: Any = None,
) -> torch.Tensor:
"""
Make a new or existing tensor filled by Gaussian distributed values.
This function can work only with float dtypes.
Args:
size: Size of the new tensor to be filled with Gaussian distributed
values. This can be given as multiple positional arguments, each
such positional argument being an integer, or as a single
positional argument of a tuple, the tuple containing multiple
integers. Note that, if the user wishes to fill an existing
tensor instead, then no positional argument is expected.
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
center: Center point (i.e. mean) of the Gaussian distribution.
Can be a scalar, or a tensor.
If not specified, the center point will be taken as 0.
Note that, if one specifies `center`, then `stdev` is also
expected to be explicitly specified.
stdev: Standard deviation for the Gaussian distributed values.
Can be a scalar, or a tensor.
If not specified, the standard deviation will be taken as 1.
Note that, if one specifies `stdev`, then `center` is also
expected to be explicitly specified.
symmetric: Whether or not the values should be sampled in a
symmetric (i.e. antithetic) manner.
The default is False.
out: Optionally, the tensor to be filled by Gaussian distributed
values. If an `out` tensor is given, then no `size` argument is
expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
generator: Pseudo-random generator to be used when sampling
the values. Can be a `torch.Generator` or any object with
a `generator` attribute (e.g. a Problem object).
If not given, then this method's parent object will be
analyzed whether or not it has its own generator.
If it does, that generator will be used.
If not, the global generator of PyTorch will be used.
Returns:
The created or modified tensor after placing the Gaussian
distributed values.
"""
args, kwargs = self.__get_all_args_for_random_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
generator=generator,
)
return misc.make_gaussian(*args, center=center, stdev=stdev, symmetric=symmetric, **kwargs)
def make_randint(
self,
*size: Size,
n: Union[int, float, torch.Tensor],
num_solutions: Optional[int] = None,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
generator: Any = None,
) -> torch.Tensor:
"""
Make a new or existing tensor filled by random integers.
The integers are uniformly distributed within `[0 ... n-1]`.
This function can be used with integer or float dtypes.
Args:
size: Size of the new tensor to be filled with uniformly distributed
values. This can be given as multiple positional arguments, each
such positional argument being an integer, or as a single
positional argument of a tuple, the tuple containing multiple
integers. Note that, if the user wishes to fill an existing
tensor instead, then no positional argument is expected.
n: Number of choice(s) for integer sampling.
The lowest possible value will be 0, and the highest possible
value will be n - 1.
`n` can be a scalar, or a tensor.
out: Optionally, the tensor to be filled by the random integers.
If an `out` tensor is given, then no `size` argument is
expected.
dtype: Optionally a string (e.g. "int64") or a PyTorch dtype
(e.g. torch.int64).
If `dtype` is not specified (and also `out` is None),
`torch.int64` will be used.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
generator: Pseudo-random generator to be used when sampling
the values. Can be a `torch.Generator` or any object with
a `generator` attribute (e.g. a Problem object).
If not given, then this method's parent object will be
analyzed whether or not it has its own generator.
If it does, that generator will be used.
If not, the global generator of PyTorch will be used.
Returns:
The created or modified tensor after placing the uniformly
distributed values.
"""
if (dtype is None) and (out is None):
dtype = torch.int64
args, kwargs = self.__get_all_args_for_random_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
generator=generator,
)
return misc.make_randint(*args, n=n, **kwargs)
def as_tensor(
self,
x: Any,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> torch.Tensor:
"""
Get the tensor counterpart of the given object `x`.
Args:
x: Any object to be converted to a tensor.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32) or, for creating an `ObjectArray`,
"object" (as string) or `object` or `Any`.
If `dtype` is not specified, the dtype of this method's
parent object will be used.
device: The device in which the resulting tensor will be stored.
If `device` is not specified, the device of this method's
parent object will be used.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The tensor counterpart of the given object `x`.
"""
kwargs = self.__get_dtype_and_device_kwargs(dtype=dtype, device=device, use_eval_dtype=use_eval_dtype, out=None)
return misc.as_tensor(x, **kwargs)
def ensure_tensor_length_and_dtype(
self,
t: Any,
length: Optional[int] = None,
dtype: Optional[DType] = None,
about: Optional[str] = None,
*,
allow_scalar: bool = False,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> Iterable:
"""
Return the given sequence as a tensor while also confirming its
length, dtype, and device.
Default length, dtype, and device are taken from this method's
parent object.
In more detail, these attributes belonging to this method's parent
object will be used for determining the defaults:
`solution_length`, `dtype`, and `device`.
Args:
t: The tensor, or a sequence which is convertible to a tensor.
length: The length to which the tensor is expected to conform.
If missing, the `solution_length` attribute of this method's
parent object will be used as the default value.
dtype: The dtype to which the tensor is expected to conform.
If `dtype` argument is missing and `use_eval_dtype` is False,
then the default dtype will be determined by the `dtype`
attribute of this method's parent object.
If `dtype` argument is missing and `use_eval_dtype` is True,
then the default dtype will be determined by the `eval_dtype`
attribute of this method's parent object.
about: The prefix for the error message. Can be left as None.
allow_scalar: Whether or not to accept scalars in addition
to a vector of the desired length.
If `allow_scalar` is False, then scalars will be converted
to sequences of the desired length. The sequence will contain
the same scalar, repeated.
If `allow_scalar` is True, then the scalar itself will be
converted to a PyTorch scalar, and then will be returned.
device: The device in which the sequence is to be stored.
If the given sequence is on a different device than the
desired device, a copy on the correct device will be made.
If device is None, the default behavior of `torch.tensor(...)`
will be used, that is: if `t` is already a tensor, the result
will be on the same device, otherwise, the result will be on
the cpu.
use_eval_dtype: Whether or not to use the evaluation dtype
(instead of the dtype of decision values).
If this is given as True, the `dtype` argument is expected
as None.
If `dtype` argument is missing and `use_eval_dtype` is False,
then the default dtype will be determined by the `dtype`
attribute of this method's parent object.
If `dtype` argument is missing and `use_eval_dtype` is True,
then the default dtype will be determined by the `eval_dtype`
attribute of this method's parent object.
Returns:
The sequence whose correctness in terms of length, dtype, and
device is ensured.
Raises:
ValueError: if there is a length mismatch.
"""
if length is None:
if hasattr(self, "solution_length"):
length = self.solution_length
else:
raise AttributeError(
f"{about}: The argument `length` was found to be None."
f" When the `length` argument is None, the default behavior is to use the `solution_length`"
f" attribute of this method's parent object."
f" However, this method's parent object does NOT have a `solution_length` attribute."
)
dtype_and_device = self.__get_dtype_and_device_kwargs(
dtype=dtype, device=device, use_eval_dtype=use_eval_dtype, out=None
)
return misc.ensure_tensor_length_and_dtype(
t, length=length, about=about, allow_scalar=allow_scalar, **dtype_and_device
)
def make_uniform_shaped_like(
self,
t: torch.Tensor,
*,
lb: Optional[RealOrVector] = None,
ub: Optional[RealOrVector] = None,
) -> torch.Tensor:
"""
Make a new uniformly-filled tensor, shaped like the given tensor.
The `dtype` and `device` will be determined by the parent of this
method (not by the given tensor).
If the parent of this method has its own random generator, then that
generator will be used.
Args:
t: The tensor according to which the result will be shaped.
lb: The inclusive lower bounds for the uniform distribution.
Can be a scalar or a tensor.
If left as None, 0.0 will be used as the lower bound.
ub: The inclusive upper bounds for the uniform distribution.
Can be a scalar or a tensor.
If left as None, 1.0 will be used as the upper bound.
Returns:
A new tensor whose shape is the same with the given tensor.
"""
return self.make_uniform(t.shape, lb=lb, ub=ub)
def make_gaussian_shaped_like(
self,
t: torch.Tensor,
*,
center: Optional[RealOrVector] = None,
stdev: Optional[RealOrVector] = None,
) -> torch.Tensor:
"""
Make a new tensor, shaped like the given tensor, with its values
filled by the Gaussian distribution.
The `dtype` and `device` will be determined by the parent of this
method (not by the given tensor).
If the parent of this method has its own random generator, then that
generator will be used.
Args:
t: The tensor according to which the result will be shaped.
center: Center point for the Gaussian distribution.
Can be a scalar or a tensor.
If left as None, 0.0 will be used as the center point.
stdev: The standard deviation for the Gaussian distribution.
Can be a scalar or a tensor.
If left as None, 1.0 will be used as the standard deviation.
Returns:
A new tensor whose shape is the same with the given tensor.
"""
return self.make_gaussian(t.shape, center=center, stdev=stdev)
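As a usage illustration, here is a minimal, hypothetical host class for this mixin. The class `ToySource` is invented for this sketch; what is grounded in the code above is that the mixin reads the host's `dtype`, `device`, and (when `num_solutions` is used) `solution_length` attributes:
import torch
from evotorch.tools.tensormaker import TensorMakerMixin

class ToySource(TensorMakerMixin):
    # Hypothetical host exposing the attributes the mixin looks up.
    def __init__(self):
        self.dtype = torch.float32
        self.device = torch.device("cpu")
        self.solution_length = 5

src = ToySource()

# `num_solutions=3` with `solution_length=5` yields a (3, 5) tensor:
z = src.make_zeros(num_solutions=3)
assert z.shape == (3, 5) and z.dtype == torch.float32

# Explicit sizes, bounds, and dtypes can also be given:
u = src.make_uniform(2, 5, lb=-1.0, ub=1.0)   # uniform in [-1, 1]
r = src.make_randint(3, 5, n=10)              # int64 by default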
as_tensor(self, x, dtype=None, device=None, use_eval_dtype=False)
¶
Get the tensor counterpart of the given object `x`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x |
Any |
Any object to be converted to a tensor. |
required |
dtype |
Union[str, torch.dtype, numpy.dtype, Type] |
Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32) or, for creating an `ObjectArray`, "object" (as string) or `object` or `Any`. If `dtype` is not specified, the dtype of this method's parent object will be used. |
None |
device |
Union[str, torch.device] |
The device in which the resulting tensor will be stored. If `device` is not specified, the device of this method's parent object will be used. |
None |
use_eval_dtype |
bool |
If this is given as True and a `dtype` is not specified, then the `dtype` of the result will be taken from the `eval_dtype` attribute of this method's parent object. |
False |
Returns:
Type | Description |
---|---|
Tensor |
The tensor counterpart of the given object `x`. |
Source code in evotorch/tools/tensormaker.py
def as_tensor(
self,
x: Any,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> torch.Tensor:
"""
Get the tensor counterpart of the given object `x`.
Args:
x: Any object to be converted to a tensor.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32) or, for creating an `ObjectArray`,
"object" (as string) or `object` or `Any`.
If `dtype` is not specified, the dtype of this method's
parent object will be used.
device: The device in which the resulting tensor will be stored.
If `device` is not specified, the device of this method's
parent object will be used.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The tensor counterpart of the given object `x`.
"""
kwargs = self.__get_dtype_and_device_kwargs(dtype=dtype, device=device, use_eval_dtype=use_eval_dtype, out=None)
return misc.as_tensor(x, **kwargs)
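For example, a short sketch reusing the hypothetical `ToySource` instance `src` from the earlier sketch:
x = src.as_tensor([1.0, 2.0, 3.0])                       # float32, on src.device
x64 = src.as_tensor([1.0, 2.0, 3.0], dtype="float64")    # explicit dtype override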
ensure_tensor_length_and_dtype(self, t, length=None, dtype=None, about=None, *, allow_scalar=False, device=None, use_eval_dtype=False)
¶
Return the given sequence as a tensor while also confirming its length, dtype, and device.
Default length, dtype, and device are taken from this method's parent object. In more detail, these attributes belonging to this method's parent object will be used for determining the defaults: `solution_length`, `dtype`, and `device`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
t |
Any |
The tensor, or a sequence which is convertible to a tensor. |
required |
length |
Optional[int] |
The length to which the tensor is expected to conform. If missing, the `solution_length` attribute of this method's parent object will be used as the default value. |
None |
dtype |
Union[str, torch.dtype, numpy.dtype, Type] |
The dtype to which the tensor is expected to conform. If the `dtype` argument is missing and `use_eval_dtype` is False, then the default dtype will be determined by the `dtype` attribute of this method's parent object. If the `dtype` argument is missing and `use_eval_dtype` is True, then the default dtype will be determined by the `eval_dtype` attribute of this method's parent object. |
None |
about |
Optional[str] |
The prefix for the error message. Can be left as None. |
None |
allow_scalar |
bool |
Whether or not to accept scalars in addition to a vector of the desired length. If `allow_scalar` is False, then scalars will be converted to sequences of the desired length (the same scalar, repeated). If `allow_scalar` is True, then the scalar itself will be converted to a PyTorch scalar, and then will be returned. |
False |
device |
Union[str, torch.device] |
The device in which the sequence is to be stored. If the given sequence is on a different device than the desired device, a copy on the correct device will be made. If device is None, the default behavior of `torch.tensor(...)` will be used, that is: if `t` is already a tensor, the result will be on the same device, otherwise, the result will be on the cpu. |
None |
use_eval_dtype |
bool |
Whether or not to use the evaluation dtype (instead of the dtype of decision values). If this is given as True, the `dtype` argument is expected as None. |
False |
Returns:
Type | Description |
---|---|
Iterable |
The sequence whose correctness in terms of length, dtype, and device is ensured. |
Exceptions:
Type | Description |
---|---|
ValueError |
if there is a length mismatch. |
Source code in evotorch/tools/tensormaker.py
def ensure_tensor_length_and_dtype(
self,
t: Any,
length: Optional[int] = None,
dtype: Optional[DType] = None,
about: Optional[str] = None,
*,
allow_scalar: bool = False,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> Iterable:
"""
Return the given sequence as a tensor while also confirming its
length, dtype, and device.
Default length, dtype, and device are taken from this method's
parent object.
In more detail, these attributes belonging to this method's parent
object will be used for determining the defaults:
`solution_length`, `dtype`, and `device`.
Args:
t: The tensor, or a sequence which is convertible to a tensor.
length: The length to which the tensor is expected to conform.
If missing, the `solution_length` attribute of this method's
parent object will be used as the default value.
dtype: The dtype to which the tensor is expected to conform.
If `dtype` argument is missing and `use_eval_dtype` is False,
then the default dtype will be determined by the `dtype`
attribute of this method's parent object.
If `dtype` argument is missing and `use_eval_dtype` is True,
then the default dtype will be determined by the `eval_dtype`
attribute of this method's parent object.
about: The prefix for the error message. Can be left as None.
allow_scalar: Whether or not to accept scalars in addition
to a vector of the desired length.
If `allow_scalar` is False, then scalars will be converted
to sequences of the desired length. The sequence will contain
the same scalar, repeated.
If `allow_scalar` is True, then the scalar itself will be
converted to a PyTorch scalar, and then will be returned.
device: The device in which the sequence is to be stored.
If the given sequence is on a different device than the
desired device, a copy on the correct device will be made.
If device is None, the default behavior of `torch.tensor(...)`
will be used, that is: if `t` is already a tensor, the result
will be on the same device, otherwise, the result will be on
the cpu.
use_eval_dtype: Whether or not to use the evaluation dtype
(instead of the dtype of decision values).
If this is given as True, the `dtype` argument is expected
as None.
If `dtype` argument is missing and `use_eval_dtype` is False,
then the default dtype will be determined by the `dtype`
attribute of this method's parent object.
If `dtype` argument is missing and `use_eval_dtype` is True,
then the default dtype will be determined by the `eval_dtype`
attribute of this method's parent object.
Returns:
The sequence whose correctness in terms of length, dtype, and
device is ensured.
Raises:
ValueError: if there is a length mismatch.
"""
if length is None:
if hasattr(self, "solution_length"):
length = self.solution_length
else:
raise AttributeError(
f"{about}: The argument `length` was found to be None."
f" When the `length` argument is None, the default behavior is to use the `solution_length`"
f" attribute of this method's parent object."
f" However, this method's parent object does NOT have a `solution_length` attribute."
)
dtype_and_device = self.__get_dtype_and_device_kwargs(
dtype=dtype, device=device, use_eval_dtype=use_eval_dtype, out=None
)
return misc.ensure_tensor_length_and_dtype(
t, length=length, about=about, allow_scalar=allow_scalar, **dtype_and_device
)
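A brief sketch of the behaviors described above, again assuming the hypothetical `src` with `solution_length = 5`:
# A scalar is repeated to a vector of length `solution_length`:
v = src.ensure_tensor_length_and_dtype(0.5)
assert v.shape == (5,)

# With allow_scalar=True, the scalar stays a 0-dimensional tensor:
s = src.ensure_tensor_length_and_dtype(0.5, allow_scalar=True)
assert s.ndim == 0

# A sequence of the wrong length raises ValueError:
# src.ensure_tensor_length_and_dtype([1.0, 2.0], about="example")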
make_I(self, size=None, *, out=None, dtype=None, device=None, use_eval_dtype=False)
¶
Make a new identity matrix (I), or change an existing tensor so that it expresses the identity matrix.
When not explicitly specified via arguments, the dtype and the device of the resulting tensor are determined by this method's parent object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
size |
Union[tuple, int] |
A single integer or a tuple containing a single integer, where the integer specifies the length of the target square matrix. In this context, "length" means both rowwise length and columnwise length, since the target is a square matrix. Note that, if the user wishes to fill an existing tensor with identity values, then `size` is expected to be left as None. |
None |
out |
Optional[torch.Tensor] |
Optionally, the existing tensor whose values will be changed so that they represent an identity matrix. If an `out` tensor is given, then `size` is expected as None. |
None |
dtype |
Union[str, torch.dtype, numpy.dtype, Type] |
Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32). If `dtype` is not specified (and also `out` is None), it will be assumed that the user wishes to create a tensor using the dtype of this method's parent object. If an `out` tensor is specified, then `dtype` is expected as None. |
None |
device |
Union[str, torch.device] |
The device in which the new empty tensor will be stored. If not specified (and also `out` is None), it will be assumed that the user wishes to create a tensor on the same device with this method's parent object. If an `out` tensor is specified, then `device` is expected as None. |
None |
use_eval_dtype |
bool |
If this is given as True and a `dtype` is not specified, then the `dtype` of the result will be taken from the `eval_dtype` attribute of this method's parent object. |
False |
Returns:
Type | Description |
---|---|
Tensor |
The created or modified tensor after placing the I matrix values |
Source code in evotorch/tools/tensormaker.py
def make_I(
self,
size: Optional[Union[int, tuple]] = None,
*,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> torch.Tensor:
"""
Make a new identity matrix (I), or change an existing tensor so that
it expresses the identity matrix.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor are determined by this method's parent object.
Args:
size: A single integer or a tuple containing a single integer,
where the integer specifies the length of the target square
matrix. In this context, "length" means both rowwise length
and columnwise length, since the target is a square matrix.
Note that, if the user wishes to fill an existing tensor with
identity values, then `size` is expected to be left as None.
out: Optionally, the existing tensor whose values will be changed
so that they represent an identity matrix.
If an `out` tensor is given, then `size` is expected as None.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The created or modified tensor after placing the I matrix values
"""
if size is None:
if out is None:
if hasattr(self, "solution_length"):
size_args = (self.solution_length,)
else:
raise AttributeError(
"The method `.make_I(...)` was used without any `size`"
" arguments."
" When the `size` argument is missing, the default"
" behavior of this method is to create an identity matrix"
" of size (n, n), n being the length of a solution."
" However, the parent object of this method does not have"
" an attribute name `solution_length`."
)
else:
size_args = tuple()
elif isinstance(size, tuple):
if len(size) != 1:
raise ValueError(
f"When the size argument is given as a tuple, the method `make_I(...)` expects the tuple to have"
f" only one element. The given tuple is {size}."
)
size_args = size
else:
size_args = (int(size),)
args, kwargs = self.__get_all_args_for_maker(
*size_args,
num_solutions=None,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
)
return misc.make_I(*args, **kwargs)
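For instance, with the hypothetical `src` from earlier (`solution_length = 5`):
I_default = src.make_I()     # (5, 5) identity, sized by solution_length
I_small = src.make_I(3)      # (3, 3) identity from an explicit size
I_tuple = src.make_I((3,))   # a single-element tuple is also accepted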
make_empty(self, *size, num_solutions=None, out=None, dtype=None, device=None, use_eval_dtype=False)
¶
Make an empty tensor.
When not explicitly specified via arguments, the dtype and the device of the resulting tensor are determined by this method's parent object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
size |
Union[int, torch.Size] |
Shape of the empty tensor to be created. Expected as multiple positional arguments of integers, or as a single positional argument containing a tuple of integers. Note that when the user wishes to create an `ObjectArray` (i.e. when `dtype` is given as `object`), then the size is expected as a single integer, or as a single-element tuple containing an integer (because `ObjectArray` can only be one-dimensional). |
() |
num_solutions |
Optional[int] |
This can be used instead of the `size` arguments for specifying the shape of the target tensor. Expected as an integer, when `num_solutions` is specified as `n`, the shape of the resulting tensor will be `(n, m)` where `m` is the solution length reported by this method's parent object's `solution_length` attribute. |
None |
dtype |
Union[str, torch.dtype, numpy.dtype, Type] |
Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32) or, for creating an `ObjectArray`, "object" (as string) or `object` or `Any`. If `dtype` is not specified (and also `out` is None), it will be assumed that the user wishes to create a tensor using the dtype of this method's parent object. |
None |
device |
Union[str, torch.device] |
The device in which the new empty tensor will be stored. If not specified (and also `out` is None), it will be assumed that the user wishes to create a tensor on the same device with this method's parent object. |
None |
use_eval_dtype |
bool |
If this is given as True and a `dtype` is not specified, then the `dtype` of the result will be taken from the `eval_dtype` attribute of this method's parent object. |
False |
Returns:
Type | Description |
---|---|
Iterable |
The new empty tensor, which can be a PyTorch tensor or an `ObjectArray`. |
Source code in evotorch/tools/tensormaker.py
def make_empty(
self,
*size: Size,
num_solutions: Optional[int] = None,
out: Optional[Iterable] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> Iterable:
"""
Make an empty tensor.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor are determined by this method's parent object.
Args:
size: Shape of the empty tensor to be created.
expected as multiple positional arguments of integers,
or as a single positional argument containing a tuple of
integers.
Note that when the user wishes to create an `ObjectArray`
(i.e. when `dtype` is given as `object`), then the size
is expected as a single integer, or as a single-element
tuple containing an integer (because `ObjectArray` can only
be one-dimensional).
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32) or, for creating an `ObjectArray`,
"object" (as string) or `object` or `Any`.
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The new empty tensor, which can be a PyTorch tensor or an
`ObjectArray`.
"""
args, kwargs = self.__get_all_args_for_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
)
return misc.make_empty(*args, **kwargs)
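Two illustrative calls with the hypothetical `src`; the `ObjectArray` case follows from the `dtype=object` behavior documented above:
buf = src.make_empty(num_solutions=2)   # uninitialized (2, 5) float32 tensor
objs = src.make_empty(4, dtype=object)  # one-dimensional ObjectArray, length 4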
make_gaussian(self, *size, num_solutions=None, center=None, stdev=None, symmetric=False, out=None, dtype=None, device=None, use_eval_dtype=False, generator=None)
¶
Make a new or existing tensor filled by Gaussian distributed values. This function can work only with float dtypes.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
size |
Union[int, torch.Size] |
Size of the new tensor to be filled with Gaussian distributed values. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple, the tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor instead, then no positional argument is expected. |
() |
num_solutions |
Optional[int] |
This can be used instead of the `size` arguments for specifying the shape of the target tensor. Expected as an integer, when `num_solutions` is specified as `n`, the shape of the resulting tensor will be `(n, m)` where `m` is the solution length reported by this method's parent object's `solution_length` attribute. |
None |
center |
Union[float, Iterable[float], torch.Tensor] |
Center point (i.e. mean) of the Gaussian distribution. Can be a scalar, or a tensor. If not specified, the center point will be taken as 0. Note that, if one specifies `center`, then `stdev` is also expected to be explicitly specified. |
None |
stdev |
Union[float, Iterable[float], torch.Tensor] |
Standard deviation for the Gaussian distributed values. Can be a scalar, or a tensor. If not specified, the standard deviation will be taken as 1. Note that, if one specifies `stdev`, then `center` is also expected to be explicitly specified. |
None |
symmetric |
bool |
Whether or not the values should be sampled in a symmetric (i.e. antithetic) manner. The default is False. |
False |
out |
Optional[torch.Tensor] |
Optionally, the tensor to be filled by Gaussian distributed values. If an `out` tensor is given, then no `size` argument is expected. |
None |
dtype |
Union[str, torch.dtype, numpy.dtype, Type] |
Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32). If `dtype` is not specified (and also `out` is None), it will be assumed that the user wishes to create a tensor using the dtype of this method's parent object. If an `out` tensor is specified, then `dtype` is expected as None. |
None |
device |
Union[str, torch.device] |
The device in which the new empty tensor will be stored. If not specified (and also `out` is None), it will be assumed that the user wishes to create a tensor on the same device with this method's parent object. If an `out` tensor is specified, then `device` is expected as None. |
None |
use_eval_dtype |
bool |
If this is given as True and a `dtype` is not specified, then the `dtype` of the result will be taken from the `eval_dtype` attribute of this method's parent object. |
False |
generator |
Any |
Pseudo-random generator to be used when sampling the values. Can be a `torch.Generator` or any object with a `generator` attribute (e.g. a Problem object). If not given, then this method's parent object will be analyzed whether or not it has its own generator. If it does, that generator will be used. If not, the global generator of PyTorch will be used. |
None |
Returns:
Type | Description |
---|---|
Tensor |
The created or modified tensor after placing the Gaussian distributed values. |
Source code in evotorch/tools/tensormaker.py
def make_gaussian(
self,
*size: Size,
num_solutions: Optional[int] = None,
center: Optional[RealOrVector] = None,
stdev: Optional[RealOrVector] = None,
symmetric: bool = False,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
generator: Any = None,
) -> torch.Tensor:
"""
Make a new or existing tensor filled by Gaussian distributed values.
This function can work only with float dtypes.
Args:
size: Size of the new tensor to be filled with Gaussian distributed
values. This can be given as multiple positional arguments, each
such positional argument being an integer, or as a single
positional argument of a tuple, the tuple containing multiple
integers. Note that, if the user wishes to fill an existing
tensor instead, then no positional argument is expected.
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
center: Center point (i.e. mean) of the Gaussian distribution.
Can be a scalar, or a tensor.
If not specified, the center point will be taken as 0.
Note that, if one specifies `center`, then `stdev` is also
expected to be explicitly specified.
stdev: Standard deviation for the Gaussian distributed values.
Can be a scalar, or a tensor.
If not specified, the standard deviation will be taken as 1.
Note that, if one specifies `stdev`, then `center` is also
expected to be explicitly specified.
symmetric: Whether or not the values should be sampled in a
symmetric (i.e. antithetic) manner.
The default is False.
out: Optionally, the tensor to be filled by Gaussian distributed
values. If an `out` tensor is given, then no `size` argument is
expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
generator: Pseudo-random generator to be used when sampling
the values. Can be a `torch.Generator` or any object with
a `generator` attribute (e.g. a Problem object).
If not given, then this method's parent object will be
analyzed whether or not it has its own generator.
If it does, that generator will be used.
If not, the global generator of PyTorch will be used.
Returns:
The created or modified tensor after placing the Gaussian
distributed values.
"""
args, kwargs = self.__get_all_args_for_random_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
generator=generator,
)
return misc.make_gaussian(*args, center=center, stdev=stdev, symmetric=symmetric, **kwargs)
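For example, with the hypothetical `src`. The exact pairing behavior of `symmetric=True` is not spelled out in this excerpt beyond "antithetic", so the comment below hedges accordingly:
g = src.make_gaussian(num_solutions=4, center=0.0, stdev=1.0)

# Antithetic sampling; presumably the sampled rows come in pairs that
# mirror each other around `center`, so an even row count is natural:
g_sym = src.make_gaussian(num_solutions=4, center=0.0, stdev=1.0, symmetric=True)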
make_gaussian_shaped_like(self, t, *, center=None, stdev=None)
¶
Make a new tensor, shaped like the given tensor, with its values filled by the Gaussian distribution.
The `dtype` and `device` will be determined by the parent of this method (not by the given tensor). If the parent of this method has its own random generator, then that generator will be used.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
t |
Tensor |
The tensor according to which the result will be shaped. |
required |
center |
Union[float, Iterable[float], torch.Tensor] |
Center point for the Gaussian distribution. Can be a scalar or a tensor. If left as None, 0.0 will be used as the center point. |
None |
stdev |
Union[float, Iterable[float], torch.Tensor] |
The standard deviation for the Gaussian distribution. Can be a scalar or a tensor. If left as None, 1.0 will be used as the standard deviation. |
None |
Returns:
Type | Description |
---|---|
Tensor |
A new tensor whose shape is the same with the given tensor. |
Source code in evotorch/tools/tensormaker.py
def make_gaussian_shaped_like(
self,
t: torch.Tensor,
*,
center: Optional[RealOrVector] = None,
stdev: Optional[RealOrVector] = None,
) -> torch.Tensor:
"""
Make a new tensor, shaped like the given tensor, with its values
filled by the Gaussian distribution.
The `dtype` and `device` will be determined by the parent of this
method (not by the given tensor).
If the parent of this method has its own random generator, then that
generator will be used.
Args:
t: The tensor according to which the result will be shaped.
center: Center point for the Gaussian distribution.
Can be a scalar or a tensor.
If left as None, 0.0 will be used as the center point.
stdev: The standard deviation for the Gaussian distribution.
Can be a scalar or a tensor.
If left as None, 1.0 will be used as the standard deviation.
Returns:
A new tensor whose shape is the same with the given tensor.
"""
return self.make_gaussian(t.shape, center=center, stdev=stdev)
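A short sketch with the hypothetical `src`: only the shape of `t` is copied, while dtype and device come from the parent:
t = torch.ones(6, dtype=torch.float64)   # dtype/device of `t` are ignored
n = src.make_gaussian_shaped_like(t, center=0.0, stdev=0.1)
assert n.shape == t.shape and n.dtype == torch.float32  # parent's dtype wins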
make_nan(self, *size, num_solutions=None, out=None, dtype=None, device=None, use_eval_dtype=False)
¶
Make a new tensor filled with NaN values, or fill an existing tensor with NaN values.
When not explicitly specified via arguments, the dtype and the device of the resulting tensor are determined by this method's parent object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
size |
Union[int, torch.Size] |
Size of the new tensor to be filled with NaN. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple, the tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor with NaN values, then no positional argument is expected. |
() |
num_solutions |
Optional[int] |
This can be used instead of the `size` arguments for specifying the shape of the target tensor. Expected as an integer, when `num_solutions` is specified as `n`, the shape of the resulting tensor will be `(n, m)` where `m` is the solution length reported by this method's parent object's `solution_length` attribute. |
None |
out |
Optional[torch.Tensor] |
Optionally, the tensor to be filled by NaN values. If an `out` tensor is given, then no `size` argument is expected. |
None |
dtype |
Union[str, torch.dtype, numpy.dtype, Type] |
Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. torch.float32). If `dtype` is not specified (and also `out` is None), it will be assumed that the user wishes to create a tensor using the dtype of this method's parent object. If an `out` tensor is specified, then `dtype` is expected as None. |
None |
device |
Union[str, torch.device] |
The device in which the new empty tensor will be stored. If not specified (and also `out` is None), it will be assumed that the user wishes to create a tensor on the same device with this method's parent object. If an `out` tensor is specified, then `device` is expected as None. |
None |
use_eval_dtype |
bool |
If this is given as True and a `dtype` is not specified, then the `dtype` of the result will be taken from the `eval_dtype` attribute of this method's parent object. |
False |
Returns:
Type | Description |
---|---|
Tensor |
The created or modified tensor after placing NaN values. |
Source code in evotorch/tools/tensormaker.py
def make_nan(
self,
*size: Size,
num_solutions: Optional[int] = None,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> torch.Tensor:
"""
Make a new tensor filled with NaN values, or fill an existing tensor
with NaN values.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor are determined by this method's parent object.
Args:
size: Size of the new tensor to be filled with NaN.
This can be given as multiple positional arguments, each such
positional argument being an integer, or as a single positional
argument of a tuple, the tuple containing multiple integers.
Note that, if the user wishes to fill an existing tensor with
NaN values, then no positional argument is expected.
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
out: Optionally, the tensor to be filled by NaN values.
If an `out` tensor is given, then no `size` argument is expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The created or modified tensor after placing NaN values.
"""
args, kwargs = self.__get_all_args_for_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
)
return misc.make_nan(*args, **kwargs)
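A hedged usage sketch (the sphere `Problem` below is a stand-in for any parent object exposing this method):

```python
import torch
from evotorch import Problem

problem = Problem(
    "min", lambda x: torch.sum(x**2), solution_length=10, initial_bounds=(-1.0, 1.0)
)

# A new (5, 10) NaN-filled tensor, sized via num_solutions:
placeholder = problem.make_nan(num_solutions=5)
assert torch.isnan(placeholder).all()

# Filling an existing tensor in-place; no size arguments are given here:
buf = torch.empty(4)
problem.make_nan(out=buf)
```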
make_ones(self, *size, num_solutions=None, out=None, dtype=None, device=None, use_eval_dtype=False)
¶
Make a new tensor filled with 1, or fill an existing tensor with 1.
When not explicitly specified via arguments, the dtype and the device of the resulting tensor are determined by this method's parent object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`size` | `Union[int, torch.Size]` | Size of the new tensor to be filled with 1. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor with 1 values, then no positional argument is expected. | `()` |
`num_solutions` | `Optional[int]` | Can be used instead of the `size` arguments for specifying the shape of the target tensor. Expected as an integer; when `num_solutions` is given as `n`, the shape of the resulting tensor will be `(n, m)`, where `m` is the solution length reported by this method's parent object's `solution_length` attribute. | `None` |
`out` | `Optional[torch.Tensor]` | Optionally, the tensor to be filled by 1 values. If an `out` tensor is given, then no `size` argument is expected. | `None` |
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. `torch.float32`). If `dtype` is not specified (and `out` is None), the dtype of this method's parent object will be used. If an `out` tensor is specified, then `dtype` is expected as None. | `None` |
`device` | `Union[str, torch.device]` | The device in which the new tensor will be stored. If not specified (and `out` is None), the device of this method's parent object will be used. If an `out` tensor is specified, then `device` is expected as None. | `None` |
`use_eval_dtype` | `bool` | If True and `dtype` is not specified, the `dtype` of the result will be taken from the `eval_dtype` attribute of this method's parent object. | `False` |
Returns:
Type | Description |
---|---|
`Tensor` | The created or modified tensor after placing 1 values. |
Source code in evotorch/tools/tensormaker.py
def make_ones(
self,
*size: Size,
num_solutions: Optional[int] = None,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> torch.Tensor:
"""
Make a new tensor filled with 1, or fill an existing tensor with 1.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor is determined by this method's parent object.
Args:
size: Size of the new tensor to be filled with 1.
This can be given as multiple positional arguments, each such
positional argument being an integer, or as a single positional
argument of a tuple, the tuple containing multiple integers.
Note that, if the user wishes to fill an existing tensor with
1 values, then no positional argument is expected.
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
out: Optionally, the tensor to be filled by 1 values.
If an `out` tensor is given, then no `size` argument is expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The created or modified tensor after placing 1 values.
"""
args, kwargs = self.__get_all_args_for_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
)
return misc.make_ones(*args, **kwargs)
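For illustration (same assumed toy `Problem` pattern as in the earlier sketches):

```python
import torch
from evotorch import Problem

problem = Problem(
    "min", lambda x: torch.sum(x**2), solution_length=10, initial_bounds=(-1.0, 1.0)
)

row = problem.make_ones(10)                 # shape (10,), dtype/device from `problem`
batch = problem.make_ones(num_solutions=3)  # shape (3, 10)
assert batch.shape == (3, 10) and bool((batch == 1).all())
```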
make_randint(self, *size, n, num_solutions=None, out=None, dtype=None, device=None, use_eval_dtype=False, generator=None)
¶
Make a new or existing tensor filled with random integers. The integers are uniformly distributed within `[0 ... n-1]`.
This function can be used with integer or float dtypes.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`size` | `Union[int, torch.Size]` | Size of the new tensor to be filled with the random integers. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor instead, then no positional argument is expected. | `()` |
`n` | `Union[int, float, torch.Tensor]` | Number of choices for the integer sampling. The lowest possible value will be 0, and the highest possible value will be `n - 1`. `n` can be a scalar, or a tensor. | required |
`num_solutions` | `Optional[int]` | Can be used instead of the `size` arguments for specifying the shape of the target tensor. Expected as an integer; when `num_solutions` is given as `k`, the shape of the resulting tensor will be `(k, m)`, where `m` is the solution length reported by this method's parent object's `solution_length` attribute. | `None` |
`out` | `Optional[torch.Tensor]` | Optionally, the tensor to be filled by the random integers. If an `out` tensor is given, then no `size` argument is expected. | `None` |
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "int64") or a PyTorch dtype (e.g. `torch.int64`). If `dtype` is not specified (and `out` is None), `torch.int64` will be used. If an `out` tensor is specified, then `dtype` is expected as None. | `None` |
`device` | `Union[str, torch.device]` | The device in which the new tensor will be stored. If not specified (and `out` is None), the device of this method's parent object will be used. If an `out` tensor is specified, then `device` is expected as None. | `None` |
`use_eval_dtype` | `bool` | If True and `dtype` is not specified, the `dtype` of the result will be taken from the `eval_dtype` attribute of this method's parent object. | `False` |
`generator` | `Any` | Pseudo-random generator to be used when sampling the values. Can be a `torch.Generator` or any object with a `generator` attribute (e.g. a Problem object). If not given, this method's parent object is checked for its own generator; if it has one, that generator is used, otherwise the global generator of PyTorch is used. | `None` |
Returns:
Type | Description |
---|---|
`Tensor` | The created or modified tensor after placing the uniformly distributed values. |
Source code in evotorch/tools/tensormaker.py
def make_randint(
self,
*size: Size,
n: Union[int, float, torch.Tensor],
num_solutions: Optional[int] = None,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
generator: Any = None,
) -> torch.Tensor:
"""
Make a new or existing tensor filled by random integers.
The integers are uniformly distributed within `[0 ... n-1]`.
This function can be used with integer or float dtypes.
Args:
size: Size of the new tensor to be filled with uniformly distributed
values. This can be given as multiple positional arguments, each
such positional argument being an integer, or as a single
positional argument of a tuple, the tuple containing multiple
integers. Note that, if the user wishes to fill an existing
tensor instead, then no positional argument is expected.
n: Number of choice(s) for integer sampling.
The lowest possible value will be 0, and the highest possible
value will be n - 1.
`n` can be a scalar, or a tensor.
out: Optionally, the tensor to be filled by the random integers.
If an `out` tensor is given, then no `size` argument is
expected.
dtype: Optionally a string (e.g. "int64") or a PyTorch dtype
(e.g. torch.int64).
If `dtype` is not specified (and also `out` is None),
`torch.int64` will be used.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
generator: Pseudo-random generator to be used when sampling
the values. Can be a `torch.Generator` or any object with
a `generator` attribute (e.g. a Problem object).
If not given, then this method's parent object will be
analyzed whether or not it has its own generator.
If it does, that generator will be used.
If not, the global generator of PyTorch will be used.
Returns:
The created or modified tensor after placing the uniformly
distributed values.
"""
if (dtype is None) and (out is None):
dtype = torch.int64
args, kwargs = self.__get_all_args_for_random_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
generator=generator,
)
return misc.make_randint(*args, n=n, **kwargs)
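A usage sketch under the same toy-`Problem` assumption, also showing the explicit-generator path described above:

```python
import torch
from evotorch import Problem

problem = Problem(
    "min", lambda x: torch.sum(x**2), solution_length=10, initial_bounds=(-1.0, 1.0)
)

# 20 integers drawn uniformly from {0, ..., 5}; dtype defaults to torch.int64:
choices = problem.make_randint(20, n=6)
assert choices.dtype == torch.int64
assert int(choices.min()) >= 0 and int(choices.max()) <= 5

# Reproducible sampling via an explicit torch.Generator:
gen = torch.Generator().manual_seed(0)
repeatable = problem.make_randint(20, n=6, generator=gen)
```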
make_tensor(self, data, *, dtype=None, device=None, use_eval_dtype=False, read_only=False)
¶
Make a new tensor.
When not explicitly specified via arguments, the dtype and the device of the resulting tensor are determined by this method's parent object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`data` | `Any` | The data to be converted to a tensor. If one wishes to create a PyTorch tensor, this can be anything that can be stored by a PyTorch tensor. If one wishes to create an `ObjectArray` and therefore passes `dtype=object`, then the provided `data` is expected as an `Iterable`. | required |
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32"), or a PyTorch dtype (e.g. `torch.float32`), or `object` or "object" (as a string) or `Any` if one wishes to create an `ObjectArray`. If `dtype` is not specified, the dtype of this method's parent object will be used. | `None` |
`device` | `Union[str, torch.device]` | The device in which the tensor will be stored. If `device` is not specified, the device of this method's parent object will be used. | `None` |
`use_eval_dtype` | `bool` | If True and a `dtype` is not specified, the `dtype` of the result will be taken from the `eval_dtype` attribute of this method's parent object. | `False` |
`read_only` | `bool` | Whether or not the created tensor will be read-only. By default, this is False. | `False` |
Returns:
Type | Description |
---|---|
`Iterable` | A PyTorch tensor or an `ObjectArray`. |
Source code in evotorch/tools/tensormaker.py
def make_tensor(
self,
data: Any,
*,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
read_only: bool = False,
) -> Iterable:
"""
Make a new tensor.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor is determined by this method's parent object.
Args:
data: The data to be converted to a tensor.
If one wishes to create a PyTorch tensor, this can be anything
that can be stored by a PyTorch tensor.
If one wishes to create an `ObjectArray` and therefore passes
`dtype=object`, then the provided `data` is expected as an
`Iterable`.
dtype: Optionally a string (e.g. "float32"), or a PyTorch dtype
(e.g. torch.float32), or `object` or "object" (as a string)
or `Any` if one wishes to create an `ObjectArray`.
If `dtype` is not specified it will be assumed that the user
wishes to create a tensor using the dtype of this method's
parent object.
device: The device in which the tensor will be stored.
If `device` is not specified, it will be assumed that the user
wishes to create a tensor on the device of this method's
parent object.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
read_only: Whether or not the created tensor will be read-only.
By default, this is False.
Returns:
A PyTorch tensor or an ObjectArray.
"""
kwargs = self.__get_dtype_and_device_kwargs(dtype=dtype, device=device, use_eval_dtype=use_eval_dtype, out=None)
return misc.make_tensor(data, read_only=read_only, **kwargs)
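A sketch of both return types (again assuming a toy `Problem` as the parent object; the `dtype=object` branch yields an `ObjectArray` rather than a torch tensor, per the docstring above):

```python
import torch
from evotorch import Problem

problem = Problem(
    "min", lambda x: torch.sum(x**2), solution_length=10, initial_bounds=(-1.0, 1.0)
)

x = problem.make_tensor([1.0, 2.0, 3.0])              # float32, on problem's device
frozen = problem.make_tensor([1.0, 2.0], read_only=True)

# With dtype=object, an ObjectArray is produced instead of a torch tensor:
objs = problem.make_tensor(["a", [1, 2], None], dtype=object)
```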
make_uniform(self, *size, num_solutions=None, lb=None, ub=None, out=None, dtype=None, device=None, use_eval_dtype=False, generator=None)
¶
Make a new or existing tensor filled with uniformly distributed values. Both the lower and the upper bounds are inclusive. This function can work with both float and int dtypes.
When not explicitly specified via arguments, the dtype and the device of the resulting tensor are determined by this method's parent object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`size` | `Union[int, torch.Size]` | Size of the new tensor to be filled with uniformly distributed values. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor instead, then no positional argument is expected. | `()` |
`num_solutions` | `Optional[int]` | Can be used instead of the `size` arguments for specifying the shape of the target tensor. Expected as an integer; when `num_solutions` is given as `n`, the shape of the resulting tensor will be `(n, m)`, where `m` is the solution length reported by this method's parent object's `solution_length` attribute. | `None` |
`lb` | `Union[float, Iterable[float], torch.Tensor]` | Lower bound for the uniformly distributed values. Can be a scalar, or a tensor. If not specified, the lower bound will be taken as 0. Note that, if one specifies `lb`, then `ub` is also expected to be explicitly specified. | `None` |
`ub` | `Union[float, Iterable[float], torch.Tensor]` | Upper bound for the uniformly distributed values. Can be a scalar, or a tensor. If not specified, the upper bound will be taken as 1. Note that, if one specifies `ub`, then `lb` is also expected to be explicitly specified. | `None` |
`out` | `Optional[torch.Tensor]` | Optionally, the tensor to be filled by uniformly distributed values. If an `out` tensor is given, then no `size` argument is expected. | `None` |
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. `torch.float32`). If `dtype` is not specified (and `out` is None), the dtype of this method's parent object will be used. If an `out` tensor is specified, then `dtype` is expected as None. | `None` |
`device` | `Union[str, torch.device]` | The device in which the new tensor will be stored. If not specified (and `out` is None), the device of this method's parent object will be used. If an `out` tensor is specified, then `device` is expected as None. | `None` |
`use_eval_dtype` | `bool` | If True and `dtype` is not specified, the `dtype` of the result will be taken from the `eval_dtype` attribute of this method's parent object. | `False` |
`generator` | `Any` | Pseudo-random generator to be used when sampling the values. Can be a `torch.Generator` or any object with a `generator` attribute (e.g. a Problem object). If not given, this method's parent object is checked for its own generator; if it has one, that generator is used, otherwise the global generator of PyTorch is used. | `None` |
Returns:
Type | Description |
---|---|
`Tensor` | The created or modified tensor after placing the uniformly distributed values. |
Source code in evotorch/tools/tensormaker.py
def make_uniform(
self,
*size: Size,
num_solutions: Optional[int] = None,
lb: Optional[RealOrVector] = None,
ub: Optional[RealOrVector] = None,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
generator: Any = None,
) -> torch.Tensor:
"""
Make a new or existing tensor filled by uniformly distributed values.
Both lower and upper bounds are inclusive.
This function can work with both float and int dtypes.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor is determined by this method's parent object.
Args:
size: Size of the new tensor to be filled with uniformly distributed
values. This can be given as multiple positional arguments, each
such positional argument being an integer, or as a single
positional argument of a tuple, the tuple containing multiple
integers. Note that, if the user wishes to fill an existing
tensor instead, then no positional argument is expected.
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
lb: Lower bound for the uniformly distributed values.
Can be a scalar, or a tensor.
If not specified, the lower bound will be taken as 0.
Note that, if one specifies `lb`, then `ub` is also expected to
be explicitly specified.
ub: Upper bound for the uniformly distributed values.
Can be a scalar, or a tensor.
If not specified, the upper bound will be taken as 1.
Note that, if one specifies `ub`, then `lb` is also expected to
be explicitly specified.
out: Optionally, the tensor to be filled by uniformly distributed
values. If an `out` tensor is given, then no `size` argument is
expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
generator: Pseudo-random generator to be used when sampling
the values. Can be a `torch.Generator` or any object with
a `generator` attribute (e.g. a Problem object).
If not given, then this method's parent object will be
analyzed whether or not it has its own generator.
If it does, that generator will be used.
If not, the global generator of PyTorch will be used.
Returns:
The created or modified tensor after placing the uniformly
distributed values.
"""
args, kwargs = self.__get_all_args_for_random_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
generator=generator,
)
return misc.make_uniform(*args, lb=lb, ub=ub, **kwargs)
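A common use is sampling an initial batch of candidate solutions; a hedged sketch under the same toy-`Problem` assumption:

```python
import torch
from evotorch import Problem

problem = Problem(
    "min", lambda x: torch.sum(x**2), solution_length=10, initial_bounds=(-1.0, 1.0)
)

# A (5, 10) batch of candidate solutions, uniformly drawn from [-1, 1]:
batch = problem.make_uniform(num_solutions=5, lb=-1.0, ub=1.0)
assert batch.shape == (5, 10)

# Integer dtypes are also supported; both bounds stay inclusive:
ints = problem.make_uniform(8, lb=0, ub=3, dtype=torch.int64)
```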
make_uniform_shaped_like(self, t, *, lb=None, ub=None)
¶
Make a new uniformly-filled tensor, shaped like the given tensor.
The `dtype` and `device` will be determined by the parent of this method (not by the given tensor).
If the parent of this method has its own random generator, then that generator will be used.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`t` | `Tensor` | The tensor according to which the result will be shaped. | required |
`lb` | `Union[float, Iterable[float], torch.Tensor]` | The inclusive lower bounds for the uniform distribution. Can be a scalar or a tensor. If left as None, 0.0 will be used as the lower bound. | `None` |
`ub` | `Union[float, Iterable[float], torch.Tensor]` | The inclusive upper bounds for the uniform distribution. Can be a scalar or a tensor. If left as None, 1.0 will be used as the upper bound. | `None` |
Returns:
Type | Description |
---|---|
`Tensor` | A new tensor whose shape is the same as that of the given tensor. |
Source code in evotorch/tools/tensormaker.py
def make_uniform_shaped_like(
self,
t: torch.Tensor,
*,
lb: Optional[RealOrVector] = None,
ub: Optional[RealOrVector] = None,
) -> torch.Tensor:
"""
Make a new uniformly-filled tensor, shaped like the given tensor.
The `dtype` and `device` will be determined by the parent of this
method (not by the given tensor).
If the parent of this method has its own random generator, then that
generator will be used.
Args:
t: The tensor according to which the result will be shaped.
lb: The inclusive lower bounds for the uniform distribution.
Can be a scalar or a tensor.
If left as None, 0.0 will be used as the lower bound.
ub: The inclusive upper bounds for the uniform distribution.
Can be a scalar or a tensor.
If left as None, 1.0 will be used as the upper bound.
Returns:
A new tensor whose shape is the same with the given tensor.
"""
return self.make_uniform(t.shape, lb=lb, ub=ub)
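As with `make_gaussian_shaped_like`, only the shape comes from the template; a brief sketch (toy `Problem` assumed as the parent object):

```python
import torch
from evotorch import Problem

problem = Problem(
    "min", lambda x: torch.sum(x**2), solution_length=10, initial_bounds=(-1.0, 1.0)
)

template = torch.empty(3, 10)
u = problem.make_uniform_shaped_like(template, lb=-0.5, ub=0.5)
print(u.shape)  # torch.Size([3, 10]); dtype/device come from `problem`, not `template`
```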
make_zeros(self, *size, num_solutions=None, out=None, dtype=None, device=None, use_eval_dtype=False)
¶
Make a new tensor filled with 0, or fill an existing tensor with 0.
When not explicitly specified via arguments, the dtype and the device of the resulting tensor are determined by this method's parent object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`size` | `Union[int, torch.Size]` | Size of the new tensor to be filled with 0. This can be given as multiple positional arguments, each such positional argument being an integer, or as a single positional argument of a tuple containing multiple integers. Note that, if the user wishes to fill an existing tensor with 0 values, then no positional argument is expected. | `()` |
`num_solutions` | `Optional[int]` | Can be used instead of the `size` arguments for specifying the shape of the target tensor. Expected as an integer; when `num_solutions` is given as `n`, the shape of the resulting tensor will be `(n, m)`, where `m` is the solution length reported by this method's parent object's `solution_length` attribute. | `None` |
`out` | `Optional[torch.Tensor]` | Optionally, the tensor to be filled by 0 values. If an `out` tensor is given, then no `size` argument is expected. | `None` |
`dtype` | `Union[str, torch.dtype, numpy.dtype, Type]` | Optionally a string (e.g. "float32") or a PyTorch dtype (e.g. `torch.float32`). If `dtype` is not specified (and `out` is None), the dtype of this method's parent object will be used. If an `out` tensor is specified, then `dtype` is expected as None. | `None` |
`device` | `Union[str, torch.device]` | The device in which the new tensor will be stored. If not specified (and `out` is None), the device of this method's parent object will be used. If an `out` tensor is specified, then `device` is expected as None. | `None` |
`use_eval_dtype` | `bool` | If True and `dtype` is not specified, the `dtype` of the result will be taken from the `eval_dtype` attribute of this method's parent object. | `False` |
Returns:
Type | Description |
---|---|
`Tensor` | The created or modified tensor after placing 0 values. |
Source code in evotorch/tools/tensormaker.py
def make_zeros(
self,
*size: Size,
num_solutions: Optional[int] = None,
out: Optional[torch.Tensor] = None,
dtype: Optional[DType] = None,
device: Optional[Device] = None,
use_eval_dtype: bool = False,
) -> torch.Tensor:
"""
Make a new tensor filled with 0, or fill an existing tensor with 0.
When not explicitly specified via arguments, the dtype and the device
of the resulting tensor is determined by this method's parent object.
Args:
size: Size of the new tensor to be filled with 0.
This can be given as multiple positional arguments, each such
positional argument being an integer, or as a single positional
argument of a tuple, the tuple containing multiple integers.
Note that, if the user wishes to fill an existing tensor with
0 values, then no positional argument is expected.
num_solutions: This can be used instead of the `size` arguments
for specifying the shape of the target tensor.
Expected as an integer, when `num_solutions` is specified
as `n`, the shape of the resulting tensor will be
`(n, m)` where `m` is the solution length reported by this
method's parent object's `solution_length` attribute.
out: Optionally, the tensor to be filled by 0 values.
If an `out` tensor is given, then no `size` argument is expected.
dtype: Optionally a string (e.g. "float32") or a PyTorch dtype
(e.g. torch.float32).
If `dtype` is not specified (and also `out` is None),
it will be assumed that the user wishes to create a tensor
using the dtype of this method's parent object.
If an `out` tensor is specified, then `dtype` is expected
as None.
device: The device in which the new empty tensor will be stored.
If not specified (and also `out` is None), it will be
assumed that the user wishes to create a tensor on the
same device with this method's parent object.
If an `out` tensor is specified, then `device` is expected
as None.
use_eval_dtype: If this is given as True and a `dtype` is not
specified, then the `dtype` of the result will be taken
from the `eval_dtype` attribute of this method's parent
object.
Returns:
The created or modified tensor after placing 0 values.
"""
args, kwargs = self.__get_all_args_for_maker(
*size,
num_solutions=num_solutions,
out=out,
dtype=dtype,
device=device,
use_eval_dtype=use_eval_dtype,
)
return misc.make_zeros(*args, **kwargs)
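Finally, a short sketch of both the "new tensor" and the "fill an existing tensor" paths (same illustrative toy `Problem`):

```python
import torch
from evotorch import Problem

problem = Problem(
    "min", lambda x: torch.sum(x**2), solution_length=10, initial_bounds=(-1.0, 1.0)
)

z = problem.make_zeros(num_solutions=4)  # shape (4, 10), filled with 0
buf = torch.ones(6)
problem.make_zeros(out=buf)              # zero-fills the existing tensor in-place
assert bool((buf == 0).all())
```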