Decorators
Module defining decorators for evotorch.
expects_ndim(*expected_ndims, allow_smaller_ndim=False, randomness='error')
Decorator to declare the number of dimensions for each positional argument.
Let us imagine that we have a function f(a, b), where a and b are PyTorch tensors. Let us also imagine that the function f is implemented in such a way that a is assumed to be a 2-dimensional tensor, and b is assumed to be a 1-dimensional tensor. In this case, the function f can be decorated as follows:
from evotorch.decorators import expects_ndim
@expects_ndim(2, 1)
def f(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor: ...
Once decorated like this, the function f will gain the following additional behaviors:
- If a less-than-expected number of dimensions is provided for a or for b, an error will be raised (unless the decorator is given the keyword argument allow_smaller_ndim=True).
- If a or b is given as a tensor with extra leftmost dimensions, those dimensions will be assumed to be batch dimensions, and therefore the function f will run in a vectorized manner (with the help of vmap behind the scenes), and the result will be a tensor with extra leftmost dimension(s), representing a batch of resulting tensors.
- For convenience, numpy arrays and scalar data that are subclasses of numbers.Number will first be converted to PyTorch tensors, and then processed.
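To illustrate the batching behavior described above, the following sketch shows what happens behind the scenes using torch.vmap directly (the function body and shapes here are illustrative, not part of the original documentation):

```python
import torch

# A function written with the assumption that `a` is 2-dimensional
# and `b` is 1-dimensional (matching @expects_ndim(2, 1)):
def f(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return a @ b  # matrix-vector product

a = torch.randn(10, 5, 3)  # one extra leftmost dimension: a batch of 10 matrices
b = torch.randn(3)         # no extra dimensions

# The extra leftmost dimension of `a` is treated as a batch dimension
# and mapped over, while `b` is used as-is:
result = torch.vmap(f, in_dims=(0, None))(a, b)
# result has shape (10, 5): a batch of 10 result vectors
```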
To be able to take advantage of this decorator, please ensure that the decorated function is a vmap-friendly function. Please also ensure that the decorated function expects positional arguments only.
Randomness.
Like in torch.func.vmap, the behavior of the decorated function in terms of randomness can be configured via a keyword argument named randomness:
@expects_ndim(2, 1, randomness="error")
def f(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor: ...
If randomness is set as "error", then, when there is batching, any attempt to generate random data using PyTorch will raise an error.
If randomness is set as "different", then a random generation operation such as torch.randn(...) will produce a BatchedTensor, where each batch item has its own re-sampled data.
If randomness is set as "same", then a random generation operation such as torch.randn(...) will produce a non-batched tensor containing random data that is sampled only once.
Alternative usage.
expects_ndim has an alternative interface that allows one to use it as a tool for temporarily wrapping/transforming other functions. Let us consider again our example function f. Instead of using the decorator syntax, one can do:
which will temporarily wrap the function f with the additional behaviors mentioned above, and immediately call it with the arguments a and b.
Source code in evotorch/decorators.py
on_aux_device(*args)
Decorator that informs a problem object that this function wants to receive its solutions on the auxiliary device of the problem.
According to its default (non-overridden) implementation, a problem object returns torch.device("cuda") as its auxiliary device if PyTorch's cuda backend is available and if there is a visible cuda device. Otherwise, the auxiliary device is returned as torch.device("cpu").
The auxiliary device is meant as a secondary device (in addition
to the main device reported by the problem object's device
attribute) used mainly for boosting the performance of fitness
evaluations.
This decorator, therefore, tells a problem object that the fitness
function requests to receive its solutions on this secondary device.
What this decorator does is that it injects a new attribute named __evotorch_on_aux_device__ onto the decorated callable object, sets that new attribute to True, and then returns the decorated callable object itself. Upon seeing this new attribute with the value True, a Problem object will attempt to move the solutions to its auxiliary device before calling the decorated fitness function.
Let us imagine a fitness function f whose definition looks like:
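A minimal sketch of such a function, consistent with the decorated form shown further below in this section:

```python
import torch

# Not yet decorated: receives `x` on the main device of the problem object
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x, dim=-1)
```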
In its not-yet-decorated form, the function f would be given x on the main device of the associated problem object. However, if one decorates f as follows:
from evotorch.decorators import on_aux_device

@on_aux_device
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x, dim=-1)
then the Problem object will first move x onto its auxiliary device, and then will call f.
This decorator is useful on multi-GPU settings. For details, please see the following example:
from evotorch import Problem
from evotorch.decorators import on_aux_device

@on_aux_device
def f(x: torch.Tensor) -> torch.Tensor: ...

problem = Problem(
    "min",
    f,
    num_actors=4,
    num_gpus_per_actor=1,
    device="cpu",
)
In the example code above, we assume that there are 4 GPUs available. The main device of the problem is "cpu", which means the populations will be generated on the cpu. When evaluating a population, the population will be split into 4 subbatches (because we have 4 actors), and each subbatch will be sent to an actor. Thanks to the decorator @on_aux_device, the Problem instance on each actor will first move its SolutionBatch to the auxiliary device visible to the actor, and then the fitness function will perform its fitness evaluations on that device. In summary, the actors will use their associated auxiliary devices (most commonly "cuda") to evaluate the fitnesses of the solutions in parallel.
This decorator can also be used to decorate the method _evaluate or _evaluate_batch belonging to a custom subclass of Problem. Please see the example below:
from evotorch import Problem

class MyCustomProblem(Problem):
    def __init__(self):
        super().__init__(
            ...,
            device="cpu",  # populations will be created on the cpu
            ...,
        )

    @on_aux_device  # evaluations will be on the auxiliary device
    def _evaluate_batch(self, solutions: SolutionBatch):
        fitnesses = ...
        solutions.set_evals(fitnesses)
Source code in evotorch/decorators.py
on_cuda(*args)
Decorator that informs a problem object that this function wants to receive its solutions on a cuda device (optionally of the specified cuda index).
Decorating a fitness function with @on_cuda (that is, without a cuda index) is equivalent to decorating it with @on_device("cuda"). Decorating a fitness function with an explicit cuda index, e.g. @on_cuda(0), is equivalent to decorating it with @on_device("cuda:0").
Please see the documentation of on_device for further details.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| args | | Optional positional arguments using which one can specify the index of the cuda device to use. | () |
Source code in evotorch/decorators.py
on_device(device)
Decorator that informs a problem object that this function wants to receive its solutions on the specified device.
What this decorator does is that it injects a device attribute onto the decorated callable object. Then, this callable object itself is returned. Upon seeing the device attribute, the evaluate(...) method of the Problem object will attempt to move the solutions to that device.
Let us imagine a fitness function f whose definition looks like:
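For instance, f might look like this minimal sketch, matching the decorated form shown just below:

```python
import torch

# Not yet decorated: sums along the last dimension of `x`
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x, dim=-1)
```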
In its not-yet-decorated form, the function f would be given x on the default device of the associated problem object. However, if one decorates f as follows:
from evotorch.decorators import on_device

@on_device("cuda:0")
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x, dim=-1)
then the Problem object will first move x onto the device cuda:0, and then will call f.
This decorator is useful on multi-GPU settings. For details, please see the following example:
from evotorch import Problem
from evotorch.decorators import on_device

@on_device("cuda")
def f(x: torch.Tensor) -> torch.Tensor: ...

problem = Problem(
    "min",
    f,
    num_actors=4,
    num_gpus_per_actor=1,
    device="cpu",
)
In the example code above, we assume that there are 4 GPUs available. The main device of the problem is "cpu", which means the populations will be generated on the cpu. When evaluating a population, the population will be split into 4 subbatches (because we have 4 actors), and each subbatch will be sent to an actor. Thanks to the decorator @on_device, the Problem instance on each actor will first move its SolutionBatch to the cuda device visible to its actor, and then the fitness function f will perform its evaluation operations on that SolutionBatch on the visible cuda device. In summary, the actors will use their associated cuda devices to evaluate the fitnesses of the solutions in parallel.
This decorator can also be used to decorate the method _evaluate or _evaluate_batch belonging to a custom subclass of Problem. Please see the example below:
from evotorch import Problem

class MyCustomProblem(Problem):
    def __init__(self):
        super().__init__(
            ...,
            device="cpu",  # populations will be created on the cpu
            ...,
        )

    @on_device("cuda")  # fitness evaluations will happen on cuda
    def _evaluate_batch(self, solutions: SolutionBatch):
        fitnesses = ...
        solutions.set_evals(fitnesses)
The attribute device that is added by this decorator can be used to query the fitness device, and also to modify/update it:
@on_device("cpu")
def f(x: torch.Tensor) -> torch.Tensor: ...
print(f.device) # Prints: torch.device("cpu")
f.device = "cuda:0" # Evaluations will be done on cuda:0 from now on
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| device | Device | The device on which the decorated fitness function will work. | required |
Source code in evotorch/decorators.py
pass_info(*args)
Decorates a callable so that the neuroevolution problem class (e.g. GymNE) will pass information regarding the task at hand, in the form of keyword arguments.
This decorator adds a new attribute named __evotorch_pass_info__ to the decorated callable object, sets this new attribute to True, and then returns the callable object itself. Upon seeing this attribute with the value True, a neuroevolution problem class sends extra information as keyword arguments.
For example, in the case of GymNE or VecGymNE, the passed information would include dimensions of the observation and action spaces.
Example
@pass_info
class MyModule(nn.Module):
    def __init__(self, obs_length: int, act_length: int, **kwargs):
        # Because MyModule is decorated with @pass_info, it receives
        # keyword arguments related to the environment "CartPole-v0",
        # including obs_length and act_length.
        ...

problem = GymNE(
    "CartPole-v0",
    network=MyModule,
    ...,
)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| fn_or_class | Callable | Function or class to decorate | required |
Returns:

| Name | Type | Description |
|---|---|---|
| Callable | Callable | Decorated function or class |
Source code in evotorch/decorators.py
rowwise(*args, randomness='error')
Decorate a vector-expecting function to make it support batch dimensions.
To be able to decorate a function via @rowwise, the following conditions are required to be satisfied:
(i) the function expects a single positional argument, which is a PyTorch tensor;
(ii) the function is implemented with the assumption that the tensor it receives is a vector (i.e. is 1-dimensional).
Let us consider the example below:
Notice how the implementation of the function f assumes that its argument x is 1-dimensional, and based on that assumption, omits the dim keyword argument when calling torch.sum(...).
Upon receiving a 1-dimensional tensor, this decorated function f will perform its operations on the vector x, like how it would work without the decorator @rowwise.
Upon receiving a 2-dimensional tensor, this decorated function f will perform its operations on each row of x.
Upon receiving a tensor with 3 or more dimensions, this decorated function f will interpret its input as a batch of matrices, and perform its operations on each matrix within the batch.
Defining fitness functions for Problem objects.
The decorator @rowwise can be used for defining a fitness function for a Problem object. The advantage of doing so is to be able to implement the fitness function with the simple assumption that the input is a vector (that stores decision values for a single solution), and the output is a scalar (that represents the fitness of the solution).
The decorator @rowwise also flags the decorated function (like @vectorized does), so the fitness function is used correctly by the Problem instance, in a vectorized manner. See the example below:
@rowwise
def fitness(decision_values: torch.Tensor) -> torch.Tensor:
    return torch.sqrt(torch.sum(decision_values**2))

my_problem = Problem("min", fitness, ...)
In the example above, thanks to the decorator @rowwise, my_problem will use fitness in a vectorized manner when evaluating a SolutionBatch, even though fitness is defined in terms of a single solution.
Randomness.
Like in torch.func.vmap, the behavior of the decorated function in terms of randomness can be configured via a keyword argument named randomness:
If randomness is set as "error", then, when there is batching, any attempt to generate random data using PyTorch will raise an error.
If randomness is set as "different", then a random generation operation such as torch.randn(...) will produce a BatchedTensor, where each batch item has its own re-sampled data.
If randomness is set as "same", then a random generation operation such as torch.randn(...) will produce a non-batched tensor containing random data that is sampled only once.
Source code in evotorch/decorators.py
vectorized(*args)
Decorates a fitness function so that the problem object (which can be an instance of evotorch.Problem) will send the fitness function a 2D tensor containing all the solutions, instead of a 1D tensor containing a single solution.
What this decorator does is that it adds to the decorated fitness function a new attribute named __evotorch_vectorized__, the value of this new attribute being True. Upon seeing this new attribute, the problem object will send this function multiple solutions so that vectorized operations on multiple solutions can be performed by this fitness function.
Let us imagine that we have the following fitness function which works on a single solution x, and returns a single fitness value:
...and let us now define the optimization problem associated with this fitness function:
While the fitness function f and the definition p1 form a valid problem description, it does not use PyTorch to its full potential in terms of performance. If we were to request the evaluation results on a population of solutions via p1.evaluate(population), p1 would use a classic for loop to evaluate every single solution within population one by one.
We could greatly increase our performance by:
(i) re-defining our fitness function in a vectorized manner, i.e. in such a way that it will operate on many solutions and compute all of their fitnesses at once;
(ii) labeling our fitness function via @vectorized, so that the problem object will be aware that this new fitness function expects n solutions and returns n fitnesses.
The re-designed and labeled fitness function looks like this:
from evotorch.decorators import vectorized

@vectorized
def f2(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x**2, dim=-1)
The problem description for f2 is:
In this last example, p2 will realize that f2 is decorated via @vectorized, and will send it n solutions, and will receive and process n fitnesses.