Examples
We provide a number of reference examples directly on our GitHub. Each example demonstrates how to recreate a particular experiment or result from the recent evolutionary algorithm literature, highlighting that EvoTorch is well suited both to academic research on evolutionary algorithms and to advanced industrial applications of them.
Tip
We provide a number of examples as Jupyter notebooks. The easiest way to get started with these examples is to clone the EvoTorch repository and open the notebooks in the examples/notebooks directory with Jupyter.
Examples in examples/notebooks
- Gym Experiments with PGPE and CoSyNE: Demonstrates how you can solve "LunarLanderContinuous-v2" using both PGPE and CoSyNE, following the configurations described in the paper proposing ClipUp and in the JMLR paper on the CoSyNE algorithm.
- Minimizing Lennard-Jones Atom Cluster Potentials: Recreates experiments from the paper introducing SNES, showing that the algorithm can effectively solve the challenging task of minimizing Lennard-Jones atom cluster potentials (a minimal SNES sketch follows this list).
- Model_Predictive_Control_with_CEM: Demonstrates the application of the Cross-Entropy Method (CEM) to Model Predictive Control (MPC) of the MuJoCo task named "Reacher-v4".
- Training MNIST30K: Recreates experiments from a recent paper demonstrating that SNES can be used to solve supervised learning problems. The script in particular recreates the training of the 30K-parameter 'MNIST30K' model on the MNIST dataset, but can easily be reconfigured to recreate other experiments from that paper.
- Variational Quantum Eigensolvers with SNES: Re-implements (with some minor changes in the experimental setup) experiments from a recent paper demonstrating that SNES is a scalable alternative to analytic gradients on a quantum computer and can practically optimize Variational Quantum Eigensolvers.
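To give a feel for the API these notebooks build on, below is a minimal sketch of running SNES on a toy sphere function. The objective, solution length, and hyperparameters are illustrative placeholders, not values taken from any of the notebooks above.

```python
import torch
from evotorch import Problem
from evotorch.algorithms import SNES
from evotorch.logging import StdOutLogger

# Toy objective standing in for a real fitness function such as a
# Lennard-Jones cluster potential: plain sphere minimization.
def sphere(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x ** 2)

# A single-objective minimization problem over 10-dimensional solutions.
problem = Problem("min", sphere, solution_length=10, initial_bounds=(-1.0, 1.0))

searcher = SNES(problem, stdev_init=0.5)  # initial stdev of the search distribution (illustrative)
StdOutLogger(searcher)                    # print the searcher's status at every generation
searcher.run(100)                         # evolve for 100 generations
```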
Examples in examples/scripts
In addition, to help you implement advanced settings ranging from vectorized black-box optimization to neuroevolutionary reinforcement learning, we provide five Python scripts in the examples/scripts directory:
- bbo_vectorized.py: Demonstrates single-objective black-box optimization using a distribution-based algorithm, accelerated using vectorization on a single GPU/CPU (see the vectorized sketch after this list).
- moo_parallel.py: Demonstrates multi-objective optimization using parallelization on all CPU cores, without vectorization (see the parallel sketch after this list).
- rl_clipup.py: Re-implements almost all experiments from the paper proposing ClipUp, and is easily reconfigured via sacred to replicate any particular experiment.
- rl_enjoy.py: Allows you to easily visualize and enjoy agents trained through rl_clipup.py.
- rl_gym.py: Demonstrates how to solve a simple Gym problem using the PGPE algorithm and the ClipUp optimizer (see the PGPE sketch after this list).
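To give a concrete feel for the setting of bbo_vectorized.py, here is a minimal sketch of a vectorized single-objective problem. The batched sphere objective and the choice of SNES are illustrative assumptions, not the script's actual contents.

```python
import torch
from evotorch import Problem
from evotorch.algorithms import SNES

# Batched objective: receives the whole population as a 2D tensor
# (popsize x solution_length) and returns one fitness value per row,
# so PyTorch vectorizes the evaluation on the chosen device.
def sphere_batch(xs: torch.Tensor) -> torch.Tensor:
    return torch.sum(xs ** 2, dim=-1)

problem = Problem(
    "min",
    sphere_batch,
    solution_length=1000,
    initial_bounds=(-1.0, 1.0),
    vectorized=True,  # tell EvoTorch the function handles whole batches at once
    device="cuda" if torch.cuda.is_available() else "cpu",
)

searcher = SNES(problem, stdev_init=0.5)  # illustrative hyperparameter
searcher.run(100)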
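Similarly, the setting of moo_parallel.py can be approximated by a two-objective problem evaluated in parallel via Ray actors. The Kursawe benchmark, the genetic-algorithm configuration, and all hyperparameters below are illustrative assumptions, and the operator-based SteadyStateGA API may differ across EvoTorch versions.

```python
import torch
from evotorch import Problem
from evotorch.algorithms import SteadyStateGA
from evotorch.operators import GaussianMutation, SimulatedBinaryCrossOver

# Kursawe's classic two-objective benchmark for a single solution of length 3.
def kursawe(x: torch.Tensor) -> torch.Tensor:
    f1 = torch.sum(-10 * torch.exp(-0.2 * torch.sqrt(x[:-1] ** 2 + x[1:] ** 2)))
    f2 = torch.sum(torch.abs(x) ** 0.8 + 5 * torch.sin(x ** 3))
    return torch.stack([f1, f2])

problem = Problem(
    ["min", "min"],      # two objectives, both to be minimized
    kursawe,
    solution_length=3,
    initial_bounds=(-5.0, 5.0),
    num_actors="max",    # evaluate solutions in parallel on all CPU cores via Ray
)

ga = SteadyStateGA(problem, popsize=200)
ga.use(SimulatedBinaryCrossOver(problem, tournament_size=4, cross_over_rate=1.0, eta=8))
ga.use(GaussianMutation(problem, stdev=0.03))
ga.run(100)
```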
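Finally, the pattern demonstrated by rl_gym.py roughly corresponds to the following sketch, which pairs PGPE with the ClipUp optimizer on a Gym control task. The environment, network definition, and hyperparameters are illustrative placeholders, and depending on your EvoTorch version the environment argument of GymNE may be named env rather than env_name.

```python
from evotorch.algorithms import PGPE
from evotorch.logging import StdOutLogger
from evotorch.neuroevolution import GymNE

# A neuroevolution problem wrapping a Gym environment; "obs_length" and
# "act_length" in the network string are resolved by EvoTorch from the
# environment's observation and action spaces.
problem = GymNE(
    env_name="LunarLanderContinuous-v2",
    network="Linear(obs_length, act_length)",  # a simple linear policy
    num_actors="max",                          # parallelize rollouts across CPU cores
)

searcher = PGPE(
    problem,
    popsize=200,
    center_learning_rate=0.0075,
    stdev_learning_rate=0.1,
    optimizer="clipup",                    # update the distribution's center with ClipUp
    optimizer_config={"max_speed": 0.15},  # ClipUp's maximum update speed
    radius_init=0.27,                      # initial radius of the search distribution
)
StdOutLogger(searcher)
searcher.run(50)
```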