Examples¶
We provide a number of reference examples directly on our GitHub. Each example demonstrates how to recreate a particular experiment or result from the recent evolutionary algorithm literature, highlighting that EvoTorch is well suited to both academic research on and advanced industrial applications of evolutionary algorithms.
Tip
We provide a number of examples as Jupyter notebooks. The easiest way to get started with these examples is to open the notebooks in the examples/notebooks directory with Jupyter.
Examples in examples/notebooks¶
Gym Experiments with PGPE and CoSyNE

Demonstrates how you can solve "LunarLanderContinuous-v2" using both PGPE and CoSyNE, following the configurations described in the paper proposing ClipUp and the JMLR paper on the CoSyNE algorithm.

Minimizing Lennard-Jones Atom Cluster Potentials

Recreates experiments from the paper introducing SNES, showing that the algorithm can effectively solve the challenging task of minimizing Lennard-Jones atom cluster potentials.

Model Predictive Control with CEM

Demonstrates the application of the Cross-Entropy Method (CEM) to Model Predictive Control (MPC) of the MuJoCo task named "Reacher-v4".

Training MNIST30K

Recreates experiments from a recent paper demonstrating that SNES can be used to solve supervised learning problems. The script in particular recreates the training of the 30K-parameter 'MNIST30K' model on the MNIST dataset, but can easily be reconfigured to recreate other experiments from that paper.

Variational Quantum Eigensolvers with SNES

Reimplements (with some minor changes in experimental setup) experiments from a recent paper demonstrating that SNES is a scalable alternative to analytic gradients on a quantum computer, and can practically optimize Variational Quantum Eigensolvers.
Examples in examples/scripts¶
In addition, to help you implement advanced neuroevolutionary reinforcement learning settings, we provide three Python scripts in the examples/scripts directory:
bbo_vectorized.py

Demonstrates single-objective black-box optimization using a distribution-based algorithm, accelerated via vectorization on a single GPU or CPU.

moo_parallel.py

Demonstrates multi-objective optimization parallelized across all CPU cores, without vectorization.

rl_clipup.py

Reimplements almost all experiments from the paper proposing ClipUp, and is easily reconfigured, using sacred, to replicate any particular experiment.

rl_enjoy.py

Allows you to easily visualize and enjoy agents trained through rl_clipup.py.

rl_gym.py

Demonstrates how to solve a simple Gym problem using the PGPE algorithm and the ClipUp optimizer.