Examples
We provide a number of reference examples directly on our GitHub. Each example demonstrates how to recreate a particular experiment or result from recent evolutionary-algorithm literature, highlighting that EvoTorch is well suited to both academic research on and advanced industrial applications of evolutionary algorithms.
We provide a number of examples as Jupyter notebooks. The easiest way to get started with these examples is to browse the `examples/notebooks/` directory, where you can find the following notebooks:
- Gym Experiments with PGPE and CoSyNE demonstrates how you can solve "LunarLanderContinuous-v2" using both `PGPE` and `CoSyNE`, following the configurations described in the paper proposing ClipUp and the JMLR paper on the CoSyNE algorithm.
- Minimizing Lennard-Jones Atom Cluster Potentials recreates experiments from the paper introducing `SNES`, showing that the algorithm can effectively solve the challenging task of minimising Lennard-Jones atom cluster potentials (a minimal SNES sketch follows this list).
- Model_Predictive_Control_with_CEM demonstrates the application of the Cross-Entropy Method (`CEM`) to Model Predictive Control (MPC) of the MuJoCo task named "Reacher-v4".
- Training MNIST30K recreates experiments from a recent paper demonstrating that `SNES` can be used to solve supervised learning problems. The script in particular recreates the training of the 30K-parameter 'MNIST30K' model on the MNIST dataset, but can easily be reconfigured to recreate other experiments from that paper.
- Variational Quantum Eigensolvers with SNES re-implements (with some minor changes to the experimental setup) experiments from a recent paper demonstrating that `SNES` is a scalable alternative to analytic gradients on a quantum computer, and can practically optimize Variational Quantum Eigensolvers.
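To give a flavour of how such experiments are set up, here is a minimal sketch of `SNES` minimizing a stand-in objective (the sphere function rather than the actual Lennard-Jones potential). The hyperparameter values are illustrative placeholders, not taken from the notebooks:

```python
import torch
from evotorch import Problem
from evotorch.algorithms import SNES
from evotorch.logging import StdOutLogger

# Stand-in objective: the sphere function (the notebook instead
# minimises Lennard-Jones atom cluster potentials).
def sphere(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x ** 2)

# A minimization problem over 20-dimensional real-valued solutions.
problem = Problem("min", sphere, solution_length=20, initial_bounds=(-1.0, 1.0))

# SNES only needs an initial standard deviation for its search distribution.
searcher = SNES(problem, stdev_init=0.5)
StdOutLogger(searcher)  # report the status of the search at every generation

searcher.run(100)  # evolve for 100 generations
```

Swapping `SNES` for another distribution-based algorithm such as `PGPE` or `CEM` mostly amounts to changing the import and the algorithm-specific hyperparameters.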
In addition, to help you implement advanced neuroevolutionary reinforcement learning settings, we provide three Python scripts in the `examples/scripts` directory:

- `rl.py` re-implements almost all experiments from the paper proposing ClipUp, and can easily be reconfigured to replicate any particular experiment using `sacred` (a minimal setup in the same spirit is sketched after this list).
- `rl_enjoy.py` allows you to easily visualize and enjoy agents trained through `rl.py`.
- `tinytraj_humanoid_bullet.py` implements the modified `"pybullet_envs:HumanoidBulletEnv-v0"` environment from the paper.
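For orientation, below is a minimal sketch of the kind of setup `rl.py` builds: a `GymNE` neuroevolution problem paired with `PGPE` and the ClipUp optimizer. The network string and all hyperparameter values are illustrative placeholders, not the configurations used in the script or the paper:

```python
from evotorch.algorithms import PGPE
from evotorch.logging import StdOutLogger
from evotorch.neuroevolution import GymNE

# Wrap a Gym environment as a neuroevolution problem; the policy here is a
# simple linear network (obs_length and act_length are filled in by GymNE).
problem = GymNE(
    env="LunarLanderContinuous-v2",
    network="Linear(obs_length, act_length)",
    observation_normalization=True,
    num_actors="max",  # parallelize rollouts across all available CPUs
)

# PGPE with the ClipUp optimizer; all values below are illustrative only.
searcher = PGPE(
    problem,
    popsize=200,
    center_learning_rate=0.075,
    stdev_learning_rate=0.1,
    optimizer="clipup",
    optimizer_config={"max_speed": 0.15},
    radius_init=0.27,
)

StdOutLogger(searcher)
searcher.run(50)  # evolve for 50 generations
```

After training, the center of the search distribution (available from the searcher's status dict, e.g. `searcher.status["center"]`) represents the learned policy parameters; `rl_enjoy.py` wraps the equivalent logic for visualizing trained agents.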