Using Ray Clusters
This short tutorial will guide you through the basics of setting up a Ray cluster to run evaluations across multiple CPUs and multiple machines. For more advanced configurations, please refer to the Ray documentation.
To use EvoTorch problems across multiple machines, perform the following steps before starting the Python environment:
- Make sure that the same Python environment, libraries, and your own dependencies exist on all machines.
- In the terminal of the head node, start the Ray head process (first command in the sketch after this list).
- In the terminal of each non-head node, start a Ray worker process that connects to the head node (second command in the sketch after this list).
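Below is a sketch of those two commands, assuming Ray's standard command-line interface. The address shown is a placeholder; when `ray start --head` runs, it prints the exact `ray start --address=...` command that the other nodes should use.

```bash
# On the head node: start the Ray head process.
ray start --head

# On each non-head node: join the cluster.
# Replace <HEAD_NODE_IP> with the head node's IP address; 6379 is Ray's
# default port (the head node prints the exact command to copy).
ray start --address=<HEAD_NODE_IP>:6379
```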
Once these steps have been performed, initialize Ray at the start of your Python script, as shown below.
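A minimal sketch, assuming the cluster from the previous steps is already running:

```python
import ray

# Connect to the existing Ray cluster instead of starting a new local one.
# address="auto" makes Ray discover the cluster created in the previous steps.
ray.init(address="auto")
```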
This will ensure that your Problem instances using Ray run on the cluster that you created in the previous steps. From here, any Problem instantiated with `num_actors="max"` will use all CPUs available on the entire cluster.
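For illustration, a minimal sketch of such an instantiation, assuming EvoTorch's Problem class with a toy sphere fitness function (the function and the solution_length value are placeholders):

```python
import torch
from evotorch import Problem

# A toy fitness function to minimize: the sphere function.
def sphere(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x ** 2)

# num_actors="max" asks EvoTorch to create one Ray actor per available CPU,
# so fitness evaluations are parallelized across the entire cluster.
problem = Problem(
    "min",
    sphere,
    solution_length=100,
    initial_bounds=(-1.0, 1.0),
    num_actors="max",
)
```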