Deploy Cluster
Prerequisites
- The drives must already be prepped using the method in this wiki:
- Portainer is not required but highly recommended, as it provides a nice GUI front end to Docker. Make sure to complete the last section, "Keep Ubuntu VM Running"
- The Node must be deployed before deploying the Farmer
- NATS is required for the Cluster, as each component will connect to NATS
- Install the latest GPU drivers (NVIDIA) in order to use GPU Plotting
- Install the NVIDIA Toolkit for Docker
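Since every component connects to NATS, NATS is often run as its own small stack. A minimal sketch, assuming the official nats image and a config file that raises the message payload limit (cluster messages can exceed the NATS default; confirm the exact value against the official Autonomys docs):

```yaml
services:
  nats:
    image: nats:latest
    restart: unless-stopped
    ports:
      - "4222:4222"
    volumes:
      # nats.config is assumed to sit next to this stack file and
      # contain a raised limit, e.g.: max_payload = 2MB
      - ./nats.config:/nats.config:ro
    command: ["-c", "/nats.config"]
```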
Resources
The following are relevant resources:
Identify Disks and Size
If the disk path and size are not known, check them in WSL. Press WIN + X, select "Terminal", then open WSL with the wsl command. Now display the disks with:
df --block-size=G | grep autonomys
The above command assumes you mounted the drives in a subfolder of autonomys.
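On GNU systems, df can also limit its output to just the mount point and size columns, which makes the list easier to read (the grep pattern is still the assumption that drives are mounted under a folder containing "autonomys"):

```shell
# Show only mount point and size (in GB) for drives whose
# mount path contains "autonomys"; adjust the pattern if your
# drives are mounted elsewhere
df --block-size=G --output=target,size | grep autonomys
```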
Cluster
Due to the flexibility of a Cluster, there is no single way to deploy the components. Some systems may run only the Farmer, while others may run all four components, and so on. This guide cannot cover every possible variation. However, deploying the components is straightforward. Components that reside on the same host can be included in a single stack file, and that is how this wiki will show it.
Create Cluster Folders
In Ubuntu WSL create the "cache" and "controller" folders:
mkdir -p ~/autonomys/cache ~/autonomys/controller
Set permissions:
sudo chown -R nobody:nogroup ~/autonomys/*
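To confirm the ownership change took effect, a quick check (folder names from the steps above):

```shell
# Each line should report nobody:nogroup for the cluster folders
stat -c '%U:%G %n' ~/autonomys/cache ~/autonomys/controller
```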
Stack File
Open the Cluster File and review the contents. Each cluster component is covered in a section below. Now open Portainer and create a new stack. Name the stack "autonomys-cluster" and paste the contents of the Cluster File into it.
Controller
The stack file starts with services, and the first service listed is the cluster-controller.
- image - Update to the latest version
- volumes - This maps to your "controller" folder
- --nats-server - Update to the IP of your NATS server. If it is running on the "autonomys-network" on this host, it can be left as is
- --node-rpc-url - Update to the IP of your Node. If it is running on the "autonomys-network" on this host, it can be left as is
- com.spaceport.name - Can be updated to whatever value you would like
- TZ= - Update to your local Timezone
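Put together, a cluster-controller service along these lines might look like the sketch below. The image tag, network name, container paths, and the --base-path flag are assumptions for illustration; take the real values from the Cluster File:

```yaml
services:
  cluster-controller:
    image: ghcr.io/autonomys/farmer:latest   # assumed tag; pin to the latest release
    volumes:
      - ~/autonomys/controller:/controller
    environment:
      - TZ=America/New_York                  # your local timezone
    labels:
      com.spaceport.name: "controller"
    networks:
      - autonomys-network
    command:
      [
        "cluster",
        "--nats-server", "nats://nats:4222",  # NATS on the same network
        "controller",
        "--base-path", "/controller",         # assumed flag for the mapped folder
        "--node-rpc-url", "ws://node:9944",   # Node on the same network
      ]

networks:
  autonomys-network:
    external: true
```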
Once the above values have been updated, move to the Cache
Cache
- image - Update to the latest version
- volumes - This maps to your "cache" folder
- --nats-server - Update to the IP of your NATS server. If it is running on the "autonomys-network" on this host, it can be left as is
- path= - Update the size value to how big you want the Cache
- com.spaceport.name - Can be updated to whatever value you would like
- TZ= - Update to your local Timezone
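The path= entry referenced above lives on the cache component's command line. A sketch of the shape, with the mount point and size as placeholders:

```yaml
    command:
      [
        "cluster",
        "--nats-server", "nats://nats:4222",
        "cache",
        "path=/cache,size=200GiB",   # size = how much disk to devote to the cache
      ]
```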
Plotter
- image - Update to the latest version
- --nats-server - Update to the IP of your NATS server. If it is running on the "autonomys-network" on this host, it can be left as is
- com.spaceport.name - Can be updated to whatever value you would like
- TZ= - Update to your local Timezone
If not using an NVIDIA GPU, remove this section:
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
runtime: nvidia
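In context, runtime: nvidia and the deploy: block sit at the service level of the plotter entry; a sketch (image tag and service name assumed):

```yaml
  cluster-plotter:
    image: ghcr.io/autonomys/farmer:latest   # assumed tag
    runtime: nvidia        # remove along with deploy: if not using an NVIDIA GPU
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    command: ["cluster", "--nats-server", "nats://nats:4222", "plotter"]
```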
Farmer
- image - Update to the latest version
- volumes - Update for each farm disk
- --nats-server - Update to the IP of your NATS server. If it is running on the "autonomys-network" on this host, it can be left as is
- --reward-address - Update to your reward address
- path= - Update the folder for each farm and specify the size
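Multiple farm disks follow the same pattern: one volume mapping and one path= entry per disk. A sketch with two hypothetical disks (image tag, paths, and sizes are placeholders; the reward address must be your own):

```yaml
  cluster-farmer:
    image: ghcr.io/autonomys/farmer:latest   # assumed tag
    volumes:
      - ~/autonomys/farm01:/farm01
      - ~/autonomys/farm02:/farm02
    command:
      [
        "cluster",
        "--nats-server", "nats://nats:4222",
        "farmer",
        "--reward-address", "YOUR_REWARD_ADDRESS",
        "path=/farm01,size=1.8TiB",
        "path=/farm02,size=3.6TiB",
      ]
```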
Scroll to the bottom and click "Deploy the stack". The Farmer will begin plotting once the Piece Cache is synced and the disks are initialized. This can take a few hours.
Alternate Deployments
As mentioned, there are many ways to configure a Cluster. Here are some other files to reference:
- [Dedicated Controller]
- [Dedicated Cache]
- [Dedicated Farmer]
- [Dedicated Plotter]
- [Plotter w/ Cache Group]