From bde7ce036b4478cd8f819f91f3f1663210b94194 Mon Sep 17 00:00:00 2001
From: Kaloyan Tanev
Date: Tue, 7 Jan 2025 10:22:29 +0200
Subject: [PATCH 1/5] Add multi cluster CDVN page
---
 docs/adv/advanced/multi-cluster-setup.mdx | 180 ++++++++++++++++++++++
 1 file changed, 180 insertions(+)
 create mode 100644 docs/adv/advanced/multi-cluster-setup.mdx

diff --git a/docs/adv/advanced/multi-cluster-setup.mdx b/docs/adv/advanced/multi-cluster-setup.mdx
new file mode 100644
index 0000000000..ac8918c721
--- /dev/null
+++ b/docs/adv/advanced/multi-cluster-setup.mdx
@@ -0,0 +1,180 @@
+---
+sidebar_position: 7
+description: Multi cluster setup
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Multi cluster setup
+
+:::caution
+Multi cluster setup should be used with caution as it is still in an experimental phase.
+:::
+
+To spin up multiple clusters that use a single consensus layer client (beacon node) and execution layer client, the multi cluster setup in the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) (CDVN) repository can be used. Multi cluster setup is aimed towards power users that want to spin up multiple clusters **for the same network** on the same machine for different reasons, some of which might be:
+
+- squad staking with multiple squads;
+- receiving delegated stake from different parties, using separate clusters to separate concerns;
+- combinations of all of the above.
+
+## Concerns
+
+In order to achieve that, each cluster requires separate Charon and validator client instances. Charon P2P ports need to be different for each cluster, separate validator clients need to point to different charon instances, any other changes regarding the accompanying infra should be taken into account (think Prometheus, Grafana, etc.).
+
+## Setup
+
+To achieve that in an easier way for [CDVN](https://github.com/ObolNetwork/charon-distributed-validator-node) users, there are scripts to setup and manage a multi cluster CDVN directory.
+
+To ease the management of multiple cluster, what is done in those scripts is to separate the shared resources - consensus layer client (beacon node) and execution layer client, Grafana. The cluster specific resources are separated in folders in a `clusters/` directory:
+
+```directory
+clusters
+└───{CLUSTER_NAME} # cluster name
+│ │ .charon # folder including secret material used by charon
+│ │ data # data from the validator client and Prometheus
+│ │ lodestar # scripts used by lodestar
+│ │ prometheus # scripts and configs used by Prometheus
+│ │ .env # environment variables used by the cluster
+│ │ docker-compose.yml # docker compose used by the cluster
+│ # N.B.: only services with profile "cluster" are ran
+└───{CLUSTER_NAME_2}
+└───{CLUSTER_NAME_...}
+└───{CLUSTER_NAME_N}
+```
+
+### Setup Multi cluster CDVN
+
+
+
+
+    As there is already a cluster set and running, all the cluster specific data from the root directory will be moved to the first cluster in `clusters/` directory. You can expect short disruption of a couple of seconds when setting up the multi cluster CDVN - this is stopping the cluster specific containers from the root docker compose and starting them from inside the cluster specific docker compose. Usually this is 2-5 seconds and is highly unlikely to cause an issue.
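+
+    The migration the script performs is roughly equivalent to the following manual steps (a sketch for illustration only, assuming the default service names used by the CDVN compose file - yours may differ):
+
+    ```shell
+    # stop the cluster specific services started from the root docker-compose.yml
+    docker compose stop charon lodestar prometheus
+    # start the same services from the cluster specific compose file, under the "cluster" profile
+    docker compose -f clusters/{CLUSTER_NAME}/docker-compose.yml --profile cluster up -d
+    ```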
+ + All the changes done during setup are: + - `clusters/` directory is created + - `.charon/` is copied to `clusters/{CLUSTER_NAME}/.charon/` + - `.env` is copied to `clusters/{CLUSTER_NAME}/.env` + - `data/lodestar/` is copied to `clusters/{CLUSTER_NAME}/data/lodestar/` + - `data/prometheus/` is copied to `clusters/{CLUSTER_NAME}/data/prometheus/` + - `lodestar/` is copied to `clusters/{CLUSTER_NAME}/lodestar/` + - `prometheus/` is copied to `clusters/{CLUSTER_NAME}/prometheus/` + - `docker-compose.yml` is copied to `clusters/{CLUSTER_NAME}/docker-compose.yml` + - `.charon/` is renamed to `.charon-migrated-to-multi/` and a README is added to it with details about the migration + - `data/lodestar/` is renamed to `data/lodestar-migrated-to-multi/` and a README is added to it with details about the migration + - `data/prometheus/` is renamed to `data/prometheus-migrated-to-multi/` and a README is added to it with details about the migration + - docker containers from `docker-compose.yml` for Charon, VC and Prometheus are stopped (if they are running) + - docker containers from `clusters/{CLUSTER_NAME}/docker-compose.yml` for Charon, VC and Prometheus are started (if they were running) + + Run the setup, by specifying a name in place of the CLUSTER_NAME: + + ```shell + make name=CLUSTER_NAME + ``` + + + As there is no cluster setup and running, there is no downtime and cluster specific data copied + + All the changes done during setup are: + - `clusters/` directory is created + - `.env` is copied to `clusters/{CLUSTER_NAME}/.env` + - `lodestar/` is copied to `clusters/{CLUSTER_NAME}/lodestar/` + - `prometheus/` is copied to `clusters/{CLUSTER_NAME}/prometheus/` + - `docker-compose.yml` is copied to `clusters/{CLUSTER_NAME}/docker-compose.yml` + - `data/lodestar/` is renamed to `data/lodestar-migrated-to-multi/` and a README is added to it with details about the migration + - `data/prometheus/` is renamed to `data/prometheus-migrated-to-multi/` and a README is added to it with details about the migration + + Run the setup, by specifying a name in place of the CLUSTER_NAME: + + ```shell + make name=CLUSTER_NAME + ``` + + + +## Manage + +As now there are multiple clusters, each one with its own Charon and VC, management becomes a bit more complex. + +The private keys and ENRs of each Charon node should be separated, the data used from each VC, potentially each Prometheus instance as well. + +The base containers (consensus layer client, execution layer client, etc.) should be managed with caution as well, as they impact multiple clusters now. + +### Manage clusters + +#### Add new cluster + +Add a new cluster with a name to the `clusters/` directory. A new folder with the specified name will be created. A free port is chosen for the new libp2p port of the cluster. 
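+
+To verify which P2P port was chosen for a cluster, you can inspect the cluster's `.env` file. This is a sketch that assumes the port is exposed via the `CHARON_PORT_P2P_TCP` variable, as in the CDVN `.env` sample - adjust the variable name if your setup differs:
+
+```shell
+grep CHARON_PORT_P2P_TCP clusters/{NEW_CLUSTER_NAME}/.env
+```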
+ +The structure of the new folder looks like such: + +```directory +{NEW_CLUSTER_NAME} +│ data # empty data folder at which validator client and Prometheus data folders will be created once the node is started +│ lodestar # scripts used by lodestar, copied from the root directory +│ prometheus # scripts and configs used by Prometheus, copied from the root directory +│ .env # environment variables used by the cluster, copied from the root directory +│ docker-compose.yml # docker compose used by the cluster, copied from the root directory +``` + +Couple of things that can be configured, if desired: + +- .env file found in `clusters/{NEW_CLUSTER_NAME}/.env` with some cluster-specific variables (i.e.: Charon relays); +- Prometheus config found in `clusters/{NEW_CLUSTER_NAME}/prometheus/prometheus.yml.example` (i.e.: if writing metrics to different remote server); +- Docker compose found in `clusters/{NEW_CLUSTER_NAME}/docker-compose.yml` (i.e.: if you want to change configurations of the validator client). Mind you that only containers with profile `"cluster"` are started from here, meaning that if you make changes to any other container, they won't be taken into account. + +After the new cluster is created, all Charon specific tasks, like creating ENR, should be done **from inside the cluster's directory**. + +```shell + make multi-cluster-add-cluster name=NEW_CLUSTER_NAME +``` + +#### Delete cluster + +Clusters can also be deleted by specifying their name, this is in scenarios like finished voluntary exits. + +:::danger +By deleting a cluster you delete all private key material associated with it as well. Delete only if you know what you are doing. +::: + +```shell + make multi-cluster-delete-cluster name=CLUSTER_NAME +``` + +#### Start cluster + +Start a cluster from the `clusters/` directory by specifying its name. + +This is to be done in cases of first startup for a new cluster, machine has been restarted or the cluster has been stopped for any other reason. + +```shell + make multi-cluster-start-cluster name=CLUSTER_NAME +``` + +#### Stop cluster + +Stop a cluster from the `clusters/` directory by specifying its name. + +This is to be done in cases of some planned maintenance, version updates, etc. + +```shell + make multi-cluster-stop-cluster name=CLUSTER_NAME +``` + +### Manage base + +Now that the validator stack (Charon, validator client) is decoupled and can be managed, the base - consensus layer client, execution layer client, MEV-boost, Grafana containers should be managed on its own as well. Here the actions are simpler. + +#### Start base + +Start the base containers. + +```shell + make multi-cluster-start-base +``` + +#### Stop base + +Stop the base containers. Note that this impacts **all** of your clusters in `clusters/`. 
+ +```shell + make multi-cluster-stop-base +``` From 89ad7b7f2cbe5c283968c154c699322f3dcd3d20 Mon Sep 17 00:00:00 2001 From: Kaloyan Tanev Date: Tue, 7 Jan 2025 11:35:55 +0200 Subject: [PATCH 2/5] Max review: put commands on top --- docs/adv/advanced/multi-cluster-setup.mdx | 38 +++++++++++------------ 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/docs/adv/advanced/multi-cluster-setup.mdx b/docs/adv/advanced/multi-cluster-setup.mdx index ac8918c721..fc3fd32eab 100644 --- a/docs/adv/advanced/multi-cluster-setup.mdx +++ b/docs/adv/advanced/multi-cluster-setup.mdx @@ -47,6 +47,12 @@ clusters + Run the setup, by specifying a name in place of the CLUSTER_NAME: + + ```shell + make name=CLUSTER_NAME + ``` + As there is already a cluster set and running, all the cluster specific data from the root directory will be moved to the first cluster in `clusters/` directory. You can expect short disruption of a couple of seconds when setting up the multi cluster CDVN - this is stopping the cluster specific containers from the root docker compose and starting them from inside the cluster specific docker compose. Usually this is 2-5 seconds and is highly unlikely to cause an issue. All the changes done during setup are: @@ -63,14 +69,14 @@ clusters - `data/prometheus/` is renamed to `data/prometheus-migrated-to-multi/` and a README is added to it with details about the migration - docker containers from `docker-compose.yml` for Charon, VC and Prometheus are stopped (if they are running) - docker containers from `clusters/{CLUSTER_NAME}/docker-compose.yml` for Charon, VC and Prometheus are started (if they were running) - + + Run the setup, by specifying a name in place of the CLUSTER_NAME: ```shell make name=CLUSTER_NAME ``` - - + As there is no cluster setup and running, there is no downtime and cluster specific data copied All the changes done during setup are: @@ -81,12 +87,6 @@ clusters - `docker-compose.yml` is copied to `clusters/{CLUSTER_NAME}/docker-compose.yml` - `data/lodestar/` is renamed to `data/lodestar-migrated-to-multi/` and a README is added to it with details about the migration - `data/prometheus/` is renamed to `data/prometheus-migrated-to-multi/` and a README is added to it with details about the migration - - Run the setup, by specifying a name in place of the CLUSTER_NAME: - - ```shell - make name=CLUSTER_NAME - ``` @@ -102,7 +102,11 @@ The base containers (consensus layer client, execution layer client, etc.) shoul #### Add new cluster -Add a new cluster with a name to the `clusters/` directory. A new folder with the specified name will be created. A free port is chosen for the new libp2p port of the cluster. +Add a new cluster with a name to the `clusters/` directory, by specifying a name in place of the NEW_CLUSTER_NAME. A new folder with the specified name will be created. A free port is chosen for the new libp2p port of the cluster. + +```shell + make multi-cluster-add-cluster name=NEW_CLUSTER_NAME +``` The structure of the new folder looks like such: @@ -123,13 +127,9 @@ Couple of things that can be configured, if desired: After the new cluster is created, all Charon specific tasks, like creating ENR, should be done **from inside the cluster's directory**. -```shell - make multi-cluster-add-cluster name=NEW_CLUSTER_NAME -``` - #### Delete cluster -Clusters can also be deleted by specifying their name, this is in scenarios like finished voluntary exits. +Clusters can also be deleted, by specifying their name in place of the CLUSTER_NAME. 
This is in scenarios like finished voluntary exits.
 
 :::danger
 By deleting a cluster you delete all private key material associated with it as well. Delete only if you know what you are doing.
 :::
 
@@ -141,17 +141,17 @@
 
 #### Start cluster
 
-Start a cluster from the `clusters/` directory by specifying its name.
-
-This is to be done in cases of first startup for a new cluster, machine has been restarted or the cluster has been stopped for any other reason.
+Start a cluster from the `clusters/` directory, by specifying its name in place of the CLUSTER_NAME.
 
 ```shell
  make multi-cluster-start-cluster name=CLUSTER_NAME
 ```
 
+This is to be done in cases of first startup for a new cluster, machine has been restarted or the cluster has been stopped for any other reason.
+
 #### Stop cluster
 
-Stop a cluster from the `clusters/` directory by specifying its name.
+Stop a cluster from the `clusters/` directory, by specifying its name in place of the CLUSTER_NAME.
 
 This is to be done in cases of some planned maintenance, version updates, etc.
 
 ```shell
  make multi-cluster-stop-cluster name=CLUSTER_NAME
 ```

From bb64e64272ba2891ab96ac1cb8d5cf7ff53dfa74 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E2=80=9CMax?= <“max@obol.tech”>
Date: Tue, 7 Jan 2025 16:28:40 +0100
Subject: [PATCH 3/5] fixed language and clarifications in some parts, but
 somehow broken the yarn preview
---
 docs/adv/advanced/multi-cluster-setup.mdx | 66 ++++++++++++-----------
 1 file changed, 35 insertions(+), 31 deletions(-)

diff --git a/docs/adv/advanced/multi-cluster-setup.mdx b/docs/adv/advanced/multi-cluster-setup.mdx
index fc3fd32eab..a69499ad2c 100644
--- a/docs/adv/advanced/multi-cluster-setup.mdx
+++ b/docs/adv/advanced/multi-cluster-setup.mdx
@@ -12,32 +12,31 @@ import TabItem from '@theme/TabItem';
 Multi cluster setup should be used with caution as it is still in an experimental phase.
 :::
 
-To spin up multiple clusters that use a single consensus layer client (beacon node) and execution layer client, the multi cluster setup in the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) (CDVN) repository can be used. Multi cluster setup is aimed towards power users that want to spin up multiple clusters **for the same network** on the same machine for different reasons, some of which might be:
+To spin up multiple clusters that use a single consensus layer client (beacon node) and execution layer client, the multi cluster setup in the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) (CDVN) repository can be used. Multi cluster setup is for power users who want to spin up multiple clusters **for the same network** on the same machine. Some reasons for doing this might be:
 
 - squad staking with multiple squads;
-- receiving delegated stake from different parties, using separate clusters to separate concerns;
-- combinations of all of the above.
+- receiving delegated stake from different parties, separating clusters to keep stake separate;
+- combinations of the above.
 
 ## Concerns
 
-In order to achieve that, each cluster requires separate Charon and validator client instances. Charon P2P ports need to be different for each cluster, separate validator clients need to point to different charon instances, any other changes regarding the accompanying infra should be taken into account (think Prometheus, Grafana, etc.).
+Each cluster requires separate Charon and validator client instances.
Charon P2P ports need to be different for each cluster, each validator client needs to point to its own Charon instance, and any other changes regarding the accompanying infrastructure (Prometheus, Grafana, etc.) need to be taken into account.
 
 ## Setup
 
-To achieve that in an easier way for [CDVN](https://github.com/ObolNetwork/charon-distributed-validator-node) users, there are scripts to setup and manage a multi cluster CDVN directory.
+Scripts in the [CDVN](https://github.com/ObolNetwork/charon-distributed-validator-node) repo can set up and manage a multi cluster CDVN directory.
 
-To ease the management of multiple cluster, what is done in those scripts is to separate the shared resources - consensus layer client (beacon node) and execution layer client, Grafana. The cluster specific resources are separated in folders in a `clusters/` directory:
+Those scripts separate the shared resources: the consensus layer client (beacon node), the execution layer client, and Grafana. Only Charon services with profile ["cluster"](https://github.com/ObolNetwork/charon-distributed-validator-node/blob/ad4044faf78bbe972437abb5dfb3b1e856776c22/docker-compose.yml#L82) are run. The cluster-specific resources are separated into folders in a `clusters/` directory:
 
 ```directory
 clusters
-└───{CLUSTER_NAME} # cluster name
+└───{CLUSTER_NAME} # cluster name
 │ │ .charon # folder including secret material used by charon
 │ │ data # data from the validator client and Prometheus
 │ │ lodestar # scripts used by lodestar
 │ │ prometheus # scripts and configs used by Prometheus
 │ │ .env # environment variables used by the cluster
 │ │ docker-compose.yml # docker compose used by the cluster
-│ # N.B.: only services with profile "cluster" are ran
 └───{CLUSTER_NAME_2}
 └───{CLUSTER_NAME_...}
 └───{CLUSTER_NAME_N}
 ```
 
 ### Setup Multi cluster CDVN
 
 
 
+
+    Run the setup with the `make` command, specifying the `CLUSTER_NAME`:
 
   ```shell
   make name=CLUSTER_NAME
   ```
 
-    As there is already a cluster set and running, all the cluster specific data from the root directory will be moved to the first cluster in `clusters/` directory. You can expect short disruption of a couple of seconds when setting up the multi cluster CDVN - this is stopping the cluster specific containers from the root docker compose and starting them from inside the cluster specific docker compose. Usually this is 2-5 seconds and is highly unlikely to cause an issue.
+    As was already a cluster set and running, all the cluster specific data from the root directory will be moved to the first cluster in `clusters/` directory. You can expect a delay of a few seconds when running the setup command - this is stopping the cluster specific containers from the root docker compose and starting them from inside the cluster specific docker compose. Usually this is 2-5 seconds and is highly unlikely to cause an issue.
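+
+    If you want to confirm that the cluster's services are now managed by the cluster specific compose file after the migration, you can list them (a quick sanity check, not something the setup script requires):
+
+    ```shell
+    docker compose -f clusters/{CLUSTER_NAME}/docker-compose.yml ps
+    ```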
-    All the changes done during setup are:
+    The setup command carries out the following actions:
   - `clusters/` directory is created
   - `.charon/` is copied to `clusters/{CLUSTER_NAME}/.charon/`
   - `.env` is copied to `clusters/{CLUSTER_NAME}/.env`
   - `data/lodestar/` is copied to `clusters/{CLUSTER_NAME}/data/lodestar/`
   - `data/prometheus/` is copied to `clusters/{CLUSTER_NAME}/data/prometheus/`
   - `lodestar/` is copied to `clusters/{CLUSTER_NAME}/lodestar/`
   - `prometheus/` is copied to `clusters/{CLUSTER_NAME}/prometheus/`
   - `docker-compose.yml` is copied to `clusters/{CLUSTER_NAME}/docker-compose.yml`
   - `.charon/` is renamed to `.charon-migrated-to-multi/` and a README is added to it with details about the migration
   - `data/lodestar/` is renamed to `data/lodestar-migrated-to-multi/` and a README is added to it with details about the migration
   - `data/prometheus/` is renamed to `data/prometheus-migrated-to-multi/` and a README is added to it with details about the migration
   - docker containers from `docker-compose.yml` for Charon, VC and Prometheus are stopped (if they are running)
   - docker containers from `clusters/{CLUSTER_NAME}/docker-compose.yml` for Charon, VC and Prometheus are started (if they were running)
-
+
+
+    This section is for the case where you have only cloned the [CDVN repo](https://github.com/ObolNetwork/charon-distributed-validator-node.git), but not yet set up ENR keys and validator keys or started your node.
+
+    Run the setup with the `make` command, specifying the `CLUSTER_NAME`:
 
   ```shell
   make name=CLUSTER_NAME
   ```
 
-    As there is no cluster setup and running, there is no downtime and cluster specific data copied
-
-    All the changes done during setup are:
+    The setup command carries out the following actions:
   - `clusters/` directory is created
   - `.env` is copied to `clusters/{CLUSTER_NAME}/.env`
   - `lodestar/` is copied to `clusters/{CLUSTER_NAME}/lodestar/`
   - `prometheus/` is copied to `clusters/{CLUSTER_NAME}/prometheus/`
   - `docker-compose.yml` is copied to `clusters/{CLUSTER_NAME}/docker-compose.yml`
   - `data/lodestar/` is renamed to `data/lodestar-migrated-to-multi/` and a README is added to it with details about the migration
   - `data/prometheus/` is renamed to `data/prometheus-migrated-to-multi/` and a README is added to it with details about the migration
+
+    To continue with setting up your node, please refer to the [Quickstart guide](../../run/start/quickstart_group), while keeping in mind you should keep all the charon-specific data in the clusters/{CLUSTER_NAME}/ directory instead of the root directory. (For example, the .charon folder and modifications to the .env file)
+
 
 
 ## Manage
 
-As now there are multiple clusters, each one with its own Charon and VC, management becomes a bit more complex.
+As there are now multiple clusters, each one with its own Charon and VC, management becomes a bit more complex.
 
-The private keys and ENRs of each Charon node should be separated, the data used from each VC, potentially each Prometheus instance as well.
+The private keys and ENRs of each Charon node should be separated, as should the data from each VC, and potentially each Prometheus instance as well.
 
-The base containers (consensus layer client, execution layer client, etc.) should be managed with caution as well, as they impact multiple clusters now.
+The base containers (consensus layer client, execution layer client, etc.) should also be managed with caution, as they now impact multiple clusters.
 
 ### Manage clusters
 
 #### Add new cluster
 
-Add a new cluster with a name to the `clusters/` directory, by specifying a name in place of the NEW_CLUSTER_NAME. A new folder with the specified name will be created. A free port is chosen for the new libp2p port of the cluster.
+You can add a new cluster to the `clusters/` directory by running the following command, specifying a name in place of the NEW_CLUSTER_NAME. A new folder with the specified name will be created. A free port is automatically chosen for the new libp2p port of the cluster.
```shell
  make multi-cluster-add-cluster name=NEW_CLUSTER_NAME
 ```
 
-The structure of the new folder looks like such:
+The structure of the new folder will look like this:
 
 ```directory
 {NEW_CLUSTER_NAME}
-│ data # empty data folder at which validator client and Prometheus data folders will be created once the node is started
+│ data # initially empty. Once the node is started, the validator client and Prometheus data folders will be created inside this folder.
 │ lodestar # scripts used by lodestar, copied from the root directory
 │ prometheus # scripts and configs used by Prometheus, copied from the root directory
 │ .env # environment variables used by the cluster, copied from the root directory
 │ docker-compose.yml # docker compose used by the cluster, copied from the root directory
 ```
 
-Couple of things that can be configured, if desired:
+A few things can be configured, if desired:
 
-- .env file found in `clusters/{NEW_CLUSTER_NAME}/.env` with some cluster-specific variables (i.e.: Charon relays);
-- Prometheus config found in `clusters/{NEW_CLUSTER_NAME}/prometheus/prometheus.yml.example` (i.e.: if writing metrics to different remote server);
-- Docker compose found in `clusters/{NEW_CLUSTER_NAME}/docker-compose.yml` (i.e.: if you want to change configurations of the validator client). Mind you that only containers with profile `"cluster"` are started from here, meaning that if you make changes to any other container, they won't be taken into account.
+- The .env file found in `clusters/{NEW_CLUSTER_NAME}/.env` can be configured with some cluster-specific variables (e.g. the choice of Charon relays);
+- The Prometheus config found in `clusters/{NEW_CLUSTER_NAME}/prometheus/prometheus.yml.example` (e.g. if writing metrics to a different remote server);
+- The Docker compose found in `clusters/{NEW_CLUSTER_NAME}/docker-compose.yml` (e.g. if you want to change configurations of the validator client). Keep in mind that only containers with profile `"cluster"` are started from here - if you make changes to any other container, they won't be taken into account.
 
 After the new cluster is created, all Charon specific tasks, like creating ENR, should be done **from inside the cluster's directory**.
 
 #### Delete cluster
 
-Clusters can also be deleted, by specifying their name in place of the CLUSTER_NAME. This is in scenarios like finished voluntary exits.
+Clusters can also be deleted, by running the command below and specifying the `CLUSTER_NAME`. This is useful following completed voluntary exits of validators.
 
 :::danger
 By deleting a cluster you delete all private key material associated with it as well. Delete only if you know what you are doing.
 :::
 
 ```shell
  make multi-cluster-delete-cluster name=CLUSTER_NAME
 ```
 
 #### Start cluster
 
-Start a cluster from the `clusters/` directory, by specifying its name in place of the CLUSTER_NAME.
+Start a cluster from the `clusters/` directory by running the following command, specifying the `CLUSTER_NAME`.
 
 ```shell
  make multi-cluster-start-cluster name=CLUSTER_NAME
 ```
 
-This is to be done in cases of first startup for a new cluster, machine has been restarted or the cluster has been stopped for any other reason.
+This is to be done during the first startup of a new cluster, after the machine has been restarted, or when the cluster has stopped for any other reason.
 
 #### Stop cluster
 
-Stop a cluster from the `clusters/` directory, by specifying its name in place of the CLUSTER_NAME.
+Stop a cluster from the `clusters/` directory by running the following command, specifying the `CLUSTER_NAME`.
 
-This is to be done in cases of some planned maintenance, version updates, etc.
+This is to be done in cases of planned maintenance, version updates, etc.
 
 ```shell
  make multi-cluster-stop-cluster name=CLUSTER_NAME
 ```
 
 ### Manage base
 
-Now that the validator stack (Charon, validator client) is decoupled and can be managed, the base - consensus layer client, execution layer client, MEV-boost, Grafana containers should be managed on its own as well. Here the actions are simpler.
+Now that the validator stack (Charon, validator client) is decoupled and can be managed, the "base" containers can be managed on their own as well. These include the consensus layer client, execution layer client, MEV-boost client, and Grafana containers. Here the actions are simpler.
 
 #### Start base

From 5343e97a9b5b86e26f6cfa0a500a27453960031d Mon Sep 17 00:00:00 2001
From: Kaloyan Tanev
Date: Tue, 7 Jan 2025 17:44:40 +0200
Subject: [PATCH 4/5] Fix build
---
 docs/adv/advanced/multi-cluster-setup.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/adv/advanced/multi-cluster-setup.mdx b/docs/adv/advanced/multi-cluster-setup.mdx
index a69499ad2c..2c442c71e3 100644
--- a/docs/adv/advanced/multi-cluster-setup.mdx
+++ b/docs/adv/advanced/multi-cluster-setup.mdx
@@ -89,7 +89,7 @@ clusters
   - `data/lodestar/` is renamed to `data/lodestar-migrated-to-multi/` and a README is added to it with details about the migration
   - `data/prometheus/` is renamed to `data/prometheus-migrated-to-multi/` and a README is added to it with details about the migration
 
-    To continue with setting up your node, please refer to the [Quickstart guide](../../run/start/quickstart_group), while keeping in mind you should keep all the charon-specific data in the clusters/{CLUSTER_NAME}/ directory instead of the root directory. (For example, the .charon folder and modifications to the .env file)
+    To continue with setting up your node, please refer to the [Quickstart guide](../../run/start/quickstart_group), while keeping in mind you should keep all the charon-specific data in the `clusters/{CLUSTER_NAME}/` directory instead of the root directory. (For example, the `.charon` folder and modifications to the `.env` file.)
 

From 354d8533e47de14e9216f5d4489d817cb320fa07 Mon Sep 17 00:00:00 2001
From: Max Sherwood <63233138+slugmann321@users.noreply.github.com>
Date: Tue, 7 Jan 2025 16:51:47 +0100
Subject: [PATCH 5/5] Update docs/adv/advanced/multi-cluster-setup.mdx

Co-authored-by: Kaloyan Tanev <24719519+KaloyanTanev@users.noreply.github.com>
---
 docs/adv/advanced/multi-cluster-setup.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/adv/advanced/multi-cluster-setup.mdx b/docs/adv/advanced/multi-cluster-setup.mdx
index 2c442c71e3..562ba55b89 100644
--- a/docs/adv/advanced/multi-cluster-setup.mdx
+++ b/docs/adv/advanced/multi-cluster-setup.mdx
@@ -53,7 +53,7 @@ clusters
   make name=CLUSTER_NAME
   ```
 
-    As was already a cluster set and running, all the cluster specific data from the root directory will be moved to the first cluster in `clusters/` directory. You can expect a delay of a few seconds when running the setup command - this is stopping the cluster specific containers from the root docker compose and starting them from inside the cluster specific docker compose.
Usually this is 2-5 seconds and is highly unlikely to cause an issue.
+    As there was already a cluster set up and running, all the cluster specific data from the root directory will be moved to the first cluster in the `clusters/` directory. You can expect a few seconds of node downtime when running the setup command - this is stopping the cluster specific containers from the root docker compose and starting them from inside the cluster specific docker compose. Usually this is 2-5 seconds and is highly unlikely to cause an issue.
 
     The setup command carries out the following actions:
     - `clusters/` directory is created