
Commit 7ca6bc5

refactor: GreptimeDB cluster Kubernetes monitoring (#2150)
1 parent 7bd9bdb commit 7ca6bc5

File tree

7 files changed: +622 −438 lines changed


docs/user-guide/deployments-administration/deploy-on-kubernetes/deploy-greptimedb-cluster.md

Lines changed: 9 additions & 88 deletions
@@ -193,7 +193,7 @@ http://etcd-2.etcd-headless.etcd-cluster.svc.cluster.local:2379 is healthy: succ
 ## Setup `values.yaml`
 
 The `values.yaml` file contains parameters and configurations for GreptimeDB and is the key to defining the Helm chart.
-For example, a minimal GreptimeDB cluster with self-monitoring configuration is as follows:
+For example, a minimal GreptimeDB cluster configuration is as follows:
 
 ```yaml
 image:
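
To sanity-check the slimmed-down `values.yaml` before installing anything, the chart can be rendered locally. A minimal sketch, assuming the `greptime` Helm repo is already added and `values.yaml` is in the working directory:

```bash
# Render the chart offline to confirm values.yaml parses and to inspect
# the manifests Helm would apply, without touching the cluster.
helm template mycluster greptime/greptimedb-cluster \
  -f values.yaml > rendered.yaml

# Spot-check which resource kinds the minimal config produces.
grep "kind:" rendered.yaml | sort | uniq -c
```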
@@ -212,15 +212,6 @@ initializer:
   registry: docker.io
   repository: greptime/greptimedb-initializer
 
-monitoring:
-  # Enable monitoring
-  enabled: true
-
-grafana:
-  # Enable grafana deployment.
-  # It needs to enable monitoring `monitoring.enabled: true` first.
-  enabled: true
-
 frontend:
   replicas: 1
 
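
The block deleted above maps to the chart's `monitoring.enabled` and `grafana.enabled` switches. If you still want the bundled self-monitoring stack, the install command removed further down in this diff suggests it can be turned back on at install time without editing `values.yaml`; a sketch, assuming those flags are unchanged in the current chart:

```bash
# Re-enable the self-monitoring standalone instance and Grafana
# (flags taken from the removed documentation section below).
helm upgrade --install mycluster greptime/greptimedb-cluster \
  --set monitoring.enabled=true \
  --set grafana.enabled=true \
  -f values.yaml \
  -n default
```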
@@ -239,11 +230,11 @@ You should adjust the configuration according to your requirements.
 You can refer to the [configuration documentation](/user-guide/deployments-administration/deploy-on-kubernetes/common-helm-chart-configurations.md) for the complete `values.yaml` configuration options.
 
 
-## Install the GreptimeDB cluster with self-monitoring
+## Install the GreptimeDB cluster
 
 Now that the GreptimeDB Operator and etcd cluster are installed,
 and `values.yaml` is configured,
-you can deploy a minimal GreptimeDB cluster with self-monitoring and Flow enabled:
+you can deploy a minimal GreptimeDB cluster:
 
 ```bash
 helm upgrade --install mycluster \
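
The install command is cut off at this hunk boundary. Once the full `helm upgrade --install` has run, the release can be verified with standard Helm commands; a sketch, assuming the release is named `mycluster` in the `default` namespace:

```bash
# Confirm the release deployed and inspect its status and chart version.
helm status mycluster -n default
helm list -n default
```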
@@ -277,51 +268,6 @@ The greptimedb-cluster is starting, use `kubectl get pods -n default` to check i
 ```
 </details>
 
-When both `monitoring` and `grafana` options are enabled, we will enable **self-monitoring** for the GreptimeDB cluster: a GreptimeDB standalone instance will be deployed to monitor the GreptimeDB cluster, and the monitoring data will be visualized using Grafana, making it easier to troubleshoot issues in the GreptimeDB cluster.
-
-We will deploy a GreptimeDB standalone instance named `${cluster}-monitor` in the same namespace as the cluster to store monitoring data such as metrics and logs from the cluster. Additionally, we will deploy a [Vector](https://github.com/vectordotdev/vector) sidecar for each pod in the cluster to collect metrics and logs and send them to the GreptimeDB standalone instance.
-
-We will deploy a [Grafana](https://grafana.com/) instance and configure it to use the GreptimeDB standalone instance as a data source (using both Prometheus and MySQL protocols), allowing us to visualize the GreptimeDB cluster's monitoring data out of the box. By default, Grafana will use `mycluster` and `default` as the cluster name and namespace to create data sources. If you want to monitor clusters with different names or namespaces, you'll need to create different data source configurations based on the cluster names and namespaces. You can create a `values.yaml` file like this:
-
-```yaml
-monitoring:
-  enabled: true
-
-grafana:
-  enabled: true
-  datasources:
-    datasources.yaml:
-      datasources:
-        - name: greptimedb-metrics
-          type: prometheus
-          url: http://${cluster}-monitor-standalone.${namespace}.svc.cluster.local:4000/v1/prometheus
-          access: proxy
-          isDefault: true
-
-        - name: greptimedb-logs
-          type: mysql
-          url: ${cluster}-monitor-standalone.${namespace}.svc.cluster.local:4002
-          access: proxy
-          database: public
-```
-
-The above configuration will create the default datasources for the GreptimeDB cluster metrics and logs in the Grafana dashboard:
-
-- `greptimedb-metrics`: The metrics of the cluster are stored in the standalone monitoring database and exposed in Prometheus protocol (`type: prometheus`);
-
-- `greptimedb-logs`: The logs of the cluster are stored in the standalone monitoring database and exposed in MySQL protocol (`type: mysql`). It uses the `public` database by default;
-
-Then replace `${cluster}` and `${namespace}` with your desired values and install the GreptimeDB cluster using the following command (please note that `${cluster}` and `${namespace}` in the command also need to be replaced):
-
-```bash
-helm install ${cluster} \
-  --set monitoring.enabled=true \
-  --set grafana.enabled=true \
-  greptime/greptimedb-cluster \
-  -f values.yaml \
-  -n ${namespace}
-```
-
 When starting the cluster installation, we can check the status of the GreptimeDB cluster with the following command. If you use a different cluster name and namespace, you can replace `mycluster` and `default` with your configuration:
 
 ```bash
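
The status-check command itself falls outside this hunk, but pod readiness can be watched directly while the cluster starts; a sketch, assuming cluster name `mycluster` and namespace `default`:

```bash
# Watch pods come up; with self-monitoring removed, only frontend,
# datanode, and meta pods should appear for the minimal cluster.
kubectl -n default get pods -w

# Block until the frontend is Ready (the label value is an assumption
# modeled on the mycluster-datanode selector used during cleanup).
kubectl -n default wait --for=condition=Ready pod \
  -l app.greptime.io/component=mycluster-frontend --timeout=300s
```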
@@ -350,13 +296,11 @@ kubectl -n default get pods
 NAME                                 READY   STATUS    RESTARTS   AGE
 mycluster-datanode-0                 2/2     Running   0          77s
 mycluster-frontend-6ffdd549b-9s7gx   2/2     Running   0          66s
-mycluster-grafana-675b64786-ktqps    1/1     Running   0          6m35s
 mycluster-meta-58bc88b597-ppzvj      2/2     Running   0          86s
-mycluster-monitor-standalone-0       1/1     Running   0          6m35s
 ```
 </details>
 
-As you can see, we have created a minimal GreptimeDB cluster consisting of 1 frontend, 1 datanode, and 1 metasrv by default. For information about the components of a complete GreptimeDB cluster, you can refer to [architecture](/user-guide/concepts/architecture.md). Additionally, we have deployed a standalone GreptimeDB instance (`mycluster-monitor-standalone-0`) for storing monitoring data and a Grafana instance (`mycluster-grafana-675b64786-ktqps`) for visualizing the cluster's monitoring data.
+As you can see, we have created a minimal GreptimeDB cluster consisting of 1 frontend, 1 datanode, and 1 metasrv by default. For information about the components of a complete GreptimeDB cluster, you can refer to [architecture](/user-guide/concepts/architecture.md).
 
 ## Explore the GreptimeDB cluster
 
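
The next hunk references `http://localhost:4000/dashboard`; reaching it requires forwarding the frontend's HTTP port first. A sketch, assuming the service follows the default `<cluster>-frontend` naming:

```bash
# Forward the frontend HTTP port (4000) to localhost, then open
# http://localhost:4000/dashboard in a browser.
kubectl -n default port-forward svc/mycluster-frontend 4000:4000
```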
@@ -406,33 +350,6 @@ Open the browser and navigate to `http://localhost:4000/dashboard` to access by
 
 If you want to use other tools like `mysql` or `psql` to connect to the GreptimeDB cluster, you can refer to the [Quick Start](/getting-started/quick-start.md).
 
-### Access the Grafana dashboard
-
-You can access the Grafana dashboard by using `kubectl port-forward` on the Grafana service:
-
-```bash
-kubectl -n default port-forward svc/mycluster-grafana 18080:80
-```
-
-Please note that when you use a different cluster name and namespace, you can use the following command, and replace `${cluster}` and `${namespace}` with your configuration:
-
-```bash
-kubectl -n ${namespace} port-forward svc/${cluster}-grafana 18080:80
-```
-
-Then open your browser and navigate to `http://localhost:18080` to access the Grafana dashboard. The default username and password are `admin` and `gt-operator`:
-
-![Grafana Dashboard](/kubernetes-cluster-grafana-dashboard.jpg)
-
-There are two dashboards available:
-
-- **GreptimeDB**: Displays the metrics of the GreptimeDB cluster.
-- **GreptimeDB Logs**: Displays the logs of the GreptimeDB cluster.
-
-## Next Steps
-
-- If you want to deploy a GreptimeDB cluster with Remote WAL, you can refer to [Configure Remote WAL](/user-guide/deployments-administration/deploy-on-kubernetes/configure-remote-wal.md) for more details.
-
 ## Cleanup
 
 :::danger
@@ -461,7 +378,6 @@ The PVCs wouldn't be deleted by default for safety reasons. If you want to delet
 
 ```bash
 kubectl -n default delete pvc -l app.greptime.io/component=mycluster-datanode
-kubectl -n default delete pvc -l app.greptime.io/component=mycluster-monitor-standalone
 ```
 
 ### Cleanup the etcd cluster
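
Before running the destructive `delete pvc` command above, it is worth listing what the selector actually matches; a sketch reusing the exact label from the hunk:

```bash
# Dry-run equivalent: show the PVCs the delete command would remove.
kubectl -n default get pvc -l app.greptime.io/component=mycluster-datanode
```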
@@ -479,3 +395,8 @@ If you are using `kind` to create the Kubernetes cluster, you can use the follow
 ```bash
 kind delete cluster
 ```
+
+## Next Steps
+
+If you want to deploy a GreptimeDB cluster with Remote WAL, you can refer to [Configure Remote WAL](/user-guide/deployments-administration/deploy-on-kubernetes/configure-remote-wal.md) for more details.
+
