From 93b82bb0bbac5603f58439c3344223f5b6557423 Mon Sep 17 00:00:00 2001 From: Michael Wolf Date: Thu, 30 Oct 2025 16:26:25 -0700 Subject: [PATCH 1/5] Rebuild docs1 --- .../hashicorp_vault/_dev/build/docs/README.md | 287 ++++++---- packages/hashicorp_vault/docs/README.md | 522 +++++++++--------- .../docs/knowledge_base/service_info.md | 272 +++++++++ 3 files changed, 707 insertions(+), 374 deletions(-) create mode 100644 packages/hashicorp_vault/docs/knowledge_base/service_info.md diff --git a/packages/hashicorp_vault/_dev/build/docs/README.md b/packages/hashicorp_vault/_dev/build/docs/README.md index d98daad7fbc..3d1f2a28141 100644 --- a/packages/hashicorp_vault/_dev/build/docs/README.md +++ b/packages/hashicorp_vault/_dev/build/docs/README.md @@ -1,137 +1,224 @@ -# Hashicorp Vault +# Hashicorp Vault Integration for Elastic -This integration collects logs and metrics from Hashicorp Vault. There are -three data streams: +## Overview -- audit - Audit logs from file or TCP socket. -- log - Operation log from file. -- metrics - Telemetry data from the /sys/metrics API. +The Hashicorp Vault integration for Elastic enables the collection of logs and metrics from Hashicorp Vault. This allows you to monitor Vault server health, track access to secrets, and maintain a detailed audit trail for security and compliance. -## Compatibility +This integration facilitates the following use cases: +- **Security Monitoring and Auditing**: Track all access to secrets, who accessed them, and when, providing a detailed audit trail for compliance and security investigations. +- **Operational Monitoring**: Monitor Vault server health, performance, and operational status to identify issues before they impact production. +- **Access Pattern Analysis**: Analyze patterns in secret access to identify potential security threats or unusual behavior. +- **Compliance Reporting**: Generate reports from audit logs to demonstrate compliance with security policies and regulatory requirements. +- **Performance Optimization**: Track metrics to understand Vault usage patterns and optimize resource allocation. +- **Secret Lifecycle Management**: Monitor secret creation, access, renewal, and revocation activities across your organization. -This integration has been tested with Vault 1.11. +### Compatibility -## Audit Logs +This integration has been tested with HashiCorp Vault 1.11. +It requires Elastic Stack version 8.12.0 or higher, or version 9.0.0 and above. -Vault audit logs provide a detailed accounting of who accessed or modified what -secrets. The logs do not contain the actual secret values (for strings), but -instead contain the value hashed with a salt using HMAC-SHA256. Hashes can be -compared to values by using the -[`/sys/audit-hash`](https://www.vaultproject.io/api/system/audit-hash.html) API. +## What data does this integration collect? -In order to use this integration for audit logs you must configure Vault -to use a [`file` audit device](https://www.vaultproject.io/docs/audit/file) -or [`socket` audit device](https://www.vaultproject.io/docs/audit/socket). The -file audit device provides the strongest delivery guarantees. +This integration collects the following types of data from HashiCorp Vault: -### File audit device requirements +- **Audit Logs** (`hashicorp_vault.audit`): Detailed records of all requests and responses to Vault APIs, including authentication attempts, secret access, policy changes, and administrative operations. 
Audit logs contain HMAC-SHA256 hashed values of secrets and can be collected via file or TCP socket. +- **Operational Logs** (`hashicorp_vault.log`): JSON-formatted operational logs from the Vault server, including startup messages, configuration changes, errors, warnings, and general operational events. +- **Metrics** (`hashicorp_vault.metrics`): Prometheus-formatted telemetry data from the `/v1/sys/metrics` API endpoint, including performance counters, gauges, and system health indicators. -- Create a directory for audit logs on each Vault server host. +## What do I need to use this integration? -``` -mkdir /var/log/vault -``` +### Vendor Prerequisites -- Enable the file audit device. +- **For Audit Log Collection (File)**: A file audit device must be enabled with write permissions to a directory accessible by Vault. +- **For Audit Log Collection (Socket)**: A socket audit device can be configured to stream logs to a TCP endpoint where Elastic Agent is listening. +- **For Operational Log Collection**: Vault must be configured to output logs in JSON format (`log_format = "json"`) and the log file must be accessible by Elastic Agent. +- **For Metrics Collection**: + - A Vault token with read access to the `/sys/metrics` API endpoint. + - Vault telemetry must be configured with `disable_hostname = true`. It is also recommended to set `enable_hostname_label = true`. + - The Elastic Agent must have network access to the Vault API endpoint. -``` -vault audit enable file file_path=/var/log/vault/audit.json -``` +### Elastic Prerequisites -- Configure log rotation for the audit log. The exact steps may vary by OS. -This example uses `logrotate` to call `systemctl reload` on the -[Vault service](https://learn.hashicorp.com/tutorials/vault/deployment-guide#step-3-configure-systemd) -which sends the process a SIGHUP signal. The SIGHUP signal causes Vault to start -writing to a new log file. +- Elastic Stack version 8.12.0 or higher (or 9.0.0+). +- Elastic Agent installed and enrolled in Fleet. -``` -tee /etc/logrotate.d/vault <<'EOF' -/var/log/vault/audit.json { - rotate 7 - daily - compress - delaycompress - missingok - notifempty - extension json - dateext - dateformat %Y-%m-%d. - postrotate - /bin/systemctl reload vault || true - endscript -} -EOF -``` +## How do I deploy this integration? -### Socket audit device requirements +### Vendor Setup -To enable the socket audit device in Vault you should first enable this -integration because Vault will test that it can connect to the TCP socket. +#### Setting up Audit Logs (File Audit Device) -- Add this integration and enable audit log collection via TCP. If Vault will -be connecting remotely set the listen address to 0.0.0.0. +1. Create a directory for audit logs on each Vault server: + ```bash + mkdir /var/log/vault + ``` -- Configure the socket audit device to stream logs to this integration. -Substitute in the IP address of the Elastic Agent to which you are sending the -audit logs. +2. Enable the file audit device in Vault: + ```bash + vault audit enable file file_path=/var/log/vault/audit.json + ``` -``` -vault audit enable socket address=${ELASTIC_AGENT_IP}:9007 socket_type=tcp +3. Configure log rotation to prevent disk space issues. The following is an example using `logrotate`: + ```bash + tee /etc/logrotate.d/vault <<'EOF' + /var/log/vault/audit.json { + rotate 7 + daily + compress + delaycompress + missingok + notifempty + extension json + dateext + dateformat %Y-%m-%d. 
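+     # Reloading the Vault service sends it a SIGHUP signal, which makes Vault start writing to a new audit log file.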
+ postrotate + /bin/systemctl reload vault || true + endscript + } + EOF + ``` + +#### Setting up Audit Logs (Socket Audit Device) + +1. Note the IP address and port where Elastic Agent will be listening (e.g., port `9007`). +2. **Important**: Configure and deploy the integration in Kibana *before* enabling the socket device in Vault, as Vault will immediately test the connection. +3. Enable the socket audit device in Vault, substituting the IP of your Elastic Agent: + ```bash + vault audit enable socket address=${ELASTIC_AGENT_IP}:9007 socket_type=tcp + ``` + +#### Setting up Operational Logs + +Add the following line to your Vault configuration file to enable JSON-formatted logs. Ensure the log output is directed to a file that Elastic Agent can read. +```hcl +log_format = "json" ``` -{{event "audit"}} +#### Setting up Metrics -{{fields "audit"}} +1. Configure Vault telemetry in your Vault configuration file: + ```hcl + telemetry { + disable_hostname = true + enable_hostname_label = true + } + ``` -## Operational Logs +2. Create a Vault policy that grants read access to the metrics endpoint. + ```hcl + path "sys/metrics" { + capabilities = ["read"] + } + ``` -Vault outputs its logs to stdout. In order to use the package to collect the -operational log you will need to direct its output to a file. +3. Create a Vault token with this policy: + ```bash + vault token create -policy=metrics-read + ``` -This table shows how the Vault field names are mapped in events. The remaining -structured data fields (indicated by the `*`) are placed under -`hashicorp_vault.log` which is mapped as `flattened` to allow for arbitrary -fields without causing mapping explosions or type conflicts. +### Onboard / configure in Kibana -| Original Field | Package Field | -|---------------- |----------------------- | -| `@timestamp` | `@timestamp` | -| `@module` | `log.logger` | -| `@level` | `log.level` | -| `@message` | `message` | -| `*` | `hashicorp_vault.log` | +1. In Kibana, navigate to **Management > Integrations**. +2. Search for "HashiCorp Vault" and select the integration. +3. Click **Add HashiCorp Vault**. +4. Configure the integration based on your data collection needs: -### Requirements + **For Audit Logs (File)**: + - Enable the "Audit logs (file audit device)" input. + - Specify the file path (default: `/var/log/vault/audit*.json*`). -By default, Vault uses its `standard` log output as opposed to `json`. Please -enable the JSON output in order to have the log data in a structured format. In -a config file for Vault add the following: + **For Audit Logs (TCP Socket)**: + - Enable the "Audit logs (socket audit device)" input. + - Configure the `Listen Address` (default: `localhost`) and `Listen Port` (default: `9007`). + - If Vault connects from a different host, set the Listen Address to `0.0.0.0`. -```hcl -log_format = "json" -``` + **For Operational Logs**: + - Enable the "Operation logs" input. + - Specify the log file path (default: `/var/log/vault/log*.json*`). -{{event "log"}} + **For Metrics**: + - Enable the "Vault metrics (prometheus)" input. + - Enter the Vault host URL under `Hosts` (default: `http://localhost:8200`). + - Provide the `Vault Token` created earlier. + - Adjust the collection `Period` if needed (default: `30s`). -{{fields "log"}} +5. Click **Save and continue** to deploy the integration policy to your Elastic Agents. 
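+
+As an optional check before moving on to validation, you can query the metrics API directly from the Elastic Agent host to confirm that the telemetry endpoint and token are working. This is a minimal sketch; it assumes Vault is reachable at `http://localhost:8200` and that the `VAULT_TOKEN` environment variable holds the token created earlier:
+
+```bash
+# A successful request returns Prometheus-formatted metrics.
+curl --header "X-Vault-Token: $VAULT_TOKEN" \
+  "http://localhost:8200/v1/sys/metrics?format=prometheus"
+```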
-## Metrics +### Validation -Vault can provide [telemetry](https://www.vaultproject.io/docs/configuration/telemetry) -information in the form of Prometheus metrics. You can verify that metrics are -enabled by making an HTTP request to -`http://vault_server:8200/v1/sys/metrics?format=prometheus` on your Vault server. +1. **Check Agent Status**: In Fleet, verify that the Elastic Agent shows a "Healthy" status. +2. **Verify Data Ingestion**: + - Navigate to **Analytics > Discover** in Kibana. + - Select the appropriate data view (`logs-hashicorp_vault.audit-*`, `logs-hashicorp_vault.log-*`, or `metrics-hashicorp_vault.metrics-*`). + - Confirm that events are appearing with recent timestamps. +3. **View Dashboards**: + - Navigate to **Analytics > Dashboards**. + - Search for "Hashicorp Vault" to find the pre-built dashboards. + - Verify that data is populating the dashboard panels. -### Requirements +## Troubleshooting -You must configure the Vault prometheus endpoint to disable the hostname -prefixing. It's recommended to also enable the hostname label. +For help with Elastic ingest tools, check [Common problems](https://www.elastic.co/docs/troubleshoot/ingest/fleet/common-problems). -```hcl -telemetry { - disable_hostname = true - enable_hostname_label = true -} -``` +### Common Configuration Issues + +- **No Data Collected**: + - Verify Elastic Agent is healthy in Fleet. + - Ensure the user running Elastic Agent has read permissions on log files. + - Double-check that the configured file paths in the integration policy match the actual log file locations. + - For operational logs, confirm Vault is configured with `log_format = "json"`. +- **TCP Socket Connection Fails**: + - Verify network connectivity between Vault and the Elastic Agent host. + - Check that firewall rules allow TCP connections on the configured port. + - If Vault is remote, ensure the listen address is set to `0.0.0.0` in the integration policy. +- **Metrics Not Collected**: + - Verify the Vault token is valid, has not expired, and has read permissions for the `/sys/metrics` endpoint. + - Confirm Vault's telemetry configuration includes `disable_hostname = true`. + +### Vendor Resources + +- [HashiCorp Vault Audit Devices](https://developer.hashicorp.com/vault/docs/audit) +- [HashiCorp Vault Telemetry Configuration](https://developer.hashicorp.com/vault/docs/configuration/telemetry) +- [HashiCorp Vault Troubleshooting](https://developer.hashicorp.com/vault/docs/troubleshoot) + +## Scaling + +- **Audit Log Performance**: Vault's file audit device provides the strongest delivery guarantees. Ensure adequate disk I/O capacity, as Vault will block operations if it cannot write audit logs. +- **Metrics Collection**: The default collection interval is 30 seconds. Adjust this period based on your monitoring needs and Vault server load. +- **TCP Socket Considerations**: When using the socket audit device, ensure network reliability between Vault and the Elastic Agent. If the TCP connection is unavailable, Vault operations will be blocked until it is restored. + +For more information on architectures that can be used for scaling this integration, check the [Ingest Architectures](https://www.elastic.co/docs/manage-data/ingest/ingest-reference-architectures) documentation. + +## Reference + +### audit + +The `audit` data stream collects audit logs from the file or socket audit devices. + +#### audit fields + +{{ fields "audit" }} + +### log + +The `log` data stream collects operational logs from Vault's standard log file. 
+ +#### log fields + +{{ fields "log" }} + +### metrics + +The `metrics` data stream collects Prometheus-formatted metrics from the Vault telemetry endpoint. + +#### metrics fields + +{{ fields "metrics" }} + +### Inputs used +{{ inputDocs }} -{{fields "metrics"}} +### API usage +These APIs are used with this integration: +* **`/v1/sys/metrics`**: Used to collect Prometheus-formatted telemetry data. See the [HashiCorp Vault Metrics API documentation](https://developer.hashicorp.com/vault/api-docs/system/metrics) for more information. +* **`/sys/audit-hash`**: Can be used to manually verify the hash of a secret found in an audit log. See the [HashiCorp Vault Audit Hash API documentation](https://developer.hashicorp.com/vault/api-docs/system/audit-hash) for more information. diff --git a/packages/hashicorp_vault/docs/README.md b/packages/hashicorp_vault/docs/README.md index 56d106a7b3f..299536a3e5b 100644 --- a/packages/hashicorp_vault/docs/README.md +++ b/packages/hashicorp_vault/docs/README.md @@ -1,179 +1,202 @@ -# Hashicorp Vault +# Hashicorp Vault Integration for Elastic -This integration collects logs and metrics from Hashicorp Vault. There are -three data streams: +## Overview -- audit - Audit logs from file or TCP socket. -- log - Operation log from file. -- metrics - Telemetry data from the /sys/metrics API. +The Hashicorp Vault integration for Elastic enables the collection of logs and metrics from Hashicorp Vault. This allows you to monitor Vault server health, track access to secrets, and maintain a detailed audit trail for security and compliance. -## Compatibility +This integration facilitates the following use cases: +- **Security Monitoring and Auditing**: Track all access to secrets, who accessed them, and when, providing a detailed audit trail for compliance and security investigations. +- **Operational Monitoring**: Monitor Vault server health, performance, and operational status to identify issues before they impact production. +- **Access Pattern Analysis**: Analyze patterns in secret access to identify potential security threats or unusual behavior. +- **Compliance Reporting**: Generate reports from audit logs to demonstrate compliance with security policies and regulatory requirements. +- **Performance Optimization**: Track metrics to understand Vault usage patterns and optimize resource allocation. +- **Secret Lifecycle Management**: Monitor secret creation, access, renewal, and revocation activities across your organization. -This integration has been tested with Vault 1.11. +### Compatibility -## Audit Logs +This integration has been tested with HashiCorp Vault 1.11. +It requires Elastic Stack version 8.12.0 or higher, or version 9.0.0 and above. -Vault audit logs provide a detailed accounting of who accessed or modified what -secrets. The logs do not contain the actual secret values (for strings), but -instead contain the value hashed with a salt using HMAC-SHA256. Hashes can be -compared to values by using the -[`/sys/audit-hash`](https://www.vaultproject.io/api/system/audit-hash.html) API. +## What data does this integration collect? -In order to use this integration for audit logs you must configure Vault -to use a [`file` audit device](https://www.vaultproject.io/docs/audit/file) -or [`socket` audit device](https://www.vaultproject.io/docs/audit/socket). The -file audit device provides the strongest delivery guarantees. 
+This integration collects the following types of data from HashiCorp Vault: -### File audit device requirements +- **Audit Logs** (`hashicorp_vault.audit`): Detailed records of all requests and responses to Vault APIs, including authentication attempts, secret access, policy changes, and administrative operations. Audit logs contain HMAC-SHA256 hashed values of secrets and can be collected via file or TCP socket. +- **Operational Logs** (`hashicorp_vault.log`): JSON-formatted operational logs from the Vault server, including startup messages, configuration changes, errors, warnings, and general operational events. +- **Metrics** (`hashicorp_vault.metrics`): Prometheus-formatted telemetry data from the `/v1/sys/metrics` API endpoint, including performance counters, gauges, and system health indicators. -- Create a directory for audit logs on each Vault server host. +## What do I need to use this integration? -``` -mkdir /var/log/vault -``` +### Vendor Prerequisites -- Enable the file audit device. +- **For Audit Log Collection (File)**: A file audit device must be enabled with write permissions to a directory accessible by Vault. +- **For Audit Log Collection (Socket)**: A socket audit device can be configured to stream logs to a TCP endpoint where Elastic Agent is listening. +- **For Operational Log Collection**: Vault must be configured to output logs in JSON format (`log_format = "json"`) and the log file must be accessible by Elastic Agent. +- **For Metrics Collection**: + - A Vault token with read access to the `/sys/metrics` API endpoint. + - Vault telemetry must be configured with `disable_hostname = true`. It is also recommended to set `enable_hostname_label = true`. + - The Elastic Agent must have network access to the Vault API endpoint. -``` -vault audit enable file file_path=/var/log/vault/audit.json -``` +### Elastic Prerequisites -- Configure log rotation for the audit log. The exact steps may vary by OS. -This example uses `logrotate` to call `systemctl reload` on the -[Vault service](https://learn.hashicorp.com/tutorials/vault/deployment-guide#step-3-configure-systemd) -which sends the process a SIGHUP signal. The SIGHUP signal causes Vault to start -writing to a new log file. +- Elastic Stack version 8.12.0 or higher (or 9.0.0+). +- Elastic Agent installed and enrolled in Fleet. -``` -tee /etc/logrotate.d/vault <<'EOF' -/var/log/vault/audit.json { - rotate 7 - daily - compress - delaycompress - missingok - notifempty - extension json - dateext - dateformat %Y-%m-%d. - postrotate - /bin/systemctl reload vault || true - endscript -} -EOF -``` +## How do I deploy this integration? -### Socket audit device requirements +### Vendor Setup -To enable the socket audit device in Vault you should first enable this -integration because Vault will test that it can connect to the TCP socket. +#### Setting up Audit Logs (File Audit Device) -- Add this integration and enable audit log collection via TCP. If Vault will -be connecting remotely set the listen address to 0.0.0.0. +1. Create a directory for audit logs on each Vault server: + ```bash + mkdir /var/log/vault + ``` -- Configure the socket audit device to stream logs to this integration. -Substitute in the IP address of the Elastic Agent to which you are sending the -audit logs. +2. Enable the file audit device in Vault: + ```bash + vault audit enable file file_path=/var/log/vault/audit.json + ``` -``` -vault audit enable socket address=${ELASTIC_AGENT_IP}:9007 socket_type=tcp -``` +3. 
Configure log rotation to prevent disk space issues. The following is an example using `logrotate`: + ```bash + tee /etc/logrotate.d/vault <<'EOF' + /var/log/vault/audit.json { + rotate 7 + daily + compress + delaycompress + missingok + notifempty + extension json + dateext + dateformat %Y-%m-%d. + postrotate + /bin/systemctl reload vault || true + endscript + } + EOF + ``` + +#### Setting up Audit Logs (Socket Audit Device) -An example event for `audit` looks as following: - -```json -{ - "@timestamp": "2023-09-26T13:07:49.743Z", - "agent": { - "ephemeral_id": "5bbd86cc-8032-432d-be82-fae8f624ed98", - "id": "f25d13cd-18cc-4e73-822c-c4f849322623", - "name": "docker-fleet-agent", - "type": "filebeat", - "version": "8.10.1" - }, - "data_stream": { - "dataset": "hashicorp_vault.audit", - "namespace": "ep", - "type": "logs" - }, - "ecs": { - "version": "8.17.0" - }, - "elastic_agent": { - "id": "f25d13cd-18cc-4e73-822c-c4f849322623", - "snapshot": false, - "version": "8.10.1" - }, - "event": { - "action": "update", - "agent_id_status": "verified", - "category": [ - "authentication" - ], - "dataset": "hashicorp_vault.audit", - "id": "0b1b9013-da54-633d-da69-8575e6794ed3", - "ingested": "2023-09-26T13:08:15Z", - "kind": "event", - "original": "{\"time\":\"2023-09-26T13:07:49.743284857Z\",\"type\":\"request\",\"auth\":{\"token_type\":\"default\"},\"request\":{\"id\":\"0b1b9013-da54-633d-da69-8575e6794ed3\",\"operation\":\"update\",\"namespace\":{\"id\":\"root\"},\"path\":\"sys/audit/test\"}}", - "outcome": "success", - "type": [ - "info" - ] - }, - "hashicorp_vault": { - "audit": { - "auth": { - "token_type": "default" - }, - "request": { - "id": "0b1b9013-da54-633d-da69-8575e6794ed3", - "namespace": { - "id": "root" - }, - "operation": "update", - "path": "sys/audit/test" - }, - "type": "request" - } - }, - "host": { - "architecture": "x86_64", - "containerized": false, - "hostname": "docker-fleet-agent", - "id": "28da52b32df94b50aff67dfb8f1be3d6", - "ip": [ - "192.168.80.5" - ], - "mac": [ - "02-42-C0-A8-50-05" - ], - "name": "docker-fleet-agent", - "os": { - "codename": "focal", - "family": "debian", - "kernel": "5.10.104-linuxkit", - "name": "Ubuntu", - "platform": "ubuntu", - "type": "linux", - "version": "20.04.6 LTS (Focal Fossa)" - } - }, - "input": { - "type": "log" - }, - "log": { - "file": { - "path": "/tmp/service_logs/vault/audit.json" - }, - "offset": 0 - }, - "tags": [ - "preserve_original_event", - "hashicorp-vault-audit" - ] -} +1. Note the IP address and port where Elastic Agent will be listening (e.g., port `9007`). +2. **Important**: Configure and deploy the integration in Kibana *before* enabling the socket device in Vault, as Vault will immediately test the connection. +3. Enable the socket audit device in Vault, substituting the IP of your Elastic Agent: + ```bash + vault audit enable socket address=${ELASTIC_AGENT_IP}:9007 socket_type=tcp + ``` + +#### Setting up Operational Logs + +Add the following line to your Vault configuration file to enable JSON-formatted logs. Ensure the log output is directed to a file that Elastic Agent can read. +```hcl +log_format = "json" ``` +#### Setting up Metrics + +1. Configure Vault telemetry in your Vault configuration file: + ```hcl + telemetry { + disable_hostname = true + enable_hostname_label = true + } + ``` + +2. Create a Vault policy that grants read access to the metrics endpoint. + ```hcl + path "sys/metrics" { + capabilities = ["read"] + } + ``` + +3. 
Create a Vault token with this policy: + ```bash + vault token create -policy=metrics-read + ``` + +### Onboard / configure in Kibana + +1. In Kibana, navigate to **Management > Integrations**. +2. Search for "HashiCorp Vault" and select the integration. +3. Click **Add HashiCorp Vault**. +4. Configure the integration based on your data collection needs: + + **For Audit Logs (File)**: + - Enable the "Audit logs (file audit device)" input. + - Specify the file path (default: `/var/log/vault/audit*.json*`). + + **For Audit Logs (TCP Socket)**: + - Enable the "Audit logs (socket audit device)" input. + - Configure the `Listen Address` (default: `localhost`) and `Listen Port` (default: `9007`). + - If Vault connects from a different host, set the Listen Address to `0.0.0.0`. + + **For Operational Logs**: + - Enable the "Operation logs" input. + - Specify the log file path (default: `/var/log/vault/log*.json*`). + + **For Metrics**: + - Enable the "Vault metrics (prometheus)" input. + - Enter the Vault host URL under `Hosts` (default: `http://localhost:8200`). + - Provide the `Vault Token` created earlier. + - Adjust the collection `Period` if needed (default: `30s`). + +5. Click **Save and continue** to deploy the integration policy to your Elastic Agents. + +### Validation + +1. **Check Agent Status**: In Fleet, verify that the Elastic Agent shows a "Healthy" status. +2. **Verify Data Ingestion**: + - Navigate to **Analytics > Discover** in Kibana. + - Select the appropriate data view (`logs-hashicorp_vault.audit-*`, `logs-hashicorp_vault.log-*`, or `metrics-hashicorp_vault.metrics-*`). + - Confirm that events are appearing with recent timestamps. +3. **View Dashboards**: + - Navigate to **Analytics > Dashboards**. + - Search for "Hashicorp Vault" to find the pre-built dashboards. + - Verify that data is populating the dashboard panels. + +## Troubleshooting + +For help with Elastic ingest tools, check [Common problems](https://www.elastic.co/docs/troubleshoot/ingest/fleet/common-problems). + +### Common Configuration Issues + +- **No Data Collected**: + - Verify Elastic Agent is healthy in Fleet. + - Ensure the user running Elastic Agent has read permissions on log files. + - Double-check that the configured file paths in the integration policy match the actual log file locations. + - For operational logs, confirm Vault is configured with `log_format = "json"`. +- **TCP Socket Connection Fails**: + - Verify network connectivity between Vault and the Elastic Agent host. + - Check that firewall rules allow TCP connections on the configured port. + - If Vault is remote, ensure the listen address is set to `0.0.0.0` in the integration policy. +- **Metrics Not Collected**: + - Verify the Vault token is valid, has not expired, and has read permissions for the `/sys/metrics` endpoint. + - Confirm Vault's telemetry configuration includes `disable_hostname = true`. + +### Vendor Resources + +- [HashiCorp Vault Audit Devices](https://developer.hashicorp.com/vault/docs/audit) +- [HashiCorp Vault Telemetry Configuration](https://developer.hashicorp.com/vault/docs/configuration/telemetry) +- [HashiCorp Vault Troubleshooting](https://developer.hashicorp.com/vault/docs/troubleshoot) + +## Scaling + +- **Audit Log Performance**: Vault's file audit device provides the strongest delivery guarantees. Ensure adequate disk I/O capacity, as Vault will block operations if it cannot write audit logs. +- **Metrics Collection**: The default collection interval is 30 seconds. 
Adjust this period based on your monitoring needs and Vault server load. +- **TCP Socket Considerations**: When using the socket audit device, ensure network reliability between Vault and the Elastic Agent. If the TCP connection is unavailable, Vault operations will be blocked until it is restored. + +For more information on architectures that can be used for scaling this integration, check the [Ingest Architectures](https://www.elastic.co/docs/manage-data/ingest/ingest-reference-architectures) documentation. + +## Reference + +### audit + +The `audit` data stream collects audit logs from the file or socket audit devices. + +#### audit fields + **Exported fields** | Field | Description | Type | @@ -285,112 +308,11 @@ An example event for `audit` looks as following: | user.id | Unique identifier of the user. | keyword | -## Operational Logs - -Vault outputs its logs to stdout. In order to use the package to collect the -operational log you will need to direct its output to a file. - -This table shows how the Vault field names are mapped in events. The remaining -structured data fields (indicated by the `*`) are placed under -`hashicorp_vault.log` which is mapped as `flattened` to allow for arbitrary -fields without causing mapping explosions or type conflicts. - -| Original Field | Package Field | -|---------------- |----------------------- | -| `@timestamp` | `@timestamp` | -| `@module` | `log.logger` | -| `@level` | `log.level` | -| `@message` | `message` | -| `*` | `hashicorp_vault.log` | - -### Requirements - -By default, Vault uses its `standard` log output as opposed to `json`. Please -enable the JSON output in order to have the log data in a structured format. In -a config file for Vault add the following: +### log -```hcl -log_format = "json" -``` +The `log` data stream collects operational logs from Vault's standard log file. 
-An example event for `log` looks as following: - -```json -{ - "@timestamp": "2023-09-26T13:09:08.587Z", - "agent": { - "ephemeral_id": "5bbd86cc-8032-432d-be82-fae8f624ed98", - "id": "f25d13cd-18cc-4e73-822c-c4f849322623", - "name": "docker-fleet-agent", - "type": "filebeat", - "version": "8.10.1" - }, - "data_stream": { - "dataset": "hashicorp_vault.log", - "namespace": "ep", - "type": "logs" - }, - "ecs": { - "version": "8.17.0" - }, - "elastic_agent": { - "id": "f25d13cd-18cc-4e73-822c-c4f849322623", - "snapshot": false, - "version": "8.10.1" - }, - "event": { - "agent_id_status": "verified", - "dataset": "hashicorp_vault.log", - "ingested": "2023-09-26T13:09:35Z", - "kind": "event", - "original": "{\"@level\":\"info\",\"@message\":\"proxy environment\",\"@timestamp\":\"2023-09-26T13:09:08.587324Z\",\"http_proxy\":\"\",\"https_proxy\":\"\",\"no_proxy\":\"\"}" - }, - "hashicorp_vault": { - "log": { - "http_proxy": "", - "https_proxy": "", - "no_proxy": "" - } - }, - "host": { - "architecture": "x86_64", - "containerized": false, - "hostname": "docker-fleet-agent", - "id": "28da52b32df94b50aff67dfb8f1be3d6", - "ip": [ - "192.168.80.5" - ], - "mac": [ - "02-42-C0-A8-50-05" - ], - "name": "docker-fleet-agent", - "os": { - "codename": "focal", - "family": "debian", - "kernel": "5.10.104-linuxkit", - "name": "Ubuntu", - "platform": "ubuntu", - "type": "linux", - "version": "20.04.6 LTS (Focal Fossa)" - } - }, - "input": { - "type": "log" - }, - "log": { - "file": { - "path": "/tmp/service_logs/log.json" - }, - "level": "info", - "offset": 709 - }, - "message": "proxy environment", - "tags": [ - "preserve_original_event", - "hashicorp-vault-log" - ] -} -``` +#### log fields **Exported fields** @@ -417,24 +339,11 @@ An example event for `log` looks as following: | tags | List of keywords used to tag each event. | keyword | -## Metrics - -Vault can provide [telemetry](https://www.vaultproject.io/docs/configuration/telemetry) -information in the form of Prometheus metrics. You can verify that metrics are -enabled by making an HTTP request to -`http://vault_server:8200/v1/sys/metrics?format=prometheus` on your Vault server. +### metrics -### Requirements +The `metrics` data stream collects Prometheus-formatted metrics from the Vault telemetry endpoint. -You must configure the Vault prometheus endpoint to disable the hostname -prefixing. It's recommended to also enable the hostname label. - -```hcl -telemetry { - disable_hostname = true - enable_hostname_label = true -} -``` +#### metrics fields **Exported fields** @@ -483,3 +392,68 @@ telemetry { | service.address | Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). | keyword | | | service.type | The type of the service data is collected from. The type can be used to group and correlate logs and metrics from one service type. Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`. | keyword | | + +### Inputs used +These inputs can be used with this integration: +
+logfile + +## Setup +For more details about the logfile input settings, check the [Filebeat documentation](https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-log). + +### Collecting logs from logfile + +To collect logs via logfile, select **Collect logs via the logfile input** and configure the following parameter: + +- Paths: List of glob-based paths to crawl and fetch log files from. Supports glob patterns like + `/var/log/*.log` or `/var/log/*/*.log` for subfolder matching. Each file found starts a + separate harvester. +
+
+tcp + +## Setup + +For more details about the TCP input settings, check the [Filebeat documentation](https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-tcp). + +### Collecting logs from TCP + +To collect logs via TCP, select **Collect logs via TCP** and configure the following parameters: + +**Required Settings:** +- Host +- Port + +**Common Optional Settings:** +- Max Message Size - Maximum size of incoming messages +- Max Connections - Maximum number of concurrent connections +- Timeout - How long to wait for data before closing idle connections +- Line Delimiter - Character(s) that separate log messages + +## SSL/TLS Configuration + +To enable encrypted connections, configure the following SSL settings: + +**SSL Settings:** +- Enable SSL*- Toggle to enable SSL/TLS encryption +- Certificate - Path to the SSL certificate file (`.crt` or `.pem`) +- Certificate Key - Path to the private key file (`.key`) +- Certificate Authorities - Path to CA certificate file for client certificate validation (optional) +- Client Authentication - Require client certificates (`none`, `optional`, or `required`) +- Supported Protocols - TLS versions to support (e.g., `TLSv1.2`, `TLSv1.3`) + +**Example SSL Configuration:** +```yaml +ssl.enabled: true +ssl.certificate: "/path/to/server.crt" +ssl.key: "/path/to/server.key" +ssl.certificate_authorities: ["/path/to/ca.crt"] +ssl.client_authentication: "optional" +``` +
+ + +### API usage +These APIs are used with this integration: +* **`/v1/sys/metrics`**: Used to collect Prometheus-formatted telemetry data. See the [HashiCorp Vault Metrics API documentation](https://developer.hashicorp.com/vault/api-docs/system/metrics) for more information. +* **`/sys/audit-hash`**: Can be used to manually verify the hash of a secret found in an audit log. See the [HashiCorp Vault Audit Hash API documentation](https://developer.hashicorp.com/vault/api-docs/system/audit-hash) for more information. diff --git a/packages/hashicorp_vault/docs/knowledge_base/service_info.md b/packages/hashicorp_vault/docs/knowledge_base/service_info.md new file mode 100644 index 00000000000..ab268b5329d --- /dev/null +++ b/packages/hashicorp_vault/docs/knowledge_base/service_info.md @@ -0,0 +1,272 @@ +# Service Info + +## Common use cases + +This integration facilitates the following use cases: + +- **Security Monitoring and Auditing**: Track all access to secrets, who accessed them, and when, providing a detailed audit trail for compliance and security investigations +- **Operational Monitoring**: Monitor Vault server health, performance, and operational status to identify issues before they impact production +- **Access Pattern Analysis**: Analyze patterns in secret access to identify potential security threats or unusual behavior +- **Compliance Reporting**: Generate reports from audit logs to demonstrate compliance with security policies and regulatory requirements +- **Performance Optimization**: Track metrics to understand Vault usage patterns and optimize resource allocation +- **Secret Lifecycle Management**: Monitor secret creation, access, renewal, and revocation activities across your organization + +## Data types collected + +This integration collects the following types of data from HashiCorp Vault: + +- **Audit Logs** (`hashicorp_vault.audit`): Detailed records of all requests and responses to Vault APIs, including authentication attempts, secret access, policy changes, and administrative operations. Audit logs contain HMAC-SHA256 hashed values of secrets (not plaintext) and can be collected via file or TCP socket. +- **Operational Logs** (`hashicorp_vault.log`): JSON-formatted operational logs from the Vault server, including startup messages, configuration changes, errors, warnings, and general operational events. +- **Metrics** (`hashicorp_vault.metrics`): Prometheus-formatted telemetry data from the `/v1/sys/metrics` API endpoint, including performance counters, gauges, histograms, and system health indicators. + +## Compatibility + +This integration has been tested with HashiCorp Vault 1.11. + +The integration requires Elastic Stack version 8.12.0 or higher, or version 9.0.0 and above. + +## Scaling and Performance + +### Audit Log Performance + +Vault's file audit device provides the strongest delivery guarantees for audit logs. When using the file audit device, ensure adequate disk I/O capacity as Vault will block operations if it cannot write audit logs. + +### Metrics Collection + +The metrics endpoint (`/v1/sys/metrics?format=prometheus`) exposes Vault's telemetry data. The default collection interval for this integration is 30 seconds. Adjust this based on your monitoring needs and Vault server load. + +### TCP Socket Considerations + +When using the socket audit device for real-time log streaming, ensure network reliability between Vault and the Elastic Agent. If the TCP connection is unavailable, Vault operations will be blocked until the connection is restored. 
+ +### Log Rotation + +For file-based log collection, implement log rotation to prevent disk space exhaustion. The integration supports rotated log files with compression and date extensions. + +# Set Up Instructions + +## Vendor prerequisites + +The following prerequisites are required on the HashiCorp Vault side: + +### For Audit Log Collection + +- **File Audit Device**: A file audit device must be enabled with write permissions to a directory accessible by Vault +- **Socket Audit Device** (alternative): A socket audit device can be configured to stream logs to a TCP endpoint where Elastic Agent is listening + +### For Operational Log Collection + +- **JSON Log Format**: Vault must be configured to output logs in JSON format (set `log_format = "json"` in Vault configuration) +- **File Access**: The Vault operational log file must be accessible by Elastic Agent for collection + +### For Metrics Collection + +- **Vault Token**: A Vault token with read access to the `/sys/metrics` API endpoint +- **Telemetry Configuration**: Vault telemetry must be configured with `disable_hostname = true` and `enable_hostname_label = true` is recommended +- **Network Access**: The Elastic Agent must be able to reach the Vault API endpoint (default: `http://localhost:8200`) + +## Elastic prerequisites + +- Elastic Stack version 8.12.0 or higher (or 9.0.0+) +- Elastic Agent installed and enrolled in Fleet + +## Vendor set up steps + +### Setting up Audit Logs (File Audit Device) + +1. Create a directory for audit logs on each Vault server: +```bash +mkdir /var/log/vault +``` + +2. Enable the file audit device in Vault: +```bash +vault audit enable file file_path=/var/log/vault/audit.json +``` + +3. Configure log rotation to prevent disk space issues. Example using `logrotate`: +```bash +tee /etc/logrotate.d/vault <<'EOF' +/var/log/vault/audit.json { + rotate 7 + daily + compress + delaycompress + missingok + notifempty + extension json + dateext + dateformat %Y-%m-%d. + postrotate + /bin/systemctl reload vault || true + endscript +} +EOF +``` + +### Setting up Audit Logs (Socket Audit Device) + +1. Note the IP address and port where Elastic Agent will be listening (default: port 9007) + +2. Enable the socket audit device in Vault (substitute your Elastic Agent IP): +```bash +vault audit enable socket address=${ELASTIC_AGENT_IP}:9007 socket_type=tcp +``` + +**Note**: Configure the integration in Kibana first before enabling the socket audit device, as Vault will test the connection. + +### Setting up Operational Logs + +Configure Vault to output logs in JSON format by adding to your Vault configuration file: +```hcl +log_format = "json" +``` + +Direct Vault's log output to a file that Elastic Agent can read. + +### Setting up Metrics + +1. Configure Vault telemetry in your Vault configuration file: +```hcl +telemetry { + disable_hostname = true + enable_hostname_label = true +} +``` + +2. Create a Vault token with read access to the metrics endpoint: +```bash +vault token create -policy=metrics-read +``` + +Ensure the token has a policy that grants read access to `sys/metrics`. + +## Kibana set up steps + +1. In Kibana, navigate to **Management > Integrations** + +2. Search for "HashiCorp Vault" and select the integration + +3. Click **Add HashiCorp Vault** + +4. 
Configure the integration based on your data collection needs: + + **For Audit Logs (File)**: + - Enable the "Audit logs (file audit device)" input + - Specify the file path (default: `/var/log/vault/audit*.json*`) + - Optionally enable "Preserve original event" to keep raw logs + + **For Audit Logs (TCP Socket)**: + - Enable the "Audit logs (socket audit device)" input + - Configure the listen address (default: `localhost`) and port (default: `9007`) + - If Vault will connect remotely, set listen address to `0.0.0.0` + + **For Operational Logs**: + - Enable the "Operation logs" input + - Specify the log file path (default: `/var/log/vault/log*.json*`) + + **For Metrics**: + - Enable the "Vault metrics (prometheus)" input + - Enter the Vault host URL (default: `http://localhost:8200`) + - Provide the Vault token with read access to `/sys/metrics` + - Optionally configure SSL settings if using HTTPS + - Adjust the collection period if needed (default: `30s`) + +5. Configure the agent policy and select the agent to run this integration + +6. Click **Save and continue** to deploy the integration + +# Validation Steps + +After configuring the integration, validate that data is flowing correctly: + +1. **Check Agent Status**: In Fleet, verify that the Elastic Agent shows a "Healthy" status + +2. **Verify Data Ingestion**: + - Navigate to **Analytics > Discover** in Kibana + - Select the appropriate data view for each data stream: + - `logs-hashicorp_vault.audit-*` for audit logs + - `logs-hashicorp_vault.log-*` for operational logs + - `metrics-hashicorp_vault.metrics-*` for metrics + - Confirm that events are appearing with recent timestamps + +3. **Test Audit Logging**: Perform an action in Vault (e.g., read a secret) and verify it appears in the audit logs + +4. **View Dashboards**: + - Navigate to **Analytics > Dashboards** + - Open the "Hashicorp Vault Audit Log Dashboard" to view audit log visualizations + - Open the "Hashicorp Vault Log Dashboard" to view operational log visualizations + - Verify that data is populating the dashboard panels + +5. 
**Check Metrics**: For metrics collection, verify that Prometheus metrics are being collected by searching for documents with `hashicorp_vault.metrics.*` fields + +# Troubleshooting + +## Common Configuration Issues + +### No data collected + +- **Agent Status**: Check the Elastic Agent status in Fleet to ensure it's running and healthy +- **File Permissions**: Verify that the user running Elastic Agent has read permissions on log files +- **File Paths**: Ensure the configured file paths match the actual location of Vault logs +- **Log Format**: For operational logs, confirm Vault is configured with `log_format = "json"` + +### TCP socket connection fails + +- **Network Connectivity**: Verify network connectivity between Vault and Elastic Agent +- **Firewall Rules**: Check that firewall rules allow TCP connections on the configured port +- **Listen Address**: If Vault is on a different host, ensure the listen address is set to `0.0.0.0` rather than `localhost` +- **Port Conflicts**: Verify the configured port is not in use by another service + +### Metrics not collected + +- **Vault Token**: Verify the Vault token is valid and has not expired +- **Token Permissions**: Ensure the token has read access to the `/sys/metrics` endpoint +- **Telemetry Configuration**: Confirm Vault telemetry is properly configured with `disable_hostname = true` +- **Network Access**: Verify Elastic Agent can reach the Vault API endpoint + +## Ingestion Errors + +If `error.message` appears in ingested data: + +- **Check Pipeline Errors**: Review the error message details to identify parsing or processing issues +- **Log Format Issues**: Ensure logs are in valid JSON format and match expected schema +- **Missing Required Fields**: Some audit log events require certain fields; check for incomplete log entries + +## API Authentication Errors + +### Token expired or invalid + +- Generate a new Vault token with appropriate permissions +- Update the integration configuration in Kibana with the new token +- For long-running deployments, use a token with an appropriate TTL or create a periodic token + +### Permission denied errors + +- Verify the token has a policy granting read access to `/sys/metrics` +- Check Vault audit logs for permission denial details +- Example policy for metrics access: +```hcl +path "sys/metrics" { + capabilities = ["read"] +} +``` + +## Vendor Resources + +- [HashiCorp Vault Audit Devices](https://developer.hashicorp.com/vault/docs/audit) +- [HashiCorp Vault File Audit Device](https://developer.hashicorp.com/vault/docs/audit/file) +- [HashiCorp Vault Socket Audit Device](https://developer.hashicorp.com/vault/docs/audit/socket) +- [HashiCorp Vault Telemetry Configuration](https://developer.hashicorp.com/vault/docs/configuration/telemetry) +- [HashiCorp Vault Troubleshooting](https://developer.hashicorp.com/vault/docs/troubleshoot) + +# Documentation sites + +- [HashiCorp Vault Official Documentation](https://developer.hashicorp.com/vault/docs) +- [HashiCorp Vault API Documentation](https://developer.hashicorp.com/vault/api-docs) +- [HashiCorp Vault Audit Hash API](https://developer.hashicorp.com/vault/api-docs/system/audit-hash) +- [HashiCorp Vault Metrics API](https://developer.hashicorp.com/vault/api-docs/system/metrics) +- [HashiCorp Vault Configuration Reference](https://developer.hashicorp.com/vault/docs/configuration) +- [HashiCorp Vault Deployment Guide](https://developer.hashicorp.com/vault/tutorials/day-one-raft/raft-deployment-guide) +- [Elastic HashiCorp Vault Integration 
Documentation](https://docs.elastic.co/integrations/hashicorp_vault) + From 32ddd8018d02f6c97d8dccd9b17ae0dde8473c04 Mon Sep 17 00:00:00 2001 From: Michael Wolf Date: Fri, 31 Oct 2025 12:34:40 -0700 Subject: [PATCH 2/5] [hashicorp-vault] Update documentation Update documentation for the hashicorp_vault integration. This expands the information on the use-cases supported by the integration, and the data collected. It also reformats the set up instructions to make them easier to follow, and adds common troubleshooting issues. --- .../hashicorp_vault/_dev/build/docs/README.md | 67 +++-- packages/hashicorp_vault/changelog.yml | 5 + packages/hashicorp_vault/docs/README.md | 237 ++++++++++++++++-- .../docs/knowledge_base/service_info.md | 61 +++-- packages/hashicorp_vault/manifest.yml | 2 +- 5 files changed, 293 insertions(+), 79 deletions(-) diff --git a/packages/hashicorp_vault/_dev/build/docs/README.md b/packages/hashicorp_vault/_dev/build/docs/README.md index 3d1f2a28141..7fe42e0877d 100644 --- a/packages/hashicorp_vault/_dev/build/docs/README.md +++ b/packages/hashicorp_vault/_dev/build/docs/README.md @@ -15,7 +15,7 @@ This integration facilitates the following use cases: ### Compatibility This integration has been tested with HashiCorp Vault 1.11. -It requires Elastic Stack version 8.12.0 or higher, or version 9.0.0 and above. +It requires Elastic Stack version 8.12.0 or higher. ## What data does this integration collect? @@ -33,13 +33,13 @@ This integration collects the following types of data from HashiCorp Vault: - **For Audit Log Collection (Socket)**: A socket audit device can be configured to stream logs to a TCP endpoint where Elastic Agent is listening. - **For Operational Log Collection**: Vault must be configured to output logs in JSON format (`log_format = "json"`) and the log file must be accessible by Elastic Agent. - **For Metrics Collection**: + - The Vault telemetry endpoint must be enabled. - A Vault token with read access to the `/sys/metrics` API endpoint. - - Vault telemetry must be configured with `disable_hostname = true`. It is also recommended to set `enable_hostname_label = true`. - The Elastic Agent must have network access to the Vault API endpoint. ### Elastic Prerequisites -- Elastic Stack version 8.12.0 or higher (or 9.0.0+). +- Elastic Stack version 8.12.0 or higher. - Elastic Agent installed and enrolled in Fleet. ## How do I deploy this integration? @@ -49,14 +49,12 @@ This integration collects the following types of data from HashiCorp Vault: #### Setting up Audit Logs (File Audit Device) 1. Create a directory for audit logs on each Vault server: - ```bash - mkdir /var/log/vault - ``` + + `mkdir /var/log/vault` 2. Enable the file audit device in Vault: - ```bash - vault audit enable file file_path=/var/log/vault/audit.json - ``` + + `vault audit enable file file_path=/var/log/vault/audit.json` 3. Configure log rotation to prevent disk space issues. The following is an example using `logrotate`: ```bash @@ -83,16 +81,14 @@ This integration collects the following types of data from HashiCorp Vault: 1. Note the IP address and port where Elastic Agent will be listening (e.g., port `9007`). 2. **Important**: Configure and deploy the integration in Kibana *before* enabling the socket device in Vault, as Vault will immediately test the connection. 3. 
Enable the socket audit device in Vault, substituting the IP of your Elastic Agent: - ```bash - vault audit enable socket address=${ELASTIC_AGENT_IP}:9007 socket_type=tcp - ``` + + `vault audit enable socket address=${ELASTIC_AGENT_IP}:9007 socket_type=tcp` #### Setting up Operational Logs Add the following line to your Vault configuration file to enable JSON-formatted logs. Ensure the log output is directed to a file that Elastic Agent can read. -```hcl -log_format = "json" -``` + +`log_format = "json"` #### Setting up Metrics @@ -103,18 +99,24 @@ log_format = "json" enable_hostname_label = true } ``` + Restart the Vault server after saving this file. -2. Create a Vault policy that grants read access to the metrics endpoint. +2. Create a Vault policy file that grants read access to the metrics endpoint. ```hcl path "sys/metrics" { capabilities = ["read"] } ``` -3. Create a Vault token with this policy: - ```bash - vault token create -policy=metrics-read - ``` +3. Apply the policy. + + `vault policy write read-metrics metrics-policy.hcl` + +4. Create a Vault token with this policy: + + `vault token create -policy="read-metrics" -display-name="elastic-agent-token"` + + Save the token value, it will be needed to complete configuring the integration in Kibana. ### Onboard / configure in Kibana @@ -124,20 +126,23 @@ log_format = "json" 4. Configure the integration based on your data collection needs: **For Audit Logs (File)**: - - Enable the "Audit logs (file audit device)" input. + - Enable the "Logs from file" --> "Audit logs (file audit device)" input. - Specify the file path (default: `/var/log/vault/audit*.json*`). + **For Audit Logs (TCP Socket)**: - - Enable the "Audit logs (socket audit device)" input. + - Enable the "Logs from TCP socket" input. - Configure the `Listen Address` (default: `localhost`) and `Listen Port` (default: `9007`). - If Vault connects from a different host, set the Listen Address to `0.0.0.0`. + **For Operational Logs**: - - Enable the "Operation logs" input. + - Enable the "Logs from file" --> "Operation logs" input. - Specify the log file path (default: `/var/log/vault/log*.json*`). + **For Metrics**: - - Enable the "Vault metrics (prometheus)" input. + - Enable the "Metrics" input. - Enter the Vault host URL under `Hosts` (default: `http://localhost:8200`). - Provide the `Vault Token` created earlier. - Adjust the collection `Period` if needed (default: `30s`). @@ -148,11 +153,11 @@ log_format = "json" 1. **Check Agent Status**: In Fleet, verify that the Elastic Agent shows a "Healthy" status. 2. **Verify Data Ingestion**: - - Navigate to **Analytics > Discover** in Kibana. + - Navigate to **Discover** in Kibana. - Select the appropriate data view (`logs-hashicorp_vault.audit-*`, `logs-hashicorp_vault.log-*`, or `metrics-hashicorp_vault.metrics-*`). - Confirm that events are appearing with recent timestamps. 3. **View Dashboards**: - - Navigate to **Analytics > Dashboards**. + - Navigate to **Dashboards**. - Search for "Hashicorp Vault" to find the pre-built dashboards. - Verify that data is populating the dashboard panels. @@ -178,6 +183,7 @@ For help with Elastic ingest tools, check [Common problems](https://www.elastic. 
### Vendor Resources - [HashiCorp Vault Audit Devices](https://developer.hashicorp.com/vault/docs/audit) +- [HashiCorp Vault File Audit Device](https://developer.hashicorp.com/vault/docs/audit/file) - [HashiCorp Vault Telemetry Configuration](https://developer.hashicorp.com/vault/docs/configuration/telemetry) - [HashiCorp Vault Troubleshooting](https://developer.hashicorp.com/vault/docs/troubleshoot) @@ -199,6 +205,10 @@ The `audit` data stream collects audit logs from the file or socket audit device {{ fields "audit" }} +#### audit sample event + +{{event "audit"}} + ### log The `log` data stream collects operational logs from Vault's standard log file. @@ -207,6 +217,10 @@ The `log` data stream collects operational logs from Vault's standard log file. {{ fields "log" }} +#### log sample event + +{{event "log"}} + ### metrics The `metrics` data stream collects Prometheus-formatted metrics from the Vault telemetry endpoint. @@ -221,4 +235,3 @@ The `metrics` data stream collects Prometheus-formatted metrics from the Vault t ### API usage These APIs are used with this integration: * **`/v1/sys/metrics`**: Used to collect Prometheus-formatted telemetry data. See the [HashiCorp Vault Metrics API documentation](https://developer.hashicorp.com/vault/api-docs/system/metrics) for more information. -* **`/sys/audit-hash`**: Can be used to manually verify the hash of a secret found in an audit log. See the [HashiCorp Vault Audit Hash API documentation](https://developer.hashicorp.com/vault/api-docs/system/audit-hash) for more information. diff --git a/packages/hashicorp_vault/changelog.yml b/packages/hashicorp_vault/changelog.yml index 07eab9c85aa..41348e2e89c 100644 --- a/packages/hashicorp_vault/changelog.yml +++ b/packages/hashicorp_vault/changelog.yml @@ -1,4 +1,9 @@ # newer versions go on top +- version: "1.28.3" + changes: + - description: Update documentation + type: bugfix + link: https://github.com/elastic/integrations/pull/999999 - version: "1.28.2" changes: - description: Generate processor tags and normalize error handler. diff --git a/packages/hashicorp_vault/docs/README.md b/packages/hashicorp_vault/docs/README.md index 299536a3e5b..271d38e138e 100644 --- a/packages/hashicorp_vault/docs/README.md +++ b/packages/hashicorp_vault/docs/README.md @@ -15,7 +15,7 @@ This integration facilitates the following use cases: ### Compatibility This integration has been tested with HashiCorp Vault 1.11. -It requires Elastic Stack version 8.12.0 or higher, or version 9.0.0 and above. +It requires Elastic Stack version 8.12.0 or higher. ## What data does this integration collect? @@ -33,13 +33,13 @@ This integration collects the following types of data from HashiCorp Vault: - **For Audit Log Collection (Socket)**: A socket audit device can be configured to stream logs to a TCP endpoint where Elastic Agent is listening. - **For Operational Log Collection**: Vault must be configured to output logs in JSON format (`log_format = "json"`) and the log file must be accessible by Elastic Agent. - **For Metrics Collection**: + - The Vault telemetry endpoint must be enabled. - A Vault token with read access to the `/sys/metrics` API endpoint. - - Vault telemetry must be configured with `disable_hostname = true`. It is also recommended to set `enable_hostname_label = true`. - The Elastic Agent must have network access to the Vault API endpoint. ### Elastic Prerequisites -- Elastic Stack version 8.12.0 or higher (or 9.0.0+). +- Elastic Stack version 8.12.0 or higher. 
- Elastic Agent installed and enrolled in Fleet. ## How do I deploy this integration? @@ -49,14 +49,12 @@ This integration collects the following types of data from HashiCorp Vault: #### Setting up Audit Logs (File Audit Device) 1. Create a directory for audit logs on each Vault server: - ```bash - mkdir /var/log/vault - ``` + + `mkdir /var/log/vault` 2. Enable the file audit device in Vault: - ```bash - vault audit enable file file_path=/var/log/vault/audit.json - ``` + + `vault audit enable file file_path=/var/log/vault/audit.json` 3. Configure log rotation to prevent disk space issues. The following is an example using `logrotate`: ```bash @@ -83,16 +81,14 @@ This integration collects the following types of data from HashiCorp Vault: 1. Note the IP address and port where Elastic Agent will be listening (e.g., port `9007`). 2. **Important**: Configure and deploy the integration in Kibana *before* enabling the socket device in Vault, as Vault will immediately test the connection. 3. Enable the socket audit device in Vault, substituting the IP of your Elastic Agent: - ```bash - vault audit enable socket address=${ELASTIC_AGENT_IP}:9007 socket_type=tcp - ``` + + `vault audit enable socket address=${ELASTIC_AGENT_IP}:9007 socket_type=tcp` #### Setting up Operational Logs Add the following line to your Vault configuration file to enable JSON-formatted logs. Ensure the log output is directed to a file that Elastic Agent can read. -```hcl -log_format = "json" -``` + +`log_format = "json"` #### Setting up Metrics @@ -103,18 +99,24 @@ log_format = "json" enable_hostname_label = true } ``` + Restart the Vault server after saving this file. -2. Create a Vault policy that grants read access to the metrics endpoint. +2. Create a Vault policy file that grants read access to the metrics endpoint. ```hcl path "sys/metrics" { capabilities = ["read"] } ``` -3. Create a Vault token with this policy: - ```bash - vault token create -policy=metrics-read - ``` +3. Apply the policy. + + `vault policy write read-metrics metrics-policy.hcl` + +4. Create a Vault token with this policy: + + `vault token create -policy="read-metrics" -display-name="elastic-agent-token"` + + Save the token value, it will be needed to complete configuring the integration in Kibana. ### Onboard / configure in Kibana @@ -124,20 +126,23 @@ log_format = "json" 4. Configure the integration based on your data collection needs: **For Audit Logs (File)**: - - Enable the "Audit logs (file audit device)" input. + - Enable the "Logs from file" --> "Audit logs (file audit device)" input. - Specify the file path (default: `/var/log/vault/audit*.json*`). + **For Audit Logs (TCP Socket)**: - - Enable the "Audit logs (socket audit device)" input. + - Enable the "Logs from TCP socket" input. - Configure the `Listen Address` (default: `localhost`) and `Listen Port` (default: `9007`). - If Vault connects from a different host, set the Listen Address to `0.0.0.0`. + **For Operational Logs**: - - Enable the "Operation logs" input. + - Enable the "Logs from file" --> "Operation logs" input. - Specify the log file path (default: `/var/log/vault/log*.json*`). + **For Metrics**: - - Enable the "Vault metrics (prometheus)" input. + - Enable the "Metrics" input. - Enter the Vault host URL under `Hosts` (default: `http://localhost:8200`). - Provide the `Vault Token` created earlier. - Adjust the collection `Period` if needed (default: `30s`). @@ -148,11 +153,11 @@ log_format = "json" 1. 
**Check Agent Status**: In Fleet, verify that the Elastic Agent shows a "Healthy" status. 2. **Verify Data Ingestion**: - - Navigate to **Analytics > Discover** in Kibana. + - Navigate to **Discover** in Kibana. - Select the appropriate data view (`logs-hashicorp_vault.audit-*`, `logs-hashicorp_vault.log-*`, or `metrics-hashicorp_vault.metrics-*`). - Confirm that events are appearing with recent timestamps. 3. **View Dashboards**: - - Navigate to **Analytics > Dashboards**. + - Navigate to **Dashboards**. - Search for "Hashicorp Vault" to find the pre-built dashboards. - Verify that data is populating the dashboard panels. @@ -178,6 +183,7 @@ For help with Elastic ingest tools, check [Common problems](https://www.elastic. ### Vendor Resources - [HashiCorp Vault Audit Devices](https://developer.hashicorp.com/vault/docs/audit) +- [HashiCorp Vault File Audit Device](https://developer.hashicorp.com/vault/docs/audit/file) - [HashiCorp Vault Telemetry Configuration](https://developer.hashicorp.com/vault/docs/configuration/telemetry) - [HashiCorp Vault Troubleshooting](https://developer.hashicorp.com/vault/docs/troubleshoot) @@ -308,6 +314,103 @@ The `audit` data stream collects audit logs from the file or socket audit device | user.id | Unique identifier of the user. | keyword | +#### audit sample event + +An example event for `audit` looks as following: + +```json +{ + "@timestamp": "2023-09-26T13:07:49.743Z", + "agent": { + "ephemeral_id": "5bbd86cc-8032-432d-be82-fae8f624ed98", + "id": "f25d13cd-18cc-4e73-822c-c4f849322623", + "name": "docker-fleet-agent", + "type": "filebeat", + "version": "8.10.1" + }, + "data_stream": { + "dataset": "hashicorp_vault.audit", + "namespace": "ep", + "type": "logs" + }, + "ecs": { + "version": "8.17.0" + }, + "elastic_agent": { + "id": "f25d13cd-18cc-4e73-822c-c4f849322623", + "snapshot": false, + "version": "8.10.1" + }, + "event": { + "action": "update", + "agent_id_status": "verified", + "category": [ + "authentication" + ], + "dataset": "hashicorp_vault.audit", + "id": "0b1b9013-da54-633d-da69-8575e6794ed3", + "ingested": "2023-09-26T13:08:15Z", + "kind": "event", + "original": "{\"time\":\"2023-09-26T13:07:49.743284857Z\",\"type\":\"request\",\"auth\":{\"token_type\":\"default\"},\"request\":{\"id\":\"0b1b9013-da54-633d-da69-8575e6794ed3\",\"operation\":\"update\",\"namespace\":{\"id\":\"root\"},\"path\":\"sys/audit/test\"}}", + "outcome": "success", + "type": [ + "info" + ] + }, + "hashicorp_vault": { + "audit": { + "auth": { + "token_type": "default" + }, + "request": { + "id": "0b1b9013-da54-633d-da69-8575e6794ed3", + "namespace": { + "id": "root" + }, + "operation": "update", + "path": "sys/audit/test" + }, + "type": "request" + } + }, + "host": { + "architecture": "x86_64", + "containerized": false, + "hostname": "docker-fleet-agent", + "id": "28da52b32df94b50aff67dfb8f1be3d6", + "ip": [ + "192.168.80.5" + ], + "mac": [ + "02-42-C0-A8-50-05" + ], + "name": "docker-fleet-agent", + "os": { + "codename": "focal", + "family": "debian", + "kernel": "5.10.104-linuxkit", + "name": "Ubuntu", + "platform": "ubuntu", + "type": "linux", + "version": "20.04.6 LTS (Focal Fossa)" + } + }, + "input": { + "type": "log" + }, + "log": { + "file": { + "path": "/tmp/service_logs/vault/audit.json" + }, + "offset": 0 + }, + "tags": [ + "preserve_original_event", + "hashicorp-vault-audit" + ] +} +``` + ### log The `log` data stream collects operational logs from Vault's standard log file. 
@@ -339,6 +442,87 @@ The `log` data stream collects operational logs from Vault's standard log file. | tags | List of keywords used to tag each event. | keyword | +#### log sample event + +An example event for `log` looks as following: + +```json +{ + "@timestamp": "2023-09-26T13:09:08.587Z", + "agent": { + "ephemeral_id": "5bbd86cc-8032-432d-be82-fae8f624ed98", + "id": "f25d13cd-18cc-4e73-822c-c4f849322623", + "name": "docker-fleet-agent", + "type": "filebeat", + "version": "8.10.1" + }, + "data_stream": { + "dataset": "hashicorp_vault.log", + "namespace": "ep", + "type": "logs" + }, + "ecs": { + "version": "8.17.0" + }, + "elastic_agent": { + "id": "f25d13cd-18cc-4e73-822c-c4f849322623", + "snapshot": false, + "version": "8.10.1" + }, + "event": { + "agent_id_status": "verified", + "dataset": "hashicorp_vault.log", + "ingested": "2023-09-26T13:09:35Z", + "kind": "event", + "original": "{\"@level\":\"info\",\"@message\":\"proxy environment\",\"@timestamp\":\"2023-09-26T13:09:08.587324Z\",\"http_proxy\":\"\",\"https_proxy\":\"\",\"no_proxy\":\"\"}" + }, + "hashicorp_vault": { + "log": { + "http_proxy": "", + "https_proxy": "", + "no_proxy": "" + } + }, + "host": { + "architecture": "x86_64", + "containerized": false, + "hostname": "docker-fleet-agent", + "id": "28da52b32df94b50aff67dfb8f1be3d6", + "ip": [ + "192.168.80.5" + ], + "mac": [ + "02-42-C0-A8-50-05" + ], + "name": "docker-fleet-agent", + "os": { + "codename": "focal", + "family": "debian", + "kernel": "5.10.104-linuxkit", + "name": "Ubuntu", + "platform": "ubuntu", + "type": "linux", + "version": "20.04.6 LTS (Focal Fossa)" + } + }, + "input": { + "type": "log" + }, + "log": { + "file": { + "path": "/tmp/service_logs/log.json" + }, + "level": "info", + "offset": 709 + }, + "message": "proxy environment", + "tags": [ + "preserve_original_event", + "hashicorp-vault-log" + ] +} +``` + ### metrics The `metrics` data stream collects Prometheus-formatted metrics from the Vault telemetry endpoint. @@ -456,4 +640,3 @@ ssl.client_authentication: "optional" ### API usage These APIs are used with this integration: * **`/v1/sys/metrics`**: Used to collect Prometheus-formatted telemetry data. See the [HashiCorp Vault Metrics API documentation](https://developer.hashicorp.com/vault/api-docs/system/metrics) for more information. -* **`/sys/audit-hash`**: Can be used to manually verify the hash of a secret found in an audit log. See the [HashiCorp Vault Audit Hash API documentation](https://developer.hashicorp.com/vault/api-docs/system/audit-hash) for more information. diff --git a/packages/hashicorp_vault/docs/knowledge_base/service_info.md b/packages/hashicorp_vault/docs/knowledge_base/service_info.md index ab268b5329d..85d6b3f97d6 100644 --- a/packages/hashicorp_vault/docs/knowledge_base/service_info.md +++ b/packages/hashicorp_vault/docs/knowledge_base/service_info.md @@ -23,7 +23,7 @@ This integration collects the following types of data from HashiCorp Vault: This integration has been tested with HashiCorp Vault 1.11. -The integration requires Elastic Stack version 8.12.0 or higher, or version 9.0.0 and above. +The integration requires Elastic Stack version 8.12.0 or higher. 
## Scaling and Performance @@ -67,7 +67,7 @@ The following prerequisites are required on the HashiCorp Vault side: ## Elastic prerequisites -- Elastic Stack version 8.12.0 or higher (or 9.0.0+) +- Elastic Stack version 8.12.0 or higher - Elastic Agent installed and enrolled in Fleet ## Vendor set up steps @@ -126,20 +126,32 @@ Direct Vault's log output to a file that Elastic Agent can read. ### Setting up Metrics -1. Configure Vault telemetry in your Vault configuration file: -```hcl -telemetry { - disable_hostname = true - enable_hostname_label = true -} -``` - -2. Create a Vault token with read access to the metrics endpoint: -```bash -vault token create -policy=metrics-read -``` - -Ensure the token has a policy that grants read access to `sys/metrics`. +1. Configure Vault telemetry in your Vault configuration file: + ```hcl + telemetry { + disable_hostname = true + enable_hostname_label = true + } + ``` + Restart the Vault server after saving this file. + +2. Create a Vault policy file that grants read access to the metrics endpoint. + ```hcl + path "sys/metrics" { + capabilities = ["read"] + } + ``` + +3. Apply the policy. + ```bash + vault policy write read-metrics metrics-policy.hcl + ``` + +4. Create a Vault token with this policy: + ```bash + vault token create -policy="read-metrics" -display-name="elastic-agent-token" + ``` + Save the token value, it will be needed to complete configuring the integration in Kibana. ## Kibana set up steps @@ -152,21 +164,21 @@ Ensure the token has a policy that grants read access to `sys/metrics`. 4. Configure the integration based on your data collection needs: **For Audit Logs (File)**: - - Enable the "Audit logs (file audit device)" input + - Enable the "Logs from file" --> "Audit logs (file audit device)" input - Specify the file path (default: `/var/log/vault/audit*.json*`) - Optionally enable "Preserve original event" to keep raw logs **For Audit Logs (TCP Socket)**: - - Enable the "Audit logs (socket audit device)" input + - Enable the "Logs from TCP socket" input - Configure the listen address (default: `localhost`) and port (default: `9007`) - If Vault will connect remotely, set listen address to `0.0.0.0` **For Operational Logs**: - - Enable the "Operation logs" input + - Enable the "Logs from file" --> "Operation logs" input - Specify the log file path (default: `/var/log/vault/log*.json*`) **For Metrics**: - - Enable the "Vault metrics (prometheus)" input + - Enable the "Metrics" input - Enter the Vault host URL (default: `http://localhost:8200`) - Provide the Vault token with read access to `/sys/metrics` - Optionally configure SSL settings if using HTTPS @@ -183,7 +195,7 @@ After configuring the integration, validate that data is flowing correctly: 1. **Check Agent Status**: In Fleet, verify that the Elastic Agent shows a "Healthy" status 2. **Verify Data Ingestion**: - - Navigate to **Analytics > Discover** in Kibana + - Navigate to **Discover** in Kibana - Select the appropriate data view for each data stream: - `logs-hashicorp_vault.audit-*` for audit logs - `logs-hashicorp_vault.log-*` for operational logs @@ -194,11 +206,11 @@ After configuring the integration, validate that data is flowing correctly: 4. 
**View Dashboards**: - Navigate to **Analytics > Dashboards** - - Open the "Hashicorp Vault Audit Log Dashboard" to view audit log visualizations - - Open the "Hashicorp Vault Log Dashboard" to view operational log visualizations + - Open the "[Hashicorp Vault] Audit Logs" to view audit log visualizations + - Open the "[Hashicorp Vault] Operational Logs" to view operational log visualizations - Verify that data is populating the dashboard panels -5. **Check Metrics**: For metrics collection, verify that Prometheus metrics are being collected by searching for documents with `hashicorp_vault.metrics.*` fields +5. **Check Metrics**: For metrics collection, verify that metrics are being collected by searching for documents with `hashicorp_vault.metrics.*` fields # Troubleshooting @@ -270,3 +282,4 @@ path "sys/metrics" { - [HashiCorp Vault Deployment Guide](https://developer.hashicorp.com/vault/tutorials/day-one-raft/raft-deployment-guide) - [Elastic HashiCorp Vault Integration Documentation](https://docs.elastic.co/integrations/hashicorp_vault) + diff --git a/packages/hashicorp_vault/manifest.yml b/packages/hashicorp_vault/manifest.yml index 76422c6e3a6..1d4b0489183 100644 --- a/packages/hashicorp_vault/manifest.yml +++ b/packages/hashicorp_vault/manifest.yml @@ -1,7 +1,7 @@ format_version: "3.0.3" name: hashicorp_vault title: Hashicorp Vault -version: "1.28.2" +version: "1.28.3" description: Collect logs and metrics from Hashicorp Vault with Elastic Agent. type: integration categories: From bad2e07c891f21baa90caff023bb4881de03b6c3 Mon Sep 17 00:00:00 2001 From: Michael Wolf Date: Fri, 31 Oct 2025 12:43:52 -0700 Subject: [PATCH 3/5] update changelog --- packages/hashicorp_vault/changelog.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/hashicorp_vault/changelog.yml b/packages/hashicorp_vault/changelog.yml index 41348e2e89c..553b7126c22 100644 --- a/packages/hashicorp_vault/changelog.yml +++ b/packages/hashicorp_vault/changelog.yml @@ -3,7 +3,7 @@ changes: - description: Update documentation type: bugfix - link: https://github.com/elastic/integrations/pull/999999 + link: https://github.com/elastic/integrations/pull/15833 - version: "1.28.2" changes: - description: Generate processor tags and normalize error handler. From 51c80a58bce1b65f4d66ea90ac6f622111385203 Mon Sep 17 00:00:00 2001 From: Michael Wolf Date: Fri, 31 Oct 2025 12:46:14 -0700 Subject: [PATCH 4/5] Update compatible vault versions --- packages/hashicorp_vault/_dev/build/docs/README.md | 2 +- packages/hashicorp_vault/docs/README.md | 2 +- packages/hashicorp_vault/docs/knowledge_base/service_info.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/packages/hashicorp_vault/_dev/build/docs/README.md b/packages/hashicorp_vault/_dev/build/docs/README.md index 7fe42e0877d..899d6c6e2a6 100644 --- a/packages/hashicorp_vault/_dev/build/docs/README.md +++ b/packages/hashicorp_vault/_dev/build/docs/README.md @@ -14,7 +14,7 @@ This integration facilitates the following use cases: ### Compatibility -This integration has been tested with HashiCorp Vault 1.11. +This integration has been tested with HashiCorp Vault 1.11 and 1.21. It requires Elastic Stack version 8.12.0 or higher. ## What data does this integration collect? 
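As a quick smoke test of the metrics prerequisites (a reachable Vault API endpoint and a token with read access to `sys/metrics`), the telemetry endpoint used by this integration can be queried directly. A minimal sketch, assuming Vault listens on `http://localhost:8200` and `$VAULT_TOKEN` holds the token created for the integration:

```bash
# Request Prometheus-formatted metrics from Vault's telemetry endpoint.
curl --silent --header "X-Vault-Token: $VAULT_TOKEN" \
  "http://localhost:8200/v1/sys/metrics?format=prometheus"
```

A successful call returns plain-text Prometheus metrics; an HTTP 403 response usually means the token's policy does not grant read access to `sys/metrics`.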
diff --git a/packages/hashicorp_vault/docs/README.md b/packages/hashicorp_vault/docs/README.md index 271d38e138e..5ba6b1a61e6 100644 --- a/packages/hashicorp_vault/docs/README.md +++ b/packages/hashicorp_vault/docs/README.md @@ -14,7 +14,7 @@ This integration facilitates the following use cases: ### Compatibility -This integration has been tested with HashiCorp Vault 1.11. +This integration has been tested with HashiCorp Vault 1.11 and 1.21. It requires Elastic Stack version 8.12.0 or higher. ## What data does this integration collect? diff --git a/packages/hashicorp_vault/docs/knowledge_base/service_info.md b/packages/hashicorp_vault/docs/knowledge_base/service_info.md index 85d6b3f97d6..63942816226 100644 --- a/packages/hashicorp_vault/docs/knowledge_base/service_info.md +++ b/packages/hashicorp_vault/docs/knowledge_base/service_info.md @@ -21,7 +21,7 @@ This integration collects the following types of data from HashiCorp Vault: ## Compatibility -This integration has been tested with HashiCorp Vault 1.11. +This integration has been tested with HashiCorp Vault 1.11 and 1.21. The integration requires Elastic Stack version 8.12.0 or higher. From 1dc8592907edb926725992dc3794f53409a9f0f1 Mon Sep 17 00:00:00 2001 From: Michael Wolf Date: Tue, 4 Nov 2025 14:25:24 -0800 Subject: [PATCH 5/5] Update advice on blocking network socket Add warning and troubleshooting advice on use of a TCP socket destination, which may result in blocked vault operations. --- packages/hashicorp_vault/_dev/build/docs/README.md | 11 +++++++++-- packages/hashicorp_vault/docs/README.md | 11 +++++++++-- .../docs/knowledge_base/service_info.md | 10 ++++++++++ 3 files changed, 28 insertions(+), 4 deletions(-) diff --git a/packages/hashicorp_vault/_dev/build/docs/README.md b/packages/hashicorp_vault/_dev/build/docs/README.md index 899d6c6e2a6..11c18280047 100644 --- a/packages/hashicorp_vault/_dev/build/docs/README.md +++ b/packages/hashicorp_vault/_dev/build/docs/README.md @@ -2,7 +2,7 @@ ## Overview -The Hashicorp Vault integration for Elastic enables the collection of logs and metrics from Hashicorp Vault. This allows you to monitor Vault server health, track access to secrets, and maintain a detailed audit trail for security and compliance. +The Hashicorp Vault integration for Elastic enables you to collect logs and metrics from Hashicorp Vault. This allows you to ingest audit logs for security monitoring, collect operational logs for troubleshooting, and gather metrics to monitor the overall health and performance of your Vault servers. This integration facilitates the following use cases: - **Security Monitoring and Auditing**: Track all access to secrets, who accessed them, and when, providing a detailed audit trail for compliance and security investigations. @@ -78,6 +78,12 @@ This integration collects the following types of data from HashiCorp Vault: #### Setting up Audit Logs (Socket Audit Device) +> **Warning: Risk of Unresponsive Vault with TCP Socket Audit Devices** +> +> If a TCP socket audit log destination (like the Elastic Agent) becomes unavailable, Vault may block and stop processing all requests until the connection is restored. This can lead to a service outage. +> +> To mitigate this risk, HashiCorp strongly recommends that a socket audit device is configured as a secondary device, alongside a primary, non-socket audit device (like the `file` audit device). For more details, see the official documentation on [Blocked Audit Devices](https://developer.hashicorp.com/vault/docs/audit/socket#configuration). 
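Before enabling the socket device, it can help to confirm that a non-socket audit device (such as `file`) is already enabled, as the warning above recommends — a minimal check using the Vault CLI:

```bash
# List currently enabled audit devices and their options; a non-socket device
# such as file/ should already be present before adding the socket device.
vault audit list -detailed
```

If only a socket device were enabled, an unreachable Elastic Agent would leave Vault with no working audit device and could block all requests.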
+ 1. Note the IP address and port where Elastic Agent will be listening (e.g., port `9007`). 2. **Important**: Configure and deploy the integration in Kibana *before* enabling the socket device in Vault, as Vault will immediately test the connection. 3. Enable the socket audit device in Vault, substituting the IP of your Elastic Agent: @@ -167,6 +173,8 @@ For help with Elastic ingest tools, check [Common problems](https://www.elastic. ### Common Configuration Issues +- **Vault is Unresponsive or Stops Accepting Requests**: + - If Vault stops responding to requests, you may have a blocked audit device. This can happen if a TCP socket destination is unavailable or a file audit device cannot write to disk. Review Vault's operational logs for errors related to audit logging. For more information on identifying and resolving this, see the [Blocked Audit Device Behavior](https://developer.hashicorp.com/vault/tutorials/monitoring/blocked-audit-devices#blocked-audit-device-behavior) tutorial. - **No Data Collected**: - Verify Elastic Agent is healthy in Fleet. - Ensure the user running Elastic Agent has read permissions on log files. @@ -191,7 +199,6 @@ For help with Elastic ingest tools, check [Common problems](https://www.elastic. - **Audit Log Performance**: Vault's file audit device provides the strongest delivery guarantees. Ensure adequate disk I/O capacity, as Vault will block operations if it cannot write audit logs. - **Metrics Collection**: The default collection interval is 30 seconds. Adjust this period based on your monitoring needs and Vault server load. -- **TCP Socket Considerations**: When using the socket audit device, ensure network reliability between Vault and the Elastic Agent. If the TCP connection is unavailable, Vault operations will be blocked until it is restored. For more information on architectures that can be used for scaling this integration, check the [Ingest Architectures](https://www.elastic.co/docs/manage-data/ingest/ingest-reference-architectures) documentation. diff --git a/packages/hashicorp_vault/docs/README.md b/packages/hashicorp_vault/docs/README.md index 5ba6b1a61e6..2ec5f4ae72c 100644 --- a/packages/hashicorp_vault/docs/README.md +++ b/packages/hashicorp_vault/docs/README.md @@ -2,7 +2,7 @@ ## Overview -The Hashicorp Vault integration for Elastic enables the collection of logs and metrics from Hashicorp Vault. This allows you to monitor Vault server health, track access to secrets, and maintain a detailed audit trail for security and compliance. +The Hashicorp Vault integration for Elastic enables you to collect logs and metrics from Hashicorp Vault. This allows you to ingest audit logs for security monitoring, collect operational logs for troubleshooting, and gather metrics to monitor the overall health and performance of your Vault servers. This integration facilitates the following use cases: - **Security Monitoring and Auditing**: Track all access to secrets, who accessed them, and when, providing a detailed audit trail for compliance and security investigations. @@ -78,6 +78,12 @@ This integration collects the following types of data from HashiCorp Vault: #### Setting up Audit Logs (Socket Audit Device) +> **Warning: Risk of Unresponsive Vault with TCP Socket Audit Devices** +> +> If a TCP socket audit log destination (like the Elastic Agent) becomes unavailable, Vault may block and stop processing all requests until the connection is restored. This can lead to a service outage. 
+> +> To mitigate this risk, HashiCorp strongly recommends that a socket audit device is configured as a secondary device, alongside a primary, non-socket audit device (like the `file` audit device). For more details, see the official documentation on [Blocked Audit Devices](https://developer.hashicorp.com/vault/docs/audit/socket#configuration). + 1. Note the IP address and port where Elastic Agent will be listening (e.g., port `9007`). 2. **Important**: Configure and deploy the integration in Kibana *before* enabling the socket device in Vault, as Vault will immediately test the connection. 3. Enable the socket audit device in Vault, substituting the IP of your Elastic Agent: @@ -167,6 +173,8 @@ For help with Elastic ingest tools, check [Common problems](https://www.elastic. ### Common Configuration Issues +- **Vault is Unresponsive or Stops Accepting Requests**: + - If Vault stops responding to requests, you may have a blocked audit device. This can happen if a TCP socket destination is unavailable or a file audit device cannot write to disk. Review Vault's operational logs for errors related to audit logging. For more information on identifying and resolving this, see the [Blocked Audit Device Behavior](https://developer.hashicorp.com/vault/tutorials/monitoring/blocked-audit-devices#blocked-audit-device-behavior) tutorial. - **No Data Collected**: - Verify Elastic Agent is healthy in Fleet. - Ensure the user running Elastic Agent has read permissions on log files. @@ -191,7 +199,6 @@ For help with Elastic ingest tools, check [Common problems](https://www.elastic. - **Audit Log Performance**: Vault's file audit device provides the strongest delivery guarantees. Ensure adequate disk I/O capacity, as Vault will block operations if it cannot write audit logs. - **Metrics Collection**: The default collection interval is 30 seconds. Adjust this period based on your monitoring needs and Vault server load. -- **TCP Socket Considerations**: When using the socket audit device, ensure network reliability between Vault and the Elastic Agent. If the TCP connection is unavailable, Vault operations will be blocked until it is restored. For more information on architectures that can be used for scaling this integration, check the [Ingest Architectures](https://www.elastic.co/docs/manage-data/ingest/ingest-reference-architectures) documentation. diff --git a/packages/hashicorp_vault/docs/knowledge_base/service_info.md b/packages/hashicorp_vault/docs/knowledge_base/service_info.md index 63942816226..4d160996921 100644 --- a/packages/hashicorp_vault/docs/knowledge_base/service_info.md +++ b/packages/hashicorp_vault/docs/knowledge_base/service_info.md @@ -115,6 +115,11 @@ vault audit enable socket address=${ELASTIC_AGENT_IP}:9007 socket_type=tcp **Note**: Configure the integration in Kibana first before enabling the socket audit device, as Vault will test the connection. +**Warning: Risk of Unresponsive Vault with TCP Socket Audit Devices**: If a TCP socket audit log destination (like the Elastic Agent) +becomes unavailable, Vault may block and stop processing all requests until the connection is restored. This can lead to a service outage. +To mitigate this risk, HashiCorp strongly recommends that a socket audit device is configured as a secondary device, alongside a primary, +non-socket audit device (like the `file` audit device). For more details, see the official documentation on [Blocked Audit Devices](https://developer.hashicorp.com/vault/docs/audit/socket#configuration). 
+
 
 ### Setting up Operational Logs
 
 Configure Vault to output logs in JSON format by adding to your Vault configuration file:
@@ -237,6 +242,11 @@ After configuring the integration, validate that data is flowing correctly:
 - **Telemetry Configuration**: Confirm Vault telemetry is properly configured with `disable_hostname = true`
 - **Network Access**: Verify Elastic Agent can reach the Vault API endpoint
 
+### Vault is Unresponsive or Stops Accepting Requests
+If Vault stops responding to requests, you may have a blocked audit device. This can happen if a TCP socket destination is unavailable or a file
+audit device cannot write to disk. Review Vault's operational logs for errors related to audit logging. For more information on identifying and
+resolving this, see the [Blocked Audit Device Behavior](https://developer.hashicorp.com/vault/tutorials/monitoring/blocked-audit-devices#blocked-audit-device-behavior) tutorial.
+
 ## Ingestion Errors
 
 If `error.message` appears in ingested data: