
darktrace/terraform-aws-vsensor


Overview

This Quick Start terraform module deploys Darktrace vSensor on the Amazon Web Services (AWS) Cloud.

Darktrace vSensor probes analyze raw data from mirrored virtual private cloud (VPC) traffic, permitting a connected Darktrace service to learn traffic patterns and identify threats. Darktrace vSensors are only used in conjunction with a Darktrace cloud service offering or a physical Darktrace appliance.

Darktrace vSensors accept traffic from mirroring services (such as AWS Traffic Mirroring) and from Darktrace osSensor agents. Darktrace osSensors can be configured on virtual machines and containerized applications. Darktrace osSensors are available for Linux, Windows, and any system that can run the Docker Engine.

Darktrace vSensors can also be configured to accept syslog feeds from third-party security information and event management (SIEM) tools.

Darktrace vSensors extract metadata from the traffic sources and submit it to a connected Darktrace cloud service or physical Darktrace deployment over port 443 (TLS). The vSensor also produces PCAP (packet capture) data for forensic analysis, which is stored in an Amazon Simple Storage Service (Amazon S3) bucket.

This guide covers the steps necessary to deploy this Quick Start.

Architecture Diagram


This module sets up the following:

  • A highly available architecture (when 2+ Availability Zones are used).

  • (Optional) A virtual private cloud (VPC) configured with public and private subnets, according to AWS best practices, to provide you with your own virtual network on AWS.

  • In the public subnets:

    • Managed network address translation (NAT) gateways* to allow outbound internet access to Darktrace vSensor instances.
    • Linux bastion host* managing inbound Secure Shell (SSH) access to Darktrace vSensor instances in the private subnets.
  • In the private subnets:

    • An Auto Scaling group of Darktrace vSensor probes hosted on Amazon EC2 instances.
  • VPC traffic mirroring to send mirrored traffic to a Network Load Balancer.

  • A Network Load Balancer to distribute monitored VPC traffic to Darktrace vSensor instances.

  • An Amazon S3 bucket to store packets captured by Darktrace vSensor.

  • Amazon CloudWatch to provide:

    • Logs from Darktrace vSensor EC2 instances.
    • Metrics from Darktrace vSensor EC2 instances.
  • (Optional) AWS Systems Manager Session Manager to manage the vSensors through an interactive one-click browser-based shell or through the AWS CLI.

* If the module does not create a new VPC, the components marked with asterisks will be skipped.

Deployment options

This Quick Start terraform module provides the following deployment options:

  • Deploy Darktrace vSensor into an existing VPC. This option provisions Darktrace vSensor in your existing AWS infrastructure.

  • Deploy Darktrace vSensor into a new VPC. This option builds a new AWS environment that consists of the VPC, subnets, NAT gateways, security groups, bastion host (optional), and other infrastructure components. It then deploys Darktrace vSensor into this new VPC.

AWS Systems Manager Session Manager for access to the vSensors can be enabled in any of the deployments. This works independently of the bastion host deployed by the module, or any remote access provided outside the module. Session Manager provides secure and auditable edge device and instance management without needing to open inbound ports, maintain bastion hosts, or manage SSH keys.

To minimise traffic costs, Cross-Zone Load Balancing is disabled by default. This requires at least one vSensor in each Availability Zone you need to monitor. There is an option to enable Cross-Zone Load Balancing (see input variable cross_zone_load_balancing_enable for details), as shown in the example below.
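
For example, enabling it requires a single extra input. This sketch reuses the existing-VPC example from the Usage section; all other values are placeholders as in that section:

module "vsensors" {
  source = "git::https://github.com/darktrace/terraform-aws-vsensor?ref=<version>"

  deployment_prefix = "dt"

  vpc_id              = "vpc-XXXX"
  vpc_private_subnets = ["subnet-XXXXXXX", "subnet-YYYYYYY"]

  update_key         = "update_key_parameter_store_name"
  push_token         = "push_token_parameter_store_name"
  instance_host_name = "instance_host_name-XXXXXXX"

  # Distribute mirrored traffic across vSensors in all Availability Zones.
  cross_zone_load_balancing_enable = true
}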

Pre-deployment steps

Register a push token and obtain a Darktrace vSensor update key.

Register a new push token to enable the connection between vSensor probes and an existing Darktrace on-premises or cloud instance. All of the vSensor instances in one deployment should share the same push token.

  1. Log into the Darktrace console.

  2. From the main menu, choose Admin > System Config, then access the "Settings" page.

  3. Locate the "Push Probe Tokens" section. At the bottom of the list of probes is a field to create a new token. Enter a label for the vSensor deployment.

  4. Choose Add. You will need to record two values from the resulting window: the vSensor Update Key (also found on the Darktrace Customer Portal) and the Push Token. The Push Token is only displayed once.

  5. In AWS Systems Manager (Parameter Store) create a new parameter with a unique name, e.g. darktrace_vsensor_update_key, with type SecureString and the value of the vSensor Update Key obtained previously.

  6. In AWS Systems Manager (Parameter Store) create a new parameter with a unique name, e.g. darktrace_push_token, with type SecureString and the value of the Push Token obtained previously. (Any additional Darktrace master deployments will require their own push tokens.) A terraform sketch for creating both parameters follows this list.
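
If you prefer to manage these parameters with terraform rather than the console, a minimal sketch follows. The parameter names are examples; note that the secret values will be recorded in the terraform state file, so protect the state accordingly:

variable "vsensor_update_key" {
  description = "vSensor Update Key from the Darktrace Customer Portal"
  type        = string
  sensitive   = true
}

variable "vsensor_push_token" {
  description = "Push Token generated on the Darktrace master instance"
  type        = string
  sensitive   = true
}

resource "aws_ssm_parameter" "darktrace_vsensor_update_key" {
  name  = "darktrace_vsensor_update_key" # example name
  type  = "SecureString"
  value = var.vsensor_update_key
}

resource "aws_ssm_parameter" "darktrace_push_token" {
  name  = "darktrace_push_token" # example name
  type  = "SecureString"
  value = var.vsensor_push_token
}

The parameter names (not the secret values) are then passed to the module via the update_key and push_token inputs.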

If you have a physical Darktrace deployment behind a firewall, you must grant the IP addresses of your NAT gateways access to the instance after deployment.

Note: Darktrace cloud offerings are already configured to allow Push Token access; no firewall changes are necessary.

Set osSensor shared HMAC secret key

The shared HMAC secret key between the osSensor and vSensor is optional for the installation. If the HMAC is not provided, the module will not set the osSensor shared HMAC secret key on the vSensors.

This can be done outside the module once the deployment has completed. More details can be found in the Requirements section of the osSensor product guide on the Darktrace Customer Portal.

  1. In AWS Systems Manager (Parameter Store) create a new parameter with a unique name, e.g. os_sensor_hmac_token, with type SecureString and the value of the osSensor shared HMAC secret key obtained previously, following the same pattern as the sketch below.
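
A minimal terraform sketch for this parameter (the parameter name is an example):

variable "os_sensor_hmac" {
  description = "osSensor shared HMAC secret key"
  type        = string
  sensitive   = true
}

resource "aws_ssm_parameter" "os_sensor_hmac_token" {
  name  = "os_sensor_hmac_token" # example name
  type  = "SecureString"
  value = var.os_sensor_hmac
}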

Terraform user policy

The IAM policy of the user or role that runs terraform must allow the actions required to create, update, and destroy the resources listed in the Resources section below.
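
As an illustrative starting point only (not an exhaustive or minimal list; the exact actions depend on the module version and the options you enable), a broad service-level policy might look like the sketch below. Scope it down for production use:

data "aws_iam_policy_document" "terraform_vsensor_deploy" {
  statement {
    # Broad service-level grants for illustration only; restrict the actions
    # and resources to what your deployment actually needs.
    actions = [
      "ec2:*",
      "autoscaling:*",
      "elasticloadbalancing:*",
      "s3:*",
      "iam:*",
      "logs:*",
      "kms:*",
      "ssm:*",
    ]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "terraform_vsensor_deploy" {
  name   = "terraform-vsensor-deploy" # example name
  policy = data.aws_iam_policy_document.terraform_vsensor_deploy.json
}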

Usage

Before you start

If you use the module to create a new VPC, the number of entries in availability_zone, private_subnets_cidrs, and public_subnets_cidrs must be the same. Changing the order of entries in these lists will force re-creation of the resources.

Deploy Darktrace vSensor into an existing VPC

module "vsensors" {
  source = "git::https://github.com/darktrace/terraform-aws-vsensor?ref=<version>"

  deployment_prefix = "dt"

  vpc_id              = "vpc-XXXX"
  vpc_private_subnets = ["subnet-XXXXXXX", "subnet-YYYYYYY"]

  update_key           = "update_key_parameter_store_name"
  push_token           = "push_token_parameter_store_name"
  instance_host_name   = "instance_host_name-XXXXXXX"
  os_sensor_hmac_token = "os_sensor_hmac_token_parameter_store_name"
}

Deploy Darktrace vSensor into a new VPC

module "vsensors" {
  source = "git::https://github.com/darktrace/terraform-aws-vsensor?ref=<version>"

  deployment_prefix = "dt"

  vpc_enable            = true
  vpc_cidr              = "10.0.0.0/16"
  availability_zone     = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  private_subnets_cidrs = ["10.0.0.0/23", "10.0.2.0/23", "10.0.4.0/23"]
  public_subnets_cidrs  = ["10.0.246.0/23", "10.0.248.0/23", "10.0.250.0/23"]

  update_key           = "update_key_parameter_store_name"
  push_token           = "push_token_parameter_store_name"
  instance_host_name   = "instance_host_name-XXXXXXX"
}

Deploy Darktrace vSensor into a new VPC with bastion host

module "vsensors" {
  source = "git::https://github.com/darktrace/terraform-aws-vsensor?ref=<version>"

  deployment_prefix = "dt"

  vpc_enable            = true
  vpc_cidr              = "10.0.0.0/16"
  availability_zone     = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  private_subnets_cidrs = ["10.0.0.0/23", "10.0.2.0/23", "10.0.4.0/23"]
  public_subnets_cidrs  = ["10.0.246.0/23", "10.0.248.0/23", "10.0.250.0/23"]

  ssh_keyname = "sshkey-XXXXXX"

  ssm_session_enable = false

  instance_type = "m5.large"

  desired_capacity = 3
  max_size         = 5
  min_size         = 3

  update_key           = "update_key_parameter_store_name"
  push_token           = "push_token_parameter_store_name"
  instance_host_name   = "instance_host_name-XXXXXXX"
  os_sensor_hmac_token = "os_sensor_hmac_token_parameter_store_name"

  bastion_enable        = true
  bastion_instance_type = "t3.micro"
  bastion_ami           = "Ubuntu-Server-24_04-LTS-HVM"
  bastion_ssh_keyname   = "sshkey-XXXYYY"
  bastion_ssh_cidrs     = ["35.180.11.224/27"]
  tags = {
    department    = "Ops"
    team          = "Ops-1"
  }
}

Requirements

Name Version
terraform >= 1.4
aws >= 5.23
random >= 3.5

Providers

Name Version
aws >= 5.23
random >= 3.5

Modules

No modules.

Resources

Name Type
aws_autoscaling_group.vsensors_asg resource
aws_autoscaling_policy.vsensors_asg_policy resource
aws_cloudwatch_log_group.vsensor_log_group resource
aws_ec2_traffic_mirror_filter.vsensor_filter resource
aws_ec2_traffic_mirror_filter_rule.rulein resource
aws_ec2_traffic_mirror_filter_rule.ruleout resource
aws_ec2_traffic_mirror_target.vsensor_lb_target resource
aws_eip.remote_access_eip resource
aws_eip.vsensor_nat_gw_eip resource
aws_eip_association.remote_access_eip_assoc resource
aws_iam_instance_profile.vsensor resource
aws_iam_policy.vsensor_iam resource
aws_iam_role.vsensor_iam resource
aws_iam_role_policy_attachment.vsensor_iam resource
aws_instance.bastion resource
aws_internet_gateway.main_igw resource
aws_kms_alias.vsensor_logs resource
aws_kms_key.vsensor_logs resource
aws_launch_template.vsensor resource
aws_lb.vsensor_lb resource
aws_lb_listener.vsensor_lb_listener resource
aws_lb_target_group.vsensor_tg resource
aws_nat_gateway.vsensor_nat_gw resource
aws_route_table.main_rt resource
aws_route_table.vsensor_rt resource
aws_route_table_association.public_rta resource
aws_route_table_association.vsensor_rta resource
aws_s3_bucket.vsensor_pcaps_s3 resource
aws_s3_bucket_lifecycle_configuration.vsensor_pcaps_s3 resource
aws_s3_bucket_logging.vsensor_pcaps_s3 resource
aws_s3_bucket_policy.vsensor_pcaps_s3 resource
aws_s3_bucket_public_access_block.vsensor_pcaps_s3 resource
aws_s3_bucket_server_side_encryption_configuration.vsensor_pcaps_s3 resource
aws_security_group.bastion_sg resource
aws_security_group.vsensors_asg_sg resource
aws_security_group_rule.allow_mirror_4789 resource
aws_security_group_rule.allow_ossesnsors_443 resource
aws_security_group_rule.allow_ossesnsors_80 resource
aws_security_group_rule.bastion_ssh_access resource
aws_security_group_rule.bastion_to_any resource
aws_security_group_rule.remote_ssh resource
aws_security_group_rule.ssh_access resource
aws_security_group_rule.to_pkgs_443 resource
aws_security_group_rule.to_pkgs_80 resource
aws_ssm_document.session_manager_preferences resource
aws_subnet.private resource
aws_subnet.public resource
aws_vpc.main resource
random_string.rnd_deploy_id resource
aws_ami.bastion_amazon_linux_2 data source
aws_ami.bastion_ubuntu data source
aws_ami.ubuntu data source
aws_caller_identity.current data source
aws_iam_policy_document.vsensor_iam data source
aws_iam_policy_document.vsensor_pcaps_s3 data source
aws_partition.current data source
aws_region.current data source
aws_ssm_parameter.dt_os_sensor_hmac_token data source
aws_ssm_parameter.dt_push_token data source
aws_ssm_parameter.dt_update_key data source
aws_vpc.vsensors_asg data source

Inputs

Name Description Type Default Required
availability_zone If vpc_enable = true - Availability Zones that the vSensors, the NAT Gateways and all resources will be deployed into. list(string) ["us-east-1a", "us-east-1b"] no
bastion_ami (Optional) The AMI operating system for the bastion host. This can be one of Amazon-Linux2023-HVM, Ubuntu-Server-24_04-LTS-HVM.
Default user names: for Amazon-Linux2023-HVM the user name is ec2-user, for Ubuntu-Server-24_04-LTS-HVM the user name is ubuntu.
string "Amazon-Linux2023-HVM" no
bastion_enable (Optional; applicable only if vpc_enable = true) If true, a standalone/single bastion host will be installed to provide SSH remote access to the vSensors.
It will be installed in the first public subnet CIDR (public_subnets_cidrs). The bastion will automatically have SSH access to the vSensors.
bool false no
bastion_instance_type (Optional) The ec2 instance type for the bastion host. This can be one of t3.micro, t3.small, t3.medium, t3.large, t3.xlarge, t3.2xlarge. string "t3.micro" no
bastion_ssh_cidrs (Optional) Allowed CIDR blocks for SSH (Secure Shell) access to the bastion host. list(any) [] no
bastion_ssh_keyname (Optional) Name of the SSH key pair stored in AWS. This key will be added to the bastion host's SSH configuration.
Omit the key pair name when access to the vSensors should be via AWS Systems Manager Session Manager only (see ssm_session_enable).
string null no
cloudwatch_logs_days Number of days to retain vSensor CloudWatch logs.
Allowed values are 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1096, 1827, 2192, 2557, 2922, 3288, 3653, and 0.
If you select 0, the events in the log group are always retained and never expire.
number 30 no
cross_zone_load_balancing_enable (Optional) Enable (true) or disable (false) cross-zone load balancing on the load balancer.
If it is disabled, make sure there is at least one vSensor in each Availability Zone that has mirror sources.
This also configures the NLB 'Client routing policy': any availability zone when cross_zone_load_balancing_enable = true,
or availability zone affinity when cross_zone_load_balancing_enable = false.
For more information about cross-zone load balancing, see the AWS documentation: Network Load Balancers,
Cross-zone load balancing for target groups,
Cross-zone load balancing,
Announcing new AWS Network Load Balancer (NLB) availability and performance capabilities.
Default is disabled.
bool false no
cw_log_group_name (Optional) CloudWatch Log Group name for the vSensor logs.
Naming restrictions apply.
If not provided the deployment ID (deployment_id) will be used.
string "" no
cw_metrics_enable (Optional) If true (default) a Custom Namespace for vSensor CloudWatch Metrics will be created. bool true no
cw_namespace (Optional) CloudWatch Metrics Namespace for the vSensors (if cw_metrics_enable = true), for example vSensorMetrics.
Naming restrictions apply.
If not provided the deployment ID (deployment_id) will be used.
string "" no
deployment_prefix (Forces re-creating all resources) Two-letter (lowercase) prefix that will be used to create a unique deployment ID to identify the resources. string n/a yes
desired_capacity Desired number of vSensor instances in the Auto-Scaling group. number 2 no
filter_dest_cidr_block Destination CIDR for the Traffic Mirror filter. Use the default 0.0.0.0/0 for all traffic. string "0.0.0.0/0" no
filter_src_cidr_block Source CIDR for the Traffic Mirror filter. Use the default 0.0.0.0/0 for all traffic. string "0.0.0.0/0" no
instance_host_name Host name of the Darktrace Master instance. string n/a yes
instance_port Connection port between vSensor and the Darktrace Master instance. number 443 no
instance_type Specify the EC2 instance type that will be used in the Auto-Scaling group. It is recommended to start with t3.medium and change it if you expect frequent high traffic. string "t3.medium" no
kms_key_arn ARN of the KMS key for encrypting log data in CloudWatch Logs, when the KMS key is created outside the module.
The key policy should allow log encryption; see the AWS documentation.
If kms_key_enable is true then this KMS key ARN will be ignored.
string null no
kms_key_enable If true (default) the module will create a new kms key for encrypting log data in CloudWatch Logs. If false, kms_key_arn should be provided. bool true no
kms_key_rotation Specifies whether key rotation is enabled. Defaults to false. bool false no
lifecycle_pcaps_s3_bucket Number of days to retain captured packets in Amazon S3. Longer retention will increase storage costs. Set to 0 to disable PCAP storage. number 7 no
max_size Maximum number of vSensor instances in the Auto-Scaling group. number 5 no
min_size Minimum number of vSensor instances in the Auto-Scaling group. The recommended minimum is the number of Availability Zones the vSensors will be deployed into. number 2 no
os_sensor_hmac_token (Optional) Name of the SSM Parameter Store parameter that stores the hash-based message authentication code (HMAC) token used to authenticate osSensors with vSensor.
Parameter names can consist of alphanumeric characters (0-9A-Za-z),
period (.), hyphen (-), and underscore (_). In addition, the forward slash character (/) is used to delineate hierarchies in parameter names.
Note: for security reasons the HMAC should be stored in the SSM Parameter Store and the name of the parameter passed to the installation script via terraform.
string "" no
private_subnets_cidrs If vpc_enable = true - CIDRs for the private subnets that will be created for the vSensors. list(string) ["10.0.0.0/19", "10.0.32.0/19"] no
proxy (Optional) A proxy if it is required for the vSensor to access the Darktrace Master instance.
It should be specified in the format http://hostname:port with no authentication, or http://user:pass@hostname:port with authentication.
string "" no
public_subnets_cidrs If vpc_enable = true - CIDRs for the public subnets that will be created for the NAT Gateways. list(string) ["10.0.128.0/20", "10.0.144.0/20"] no
push_token Name of the SSM Parameter Store parameter that stores the push token generated on the Darktrace Master instance.
Parameter names can consist of alphanumeric characters (0-9A-Za-z),
period (.), hyphen (-), and underscore (_). In addition, the forward slash character (/) is used to delineate hierarchies in parameter names.
The push token is used to authenticate with the Darktrace Master instance. For more information, see the Darktrace Customer Portal (https://customerportal.darktrace.com/login).
Note: for security reasons the push token should be stored in the SSM Parameter Store and the name of the parameter passed to the installation script via terraform.
string n/a yes
ssh_cidrs (Optional) Allowed CIDR blocks for SSH (Secure Shell) access to the vSensors. If not provided, the vSensors will not be accessible on port 22/tcp (ssh).
Such access is not required, for example, when the vSensors should be accessible only via an SSM session.
list(any) null no
ssh_keyname (Optional) Name of the ssh key pair stored in AWS. This key will be added to the vSensor ssh configuration. string null no
ssm_session_enable Enable AWS Systems Manager Session Manager for the vSensors. Enabled by default.
When connecting via AWS Systems Manager Session Manager it is recommended to use the session preferences document created by the module.
This ensures that the session is encrypted and logged (in the same CloudWatch Log group as the vSensor logs),
using the same KMS key that is used for encrypting log data in CloudWatch Logs.
For the Systems Manager Session Manager allowed users you can
Enforce a session document permission check for the AWS CLI.
The name of the session preferences document is in the Outputs (session_manager_preferences_name).
Example: aws ssm start-session --target <instance_id> --document-name <session_manager_preferences_name>.
bool true no
tags Tags for all resources (where possible). By default the module adds two tags to all resources (where possible) with keys "deployment_id" and "dt_product".
The value for the "deployment_id" key is the deployment_id (see the Outputs for more details).
The value for "dt_product" is "vsensor". If you provide a tag with either of those keys, it will overwrite the default.
map(string) {} no
traffic_mirror_target_rule_number Priority to assign to the traffic mirror rule. number 100 no
update_key Name of the SSM Parameter Store parameter that stores the Darktrace update key.
Parameter names can consist of alphanumeric characters (0-9A-Za-z),
period (.), hyphen (-), and underscore (_). In addition, the forward slash character (/) is used to delineate hierarchies in parameter names.
If you don't have a Darktrace update key, you can obtain it from the Darktrace Customer Portal.
Note: for security reasons the update key should be stored in the SSM Parameter Store and the name of the parameter passed to the installation script via terraform.
string n/a yes
vpc_cidr CIDR for the new VPC that will be created if vpc_enable = true. string "10.0.0.0/16" no
vpc_enable (Optional) If true a new VPC will be created with the provided vpc_cidr, availability_zone, private_subnets_cidrs, and
public_subnets_cidrs, regardless of whether the input variables for an existing VPC are also provided (i.e. vpc_id and vpc_private_subnets).
bool false no
vpc_id When Darktrace vSensor is deployed into an existing VPC, this is the VPC ID of the target deployment.
Required if you are deploying the Darktrace vSensor into an existing VPC.
string null no
vpc_private_subnets When Darktrace vSensor is deployed into an existing VPC, this is the list of subnet IDs that the vSensors should be launched into.
You can specify at most one subnet per Availability Zone; at least one subnet is required.
Required if you are deploying the Darktrace vSensor into an existing VPC.
list(any) [] no

Outputs

Name Description
autoscaling_group_vsensors_asg_name The name of the Auto Scaling Group.
deployment_id The unique deployment ID. This is a combination of the deployment_prefix (as prefix) followed by a hyphen (-) and an eleven character random string.
kms_key_vsensor_logs_arn If a new KMS key is created - the ARN of the KMS key.
launch_template_vsensor_arn The ARN of the launch template (vSensors).
launch_template_vsensor_name The name of the launch template (vSensors).
nat_gw_eip_public_ip If vpc_enable = true - a list of the public IP addresses for the NAT gateways.
pcaps_s3_bucket_domain_name The s3 bucket's domain name.
pcaps_s3_bucket_name The name of the s3 bucket that stores the PCAPs.
private_subnets_id If vpc_enable = true - a list of the new private subnets.
public_subnets_id If vpc_enable = true - a list of the new public subnets.
session_manager_preferences_name The name of the SSM Session Manager document.
ssh_remote_access_ip If bastion_enable=true - The public IP address of the Bastion host (for the ssh remote access).
traffic_mirror_filter_arn The ARN of the traffic mirror filter.
traffic_mirror_filter_id The ID of the traffic mirror filter.
traffic_mirror_target_arn The ARN of the traffic mirror target.
traffic_mirror_target_id The ID of the Traffic Mirror target.
vpc_id The ID of the new VPC (if vpc_enable = true).
vsensor_cloudwatch_log_group_arn The ARN of the vSensor CloudWatch group.
vsensor_cloudwatch_log_group_name The name of the vSensor CloudWatch group.
vsensor_lb_arn The ARN of the Load Balancer.
vsensor_lb_arn_suffix The ARN suffix of the Load Balancer for use with CloudWatch Metrics.
vsensor_lb_dns_name The DNS name of the load balancer. Use this as the vSensorIP for connecting osSensors.
vsensor_lb_listener_arn A list of the ARNs of the LB listeners.
vsensor_lb_target_group_arn A list of the ARNs of the Target Groups.
vsensor_lb_target_group_arn_suffix A list of the ARN suffixes for use with CloudWatch Metrics.
vsensor_lb_target_group_name A list of the names of the Target Groups.
vsensors_autoscaling_security_group_arn The ARN of the security group for the vSensors.
vsensors_autoscaling_security_group_name The name of the security group for the vSensors.

Post-deployment steps

Configure networking

If you have a physical Darktrace deployment behind a firewall, you must grant the IP addresses of your NAT gateways access to the instance. For a new VPC deployment, use the IP addresses in the nat_gw_eip_public_ip output. For an existing VPC deployment, use the IP addresses of the existing NAT gateways.
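
For example, you can surface these addresses from your root module with a small sketch that forwards the module's nat_gw_eip_public_ip output:

output "vsensor_nat_gateway_public_ips" {
  description = "Public IPs of the NAT gateways to allow through the Darktrace firewall"
  value       = module.vsensors.nat_gw_eip_public_ip
}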

Configure traffic mirroring

To add your existing EC2 instances to be mirrored and monitored, configure a traffic mirror session using the traffic mirror target and filter IDs in the outputs (traffic_mirror_target_id, traffic_mirror_filter_id). For more information, see Traffic mirror sessions. You can automate the process of adding your existing EC2 instances to this deployment; contact your Darktrace representative for scripts and guidance, or visit the Darktrace Customer Portal.
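
A sketch of a traffic mirror session managed in terraform, wired to the module's outputs (the ENI ID and session number are placeholders):

resource "aws_ec2_traffic_mirror_session" "example" {
  description              = "Mirror to Darktrace vSensor"
  network_interface_id     = "eni-XXXXXXX" # ENI of the EC2 instance to monitor
  session_number           = 1
  traffic_mirror_filter_id = module.vsensors.traffic_mirror_filter_id
  traffic_mirror_target_id = module.vsensors.traffic_mirror_target_id
}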

VPC Traffic Mirroring is only supported on AWS Nitro-based EC2 instance types and some non-Nitro instance types. For more information, see Amazon VPC Traffic Mirroring is now supported on select non-Nitro instance types.

To monitor EC2 instance types that do not support VPC Traffic Mirroring, configure them with Darktrace osSensors. When doing this, use the DNS name of the Network Load Balancer in the outputs (vsensor_lb_dns_name). For more information about configuring osSensors, visit the Darktrace Customer Portal.

Troubleshooting

The Auto Scaling group will fail to deploy if configuration parameters are invalid, upstream internet access is blocked, or the EC2 instances otherwise fail to install and configure Darktrace vSensor.

Terraform will print a message with the specific error it encountered. In most situations, logging from vSensor start-up will also be available in CloudWatch: view "Log Groups", find the group named <deployment_id>-vsensor-log-group, and look for Log Streams ending in -userdata.

If no userdata logging is available in CloudWatch, you may need to connect directly to the vSensor via SSH or AWS Systems Manager and retrieve /var/log/userdata.log directly. Consider detaching the vSensor from the ASG to prevent it from terminating during this procedure.

If the error is not clear, contact Darktrace Customer Support via the Darktrace Customer Portal, providing the terraform error and one of the above userdata logs if available.

Configuration changes

A change to any of instance_type, update_key, push_token, or os_sensor_hmac_token followed by terraform apply will result in replacing the vSensors.

In deployments where Cross-Zone Load Balancing is not enabled and a zone has a single vSensor, replacing that vSensor can cause mirror packet loss from its Availability Zone while it is down. It is recommended to schedule a maintenance window for such changes.

Multiple Regions or Accounts

Each new deployment into a new region or AWS account should be given a new push token generated from the master instance UI config page. Each push token refers to a scalable set of vSensors, which share the same S3 storage bucket and config. When querying for pcap data, the master instance only queries one vSensor in the probe group (to avoid querying the same S3 bucket multiple times). If the same push token is reused in multiple environments, then querying for PCAP data will only search one of the environments and miss the rest.

For aggregating traffic mirrors from multiple AWS accounts, the vSensor can parse GENEVE traffic from an AWS Gateway Load Balancer. However, if there are IP range overlaps between VPCs, it cannot currently distinguish the separate VPCs.
