[[ea-integrations-tutorial]]
== Using {ls} with Elastic {integrations} (Beta) tutorial
=== Process overview

* Configure Fleet to send data from Elastic Agent to Logstash
* Create an Elastic Agent policy with the necessary integrations
* Configure Logstash to use the elastic_integration filter plugin

=== Overview

The purpose of this guide is to walk through the steps necessary to configure Logstash to transform events collected by the Elastic Agent using Elastic's pre-built ingest pipelines, which normalize data to the Elastic Common Schema (ECS). This is possible with a new beta feature in Logstash: the elastic_integration filter plugin.

Using this new plugin, Logstash reads certain field values generated by the Elastic Agent that tell Logstash to fetch pipeline definitions from an Elasticsearch cluster, which Logstash can then use to process events before sending them to their configured destinations.

=== Prerequisites

There are a few requirements needed to make this possible:

* A working Elasticsearch cluster
* A Fleet Server
* An Elastic Agent configured to send its output to Logstash
* An Enterprise license
* A user configured with the minimum required privileges

This feature can also be used with a self-managed agent, but the setup and configuration details for a self-managed agent are not covered in this guide.

=== Configure Fleet to send data from Elastic Agent to Logstash

. For a Fleet-managed agent, go to Kibana and navigate to Fleet → Settings.
+
Figure 1: fleet-output
. Create a new output and specify Logstash as the output type.
+
Figure 2: logstash-output
. Add the Logstash hosts (domain names or IP addresses) that the Elastic Agent will send data to.
. Add the client SSL certificate and the client SSL certificate key to the configuration.
+
At the bottom of the settings you can choose to make this output the default for agent integrations. Selecting this option makes all Elastic Agent policies default to this Logstash output configuration.
. Click “Save and apply settings” in the bottom right-hand corner of the page.

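The client SSL certificate and key must be PEM-encoded. In production these should be issued by your organization's CA, but for a quick test environment you can generate a throwaway self-signed pair; the file names and the `elastic-agent-client` common name below are arbitrary examples, not required values:

```shell
# Generate a self-signed client certificate and key for testing the
# Elastic Agent -> Logstash connection (use CA-issued certs in production).
# The CN is an arbitrary example value.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=elastic-agent-client" \
  -keyout client.key -out client.crt

# Paste the contents of client.crt into the "Client SSL certificate" field
# and client.key into the "Client SSL certificate key" field.
```
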
=== Create an Elastic Agent policy with the necessary integrations

. In Kibana, navigate to Fleet → Agent policies and click “Create agent policy”.
+
Figure 3: create-agent-policy
. Give this policy a name, and then click “Advanced options”.
. Change the “Output for integrations” setting to the Logstash output you created in the previous step.
+
Figure 4: policy-output
. Click “Create agent policy” at the bottom of the flyout. The new policy should now be listed on the Agent policies page.
. Click on the policy name so that we can start configuring an integration.
. On the policy page, click “Add integration”. This takes you to the integrations browser, where you can select an integration that provides data stream definitions (mappings, pipelines, etc.), dashboards, and data normalization pipelines that convert the source data into Elastic Common Schema.
+
Figure 5: add-integration-to-policy
+
In this example we will search for and select the Crowdstrike integration.
+
Figure 6: crowdstrike-integration

. On the Crowdstrike integration overview page, click “Add Crowdstrike” to configure the integration.
+
Figure 7: add-crowdstrike
. Configure the integration to collect the needed data.
+
In step 2 at the bottom of the page (“Where to add this integration?”), make sure the “Existing hosts” option is selected and that the selected agent policy is the policy we created for our Logstash output. This should already be selected by default if you are following the workflow of these instructions.
. Click “Save and continue” at the bottom of the page.
+
A modal will appear asking if you want to add the Elastic Agent to your hosts. If you have not already done so, install the Elastic Agent on a host. Documentation for this process can be found here: https://www.elastic.co/guide/en/fleet/current/elastic-agent-installation.html
+
Figure 8: add-elastic-agent-to-host

=== Configure Logstash to use the elastic_integration filter plugin

Create a new pipeline configuration in Logstash.

Make sure the elastic_integration plugin is installed, or install it with bin/logstash-plugin install logstash-filter-elastic_integration before running the pipeline.

A full list of configuration options can be found here: https://www.elastic.co/guide/en/logstash/current/plugins-filters-elastic_integration.html

[source,txt]
-----
input {
  elastic_agent { port => 5055 }
}

filter {
  elastic_integration {
    hosts => "{es-host}:9200"
    ssl_enabled => true
    ssl_verification_mode => "certificate"
    ssl_certificate_authorities => ["/usr/share/logstash/config/certs/ca-cert.pem"]
    auth_basic_username => "elastic"
    auth_basic_password => "changeme"
    remove_field => ["_version"]
  }
}

output {
  stdout {
    codec => rubydebug # to debug data stream inputs
  }
  # add elasticsearch
  elasticsearch {
    hosts => "{es-host}:9200"
    user => "elastic"
    password => "changeme"
    cacert => "/usr/share/logstash/config/certs/ca-cert.pem"
    ssl_certificate_verification => false # for testing only; enable in production
  }
}
-----

If you are using Elastic Cloud, use this configuration instead:

[source,txt]
-----
input {
  elastic_agent { port => 5055 }
}

filter {
  elastic_integration {
    cloud_id => "<cloud-id>"
    api_key => "<api-key>"
    remove_field => ["_version"]
  }
}

output {
  stdout {}
  elasticsearch {
    cloud_id => "<cloud-id>"
    cloud_auth => "elastic:<password>"
  }
}
-----

Every event sent from the Elastic Agent to Logstash contains specific meta-fields. Input events are expected to have data_stream.type, data_stream.dataset, and data_stream.namespace. These fields tell Logstash which pipelines to fetch from Elasticsearch to correctly process the event before sending it to its destination output. Logstash checks frequently whether an integration's associated ingest pipeline has been updated or changed, so that events are processed with the most recent version of the ingest pipeline.

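As a rough sketch of how those meta-fields are used: assuming the standard {type}-{dataset}-{namespace} data stream naming scheme, the three values combine into the data stream name that identifies which integration pipeline applies. The Crowdstrike values below are illustrative:

```shell
# The three data_stream meta-fields from an event (illustrative values
# from the Crowdstrike example) combine into the data stream name that
# determines which integration pipeline is fetched and applied.
type="logs"
dataset="crowdstrike.fdr"
namespace="default"

echo "${type}-${dataset}-${namespace}"   # -> logs-crowdstrike.fdr-default
```
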

All processing occurs in Logstash.

The user or credentials specified in the elastic_integration plugin need sufficient privileges to get the appropriate monitoring information, pipeline definitions, and index templates necessary to transform the events. Minimum required privileges can be found here: https://www.elastic.co/guide/en/logstash/current/plugins-filters-elastic_integration.html#plugins-filters-elastic_integration-minimum_required_privileges