The _Amazon S3_ output plugin lets you ingest records into the [S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) cloud object store.
The plugin can upload data to S3 using the [multipart upload API](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html) or [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html). Multipart is the default and is recommended. Fluent Bit streams data in a series of _parts_, which limits the amount of data buffered on disk at any given time. By default, a new part is uploaded every time 5 MiB of data has been received. Using the multipart API, the plugin can assemble objects gigabytes in size from many small chunks or parts. All aspects of the upload process are configurable.
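As a reference point, the multipart behavior described above can be sketched in the classic configuration format. The bucket name, region, and sizes below are illustrative placeholders, not values taken from this document:

```text
[OUTPUT]
    name              s3
    match             *
    bucket            my-bucket          # placeholder bucket name
    region            us-east-1          # placeholder region
    use_put_object    Off                # Off selects multipart upload (the default)
    upload_chunk_size 5M                 # size of each streamed part
    total_file_size   100M               # target size of the final S3 object
```

With these settings, Fluent Bit uploads a 5 MiB part as soon as that much data has accumulated, and completes the object once `total_file_size` is reached.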
|`blob_database_file`| Absolute path to a database file used to store the context of blob files. |_none_|
|`bucket`| S3 bucket name. |_none_|
|`canned_acl`|[Predefined Canned ACL policy](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) for S3 objects. |_none_|
|`compression`| Compression/format for S3 objects. Supported: `gzip` (always available) and `parquet` (requires Arrow build). For `gzip`, the `Content-Encoding` header is set to `gzip`. `parquet` is available **only when Fluent Bit is built with `-DFLB_ARROW=On`** and Arrow GLib/Parquet GLib are installed. Parquet is typically used with `use_put_object On`.|_none_|
|`content_type`| A standard MIME type for the S3 object, set as the Content-Type HTTP header. |_none_|
|`endpoint`| Custom endpoint for the S3 API. Endpoints can contain scheme and port. |_none_|
|`external_id`| Specify an external ID for the STS API. Can be used with the `role_arn` parameter if your role requires an external ID. |_none_|
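When `compression` is set to `gzip`, each uploaded object body is gzip-compressed and the `Content-Encoding` header is set to `gzip`. A minimal sketch of consuming such an object after download, using only the Python standard library; the record contents here are invented for illustration:

```python
import gzip

# Simulate the body of an S3 object produced with `compression gzip`:
# newline-delimited records, gzip-compressed before upload.
records = b'{"log": "hello"}\n{"log": "world"}\n'
compressed = gzip.compress(records)

# After downloading the object, decompress it to recover the records.
decompressed = gzip.decompress(compressed)
lines = decompressed.decode().splitlines()
print(lines)  # one entry per record
```

Most S3 clients and SDKs do not transparently decode `Content-Encoding: gzip` on `GetObject`, so a decompression step like this is typically needed on the consumer side.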