AWS Kinesis

Export metrics to AWS Kinesis Data Streams

Setup

Prerequisites

  • First, install the AWS SDK for C++.
  • If you build the SDK from source, use the following commands to make sure the third-party dependencies are installed:
    git clone --recursive https://github.com/aws/aws-sdk-cpp.git
    cd aws-sdk-cpp/
    git submodule update --init --recursive
    mkdir BUILT
    cd BUILT
    cmake -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_ONLY=kinesis ..
    make
    make install
  • libcrypto, libssl, and libcurl are also required to compile Netdata with Kinesis support enabled.
  • Next, re-install Netdata from source. The installer will detect that the required libraries are now available.

Configuration

File

The configuration file name for this integration is exporting.conf.

You can edit the configuration file using the edit-config script from the Netdata config directory.

cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config exporting.conf

Options

Netdata automatically computes a partition key for every record in order to distribute records evenly across the available shards. The following options can be defined for this exporter.

Config options
| Name | Description | Default | Required |
|------|-------------|---------|----------|
| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
| username | Username for HTTP authentication. | my_username | no |
| password | Password for HTTP authentication. | my_password | no |
| data source | Selects the kind of data that will be sent to the external database (as collected/average/sum). | | no |
| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
| prefix | The prefix to add to all metrics. | Netdata | no |
| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
| buffer on failures | The number of iterations (update every seconds) to buffer data when the external database server is not available. | 10 | no |
| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 2 * update_every * 1000 | no |
| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is simple patterns. | localhost * | no |
| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
| send configured labels | Controls if host labels defined in the [host labels] section in netdata.conf should be sent to the external database (yes/no). | | no |
| send automatic labels | Controls if automatically created labels, like _os_name or _architecture, should be sent to the external database (yes/no). | | no |
destination

The format of each item in this list is: [PROTOCOL:]IP[:PORT].

  • PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
  • IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you can enclose the IP in [] to separate it from the port.
  • PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.

Example IPv4:

destination = 10.11.14.2:4242 10.11.14.3:4242 10.11.14.4:4242

Example IPv6 and IPv4 together:

destination = [ffff:...:0001]:2003 10.11.12.1:2003

When multiple servers are defined, Netdata will try the next one when the previous one fails.

update every

Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
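
As a sketch, raising the interval for a Kinesis instance could look like the following; the instance name my_instance and the 30-second value are illustrative only:

[kinesis:my_instance]
enabled = yes
destination = us-east-1
# send metrics every 30 seconds instead of the default 10
update every = 30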

buffer on failures

If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
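
For example, with the defaults of update every = 10 and buffer on failures = 10, roughly 10 x 10 = 100 seconds of metrics are kept before data starts being dropped. A sketch that doubles the buffer for a hypothetical my_instance section:

[kinesis:my_instance]
enabled = yes
destination = us-east-1
# keep up to 20 iterations (about 200 seconds at the default interval) while the destination is unreachable
buffer on failures = 20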

send hosts matching

Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern). The patterns are checked against the hostname (the localhost is always checked as localhost), allowing you to filter which hosts will be sent to the external database when this Netdata instance is a central Netdata aggregating metrics from multiple hosts.

A pattern starting with ! gives a negative match. So to match all hosts named *db* except hosts containing *child*, use !*child* *db* (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
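
For instance, the filter described above could be applied on a central Netdata like this; the host name patterns are illustrative only:

[kinesis:my_instance]
enabled = yes
destination = us-east-1
# skip any host whose name contains "child", then send every host whose name contains "db"
send hosts matching = !*child* *db*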

send charts matching

A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads, use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used, positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter has a higher priority than the configuration option.
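
A small sketch of the pattern from the example above, applied to the same hypothetical instance:

[kinesis:my_instance]
enabled = yes
destination = us-east-1
# send all apps.* charts except those whose id or name ends in "reads"
send charts matching = !*reads apps.*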

send names instead of ids

Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system, and names are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
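
If you prefer the human-friendly names in the exported metrics, the option can be switched on per instance, as in this illustrative snippet:

[kinesis:my_instance]
enabled = yes
destination = us-east-1
# export chart and dimension names instead of their ids
send names instead of ids = yes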

Examples

Example configuration

Basic configuration

[kinesis:my_instance]
enabled = yes
destination = us-east-1

Configuration with AWS credentials

Add the :https modifier to the connector type if you need to use the TLS/SSL protocol. For example: kinesis:https:my_instance.

[kinesis:my_instance]
enabled = yes
destination = us-east-1
# AWS credentials
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key
# destination stream
stream name = your_stream_name

