Getting Data In

Does HEC provide guaranteed delivery, and if not, what are my options?

hrottenberg_spl
Splunk Employee

I want to ensure that the messages sent to HEC make it to Splunk. What are my options?


hrottenberg_spl
Splunk Employee

HEC is, at the end of the day, a web server. For the most part it acts as a simple listener for HTTP(S) POST requests, and as such it does not guarantee that a sent message was received. The sender is expected to use HTTP conventions, such as the response status code, to determine whether a message was received and, if not, to react however it sees fit. All of this is by design; it works exactly as a web server is expected to work.
One exception to the above is the HEC Indexer Acknowledgement feature, which requires the developer to implement client-side code. More details are here: https://docs.splunk.com/Documentation/Splunk/7.3.2/Data/AboutHECIDXAck
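The ack flow can be sketched in a few lines of Python (standard library only). The URL and token below are placeholders, and the exact request/response shapes should be checked against the documentation linked above; the `ackId` returned by the event endpoint is polled against the `/services/collector/ack` endpoint until Splunk reports the event as indexed:

```python
import json
import uuid
import urllib.request

# Placeholders: substitute your own HEC endpoint and token.
HEC_URL = "https://splunk.example.com:8088"
TOKEN = "YOUR-HEC-TOKEN"
# A channel identifier is required when indexer acknowledgement is enabled.
HEADERS = {
    "Authorization": "Splunk " + TOKEN,
    "X-Splunk-Request-Channel": str(uuid.uuid4()),
    "Content-Type": "application/json",
}

def _post(path, payload):
    """POST JSON to HEC and return the decoded JSON response body."""
    req = urllib.request.Request(HEC_URL + path,
                                 data=json.dumps(payload).encode("utf-8"),
                                 headers=HEADERS, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

def send_event(event):
    """Send one event; with acks enabled, Splunk returns an ackId."""
    return _post("/services/collector/event", {"event": event})["ackId"]

def ack_response_indexed(body, ack_id):
    """Pure helper: interpret an ack response like {"acks": {"0": true}}."""
    return bool(body.get("acks", {}).get(str(ack_id), False))

def is_indexed(ack_id):
    """Ask HEC whether the given ackId has been safely indexed yet."""
    return ack_response_indexed(
        _post("/services/collector/ack", {"acks": [ack_id]}), ack_id)
```

A caller would loop on `is_indexed(ack_id)` (with a delay and a retry cap) and re-send the event if the ack never arrives; that retry loop is the client-side code the feature requires you to write.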

If “guaranteed messaging” is a requirement, I suggest looking at one of these options:

  • (In AWS) Amazon Kinesis Data Firehose, which provides a reliable pipeline to Splunk Enterprise or Splunk Cloud (behind the scenes, the nozzle uses HEC with the indexer acknowledgement feature enabled)
  • (Not AWS) Use any queueing system that provides guaranteed delivery, then use a separate process to dequeue those messages and send them to Splunk.
  • Use a logging library (e.g. log4j) that supports multiple destinations, and add a disk buffer as a fallback.
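The disk-buffer fallback in the last bullet is language-agnostic; a minimal Python sketch follows, where `send` stands in for any HEC client call and the spill-file name is an arbitrary choice:

```python
import json
import os

BUFFER_FILE = "hec_buffer.jsonl"  # local spill file (placeholder path)

def log_event(event, send):
    """Try the primary destination; spill the event to disk if it fails."""
    try:
        send(event)
    except Exception:
        with open(BUFFER_FILE, "a") as f:
            f.write(json.dumps(event) + "\n")

def replay_buffer(send):
    """Re-send spilled events; keep only those that still fail."""
    if not os.path.exists(BUFFER_FILE):
        return
    with open(BUFFER_FILE) as f:
        pending = [json.loads(line) for line in f]
    survivors = []
    for event in pending:
        try:
            send(event)
        except Exception:
            survivors.append(event)
    with open(BUFFER_FILE, "w") as f:
        for event in survivors:
            f.write(json.dumps(event) + "\n")
```

`replay_buffer` would run periodically (cron, background thread); events survive process restarts because they sit on disk rather than in memory.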

To address momentary connection issues, Splunk has added logic to a handful of client libraries and other tools to handle batching, buffering, and retrying. Those resources are published at http://dev.splunk.com/view/SP-CAAAFD2 and are available as open source. Be aware that these clients typically use a small (configurable) memory buffer, which can fill quickly depending on the size of the buffer and the length of the outage.
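To make the buffer-overflow caveat concrete, here is a toy batching sender (not any of the real client libraries, whose behavior and defaults differ): events queue in a bounded in-memory buffer while the destination is down, and once the buffer is full, new events are dropped.

```python
from collections import deque

class BufferedSender:
    """Toy client with a bounded in-memory buffer and retry-on-flush."""

    def __init__(self, send, max_buffer=10):
        self.send = send            # callable that raises when the link is down
        self.buffer = deque()
        self.max_buffer = max_buffer
        self.dropped = 0            # events lost to a full buffer

    def log(self, event):
        if len(self.buffer) >= self.max_buffer:
            self.dropped += 1       # buffer full during the outage: data loss
            return
        self.buffer.append(event)
        self.flush()

    def flush(self):
        while self.buffer:
            try:
                self.send(self.buffer[0])
            except Exception:
                return              # destination still down; keep for retry
            self.buffer.popleft()   # remove only after a successful send
```

During a long outage, `dropped` grows as soon as the buffer fills, which is exactly why in-memory buffering alone is not guaranteed delivery.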

The available logging libraries are:

  • Java
  • JavaScript
  • .NET

We also provide full open source SDKs which include logic for many tasks outside of logging, as well as code samples for these languages:

  • C#
  • Java
  • JavaScript
  • Python

Lastly, the Docker Logging Driver includes batch, buffer, and retry logic. By default it buffers 10,000 lines, as documented here: https://github.com/splunk/docker-logging-plugin#advanced-options---environment-variables


