Ingest failed events from Kinesis Backsplash bucket

Path Finder

I have just set up a Kinesis Firehose stream to push data into Splunk. As part of this, I set up a backsplash bucket to store any events that fail delivery. I'm running into the issue that the documentation isn't very clear on how to set this up.

I would like to set up an SQS-based S3 input. I know we need a dead letter queue set up and configured as FIFO, but I'm unclear on how to connect all the pieces.

I am guessing: S3 > SNS > SQS > DLQ > Splunk input? If that's the case, what is the configuration for S3, SNS, and SQS?

1 Solution

Path Finder

I was able to get some clarification via the Splunk Slack channel.

Here is what I've learned:

  • S3 can publish event notifications either to an SNS topic or directly to an SQS queue. SNS allows for more flexibility (e.g., fanning out to multiple consumers).
  • The SQS queue and the DLQ are both standard queues, not FIFO. (FIFO is not currently supported for S3/SNS notifications.)
    • If using SNS, the SQS queue then subscribes to the SNS topic.
  • Splunk then polls the standard SQS queue. The DLQ is a failsafe that catches any message that repeatedly fails processing. A sketch of this wiring follows below.
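
For anyone wiring this up by hand, here is a minimal boto3 sketch of the SNS variant described above. Everything in it (region, bucket, queue, and topic names, the maxReceiveCount, the visibility timeout) is a placeholder assumption on my part rather than anything from Splunk's docs, and the IAM policies are trimmed to the essentials:

```python
import json
import boto3

REGION = "us-east-1"                   # assumption: pick your region
BUCKET = "my-firehose-backsplash"      # hypothetical backsplash bucket name

sqs = boto3.client("sqs", region_name=REGION)
sns = boto3.client("sns", region_name=REGION)
s3 = boto3.client("s3", region_name=REGION)

# 1. Dead letter queue (standard, not FIFO)
dlq_url = sqs.create_queue(QueueName="splunk-s3-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# 2. Main queue, with a redrive policy that moves failed messages to the DLQ
main_url = sqs.create_queue(
    QueueName="splunk-s3-main",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": 3}
        ),
        "VisibilityTimeout": "300",
    },
)["QueueUrl"]
main_arn = sqs.get_queue_attributes(
    QueueUrl=main_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# 3. SNS topic that the bucket will publish notifications to,
#    with a policy allowing S3 to publish
topic_arn = sns.create_topic(Name="splunk-s3-events")["TopicArn"]
sns.set_topic_attributes(
    TopicArn=topic_arn,
    AttributeName="Policy",
    AttributeValue=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "SNS:Publish",
            "Resource": topic_arn,
            "Condition": {"ArnLike": {"aws:SourceArn": f"arn:aws:s3:::{BUCKET}"}},
        }],
    }),
)

# 4. Subscribe the main queue to the topic, and allow SNS to send to it
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=main_arn)
sqs.set_queue_attributes(
    QueueUrl=main_url,
    Attributes={"Policy": json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": main_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    })},
)

# 5. Point the bucket's object-created notifications at the topic
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": topic_arn,
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```

From Splunk's side, the only piece the SQS-based S3 input needs is the main queue (splunk-s3-main here): the add-on polls it, and the redrive policy automatically moves any message that fails more than maxReceiveCount receives into the DLQ.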

I feel like the Splunk docs don't do a great job of explaining this setup. They mostly link to generic SQS configuration tutorials that don't explain what is specifically needed for Splunk's use case.
