Why does the indexer sometimes not classify AWS GuardDuty data with the proper sourcetype?

emcbolton
New Member

We are using the HTTP Event Collector (HEC) to ingest AWS GuardDuty CloudWatch Events via AWS Kinesis Firehose. I have worked with a Splunk SE to get this working in two environments. One environment (dev license) is a single-instance deployment; the other is distributed, with a HEC instance separate from the indexer. In both environments, the indexer does not always classify the incoming Firehose data as aws:cloudwatch:guardduty, and the unclassified data never shows up in the Splunk GuardDuty dashboard. What I mean is that the data sometimes arrives with the clean Splunk formatting (plus signs and all), and other times as multiple GuardDuty events concatenated into one long text blob with no Splunk formatting at all.
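
To make the symptom concrete, here is a minimal sketch of what I think is happening, assuming the blob is simply several JSON findings concatenated back to back (the sample payload below is invented for illustration, not copied from our index):

    import json

    # Two GuardDuty-style findings concatenated with no delimiter, which is
    # roughly what the unformatted blobs look like when they land in the index.
    blob = ('{"detail-type": "GuardDuty Finding", "id": "finding-1"}'
            '{"detail-type": "GuardDuty Finding", "id": "finding-2"}')

    # A single json.loads(blob) raises an error on input like this, which may
    # be why the indexer cannot line-break and classify the events one by one.
    decoder = json.JSONDecoder()
    idx, findings = 0, []
    while idx < len(blob):
        obj, idx = decoder.raw_decode(blob, idx)
        findings.append(obj)

    print(len(findings))  # prints 2: the blob parses only if split manually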

Is there a way to fix this, alert on the issue from within Splunk, or have Splunk re-read the data?
Does this likely mean I need a Lambda function to break these into separate events before the data is sent on to Splunk (or maybe it's a CloudWatch-to-Firehose issue)?
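
For reference, here is a minimal sketch of the kind of Firehose transformation Lambda I have in mind, assuming the standard Kinesis Data Firehose transformation record contract (base64-encoded data in and out). The helper name split_findings is mine, not from any AWS or Splunk add-on:

    import base64
    import json

    def split_findings(blob):
        # Walk the blob with raw_decode, peeling off one JSON object at a time.
        decoder = json.JSONDecoder()
        idx, events = 0, []
        while idx < len(blob):
            while idx < len(blob) and blob[idx].isspace():
                idx += 1  # skip any whitespace between objects
            if idx >= len(blob):
                break
            obj, idx = decoder.raw_decode(blob, idx)
            events.append(obj)
        return events

    def lambda_handler(event, context):
        output = []
        for record in event["records"]:
            blob = base64.b64decode(record["data"]).decode("utf-8")
            try:
                # Re-emit each batch as newline-delimited JSON so every
                # finding is a separate, classifiable event.
                data = "\n".join(json.dumps(e) for e in split_findings(blob)) + "\n"
                output.append({
                    "recordId": record["recordId"],
                    "result": "Ok",
                    "data": base64.b64encode(data.encode("utf-8")).decode("utf-8"),
                })
            except ValueError:
                # Hand unparsable records back as failed so they can be
                # retried or inspected rather than silently dropped.
                output.append({
                    "recordId": record["recordId"],
                    "result": "ProcessingFailed",
                    "data": record["data"],
                })
        return {"records": output}

The idea would be to hand the indexer newline-delimited findings it can line-break and sourcetype individually. Is that the right approach, or is there a cleaner fix on the Splunk side?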
