Hey there, we have a large volume (about 500-600 GB) of data coming in daily, but about 200 GB of this is a JSON wrapper from Amazon Firehose. The data essentially looks like this:
{
"message": "ACTUAL_DATA_WE_WANT",
"logGroup": "/use1/prod/eks/primary/containers",
"logStream": "fluent-bit/cross-services/settings-7dbb9dbdb4-qjz5b/settings-api/81d3685eaaeae0effab5931590784016ce75a8171ad7e3e76152e30bd732a739",
"timestamp": 1675349068034
}
As you can see, ACTUAL_DATA_WE_WANT is what we need; it contains everything we use, including the timestamp and application information. The JSON wrapper is added by Firehose and makes up at least 250 bytes of every event.
Is it possible to remove all of this unnecessary data so that we can save ingestion capacity for more useful things? I have heard that SEDCMD can do this, but that it is resource intensive, and we ingest almost a billion events a day.
Usually, this is done with SEDCMD. The resource cost depends on how efficient the regex is: test it on regex101.com and evaluate the resource usage on your dev/test instances before rolling it out.
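For example, a minimal props.conf sketch (the sourcetype name aws:firehose:json is a placeholder, and the regex assumes the field order in your sample, with "message" always first):

# props.conf on the indexers/heavy forwarders that parse this feed
[aws:firehose:json]
# Keep only the value of "message" and drop the rest of the wrapper.
# Caveat: sed does no JSON decoding, so characters escaped inside the
# message value (e.g. \" when the payload is itself JSON) stay escaped.
SEDCMD-strip_firehose = s/^\{"message":\s*"(.*)",\s*"logGroup".*$/\1/

Note that the wrapper's timestamp field is discarded, which should be fine here since you say the inner event carries its own timestamp.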
Another option is to use Cribl to remove the unwanted bytes.
As you have pure JSON events, you could probably try INGEST_EVAL with json_extract: https://docs.splunk.com/Documentation/Splunk/9.0.3/SearchReference/JSONFunctions#json_extract.28.26l...
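A minimal sketch of that approach, with placeholder names (aws:firehose:json, unwrap_firehose); json_extract is used as described in the JSONFunctions docs linked above:

# props.conf
[aws:firehose:json]
TRANSFORMS-unwrap_firehose = unwrap_firehose

# transforms.conf
[unwrap_firehose]
# Replace _raw with just the "message" value before indexing; unlike a
# sed regex, json_extract properly decodes the JSON (unescaping \" etc.),
# so only the inner payload is indexed and counted against ingestion.
INGEST_EVAL = _raw=json_extract(_raw, "message")

Since this is proper JSON parsing rather than pattern matching, it is also more robust if Firehose ever changes the field order in the wrapper.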