Cisco eStreamer eNcore delay in logs getting to Splunk

jonathanpeckham
Explorer

I'm running into an issue with the Cisco eStreamer eNcore app where IPS events are very delayed getting to Splunk.

I've set up a correlation rule on the FMC to email me when there is an IPS event. In Splunk I run a report every 30 minutes that searches the past 4 hours for IPS events from the estreamer host and emails me the results. Comparing the time I get the email from the FMC with when the events finally show up in the report, it takes anywhere from 1.5 to 2 hours or more before triggered IPS events appear in Splunk.

I've been working with Cisco TAC on this for a couple of months and am no closer to solving it. I've uninstalled and reinstalled Splunk and the TA-eStreamer app at least twice with similar results.

My versions of Splunk, FMC, and the eNcore app are as follows:
Splunk - 7.2.7 (tried it on 7.3.0 as well)
FMC - 6.4.0.4
Cisco eStreamer eNcore Add-on for Splunk - 3.6.8 (Same issue with older versions too)

Is anyone else having this issue with eNcore?


nyc_jason
Splunk Employee

Hello Jonathan,

How many alerts are coming from the FMC when they do arrive? If it's a very low number, you may need to check and reduce the batch size.

Here is a potential issue that sounds like yours:
https://www.cisco.com/c/en/us/td/docs/security/firepower/630/api/eStreamer_enCore/eStreamereNcoreSpl...

Batch Size
The eNcore for Splunk add-on also attempts to improve performance by batching received events and only writing them to output when the threshold for the batch has been reached. The default batch size is 100 events.

If the event rate is very low, then a batch size of 100 events could cause an unwanted delay in the appearance of events in Splunk. For example, if intrusion events are the only events being handled and the intrusion event rate averages 100 events per hour, then the first event in a batch will often be delayed an hour or more while the batch completes and is written to disk. To reduce such delays, batchSize can be set to a lower value; to eliminate them entirely, it can be set to 1.

The disadvantage of setting batchSize to 1 is that, in high-throughput environments, the overall event rate will be lower.

An example of the batchSize configuration in the estreamer.conf file is shown here:

        "batchSize": 50

jonathanpeckham
Explorer

Thank you for the reply. TAC pointed me in that direction after a few months of the ticket being open with them. I tried all sorts of different settings for batchSize, but eventually the collection would stop, and we were never able to figure out why. I suspect there is some sort of issue with our FMC, which was upgraded from 6.2.3 to 6.4.0, but I can't say for sure.

The way I'm handling this now is with a small script that restarts the Splunk service, run hourly from a cron job. So far this has kept the Firepower events coming in promptly, with no more than an hour between the time an event triggers and the time it shows up in Splunk.
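
For anyone who wants to copy the workaround, it's roughly the following. This is just a sketch assuming a Linux host with Splunk in the default /opt/splunk location; the script name, its location, and the schedule are examples, so adjust them for your own install:

#!/bin/bash
# restart-splunk.sh - restart Splunk so eNcore re-establishes its eStreamer connection
/opt/splunk/bin/splunk restart

# crontab entry that runs the script at the top of every hour
# (the script path here is an example; put it wherever you keep admin scripts)
0 * * * * /opt/splunk/scripts/restart-splunk.sh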

A bit extreme, you might think? Probably, but that instance exists only to pull events from Firepower, and it's what's been working so far. 🙂


Mesa_Splunkr
Loves-to-Learn

Greetings all,
I too am seeing about a 1-hour delay in my Splunk index for the cisco:estreamer:data sourcetype. I want to use the "batchSize" value; however, I don't know what section to put it under in the estreamer.conf file. Can anyone clarify that? My estreamer.conf currently has 89 lines. Thanks in advance!


jonathanpeckham
Explorer

All the modifications TAC had me make were at the bottom of that .conf file, right before the last }.


gordo32
Communicator

Just don't forget to put a comma at the end of the previous line (I figured out my error quickly when running ./splencore.sh test). So the end of your .conf file should look like this:

"workerProcesses": 4,
"batchSize": 5
}
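
And for checking your edits afterwards, the test run is just the command below. Run it from the directory where splencore.sh lives; that is typically the TA-eStreamer app's bin directory under $SPLUNK_HOME/etc/apps, but your path may differ:

cd $SPLUNK_HOME/etc/apps/TA-eStreamer/bin
./splencore.sh test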
