
eStreamer: Heavy Forwarder queues go to 100%

Explorer

Hello, everyone

We are using the eStreamer app/add-on (3.5.3) to get logs into Splunk. We noticed that when we turn on the sending script, the queues on the HF go to 100%. Is this a known issue?

We are using SUSE 12 as the operating system.

Thanks!


Path Finder

What are the criteria for "the optimal number"? Is there a threshold for the highest number before backing off?


Builder

I'm pasting in guidance from the soon-to-be-released operations guide for our next version (3.5.4):

Performance and the workerProcesses Option
The performance of the eNcore for Splunk add-on has been improved in version 3.5 with the addition of multi-processing. By default, four worker processes operate on the incoming messages to achieve higher throughput. While multiple processes can provide significant performance gains, these gains are highly dependent on the platform because for each platform, the processing bottlenecks may be different. Multiple processes also require additional overhead for managing task distribution, so that increasing the number of processes could actually decrease the performance on platforms with a low number of CPU cores.
The number of worker processes is configurable through the workerProcesses parameter in the estreamer.conf configuration file. The number can be set from 1 to 12. Generally, the more capable the platform (i.e., more CPU cores, better I/O, etc.), the more throughput is achieved through a higher number of worker processes. However, the only reliable approach is to test performance with various settings such as 1, 2, 4, 8, and 12, and in many cases the best performance may be gained with just one worker process because no process marshalling is required.
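The tradeoff described above can be sketched with a toy cost model (the formula and all numbers are illustrative assumptions, not measurements of eNcore itself): workers divide the per-event work, but every event shipped to a worker pays a marshalling cost that a single in-process worker avoids.

```python
def estimated_runtime_ms(n_events, work_ms, marshal_ms, workers):
    """Toy cost model for multi-process event handling.

    Workers divide the per-event parsing work, but every event shipped
    to a worker pays a serial marshalling cost. With a single worker the
    events stay in-process, so no marshalling is needed.
    """
    if workers == 1:
        return n_events * work_ms
    return n_events * (work_ms / workers + marshal_ms)

# Cheap per-event work: marshalling overhead dominates, so 1 worker wins.
cheap = {w: estimated_runtime_ms(1_000_000, 0.01, 0.02, w) for w in (1, 4, 12)}
# Expensive per-event work: parallelism pays for the overhead.
heavy = {w: estimated_runtime_ms(1_000_000, 1.0, 0.02, w) for w in (1, 4, 12)}

print(min(cheap, key=cheap.get))  # 1  -> fewer workers is faster here
print(min(heavy, key=heavy.get))  # 12 -> more workers is faster here
```

This is why the guide recommends actually benchmarking several settings: where the crossover sits depends entirely on the platform's per-event cost.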
One scenario for testing is to:
1. Disable the add-on's Data Input in Splunk, because the same events will be requested multiple times during the testing.
2. Configure a set number of workerProcesses (such as 8) and then start eNcore with a start parameter of 0 (for genesis) or at least an old start time.
3. Request connection events from the FMC (or in some other way request the FMC to send millions of backlogged events).
4. Observe the event rate reported by the monitor process in the estreamer.log file.
5. Repeat the test with a different number of workerProcesses.
6. When the optimal number has been determined, set the workerProcesses to that number and enable the add-on's Data Input to resume production operations.
An example of the workerProcesses configuration in the estreamer.conf file is shown here:
"workerProcesses": 12


Explorer

Hello Douglas/teunlaan,

First of all thanks for your responses.

@teunlaan
All the queues are filling up.

@douglashurd
We have tried changing the workerProcesses parameter to 12, but the issue persists:

[screenshot: HF queue status]

Speaking of hardware resources, the HF has 12 GB of RAM and 12 CPU cores.

I don't know exactly how many logs are ingested, because when we turn on log delivery the queues go to 100% and all the processes become unstable.

Is there any extra tuning to do?

thanks!


Ultra Champion

@lightech1, you can enlarge the queues ...
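Queue sizes can be raised per pipeline queue in server.conf on the HF, though bigger buffers only absorb bursts; they don't fix a sustained throughput gap. A sketch (the stanza names are standard, but the sizes below are illustrative; tune them to your memory budget):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[queue=parsingQueue]
maxSize = 10MB

[queue=indexQueue]
maxSize = 10MB
```

Restart the HF after changing queue sizes for the settings to take effect.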


Contributor

If all your queues are full, including your output queue, look at:
1) the output rate of your HF (should be unlimited), or whether you have a network issue
2) whether there is an issue with the load on your indexer(s) so that it can't process the data
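For point 1, the HF's forwarding throughput cap lives in limits.conf, where 0 means unlimited (the path shown is the usual local override location; adjust for your deployment):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
maxKBps = 0
```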


Explorer

Hello teunlaan,

1) It's set to 0, so maxKBps is unlimited.

2) No, on the indexer side I don't see blocked messages or performance problems.

Thanks!!


Builder

How many CPU cores have you allocated to the TA? What sort of event rate does your Firepower deployment generate? This could be a simple resource issue.


Contributor

Also, which queues are filling? (parsing, agg, typing, index, output)
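One way to see which queues are blocked is the group=queue lines Splunk writes to metrics.log in the _internal index. A small parser sketch (the sample line follows the usual metrics.log field layout, but verify field names against your own logs):

```python
import re

# A typical group=queue line from metrics.log (layout assumed, not copied
# from the poster's system).
sample = ("01-01-2024 12:00:00.000 +0000 INFO Metrics - group=queue, "
          "name=parsingqueue, max_size_kb=6144, current_size_kb=6144, "
          "largest_size=1200, smallest_size=0")

def queue_fill(line):
    """Return (queue_name, fill_percent) for one group=queue metrics line."""
    name = re.search(r"name=(\w+)", line).group(1)
    cur = float(re.search(r"current_size_kb=(\d+)", line).group(1))
    mx = float(re.search(r"max_size_kb=(\d+)", line).group(1))
    return name, 100.0 * cur / mx

print(queue_fill(sample))  # ('parsingqueue', 100.0)
```

Running this over a few minutes of metrics.log shows which queue fills first; the first queue at 100% (here parsing at 6144/6144 KB) is usually the bottleneck, and everything upstream of it backs up behind it.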
