Getting Data In

Can we increase parallelIngestionPipelines in a heavy Forwarder?

Super Champion

folks,
Have anyone tried configuring parallelIngestionPipelines on Heavy Forwarder? We have plenty of room for cpu/memory on heavy forwarder. Hence wanted to check if parallelIngestionPipelines can be increased in a Heavy Forwarder to forward to index cluster.

I can see documentation for the Universal Forwarder and the Indexer, but not for the heavy forwarder.


Re: Can we increase parallelIngestionPipelines in a heavy Forwarder?

Motivator

Yes, you can. Refer to the documentation below on Index Parallelization:

http://docs.splunk.com/Documentation/Splunk/7.0.2/Indexer/Pipelinesets

How forwarders use multiple pipeline sets

When you enable multiple pipeline sets on a forwarder, each pipeline handles both data input and output. In the case of a heavy forwarder, each pipeline also handles parsing.

And we have had this setting set to 2, working as expected, for a long time.
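For reference, this is roughly what the change looks like; a minimal sketch of server.conf on the heavy forwarder, assuming the value 2 that worked for us (adjust to your own hardware, and note that each extra pipeline set consumes additional CPU and memory):

```
# $SPLUNK_HOME/etc/system/local/server.conf on the heavy forwarder
[general]
# Number of parallel ingestion pipeline sets (default is 1).
# Only raise this if the host has spare CPU/memory capacity.
parallelIngestionPipelines = 2
```

A restart of the forwarder is required for the setting to take effect.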


Super Champion

Thanks for the confirmation, @ansif.


Path Finder

We just enabled this on one of our Windows Event Collectors. It will take a few days to monitor the performance impact.


Path Finder

Just beware - if your HF collects only a single input, then parallel ingestion won't help. E.g. TCP or UDP syslog traffic arriving on a single port is not parallelized.


Path Finder

Can you explain this in more detail?

Currently we are facing an issue where we input HTTP and syslog on our HF, and mostly only one of our eight pipelines is used.


Path Finder

Yes.

What you are probably doing is what we did in the past: routing all syslog to your HFs on a single port, e.g. 514.
Splunk serves each port with a single pipeline, so if you use only one port for all your syslog, you will use only a single pipeline. A better option is to configure different ports for different syslog sources (e.g. 1514 for firewalls, 2514 for NLBs, 3515 for WAFs, etc.), BUT the BEST option is to direct all syslog traffic to a syslog-ng service that writes the data into files, and then use a Universal Forwarder to monitor those files. This is by far the best solution for syslog data.
