Splunk Enterprise

Minimum data pipeline between HWF and indexer

ips_mandar
Builder

Hi, I have a heavy forwarder in my domain and an indexer in a hybrid cloud environment. I want to move parsed data from the heavy forwarder to the indexer. What is the minimum data pipeline (bandwidth) required between the heavy forwarder and the indexer, so that I can ask my network team to provide that much?

Thanks, 


ips_mandar
Builder

If I understand correctly, there is no need to set [thruput] in limits.conf because it defaults to 0, i.e. unlimited.
And data will move from the HWF to the indexer via port 9997.
So apart from opening ports 9997 and 8089 between the HWF and the indexer, is there anything else I need to ask the network team for?
Also, is my understanding correct that there is nothing related to the data ingestion pipeline that the network team can configure, and that it is Splunk's part to handle? @thambisetty
Thanks


thambisetty
SplunkTrust

"If I understand correctly, there is no need to set [thruput] in limits.conf because it defaults to 0, i.e. unlimited.
And data will move from the HWF to the indexer via port 9997."

Yes, correct.

You need to open only 9997 from the HF to the indexer, not 8089.
"So apart from opening ports 9997 and 8089 between the HWF and the indexer, is there anything else I need to ask the network team for?
And is my understanding correct that there is nothing related to the data ingestion pipeline that the network team can configure, and that it is Splunk's part to handle?"

You need to look at your internet link bandwidth. For example, if your HF is restricted by the network admin to transfer only 5 MB/s, then you can only send 5 MB/s even if your HF is receiving more than 5 MB/s.
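
For reference, here is a minimal sketch of the forwarding path discussed above; the group name and indexer hostname are placeholders, and 9997 is assumed to be the receiving port you enable on the indexer:

# outputs.conf on the heavy forwarder (group name and hostname are placeholders)
[tcpout]
defaultGroup = cloud_indexers

[tcpout:cloud_indexers]
server = idx1.example.com:9997

# inputs.conf on the indexer: listen for forwarded (cooked) data on 9997
[splunktcp://9997]
disabled = 0

With this in place, the only data-path requirement between the two hosts is TCP 9997; 8089 is the management port and is not needed for forwarding events.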

————————————
If this helps, give a like below.

ips_mandar
Builder

@thambisetty Is there any way, by looking at the heavy forwarder, to understand how much bandwidth is required to move data from the heavy forwarder to the indexer efficiently, so that I can ask the network admin team to provide that much data pipeline?


thambisetty
SplunkTrust

It completely depends on how much thruput your HF receives. For example, if your HF is receiving 2 MB/s, then you need no less than 2 MB/s to transfer events in real time.
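
One way to estimate that, assuming your HF forwards its _internal index and the default metrics.log fields are in place, is a search like the following (the host value is a placeholder for your HF's hostname):

index=_internal host=<your_hf> source=*metrics.log* group=per_host_thruput
| timechart span=1h sum(kb) AS total_KB
| eval avg_KBps = round(total_KB / 3600, 2)

The peak values of avg_KBps (or the instantaneous_kbps field in the group=thruput events from the HF) give a rough lower bound for the bandwidth to request from the network team.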

————————————
If this helps, give a like below.

ips_mandar
Builder

Thanks @thambisetty.
Which queue in Splunk do I need to configure to increase the data pipeline between the HWF and the indexer?
I ask because I receive bursts of data rather than a steady stream.


thambisetty
SplunkTrust

limits.conf on the HF:

[thruput]

maxKBps = <integer>
* The maximum speed, in kilobytes per second, that incoming data is
  processed through the thruput processor in the ingestion pipeline.
* To control the CPU load while indexing, use this setting to throttle
  the number of events this indexer processes to the rate (in
  kilobytes per second) that you specify.
* NOTE:
  * There is no guarantee that the thruput processor
    will always process less than the number of kilobytes per
    second that you specify with this setting. The status of
    earlier processing queues in the pipeline can cause
    temporary bursts of network activity that exceed what
    is configured in the setting.
  * The setting does not limit the amount of data that is
    written to the network from the tcpoutput processor, such
    as what happens when a universal forwarder sends data to
    an indexer.
  * The thruput processor applies the 'maxKBps' setting for each
    ingestion pipeline. If you configure multiple ingestion
    pipelines, the processor multiplies the 'maxKBps' value
    by the number of ingestion pipelines that you have
    configured.
  * For more information about multiple ingestion pipelines, see
    the 'parallelIngestionPipelines' setting in the
    server.conf.spec file.
* Default (Splunk Enterprise): 0 (unlimited)
* Default (Splunk Universal Forwarder): 256
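
As a concrete illustration (the values below are examples, not recommendations), the stanza on the HF might look like this, optionally combined with additional ingestion pipelines in server.conf to help absorb bursts:

# limits.conf on the heavy forwarder
[thruput]
# 0 keeps the Splunk Enterprise default (unlimited); set a KB/s value only if you need to throttle
maxKBps = 0

# server.conf on the heavy forwarder (optional; 2 pipelines is an example value)
[general]
parallelIngestionPipelines = 2

Remember that maxKBps applies per ingestion pipeline, so with two pipelines the effective cap is twice the configured value.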


————————————
If this helps, give a like below.