Splunk Enterprise

Getting data in

_pravin
Contributor

Hi,

I have a scenario where the heavy forwarder queues fill up and data doesn't appear quickly in Splunk when the indexers are running at full capacity (i.e. the indexers are overwhelmed with data from the forwarders).
Is there a way to separate the queues for the HF and the other forwarders, so that each has its own queue for the incoming data?

Thanks in advance.

Thanks,
Pravin


livehybrid
SplunkTrust

Hi @_pravin 

The only way you could separate queues for different forwarders would be to create multiple inputs on different ports and have certain forwarders send to certain ports, but in reality this isn't going to give you much of an improvement or speed up ingestion.
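
For reference, a minimal sketch of what that would look like (the ports, host names and output group name below are placeholders, not values from your environment):

    # inputs.conf on the indexers - one listening port per forwarder group
    [splunktcp://9997]
    disabled = 0

    [splunktcp://9998]
    disabled = 0

    # outputs.conf on the heavy forwarders - send them to the second port
    [tcpout]
    defaultGroup = hf_out

    [tcpout:hf_out]
    server = idx01.example.com:9998, idx02.example.com:9998

Bear in mind that both inputs still feed the same parsing and indexing queues on the indexer, which is why splitting ports rarely helps on its own.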

I would focus on why the indexers are overloaded. Things worth assessing include:

  • heavy parsing regexes in transforms.conf
  • increasing the number of indexers in your cluster (horizontal scaling)
  • adding resources to your existing indexers (vertical scaling)
  • long-running searches on your Splunk stack
  • increasing parallelIngestionPipelines (default 1) to 2, if your servers have adequate resources available (sketched below)

See https://help.splunk.com/en/splunk-enterprise/get-started/deployment-capacity-manual/9.3/performance-... and https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.1/man... for more information.
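
If you do go for parallel pipelines, it is a one-line change in server.conf on the indexers (and optionally the HF); treat this as a sketch and only apply it if the hosts have spare CPU and I/O headroom, since each extra pipeline consumes roughly one more full set of pipeline queues and cores:

    # server.conf - requires a Splunk restart to take effect
    [general]
    parallelIngestionPipelines = 2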

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

richgalloway
SplunkTrust

If the HFs are waiting for the indexers, then the problem is on the indexers. Fix the indexers and the HFs should be fine again.

It's not clear what you mean by "separate queues for HF and the forwarders" since HFs *are* forwarders.
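
One way to confirm where the bottleneck sits is to look at queue fill on the indexers in the internal metrics. A rough sketch (the host filter is a placeholder; adjust it to match your indexer names):

    index=_internal source=*metrics.log* group=queue host=idx*
    | eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
    | timechart span=5m max(fill_pct) by name

If indexqueue is the one sitting near 100%, the indexers are typically struggling with disk I/O; if parsingqueue or typingqueue fill first, heavy parsing (e.g. transforms.conf regexes) is the more likely culprit.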

---
If this reply helps you, Karma would be appreciated.