Splunk Enterprise

Getting data in

_pravin
Contributor

Hi,

I have a scenario where the heavy forwarder's queues fill up and data doesn't appear in Splunk promptly when the indexers are running at full capacity (i.e. the indexers are overwhelmed with data from the forwarders).
Is there a way to separate the queues for the HF and the other forwarders, so that each has its own queue for the incoming data?

Thanks in advance.

Thanks,
Pravin

1 Solution

livehybrid
SplunkTrust

Hi @_pravin 

The only way you could separate queues for different forwarders would be to create multiple inputs on different ports and have certain forwarders send to certain ports, but the reality is this isn't going to give you much of an improvement or speed up ingestion.

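For illustration, here is a minimal sketch of what that port separation could look like (the port numbers, output group name and host names are made-up examples, not settings from your environment):

  # inputs.conf on the indexers - open a second splunktcp input
  [splunktcp://9997]
  disabled = 0

  [splunktcp://9998]
  disabled = 0

  # outputs.conf on the HF you want isolated - send to the second port
  [tcpout]
  defaultGroup = hf_dedicated

  [tcpout:hf_dedicated]
  server = idx1.example.com:9998, idx2.example.com:9998

Note that every splunktcp input still feeds the same downstream queues on the indexer, which is why this rarely buys you much.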
I would focus instead on why the indexers are overloaded. Things worth looking at:

  • Heavy parsing regexes in transforms.conf
  • Increasing the number of indexers in your cluster (horizontal scaling)
  • Adding resources to your indexers (vertical scaling)
  • Long-running searches on your Splunk stack
  • Increasing parallelIngestionPipelines (default 1) to 2, if your servers have adequate resources available (see the sketch below)

See https://help.splunk.com/en/splunk-enterprise/get-started/deployment-capacity-manual/9.3/performance-... and https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.1/man... for more information.

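If you do experiment with pipeline parallelization, the setting lives in server.conf and takes effect after a restart. A minimal sketch, assuming your hosts have spare CPU cores (each extra pipeline costs roughly an additional core plus its own set of queues and memory):

  # server.conf on each indexer (and optionally on the HF)
  [general]
  parallelIngestionPipelines = 2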
🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

richgalloway
SplunkTrust

If the HFs are waiting for the indexers, then the problem is on the indexers. Fix the indexers and the HFs should be fine again.

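One way to confirm that is to chart queue fill percentages from the indexers' internal metrics. A rough example search (field and group names as they appear in metrics.log; scope it to your own indexer hosts):

  index=_internal source=*metrics.log group=queue
  | eval fill_pct = round(current_size_kb / max_size_kb * 100, 2)
  | timechart avg(fill_pct) by name

Queues that sit near 100% show where the pipeline is backing up, e.g. the indexqueue points at disk/indexing throughput, while the typingqueue points at regex-heavy parsing.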
It's not clear what you mean by "separate queues for HF and the forwarders" since HFs *are* forwarders.

---
If this reply helps you, Karma would be appreciated.