Splunk Enterprise

Getting data in

_pravin
Contributor

Hi,

I have a scenario where the heavy forwarder's queues fill up and data doesn't appear quickly in Splunk when the indexers are running at full capacity (i.e. the indexers are overwhelmed with data from the forwarders).
Is there a way to separate the queues for the HF and the forwarders, so that each has its own queue for the incoming data?

Thanks in advance.

Thanks,
Pravin

1 Solution

livehybrid
SplunkTrust
SplunkTrust

Hi @_pravin 

The only way you could separate queues for different forwarders would be to create multiple inputs on different ports and have certain forwarders send to certain ports, but in reality this isn't going to give you much of an improvement or speed up ingestion.
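For what it's worth, a minimal sketch of that approach (the ports, output group names and hostname are made-up examples, not anything from your environment):

    # inputs.conf on the heavy forwarder - one splunktcp input per forwarder group
    [splunktcp://9997]
    disabled = 0

    [splunktcp://9998]
    disabled = 0

    # outputs.conf on the first group of universal forwarders
    [tcpout]
    defaultGroup = hf_port_9997

    [tcpout:hf_port_9997]
    server = hf.example.com:9997

    # outputs.conf on the second group of universal forwarders
    [tcpout]
    defaultGroup = hf_port_9998

    [tcpout:hf_port_9998]
    server = hf.example.com:9998

Bear in mind that with a single ingestion pipeline all of these inputs still feed the same parsing and output queues on the HF, which is why this on its own rarely speeds anything up.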

I would focus on why the indexers are overloaded. Things worth looking at:

  • Assess any heavy parsing regexes in transforms.conf.
  • Increase the number of indexers in your cluster (horizontal scaling).
  • Add resources to your existing indexers (vertical scaling).
  • Review any long-running searches on your Splunk stack.
  • Potentially increase parallelIngestionPipelines (default of 1) to 2 if your servers have adequate resources available; see the sketch after this list.

See https://help.splunk.com/en/splunk-enterprise/get-started/deployment-capacity-manual/9.3/performance-... and https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.1/man... for more information.
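As a rough sketch of that last point (this is the standard server.conf setting; the comments about headroom are my own caveats), the change would look like this on each indexer, followed by a splunkd restart:

    # server.conf - raise the number of ingestion pipeline sets from 1 to 2.
    # Each extra pipeline set consumes additional CPU cores and disk IOPS,
    # so only do this if the hosts have headroom.
    [general]
    parallelIngestionPipelines = 2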

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing


richgalloway
SplunkTrust
SplunkTrust

If the HFs are waiting for the indexers, then the problem is on the indexers. Fix the indexers and the HFs should be fine again.

It's not clear what you mean by "separate queues for HF and the forwarders" since HFs *are* forwarders.
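If it helps, one common way to confirm which side is blocked (a generic diagnostic search, not specific to your setup) is to look for blocked queues in metrics.log on both the indexers and the HF:

    index=_internal source=*metrics.log* group=queue blocked=true
    | stats count by host, name
    | sort - count

Queues that are consistently blocked on the indexers point to an indexing-side bottleneck, which then backs up into the HF queues.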

---
If this reply helps you, Karma would be appreciated.