Splunk Enterprise

Getting data in

_pravin
Contributor

Hi,

I have a scenario where the heavy forwarder's queues fill up and data is slow to appear in Splunk when the indexers are running at full capacity (i.e. the indexers are overwhelmed with data from the forwarders).
Is there a way to separate the queues for the HF and the forwarders, so that each has its own queue for incoming data?

Thanks in advance,
Pravin


livehybrid
SplunkTrust

Hi @_pravin 

The only way you could separate queues for different forwarders would be to create multiple inputs on different ports and have certain forwarders send to certain ports, but in reality this isn't going to give you much of an improvement or speed up ingestion.
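If you did want to try that, a minimal sketch might look like the following (the ports and server name here are hypothetical, not a recommendation):

```
# inputs.conf on the heavy forwarder -- one input stanza per port
[splunktcp://9997]
# "priority" forwarders are configured to send here

[splunktcp://9998]
# all other forwarders send here
```

```
# outputs.conf on a "priority" universal forwarder
[tcpout]
defaultGroup = hf_priority

[tcpout:hf_priority]
server = hf.example.com:9997
```

Note that the HF's downstream queues toward the indexers are shared per pipeline, not per input, which is why this separation doesn't really help once the indexers themselves are the bottleneck.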

I would focus on why the indexers are overloaded. You could approach this by:

  • Assessing any heavy parsing regexes in transforms.conf
  • Increasing the number of indexers in your cluster (horizontal scaling)
  • Adding resources to your indexers (vertical scaling)
  • Reviewing any long-running searches on your Splunk stack
  • Potentially increasing parallelIngestionPipelines (default of 1) to 2, if your servers have adequate resources available

See https://help.splunk.com/en/splunk-enterprise/get-started/deployment-capacity-manual/9.3/performance-... and https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.1/man... for more information.
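For the last option, the setting lives in server.conf on each indexer (and on the HF, if you want a second pipeline there too). A sketch, assuming you have verified there are spare CPU cores first:

```
# server.conf -- run two independent ingestion pipelines
# (each pipeline gets its own set of queues and uses additional cores)
[general]
parallelIngestionPipelines = 2
```

A restart of splunkd is required for the change to take effect.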

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing



richgalloway
SplunkTrust

If the HFs are waiting for the indexers, then the problem is on the indexers. Fix the indexers and the HFs should be fine again.

It's not clear what you mean by "separate queues for HF and the forwarders", since HFs *are* forwarders.
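To confirm which queue is actually the bottleneck, you can chart queue fill ratios from the indexers' metrics.log (a sketch using Splunk's standard internal index and queue metrics):

```
index=_internal source=*metrics.log group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 2)
| timechart avg(fill_pct) by name
```

A queue that sits near 100% (commonly indexqueue or typingqueue) points at where ingestion is backing up.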

---
If this reply helps you, Karma would be appreciated.