Splunk Search

Index feed has low throughput


Dear All,

I am a Splunk admin and we are seeing a significant drop in throughput on some of our indexes. Many forwarders send logs to these indexes, so I cannot check limits.conf or the thruput settings on each and every forwarder. I have also verified that no changes were made at the forwarder sites that would explain the drop. Can anyone walk me through the exact steps I should look into?

Thanks in Advance!



I'd start by checking queues. Queue backup can occur on indexers or forwarders. You can check indexer queues in the Monitoring Console under Indexing --> Performance --> Indexing Performance: Deployment. You can check for filling queues on your forwarders with this search:

index=_internal group=queue
| eval percfull=((current_size_kb/max_size_kb)*100)
| search percfull>80
| dedup host, name
| table _time host name current_size_kb max_size_kb

You can check for thruput limiting on your forwarders by searching for events like this:

INFO TailingProcessor - Could not send data to output queue (parsingQueue), retrying...
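As a sketch, a search along these lines should total those warnings per forwarder so you can see which hosts hit the output queue limit most often (the component field name is as extracted from _internal splunkd events):

```
index=_internal source=*splunkd.log component=TailingProcessor "Could not send data to output queue"
| stats count by host
| sort - count
```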

Hosts that throw that event may be candidates for increasing maxKBps in the [thruput] stanza of limits.conf.
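For example, on a universal forwarder the setting lives in the [thruput] stanza of limits.conf (the default on UFs is 256 KBps; 0 disables the limit). A sketch, with the actual value being site-specific:

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[thruput]
# Raise the default 256 KBps cap; 0 means unlimited.
maxKBps = 1024
```

Restart the forwarder after changing this, or push it via a deployment server app so you don't have to touch each host individually.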



How many forwarders? How many indexers? Does the MC show any backed up queues?
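If you'd rather check from search than from the MC, here is a sketch that counts blocked-queue events per host and queue, assuming the blocked=true flag that metrics.log queue events carry when a queue is full:

```
index=_internal source=*metrics.log group=queue blocked=true
| stats count by host, name
| sort - count
```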

If this reply helps you, Karma would be appreciated.