Dear All,
I am a Splunk admin and we are seeing significantly low data throughput on some of our indexes. There are many forwarders sending logs to the affected indexes, so I cannot check limits.conf or the thruput settings on each and every forwarder. I have also verified that no changes were made at the forwarder sites that would explain the drop. Can anyone walk me through the exact steps I should look into?
Thanks in Advance!
I'd start by checking queues. Queue backups can occur on indexers or forwarders. You can check the indexer queues in your Monitoring Console under Indexing-->Performance-->Indexing Performance: Deployment. You can check for filling queues on your forwarders with this search:
index=_internal group=queue | eval percfull=((current_size_kb/max_size_kb)*100) | search percfull>80 | dedup host, name | table _time host name current_size_kb max_size_kb
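If you also want to see which queues are outright blocked rather than just filling, a variant against the same metrics.log data can help; this is a sketch that assumes the standard blocked=true flag in the queue metrics events:

index=_internal source=*metrics.log* group=queue blocked=true | stats count by host, name | sort - count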
You can check whether forwarders are hitting their thruput limit by searching for events like this:
INFO TailingProcessor - Could not send data to output queue (parsingQueue), retrying...
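To find which forwarders are logging that warning, a search along these lines should do it (the component and message text are taken from the event above):

index=_internal sourcetype=splunkd component=TailingProcessor "Could not send data to output queue" | stats count by host | sort - count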
Hosts that throw that event may be candidates for increasing maxKBps in limits.conf.
(https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/Troubleshootingeventsindexingdel...)
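If you do decide to raise the limit, here's a minimal sketch of the forwarder-side limits.conf stanza; the value shown is only an example (0 removes the cap entirely, and the universal forwarder default is 256):

[thruput]
maxKBps = 512

Since you mentioned you can't touch every forwarder by hand, pushing this out in an app via the deployment server is the usual approach.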
How many forwarders? How many indexers? Does the MC show any backed up queues?