Saturated Event-Processing Queues

msplunk33
Path Finder

I am getting this error frequently and I can see the index queue is at 99% for many indexers in the cluster. During this period indexing is considerably slow and logs are not being ingested for many sourcetypes. I am not able to figure out what is causing this issue (which source). After some time it goes back to normal. I am worried this could cause issues in the future.

richgalloway
SplunkTrust

In the Monitoring Console (MC), select Indexing -> Indexing Performance: Instance. Then scroll down to the "Estimated Indexing Rate Per Sourcetype" panel. Use the dropdown menu to split the graph by various attributes until you find the source of the problem.
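
If you prefer to query the underlying data directly, a rough equivalent of that panel is a search over the indexers' metrics.log. This is a minimal sketch, assuming default internal logging; in the per_sourcetype_thruput group, the field "series" holds the sourcetype name:

index=_internal source=*metrics.log* sourcetype=splunkd group=per_sourcetype_thruput
| timechart span=1m per_second(kb) by series useother=false

Run it over the window of the slowdown and look for a sourcetype whose rate jumps just before the queues fill.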

---
If this reply helps you, an upvote would be appreciated.

richgalloway
SplunkTrust

A full queue is caused by a slow-down after the queue or a sudden increase before the queue.
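
To see which queues are saturating and when, you can chart queue fill from metrics.log. A minimal sketch, assuming the default metrics.log queue fields (current_size_kb, max_size_kb):

index=_internal source=*metrics.log* sourcetype=splunkd group=queue name=indexqueue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=1m perc90(fill_pct) by host

If indexqueue is pegged and the queues ahead of it (parsing, agg, typing) are full as well, the bottleneck is usually at the end of the pipeline, i.e., disk writes.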

Check your storage system to make sure there is nothing that is causing the I/O rate to drop significantly, like an AV scan.  Splunk should not be sharing storage with other high-I/O applications like a DB.

A periodic surge in incoming data can also lead to backed-up queues.  Use the monitoring console to see what sources contributed a lot of data during the period of the slowdown.
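
If you would rather use a search than the MC panel, a rough equivalent is to total the per_source_thruput metrics over the slowdown window. A minimal sketch, again assuming default internal logging; here "series" holds the source name:

index=_internal source=*metrics.log* sourcetype=splunkd group=per_source_thruput
| stats sum(kb) as total_kb by series
| sort - total_kb
| head 20

Note that metrics.log samples only the top contributors per interval, so treat these figures as estimates rather than exact totals.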

---
If this reply helps you, an upvote would be appreciated.

msplunk33
Path Finder

@richgalloway 

"Use the monitoring console to see what sources contributed a lot of data during the period of the slowdown."

I could not find that option in the Monitoring Console. Could you give me the menu path in the Monitoring Console, or a screenshot?
