
Indexing congestion consistently happening



I have only just started using Splunk on a test server, and I am consistently getting "Skipped indexing of internal audit event, will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause indexer to block." The volume with the databases has 14 GB free, and Licensing states that I have only used 5% of my quota. Restarting Splunk did not help either.

Any assistance would be greatly appreciated.


yannk mentions the SOS app, but it has been deprecated; in version 6.3 and above it has been replaced by the DMC (Distributed Management Console). We are getting this error message too - any thoughts on a solution?
Audit event generator: Now skipping indexing of internal audit events, because the downstream queue is not accepting data. Will keep dropping events until data flow resumes. Review system health: ensure downstream indexing and/or forwarding are operating correctly.



What is the resolution?

Path Finder

I am having this issue as well. It's on an SH that has some very full parsing queues. The other SHs in my search head pool do not have full queues at all, and all of my indexers' queues are clear.
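For what it's worth, this is the search I used to check the queues (a sketch against the internal metrics; the group=queue fields in metrics.log are standard, the time span is arbitrary):

    index=_internal source=*metrics.log group=queue
    | eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
    | timechart span=5m perc90(fill_pct) by name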

Suggestions on where to look next?


Splunk Employee

This also happens when the queues are full and the indexers are overloaded.


Splunk Employee

The exact message is a banner displayed on the search-head.

"Skipped indexing of internal audit event will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause indexer to block"

This message means that the indexers are busy and the queues are full. The internal Splunk logs (such as audit) are therefore dropped in order to dedicate all performance to indexing: "Your data is more important to us than our own logs."

If this happens once in a while, it may be a peak in your data volume; if it happens constantly, it is a performance issue. To troubleshoot, install the SOS app and check the "Indexing Performance" view on your indexers.
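If you cannot use SOS, a quick alternative is to search the internal metrics for blocked queues directly (a sketch; blocked=true is the standard marker in metrics.log queue events):

    index=_internal source=*metrics.log group=queue blocked=true
    | stats count by host, name

A queue that shows up here persistently (e.g., indexqueue or typingqueue) points at the pipeline stage that cannot keep up.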

The root causes can be:

  • The forwarders are sending a large volume of events in bursts -> you may change your monitoring or log rotation, and adjust the thruput limit to spread the load over a longer period (the [thruput] stanza in limits.conf; see the sketch after this list).
  • Too-large metadata files -> fixed in 5.*, upgrade the indexers.
  • The indexer capacity is too low for your volume -> load balance over more indexers, or use faster CPUs and faster disks.
  • The format of the data requires more index-time processing (multiline events, invalid timestamps, line-breaking rules, wrong sourcetype, non-optimized sourcetype) -> check your props and transforms, and the internal splunkd.log (see the props.conf sketch below).
  • Costly index-time rules are set up (SEDCMD, index-time field extractions, complex routing rules, bad regexes) -> check your props and transforms.
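For the thruput limit mentioned in the first point, the forwarder-side cap lives in limits.conf. A minimal sketch (the [thruput] stanza and maxKBps setting are standard; the value is only an example):

    # limits.conf on the forwarder
    [thruput]
    # cap output at 512 KB/s so bursts are spread over a longer period
    maxKBps = 512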
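For the data-format points, defining the sourcetype explicitly avoids the expensive automatic line merging and timestamp searching. A minimal props.conf sketch (the sourcetype name is hypothetical; the settings are standard):

    # props.conf on the indexers (or heavy forwarders)
    [my:custom:sourcetype]
    # events are single lines; do not merge lines back together
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    # timestamp is at the start of the event, e.g. 2013-05-14 09:30:00
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19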

Splunk Employee

If your disk is slow (e.g., a network volume), this will also happen.
