Getting Data In

indexing congestion consistently happening

sventura15
Explorer

Hi,

I have only just started using Splunk on a test server, and I am consistently getting "skipped indexing of internal audit event will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause indexer to block." The volume with the databases has 14 GB free, and licensing states that I have used only 5% of my quota. Restarting Splunk did not seem to help either.

Any assistance would be greatly appreciated.

jonasm1
Explorer

yannK mentions the SOS app, but it has been deprecated; in version 6.3 and above it has been replaced by the DMC. We are getting this error message too - any thoughts on a solution for this issue?
Audit event generator: Now skipping indexing of internal audit events, because the downstream queue is not accepting data. Will keep dropping events until data flow resumes. Review system health: ensure downstream indexing and/or forwarding are operating correctly.
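If the DMC is not available, one way to see which queues are actually blocking is to search Splunk's own metrics.log. A minimal sketch, assuming the _internal index is being collected normally:

```
index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
```

The queue name that dominates the counts (e.g. indexqueue vs. typingqueue) suggests where in the pipeline the back-pressure is building up.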


k_harini
Communicator

What is the resolution?

rabitoblanco
Path Finder

I am having this issue as well. It's on an SH that has some very full parsing queues. The other SHs in my search head pool do not have full queues at all, and all of my indexers' queues are clear.

Suggestions on where to look next?
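To narrow it down to the one search head, you could chart the queue fill percentage for that host from metrics.log. A sketch, assuming the host name (my-searchhead here is a placeholder) and that your version emits the current_size_kb and max_size_kb fields:

```
index=_internal source=*metrics.log* group=queue host=my-searchhead
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart avg(fill_pct) by name
```

A queue pinned near 100% while the one downstream of it stays empty points at the processor between them as the bottleneck.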


uuppuluri_splun
Splunk Employee

This also happens when the queues are full and the indexers are overloaded.


yannK
Splunk Employee

The exact message is a banner displayed on the search-head.

"Skipped indexing of internal audit event will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause indexer to block"

The meaning of this message is that the indexers are busy and the queues are full.
Therefore the internal Splunk logs (like audit) are disabled in order to dedicate all performance to indexing: "Your data is more important to us than our own logs."

If this happens once in a while, it may be a peak of volume in your data; if it happens constantly, it is a performance issue.
To troubleshoot, install the SOS app and check the "indexing performance" view on your indexers. http://splunk-base.splunk.com/apps/29008/sos-splunk-on-splunk

The root causes can be:

  • the forwarders are sending a large volume of events in bursts -> you may change your monitoring or log rotation, and adjust the thruput limit to spread the load over a longer period (thruput in limits.conf).
  • too-large metadata files -> fixed in 5.*; upgrade the indexers.
  • the indexer capacity is too low for your volume -> load balance over more indexers, faster CPUs, faster disks.
  • the format of the data requires more index-time processing (multiline events, invalid timestamps, linebreaking rules, wrong sourcetype, non-optimized sourcetype) -> check your props and transforms, and the internal splunkd.log.
  • costly index-time rules are set up (SEDCMD, index-time field extractions, complex routing rules, bad regexes) -> check your props and transforms.
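For the first cause, a minimal limits.conf sketch on the forwarder side; the 256 KB/s cap is an illustrative assumption, not a recommendation, and should be tuned to your own volume:

```
# limits.conf on the forwarder
[thruput]
# Cap the forwarder's output at 256 KB/s so bursts are
# smoothed out over a longer period (0 means unlimited).
maxKBps = 256
```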

gkanapathy
Splunk Employee
Splunk Employee

If your disk is slow (e.g., a network volume), this will happen.
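A quick sanity check of sequential write speed on the volume holding the Splunk databases (the /tmp path is a placeholder; substitute a path on the volume you actually want to test, and note that conv=fdatasync is GNU dd, so this assumes Linux):

```shell
# Write 64 MB and report throughput. fdatasync forces the data
# to disk before dd reports, so the number reflects the disk
# rather than the page cache.
dd if=/dev/zero of=/tmp/splunk_io_test bs=1M count=64 conv=fdatasync
rm -f /tmp/splunk_io_test
```

Throughput in the low tens of MB/s on the index volume would be consistent with the slow-disk explanation above.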
