Deployment Architecture

Skipping indexing of internal audit events


I have 3 indexers in a cluster, and I recently changed the indexing path from the default to a different mount point. Initially everything was working fine, but after 2 days I started getting a pop-up on the search head with the errors pasted below:

Search peer Sample_Indexer03 has the following message: Audit event generator: Now skipping indexing of internal audit events, because the downstream queue is not accepting data. Will keep dropping events until data flow resumes. Review system health: ensure downstream indexing and/or forwarding are operating correctly.

Search peer Sample_Indexer03 has the following message: Index Processor: The index processor has paused data flow. Too many tsidx files in idx=_introspection bucket="/media/data/hot/_introspection/db/hot_v1_11" , waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised. Learn more.

From Internal logs I could see the below errors:

ERROR SplunkOptimize - (child_18426__SplunkOptimize) optimize finished: failed, see rc for more details, dir=/media/data/hot/_introspection/db/hot_v1_11, rc=-13 (unsigned 243), errno=2
host = Sample_Indexer03 source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd

ERROR SplunkOptimize - (child_18426__SplunkOptimize) merge failed for path=/media/data/hot/_introspection/db/hot_v1_11 rc=-13 wrc=-13 errno=2 file=/media/data/hot/_introspection/db/hot_v1_11/1530062373-1530062373-7514170618120332262.tsidx hint=invalid magic]
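One quick way to gauge how far behind splunk-optimize has fallen is to count the .tsidx files in the affected bucket. A minimal shell sketch (count_tsidx is a hypothetical helper; the bucket path is the one from the error above):

```shell
# Hypothetical helper: count the .tsidx files in a bucket directory.
# A large and growing count means splunk-optimize is not keeping up
# (or, as in this case, is failing outright).
count_tsidx() {
    ls "$1"/*.tsidx 2>/dev/null | wc -l
}

# Example against the bucket named in the error message:
# count_tsidx /media/data/hot/_introspection/db/hot_v1_11
```

Note that errno=2 here is "No such file or directory" and "invalid magic" suggests a damaged tsidx file, which is also worth checking on that mount point.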

I have checked the other 2 indexers and they are fine; only this one has the issue. The settings are the same on all 3, and there isn't any disk space issue as such.
Any help would be appreciated.


Splunk Employee


This message means the indexer is busy and its queues are full.
When that happens, internal Splunk events (such as audit events) are dropped in order to dedicate all resources to indexing incoming data.
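To see which queue is actually blocked, you can search the indexer's own metrics. A sketch of such a search (group=queue and blocked=true are standard fields in metrics.log, though exact output varies by version):

```
index=_internal host=Sample_Indexer03 source=*metrics.log* group=queue blocked=true
| stats count by name
```

Whichever queue name dominates the results (e.g. indexqueue) tells you where data flow is stalling.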

Please check whether your Splunk instance is forwarding to itself.
On the indexer: Is a receiving port set? [okay]
Is the indexer forwarding? Where to? If it is forwarding to itself, then that's the problem!
You can find both of these settings in the UI under Settings >> Forwarding and receiving.
Or, you can find the receiving settings in inputs.conf and the forwarding settings in outputs.conf.
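For illustration, a self-forwarding misconfiguration would look something like the following (hypothetical stanzas; 9997 is just the conventional receiving port, and the server value is this thread's indexer name):

```
# inputs.conf on the indexer -- a receiving port is open:
[splunktcp://9997]
disabled = 0

# outputs.conf on the SAME indexer -- forwarding back to itself (the bug):
[tcpout:default-autolb-group]
server = Sample_Indexer03:9997
```

You can verify the effective settings from the CLI with `splunk btool inputs list splunktcp --debug` and `splunk btool outputs list tcpout --debug`, which also show which configuration file each setting comes from.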

Please accept this answer if it pointed you in the right direction.
