
skipping indexing of internal audit events


I have 3 indexers in a cluster, and I recently changed the indexing path from the default to a different mount point. Everything was working fine initially, but after 2 days I started getting a pop-up on the search head with the errors pasted below:


1.
Search peer Sample_Indexer03 has the following message: Audit event generator: Now skipping indexing of internal audit events, because the downstream queue is not accepting data. Will keep dropping events until data flow resumes. Review system health: ensure downstream indexing and/or forwarding are operating correctly.

2.
Search peer Sample_Indexer03 has the following message: Index Processor: The index processor has paused data flow. Too many tsidx files in idx=_introspection bucket="/media/data/hot/introspection/db/hot_v1_11", waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.


From the internal logs I can see the errors below:


3.
ERROR SplunkOptimize - (child18426SplunkOptimize) optimize finished: failed, see rc for more details, dir=/media/data/hot/introspection/db/hot_v1_11, rc=-13 (unsigned 243), errno=2
host = Sample_Indexer03 source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd

4.
ERROR SplunkOptimize - (child18426SplunkOptimize) merge failed for path=/media/data/hot/introspection/db/hot_v1_11 rc=-13 wrc=-13 errno=2 file=/media/data/hot/introspection/db/hot_v1_11/1530062373-1530062373-7514170618120332262.tsidx hint=invalid magic
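To see whether splunk-optimize is falling behind, one can count the .tsidx files in the hot bucket named in the error. A minimal sketch: it builds a scratch directory standing in for the bucket so it runs anywhere; on the real indexer you would point BUCKET at the actual path instead.

```shell
# Stand-in for the hot bucket; on the indexer use something like
# BUCKET=/media/data/hot/introspection/db/hot_v1_11
BUCKET=$(mktemp -d)
touch "$BUCKET/a.tsidx" "$BUCKET/b.tsidx" "$BUCKET/c.tsidx"

# Count the .tsidx files in the bucket; a large and growing count means
# the merge helper is not keeping up.
COUNT=$(find "$BUCKET" -maxdepth 1 -name '*.tsidx' | wc -l | tr -d ' ')
echo "tsidx files: $COUNT"

# The merge helper ships with Splunk and can be run by hand to reproduce
# the rc=-13 error interactively:
#   $SPLUNK_HOME/bin/splunk-optimize -d "$BUCKET"
```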


I have checked the other 2 indexers and they are fine; only this one has the problem. The settings are the same on all 3, and there isn't any disk space issue as such.
Any help would be appreciated.


Re: skipping indexing of internal audit events

Splunk Employee

Hi,

This message means that the indexers are busy and their queues are full.
The internal Splunk logs (such as the audit events) are therefore dropped so that all capacity is dedicated to indexing incoming data.
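You can confirm which queues are blocked from metrics.log (typically $SPLUNK_HOME/var/log/splunk/metrics.log). A minimal sketch, shown against an inline sample line (hypothetical values) so the command is runnable anywhere; on the indexer, grep the real file instead:

```shell
# On a real indexer you would run:
#   grep 'group=queue' $SPLUNK_HOME/var/log/splunk/metrics.log | grep 'blocked=true'
# Sample metrics.log queue line so the pipeline below can be demonstrated:
SAMPLE='06-27-2018 12:00:01.000 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=500'

# Extract the queue name together with its blocked flag:
echo "$SAMPLE" | grep -o 'name=[a-z]*, blocked=true'
```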

Please check this: apparently your Splunk instance is forwarding to itself.
On the indexer, check two things:
Is a receiving port set? (That on its own is fine.)
Is the indexer forwarding, and if so, where? If it is forwarding to itself, that's the problem!
You can find both of these settings in the UI under Settings >> Forwarding and receiving.
Alternatively, the receiving settings are in inputs.conf and the forwarding settings in outputs.conf.
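For example, a self-forwarding loop would look like this in the configuration files (the port and stanza name here are illustrative, not taken from this cluster):

```
# inputs.conf on the indexer -- a receiving port, which is fine by itself:
[splunktcp://9997]
disabled = 0

# outputs.conf on the SAME indexer -- forwarding back to its own receiving
# port, which creates the loop described above:
[tcpout:selfloop]
server = Sample_Indexer03:9997
```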

Please accept this answer if it pointed you in the right direction.
