Good morning,
I am suddenly receiving this error and am not able to index:
skipped indexing of internal audit event will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause indexer to block
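For reference, the "check disk space" hint in that message refers to Splunk's free-space floor: when free space on an indexing (or dispatch) volume falls below the threshold, indexing pauses. A sketch of the relevant server.conf setting (the setting name is from server.conf; the value shown is only an example, not a recommendation):

```ini
# server.conf -- indexing pauses when free disk space on a monitored
# volume drops below this threshold (in MB); example value only
[diskUsage]
minFreeSpace = 5000
```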
The other day I received this error:
Applying indexing throttle for defaultdb\db because bucket has too many tsidx files, is your splunk-optimize working?
I have recently upgraded from 4.1.3 to 4.1.5. There was no immediate change but I did start using FSChange to monitor some directories.
I removed the FSChange stanzas I had added to inputs.conf and restarted, but I am still having the issue, though the warning has moved back to the second error.
In splunkd.log I see:
10-15-2010 08:20:18.131 ERROR DispatchCommand - Failed to start the search process.
10-15-2010 08:20:18.162 WARN DispatchCommand - The system is approaching the maximum number of historical searches that can be run concurrently. current=7 maximum=8
10-15-2010 08:20:18.193 ERROR DispatchCommand - Failed to start the search process.
10-15-2010 08:20:19.850 ERROR DispatchCommand - The maximum number of historical concurrent system-wide searches has been reached. current=8 maximum=8 Search not executed! SearchId=scheduler__nobody__windows_d2luX2V2ZW50bG9nX2NvdW50X3N1bV9pbmRleA_at_1287148800_967218771
10-15-2010 08:20:19.896 ERROR SearchScheduler - The maximum number of historical concurrent system-wide searches has been reached. current=8 maximum=8 Search not executed! SearchId=scheduler__nobody__windows_d2luX2V2ZW50bG9nX2NvdW50X3N1bV9pbmRleA_at_1287148800_967218771
10-15-2010 08:20:23.193 WARN timeinvertedIndex - splunk-optimize failed to start for index D:\Splunk_Data\var\defaultdb\db\hot_v1_16 : The session was canceled.
10-15-2010 08:20:23.193 WARN timeinvertedIndex - splunk-optimize failed to start for index D:\Splunk_Data\var\defaultdb\db\hot_v1_19 : The session was canceled.
I am not sure if it is related. Perhaps with all my alerts running at various intervals (10 min, 15 min, 20 min, 30 min) I am eclipsing 8 concurrent searches. Would that cause the errors about not indexing?
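(For what it's worth, the "maximum=8" in those messages is governed by limits.conf; the cap on concurrent historical searches is roughly max_searches_per_cpu times the number of CPUs, plus base_max_searches. A sketch of the stanza, with illustrative values only:)

```ini
# limits.conf -- cap on concurrent historical searches is approximately
# (max_searches_per_cpu x number_of_CPUs) + base_max_searches;
# the values below are examples, not recommendations
[search]
base_max_searches = 4
max_searches_per_cpu = 4
```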
I am currently not able to view any data for the last two days.
Thanks for any help!
Kevin
I had this problem recently, and it was for a tricky/silly reason. I got tired of the dispatch directory being tied to the root volume and getting "The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch" errors, so I created a 10 GB volume and mounted it over dispatch, BUT I neglected to make it writable by the user running splunkd (i.e. "splunk"). In that situation, 14 searches will appear to start, but none can actually run or complete, so everything hangs. I discovered the problem by going to the search head CLI and running this (because I could not search against _*):
tail -f $SPLUNK_HOME/var/log/splunk/*
Very quickly I saw logs like this:
10-21-2016 12:02:10.208 -0400 ERROR SearchScheduler - failed to rm -r /opt/splunk/var/run/splunk/dispatch/scheduler__nobody_c3BsdW5rX21vbml0b3JpbmdfY29uc29sZQ__RMD54740dfff07b17ef1_at_1477065699_0: No such file or directory
In other words, it was trying to remove files that it had never been able to create in the first place. Oops! A simple chmod later and all was good again.
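The trap above can be reproduced in miniature. This sketch uses a throwaway directory (the paths are fake, purely for illustration) to show what a read-only mount looks like to the service account, and the one-line permission fix:

```shell
# Reproduce the trap on a throwaway directory: a "dispatch" dir that the
# service account cannot write to. Paths here are fake, for illustration.
demo=$(mktemp -d)
mkdir "$demo/dispatch"

chmod 555 "$demo/dispatch"                 # read+execute only: the bad state
mode_before=$(stat -c %a "$demo/dispatch")
echo "before fix: $mode_before"

chmod 755 "$demo/dispatch"                 # the one-line fix
mode_after=$(stat -c %a "$demo/dispatch")
echo "after fix: $mode_after"

rm -rf "$demo"
```

(`stat -c %a` is the GNU/Linux form; on BSD/macOS use `stat -f %Lp`. In the real case a chown to the splunkd user may be needed instead of chmod.)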
Were you able to resolve this?
What did it end up being?
For a "down" scenario like this, it may be best to contact Splunk support. Email them with a link to this page, run the "splunk diag" utility, upload the diag file to your case, then call the Splunk support phone number to get in touch with someone quickly.
Things I would check: a large number of *.tsidx files in your buckets. You can simply run splunk-optimize /path/to/your/bucket to force that process to run. Best bet is to contact support and let them work through it with you, as there can be many causes of this.
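To see which buckets have piled up tsidx files, something like this sketch works. It demonstrates the counting on fake buckets created under a temp directory; in practice, set INDEX_DB to your index's db path (e.g. the defaultdb path from the throttle warning):

```shell
# Count .tsidx files per bucket. Demo uses fake buckets under mktemp;
# in real use, point INDEX_DB at your index path, e.g.
#   INDEX_DB=/opt/splunk/var/lib/splunk/defaultdb/db
INDEX_DB=$(mktemp -d)
mkdir -p "$INDEX_DB/hot_v1_16" "$INDEX_DB/hot_v1_19"
for i in 1 2 3; do touch "$INDEX_DB/hot_v1_16/$i.tsidx"; done
touch "$INDEX_DB/hot_v1_19/1.tsidx"

# Emit "<count> <bucket>" per bucket, busiest first
counts=$(for bucket in "$INDEX_DB"/hot_*; do
    n=$(find "$bucket" -maxdepth 1 -name '*.tsidx' | wc -l)
    echo "$n $(basename "$bucket")"
done | sort -rn)
echo "$counts"

rm -rf "$INDEX_DB"
```

Buckets with dozens of .tsidx files are the ones worth hitting with a manual splunk-optimize run.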
Also having the same issue. It's not a disk space problem on my end either. Any resolution to this?
What was the resolution to this?!?
I suggest calling Splunk support. Copied from the "Contact Us" web page: If you have purchased Enterprise Support, please call the Enterprise Support line at +1 415.848.8400, option 3.
Unfortunately, those suggestions did not work, and I have not heard back in a couple of days regarding my case with Splunk. Any other thoughts? I am completely down. If this goes on much longer I may have to downgrade back to 4.1.3, even if that means ripping out and re-doing the installation. Thanks.
I have an open case and am working with them. Disk space is fine. I disabled a scheduled saved search, but only the one I added just before this problem started.
I am looking into #2. Thanks very much for all your help.