I am getting messages (in the messages section, not in Splunkd) that:
Search peer idxX.XXX has the following message: Received event for unconfigured/disabled/deleted index=XXX with source=XXX host=XXX sourcetype="sourcetype::syslog_nohost". So far received events from 1 missing index(es).
I am having trouble isolating which sources are causing this. I know that Splunk is binning these messages, and I have looked in _internal but am not able to get any further info on these. Are there any logs where I can see anything further about them? If I could graph the frequency of these messages over time to determine when this started, it would be a help.
This log is notoriously nondescript; the best way to get complete detail is to configure a LAST_CHANCE_INDEX.
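For anyone finding this later, here is a minimal sketch of what that can look like in indexes.conf on the indexers. This assumes a Splunk version recent enough to support the lastChanceIndex setting, and the index name last_chance is just an example; the target index must itself exist and be enabled.
# indexes.conf (sketch, not a complete config)
[default]
# route events destined for nonexistent/disabled indexes here instead of dropping them
lastChanceIndex = last_chance
# the catch-all index must exist and be enabled
[last_chance]
homePath = $SPLUNK_DB/last_chance/db
coldPath = $SPLUNK_DB/last_chance/colddb
thawedPath = $SPLUNK_DB/last_chance/thaweddb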
@Woodcock - A couple of ways around this ... creating the LAST_CHANCE_INDEX is a good overall solution. Also, I guess I could have just created the missing index in my case and captured/analyzed the data (DOH!). Thx.
I configure it for main and then only use main for testing in production. In other words, there should NEVER be anything in main, and if there is, it means that somebody goofed.
Hello @lennys26,
The source of this error is that one or more inputs.conf files specify an index that does not exist (or is not enabled) on your indexer.
You can look through inputs.conf using btool and see where that nonexistent index is specified.
more here: https://docs.splunk.com/Documentation/Splunk/6.6.0/Troubleshooting/Usebtooltotroubleshootconfigurati...
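For example, something along these lines on the host where the inputs are defined (a sketch; XXX stands in for the index name from your message, and the grep assumes a *nix host):
# list the effective input stanzas and which file each setting comes from
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i "index = XXX"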
You can look for the first event by searching for the message text literally and then piping to reverse (... | reverse) to see the first event; play with the time picker here.
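Something like this, for example (a sketch; the quoted string is just the literal message text):
index=_internal sourcetype=splunkd "received event for unconfigured/disabled/deleted index" | reverse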
Or you can pipe to timechart count (... | timechart count) and see a graph of error counts over time.
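For example (again a sketch; adjust the span and time range to taste):
index=_internal sourcetype=splunkd "unconfigured/disabled/deleted" | timechart span=1h count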
hope it helps
Hi @adonio -- Thanks. These are coming from our app client, so my dev team is looking for a bit of info as to which one/version/platform/etc. Anything from the raw log that could give us a pointer would help. Even if we could narrow this down to a time of day or a starting date, we could match it against a release schedule or something.
I have done a pure text search across all of _internal and see nothing at all, which is not what I would expect, unless this is because I am in Splunk Cloud...
Hello,
I did a quick test (Splunk version 6.2.1):
On messages I got:
received event for unconfigured/disabled/deleted index='lost_index' with source='source::/tmp/test' host='host::localhost.localdomain' sourcetype='sourcetype::anything' (1 missing total)
On splunkd.log (/opt/splunk/var/log/splunk/splunkd.log):
05-22-2017 17:30:43.276 +0200 WARN IndexProcessor - received event for unconfigured/disabled/deleted index='lost_index' with source='source::/tmp/test' host='host::localhost.localdomain' sourcetype='sourcetype::anything' (1 missing total)
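Since the message is logged by the IndexProcessor component at WARN level, on a default installation you should be able to pull the same events out of _internal with something like this (a sketch, assuming the usual component/log_level field extractions for splunkd logs):
index=_internal sourcetype=splunkd component=IndexProcessor log_level=WARN "unconfigured/disabled/deleted"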
Regards
Hi @aakwah -- I would have expected to find those messages; however, I do not see anything at all related to my alerts. I have expanded to all of _internal and don't see anything. I wonder if it is because I am in Cloud that the logs are missing...
@lennys26 I did the test in my lab; I have no idea how Splunk Cloud handles these logs.
Are you an admin?
Only admins have access to the _internal index.
Also, being on Cloud, I suspect you need a ticket for a new index, so maybe that index has yet to be created by the Cloud operations team.