I have a blue bar notification in each view informing me that an event was received "for unconfigured/disabled index='sample'..."
How would I be able to find out when these events were received?
If you're seeing this error, it's probably still occurring. Most likely there is a scheduled summary search that is trying to write to your (disabled or non-existent) sample index. If you need assistance tracking it down, please feel free to file a support ticket.
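To see exactly when those events arrived, one approach is to search the indexer's splunkd.log for the warning, since each matching line carries a timestamp. The sketch below runs against a synthetic log file so it is self-contained; in practice, point LOG at $SPLUNK_HOME/var/log/splunk/splunkd.log on the host named in the error message.

```shell
# Illustration with a synthetic splunkd.log; the log lines here are made up
# to mimic the warning's shape, not copied from a real install.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
06-15-2010 12:00:01.000 WARN  IndexProcessor - received event for unconfigured/disabled index='sample'
06-15-2010 12:00:05.000 INFO  Metrics - group=per_index_thruput
EOF
# Each matching line's leading timestamp tells you when the event arrived:
grep "unconfigured/disabled" "$LOG"
```

You can also run the equivalent search from Splunk itself over `index=_internal`, which keeps the same splunkd.log data searchable with time-range controls.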
Have you upgraded any of your forwarders recently? Sometimes the install process triggers the sample logs from sample_app to be forwarded again. (In my case, it was probably because I had deleted the entire sample_app folder instead of disabling the app via its local/app.conf file; but it's possible you're seeing something similar.)
The sample_app was a demo app shipped with Splunk. It shipped with some sample logs and the configuration to create the sample index.
This app is usually disabled by default (but was enabled in old versions of Splunk, 4.0 and 4.1).
It seems that in your case the app is disabled on the indexer but enabled on at least one of your forwarders. It is therefore sending sample logs to an index that doesn't exist on the indexer, which causes the error message.
Please log in to the host mentioned in the error message and disable the sample app:
in $SPLUNK_HOME/etc/apps/sample_app/local/app.conf add [install] state = disabled
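For reference, the resulting file would look something like this (path as given above; the stanza follows the standard app.conf convention for disabling an app):

```ini
# $SPLUNK_HOME/etc/apps/sample_app/local/app.conf
[install]
state = disabled
```

A restart of Splunk on that host is needed for the change to take effect.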
I have a similar error; the splunkd.log file reported the "received event" message. I believe I finally found the answer for my "unconfigured/disabled index=" conundrum, with the generous assistance of Splunk support. Scenario: I upgraded our main indexer and a Windows Server 2003 Splunk LWF to 4.1.2. In my case it was a different index name. I disabled the indexes on both systems, then created new indexes, used the new indexes on both systems, and restarted both instances.
After I received the same "unconfigured/disabled index=" error, I went back to /splunk/etc/system/local on the LWF to check the conf files. I compared the recent confs with *.conf.old (Splunk's backups made during the upgrade).
I noticed that inputs.conf on the LWF still had a reference to the original offending index that I had disabled and subsequently deleted. OK, that's odd; so I simply changed this value to the new index name (see below).
I changed the index setting in inputs.conf from the old index name to the new one, and also verified the paths to the DBs for good measure. I hope this helps.
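The change was of this shape (the monitor path and index names below are hypothetical, since the original post doesn't include the actual values):

```ini
# On the LWF, $SPLUNK_HOME/etc/system/local/inputs.conf

# Before -- still pointing at the index that was disabled/deleted on the indexer:
[monitor://D:\logs]
index = old_index

# After -- pointing at the replacement index that exists on the indexer:
[monitor://D:\logs]
index = new_index
```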
V
This is the only thing I still don't get:
1.) Our Splunk main indexer was upgraded to 4.1.3.
2.) Our Windows forwarders remained at version 3.4.x, and they still had the configured index that didn't exist on the main indexer.
3.) Only after upgrading the Windows boxes did I actually see these error messages...
Thanks a lot, this helped. \m/
The error you are reporting implies that there are events being directed to an index (called sample) that does not exist. You should check your input settings to see if any are configured to be sent to that particular index.
As the wolverine has stated, there is the possibility that you have told your summary indexing to send to a nonexistent index. You should review your saved searches that have summary indexing enabled to see if they are sending to a summary index that exists.