Getting Data In

Tracking down an input going to a bad index

Contributor

Just had this pop up; there is only one instance of it in the notification area, but the time stamp keeps advancing, so I assume that this is recurring.

Search peer adculsplunkp1 has the following message: received event for unconfigured/disabled/deleted index='wineventhog' with source='source::WinEventLog:Security' host='host::PON-ASIADCP1' sourcetype='sourcetype::WinEventLog:Security' (1 missing total)

Note the bad index name. I'm reading this as adculsplunkp1 thinks it's getting events destined for index 'wineventhog' from PON-ASIADCP1.

I go to PON-ASIADCP1 and do a "splunk cmd btool input list" to a file, then search the output for "wineventhog"; I get no hits.
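For reference, the subcommand is "inputs" (plural), and adding --debug annotates each line with the .conf file it came from, which helps when hunting for where a setting originates (a sketch, assuming a default UF install):

```
splunk cmd btool inputs list --debug > btool_inputs.txt
grep -i "wineventhog" btool_inputs.txt
```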

I bring everything in splunkuniversalforwarder/etc on PON-ASIADCP1 over to a Linux box and grep everything there for "wineventhog"; I get no hits. The etc directory does not appear to have been touched lately, this machine is under deployment server control, and the deployment server has not been touched for weeks.
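As a sketch of that step, a recursive case-insensitive grep over the copied config looks like this (the copy path and sample file below are illustrative stand-ins; in a real run, point at wherever you copied splunkuniversalforwarder/etc to):

```shell
# Illustrative stand-in for the copied UF config directory.
ETC_COPY=/tmp/uf_etc_copy
mkdir -p "$ETC_COPY/apps/search/local"
printf '[WinEventLog://Security]\nindex = WinEventHog\n' \
    > "$ETC_COPY/apps/search/local/inputs.conf"

# -r recurses, -i ignores case (WinEventHog vs wineventhog),
# -l prints only the names of files containing a hit.
grep -ril "wineventhog" "$ETC_COPY"
```

The -i flag matters here: Windows inputs.conf files often use mixed case, so a case-sensitive grep for the all-lowercase index name could miss a real hit.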

I grep everything in the splunk/etc directory on adculsplunkp1 (our indexer) for 'wineventhog', no hits.

I check index=_* and index=* for 'wineventhog' and only see my own searches for 'wineventhog'. I can't even find any messages with the text in the notification.
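For what it's worth, these messages are logged to _internal with sourcetype splunkd, and the default search time range can hide older hits; a search along these lines (the 90-day window is an arbitrary example) casts a wider net:

```
index=_internal sourcetype=splunkd "unconfigured/disabled/deleted" earliest=-90d
```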

Any thoughts on where else to look to see what's going on?


Legend

Perhaps there is a transformation somewhere on the indexer that is doing this. Try doing the same grep on the indexer...


Contributor

I grepped everything in the splunk/etc directory on adculsplunkp1 (our indexer) for 'wineventhog', no hits.


Champion

What about narrowing it down? Are you able to stop Splunk on PON-ASIADCP1 to see if the message goes away? If so, it's probably some config on that server. If not, then it's probably some config on the indexers. But at least it gets you a smaller area to shine the flashlight on...
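On a Windows universal forwarder, assuming the default install path and service name, stopping it looks like this (either form should work):

```
net stop SplunkForwarder

rem or via the Splunk CLI:
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" stop
```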


Contributor

Restarted the forwarder on PON-ASIADCP1. The message is not appearing for the time being.

Still don't understand why I can't find the:

received event for unconfigured/disabled/deleted index='wineventhog' with source='source::WinEventLog:Security' host='host::PON-ASIADCP1' sourcetype='sourcetype::WinEventLog:Security' (1 missing total)

in any indices, nor why that forwarder is doing such a thing.


Champion

Well, I did get that same type of message (missing index) on a standalone box I was playing with recently. The message was logged in _internal, sourcetype splunkd, so that's where I'd expect you to see it too. Strange that it's not there.

Also, I'm guessing this was just the way you were posting, but I noticed that the btool command used "input" instead of "inputs"; probably just a typo? And when you searched for wineventhog, it was all lowercase. Not sure whether you actually searched that way or whether your search was case-insensitive, but I thought it worth at least asking.

For some reason that particular index name rings a bell with me but I can't remember why. I could very well be mistaken, but wondering if I had a similar issue at some point.


Contributor

Found the message in index=_internal; it happened once back on July 15. At least the "I can't find it in the logs" mystery is solved (wasn't looking far enough back).


Contributor

@maciep: can't speak to whether I said 'input' or 'inputs' on the btool, but I got a good output file (I looked at it).

Grepping was done case-insensitively (-i switch).


Splunk Employee

Create the index, and see what host(s) start logging to it.

Out of curiosity, are you using a non-english input in addition to english?

Contributor

English only. I'll create the wineventhog index if the problem recurs (went away when I restarted the UF).
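If it does come back, a minimal indexes.conf stanza on the indexer would create it (the paths shown are the usual defaults; adjust to taste):

```
[wineventhog]
homePath   = $SPLUNK_DB/wineventhog/db
coldPath   = $SPLUNK_DB/wineventhog/colddb
thawedPath = $SPLUNK_DB/wineventhog/thaweddb
```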


Legend

A-ha!!! Note that the forwarder will only re-scan its .conf files when splunkd restarts!

So the bad index entry could have been fixed in the .conf file some time ago, but the forwarder never saw the change until you restarted!

If you are using the deployment server to distribute conf files to forwarders, make sure that you set it so that the forwarder restarts after it updates an app. If you are making manual changes, remember to restart Splunk on the forwarder after you finish!
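For the deployment server case, that restart behavior is set per app in serverclass.conf (the class and app names below are examples, not the poster's actual config):

```
[serverClass:windows_uf:app:wineventlog_inputs]
restartSplunkd = true
```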
