Developing for Splunk Enterprise

Best Practice - Logging Script Runtime Results

Drainy
Champion

I have multiple scripts that perform functions outside of Splunk to build XML on the fly. It's easiest to have Splunk actively schedule and execute these, so what I'm wondering is what I should do with their output.
For debugging, I'm designing the scripts to return messages signifying the success or failure of each run, but I don't want this output indexed into main or another custom index.

Would it be appropriate in this instance to direct this output to _internal, or perhaps another default Splunk index used for troubleshooting and system logging?

Just to reiterate: the script will not return ANY output that I want to be able to search, except in the event of a failure or a need to debug.

1 Solution

Lowell
Super Champion

With scripted inputs, you can simply write a status message to the standard error stream. (Most scripting languages and Unix utilities do this by default when they encounter a problem or an unhandled exception.) Splunk automatically indexes any such "error" messages into the _internal index, so you can track down problems with your scripted inputs there.

If you are using Python, you can write to standard error with a call like:

 import sys
 sys.stderr.write("Job complete:  status=SUCCESS\n")

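Expanded into a full (if skeletal) scripted input, the pattern looks something like this. This is only a sketch: `run_job` and the message fields are illustrative names, not anything Splunk requires.

```python
import sys

def run_job():
    """Placeholder for the real work the scripted input does.

    run_job is a hypothetical name for illustration; raise an exception
    on failure, return a short detail string on success.
    """
    return "built 3 XML stanzas"

def main():
    try:
        detail = run_job()
        # splunkd captures anything a scripted input writes to stderr
        # and logs it, so it lands in _internal (sourcetype=splunkd,
        # component=ExecProcessor) rather than your event indexes.
        sys.stderr.write('Job complete:  status=SUCCESS detail="%s"\n' % detail)
        return 0
    except Exception as e:
        sys.stderr.write('Job failed:  status=FAILURE reason="%s"\n' % e)
        return 1
```

Splunk runs the script on the interval set in inputs.conf; only the stderr output ends up in _internal, while anything printed to stdout would be indexed as event data for the input.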
I use a search like this to periodically report errors from my various scripted inputs. (I also filter out a number of Splunk's built-in scripted-input errors that occur frequently but that I don't really care about.)

 index=_internal sourcetype=splunkd component=ExecProcessor "message from" NOT (splunk-regmon OR splunk-wmi)
 | rex " - message from \"(?<inputscript>[^\"]+)\""
 | rename inputscript as script
 | rex mode=sed "s/^.* - message from \"[^\"]+\" (.*)$/\1/"
 | transaction fields="host,source,script" maxpause=45s

One thing that isn't clear: are you using a scripted input simply for its scheduling capabilities? You made it sound as though you aren't sending any data to the indexes by default. If that's correct, I'd really suggest using a different scheduling mechanism, writing your status information out to a log file, and monitoring that file with Splunk. cron or the Windows Task Scheduler both work well.
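If you do go the cron route, the script only needs to append a timestamped status line to a file that a monitor input watches. A minimal sketch, assuming a key=value log format; the function name, path, and field names are all illustrative:

```python
import time

def write_status(log_path, status, message):
    # Append one timestamped key=value line; Splunk's automatic
    # field extraction will pick up status and msg at search time.
    line = '%s status=%s msg="%s"\n' % (
        time.strftime("%Y-%m-%d %H:%M:%S"), status, message)
    with open(log_path, "a") as f:
        f.write(line)
```

A cron entry then runs the script on whatever schedule you like, and the monitored file gives you a searchable success/failure history without relying on _internal at all.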

Drainy
Champion

Ta, just what I needed to know.
Well, it's a bit of both. I am using the script to generate some extra things for Splunk based on the results of an external process, so I really want it to be handled and executed by Splunk.
