Debugging Custom Splunk Search Commands

enielson
Explorer

I have taken iplocation.py as a skeleton for a simple custom search command that adds another column to the search results.

I am finding that when I have an error in my code, the exception handler at the bottom isn't executing properly and I have no way of seeing what the problem was. The search simply finishes with no results instead of whatever generateErrorResults would have returned.

How can I enable more debugging or see the actual python exception generated? I've looked under System Logging but I couldn't find anything that would seem to do it.
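
To give a rough idea of the shape of the script (simplified, with a placeholder field name rather than my actual logic):

import splunk.Intersplunk

try:
    # Read the incoming search results, same pattern as iplocation.py
    results, dummyresults, settings = splunk.Intersplunk.getOrganizedResults()

    for result in results:
        # Add another column to each result (placeholder logic)
        result["my_new_column"] = "some value"

    splunk.Intersplunk.outputResults(results)
except Exception as e:
    # This handler never seems to produce any visible output when my code fails
    splunk.Intersplunk.generateErrorResults("Error: %s" % e)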

1 Solution

Lowell
Super Champion

I would recommend using Python's standard logging module. You will still need to do some top-level exception handling, but that way you get the full details of the problem in your log. I've also done quick-and-dirty ad-hoc logging with open("/tmp/debug.me","a").write("my message") kind of stuff, but that's kludgy and I always end up wishing I had just used the logging module in the first place.

You don't even have to do the normal logging setup work, because Python search commands are actually run from the $SPLUNK_HOME/bin/runScript.py script, and by default any logging messages go to the $SPLUNK_HOME/var/log/splunk/python.log file, which is already set up to be indexed by Splunk.

Here is a sample script with some exception handling and general log writing:

import logging
import splunk.Intersplunk

logger = logging.getLogger("NameOfMyCustomSearchScript")
# Set the level of this logger.  Use DEBUG during development, then bump it up
# to logging.INFO once your script is working well, to avoid excessive logging.
logger.setLevel(logging.DEBUG)

def main():
    # Handle key/value and args passed in to my script
    args, kwargs = splunk.Intersplunk.getKeywordsAndOptions()
    logger.debug("Args:  %r, key/value settings:  %r", args, kwargs)

    setting1 = kwargs.get("some_optional_setting", "10")
    parameter1 = args[0]

    # Fetch results being passed to this search command
    (results, dummyresults, settings) = splunk.Intersplunk.getOrganizedResults()
    logger.debug("Settings passed:  %r", settings)

    logger.info("Custom search script processing %d results.", len(results))

    for result in results:
        # Do whatever real work here ...
        result["my_new_field"] = "true"

        # Say you encounter some kind of unexpected condition....
        logger.warn("Unsupported value '%s' found in field '%s', "eggs", "field_1")

    #  Return modified results back to splunk (for the next search command to process)
    splunk.Intersplunk.outputResults(results)

try:
    main()
except Exception as e:
    # Catch any exception, log it, and also return a simplified version back to Splunk
    # (it should be displayed in red at the top of the search page)
    logger.exception("Unhandled top-level exception")
    splunk.Intersplunk.generateErrorResults("Exception! %s (See python.log)" % (e,))

You can then find errors created by your custom search script with a search like this:

 index=_internal source=*python.log*
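
Since the message strings themselves end up in python.log, you can also narrow things down by searching for a distinctive message from your own script, for example the exception message from the sample above:

 index=_internal source=*python.log* "Unhandled top-level exception"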

Unfortunately, you can't search for NameOfMyCustomSearchScript, because the format string set up in runScript.py doesn't include the logger name, so it can sometimes be tricky to isolate just the log events you want (depending on how busy your python.log file is). If that becomes too much of a pain, you can easily write a separate log file for your search command, since you're already using the logging module. I often use a simple logging setup function like this:

def setup_logging():
    logger = logging.getLogger() # Root-level logger
    logger.setLevel(logging.DEBUG)

    logfile = logging.FileHandler("/tmp/debugging_my_saved_search.log")  # appends by default
    logfile.setLevel(logging.DEBUG)
    logfile.setFormatter(logging.Formatter('%(asctime)s [%(process)06d] %(levelname)-8s %(name)s:  %(message)s'))
    logger.addHandler(logfile)

# Then be sure to call the function, like so:
setup_logging()
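
With that formatter in place, every line in the separate log file carries the logger name, so entries look something like this (timestamp and PID values are just illustrative), with the full traceback following any logger.exception() call:

 2015-06-01 12:34:56,789 [012345] ERROR    NameOfMyCustomSearchScript:  Unhandled top-level exception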

Hope this gives you some ideas to get started with.

enielson
Explorer

This worked great, thank you. The logger output went to python.log as you described and I found my error quickly.
