I added a custom endpoint successfully. But where can I find the output of my print statement?
Furthermore, I would like to know where to find the startup log for my endpoint, for example in case of an error in my code.
import splunk.appserver.mrsparkle.controllers as controllers
from splunk.appserver.mrsparkle.lib.decorators import expose_page

class Controller(controllers.BaseController):
    @expose_page(must_login=False)
    def my_function(self, **kwargs):
        print "MY LOG MESSAGE"
        return "HELLO WORLD"
You need to create a logger for it first. Otherwise it will probably end up in splunkd.log (i.e. index=_internal sourcetype=splunkd), and possibly only as a stack trace when there's an exception... or maybe all output ends up there, I'm not certain. If you do create a logger like the examples below, you can control where it goes:
import splunk.mining.dcutils as dcu

# dcutils hands back a pre-configured logger that writes to python.log
logger = dcu.getLogger()
logger.info("This is info")
logger.error("this is error")
logger.warn("this is warn")
logger.exception("this is exception")
The above will certainly log to python.log.
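If you want to check it, python.log sits next to the other internal logs, so on a default install (assuming /opt/splunk as your SPLUNK_HOME) you can either tail the file or search it in Splunk:

tail -f /opt/splunk/var/log/splunk/python.log

index=_internal source=*python.log*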
import logging
import logging.handlers
from splunk.appserver.mrsparkle.lib.util import make_splunkhome_path

def setup_logger(name, level=logging.WARNING, maxBytes=25000000, backupCount=5):
    '''
    Set up a default logger.

    @param name: The log file name.
    @param level: The logging level.
    @param maxBytes: The maximum log file size before rollover.
    @param backupCount: The number of log files to retain.
    '''
    # Strip ".py" from the log file name if auto-generated by a script.
    if '.py' in name:
        name = name.replace(".py", "")

    logfile = make_splunkhome_path(["var", "log", "splunk", name + '.log'])
    logger = logging.getLogger(name)
    logger.propagate = False  # Prevent the log messages from being duplicated in python.log
    logger.setLevel(level)

    # Prevent re-adding handlers to the logger object, which can cause duplicate log lines
    handler_exists = any(getattr(h, 'baseFilename', None) == logfile for h in logger.handlers)
    if not handler_exists:
        file_handler = logging.handlers.RotatingFileHandler(logfile, mode='a', maxBytes=maxBytes, backupCount=backupCount)
        formatter = logging.Formatter('%(asctime)s %(levelname)s pid=%(process)d tid=%(threadName)s file=%(filename)s:%(funcName)s:%(lineno)d | %(message)s')
        file_handler.setFormatter(formatter)
        logger.addHandler(file_handler)

    return logger

logger = setup_logger('yourLogName', level=logging.DEBUG)
The above will create a log file at /opt/splunk/var/log/splunk/yourLogName.log.
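To tie it back to your endpoint, a minimal sketch (assuming setup_logger from above is defined in or imported into the same module, and 'my_endpoint' is just an example log name) could look like this:

import logging
import splunk.appserver.mrsparkle.controllers as controllers
from splunk.appserver.mrsparkle.lib.decorators import expose_page

class Controller(controllers.BaseController):
    @expose_page(must_login=False)
    def my_function(self, **kwargs):
        # Writes to /opt/splunk/var/log/splunk/my_endpoint.log instead of relying on print
        logger = setup_logger('my_endpoint', level=logging.DEBUG)
        logger.info("MY LOG MESSAGE")
        return "HELLO WORLD"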
I did a spot check, and it turned out that your first example is logging to var/log/splunk/python.log.
Anyway, thanks for the hint with the logger! It's working!
Good catch, I updated my answer.