Splunk Search

Ticket analytics: How to chart open tickets over time like a "Burndown chart"?

charliehack
Engager

I would like to show an Open Ticket Count over time. A kind of “burndown chart”.

I’ve read around on the KB, and quite a few people seem to have faced this issue over the years, but there’s no good answer yet. The consensus seems to be to use lookups to maintain state, but that isn’t very useful for us since we are working off purely historical data (not real-time). http://blogs.splunk.com/2011/01/11/maintaining-state-of-the-union/

In pseudocode, this is what I want to chart:

open_defects = {}

for day in date_range(now() - 90d, now()):
    open_defect_count = COUNT(SELECT defects WHERE date_created < day AND (date_closed > day OR status = "Open"))
    open_defects[day] = open_defect_count

line_chart(open_defects)

Any ideas for how to accomplish this? Just dedup'ing the tickets and charting count by date_created doesn't work, because the tickets are mostly all closed by now. I want the tickets that were "Open at X date".

Thanks in advance for help.

Charlie

1 Solution

charliehack
Engager

I ended up solving this with a custom search command. Praveen's answer would work perfectly if the data were perfect, that is, if you had an OPEN and a CLOSED event stored for each ticket. Unfortunately, we don't have that.

Below is the script burndown.py for anyone who is interested. It runs slowly in Search, but I think making it a saved search that runs daily will do the trick 🙂

'''
Burndown Chart custom search command for Splunk
Charles Hack, Accenture
<charles.hack@accenture.com>
April 2016

This script takes in a list of deduplicated ALM defect events (TODO: generalize)
and returns a table of days and the count of defects that were open on that day.

This can be timecharted to yield a picture of the ebb and flow of open tickets over time.

Input format:

```
[{'fields.creation-time':'1991-02-06 06:00:00', 'status':'Closed', 'fields.closing-date':'1991-02-07 12:15:00', 'last-modified':'1991-02-07 12:15:00', ...}, ...]
```

Output format:

```
[{'_time':'1991-02-01', 'defects_open':22}, {'_time':'1991-02-02', 'defects_open':19}, ...]
```
'''
import logging
from datetime import datetime, timedelta
from splunk.Intersplunk import readResults, outputResults

DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"
DATE_FORMAT     = "%Y-%m-%d"

# set up logging suitable for splunkd consumption
logging.root.setLevel(logging.INFO)
formatter = logging.Formatter('%(levelname)s %(message)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logging.root.addHandler(handler)


# helper functions
# A missing or unparseable date falls back to today, i.e. the ticket is treated as still open.
def get_date_created(e):
    try:
        return datetime.strptime(e['fields.creation-time'][:10], DATE_FORMAT).date()
    except (KeyError, TypeError, ValueError):
        return datetime.now().date()

def get_date_modified(e):
    try:
        return datetime.strptime(e['last-modified'][:19], DATETIME_FORMAT).date()
    except (KeyError, TypeError, ValueError):
        return datetime.now().date()

def get_date_closed(e):
    try:
        return datetime.strptime(e['fields.closing-date'][:10], DATE_FORMAT).date()
    except (KeyError, TypeError, ValueError):
        return datetime.now().date()

def get_dates(events, date_fieldname):
    # list comprehensions (rather than map) so len() and indexing work on Python 2 and 3
    if date_fieldname == 'created':
        return [get_date_created(e) for e in events]
    elif date_fieldname == 'modified':
        return [get_date_modified(e) for e in events]
    raise ValueError('Unknown date_fieldname: {}'.format(date_fieldname))


if __name__ == '__main__':

    # get events piped in
    events = readResults()
    logging.info("Read in {} results.".format(len(events)))
    logging.info("Sample craetion time: [{}]".format(datetime.strptime(events[100]['fields.creation-time'][:10], DATE_FORMAT).isoformat()))



    # extract dates
    created_dates  = get_dates(events, 'created')
    modified_dates = get_dates(events, 'modified')
    logging.info('Got {} created dates.'.format(len(created_dates)))

    # get min and max dates
    min_date = min(created_dates)
    logging.info('Min date is {}'.format(min_date.isoformat()))
    max_date = max(modified_dates)
    logging.info('Max date is {}'.format(max_date.isoformat()))
    n_days   = (max_date - min_date).days  # integer count of days spanned
    logging.info('{} days represented total.'.format(n_days))

    # create date range to populate (inclusive of both endpoints)
    date_range = [min_date + timedelta(days=n) for n in range(n_days + 1)]


    output = []
    for d in date_range:

        # how many defects were in OPEN state on this day?
        defects_open = sum(1 for e in events if get_date_created(e) <= d <= get_date_closed(e))

        # append to the output table
        output.append({'_time': d.strftime(DATE_FORMAT), 'defects_open': defects_open})

    logging.info('Generated output table with {} results.'.format(len(output)))

    # output results
    outputResults(output)
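
For anyone wiring this up: the script lives in your app's bin directory and gets registered as a legacy (Intersplunk-style) search command in commands.conf, roughly like the stanza below; the app name is a placeholder.

# $SPLUNK_HOME/etc/apps/<your_app>/local/commands.conf
[burndown]
filename = burndown.py

After a restart it can be invoked as <your defect search> | dedup <ticket id field> | burndown, and the resulting _time / defects_open table charted directly as a line chart.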


praveenkpatidar
Explorer

You can use the queries below to build burnup and burndown charts. They will give you a proper graph with a continuous line.

Burndown

* | bucket _time span=1d | eval statusnumber=case(status="open",1,status="closed",-1) | timechart sum(statusnumber) as count | streamstats sum(count) as cumulative | fields - count

Burnup

* | bucket _time span=1d | eval statusnumber=case(status="closed",1) | timechart sum(statusnumber) as count | streamstats sum(count) as cumulative | fields - count
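
If, as charliehack mentions, each ticket exists only as a single record (no separate open and close events), a variation on the same idea is to expand every ticket across the days it was open and count distinct tickets per day. This is only a sketch: the field names ticket_id, date_created and date_closed and the strptime format are assumptions about your data.

<your base search, one result per ticket>
| dedup ticket_id
| eval created=strptime(date_created, "%Y-%m-%d %H:%M:%S")
| eval closed=coalesce(strptime(date_closed, "%Y-%m-%d %H:%M:%S"), now())
| eval day=mvrange(relative_time(created, "@d"), relative_time(closed, "@d") + 86400, 86400)
| mvexpand day
| eval _time=day
| timechart span=1d dc(ticket_id) AS open_tickets

Note that mvexpand is subject to memory limits, so tickets that stay open for a very long time can make the expansion expensive.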

scottsavaresevi
Path Finder

Does your ticketing system have log files? If so, can you scrape them for ticket-opened and ticket-closed events and count those over time?

You're describing a SQL statement, so have you tried DB Connect? You could use a database input to collect the statistic daily, or run it as a lookup each time you need to do the search.
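
If DB Connect is available, the daily collection could look something like this with DB Connect's dbxquery command; the connection name, table, columns and SQL dialect are all assumptions, and the result is written to a summary index with collect (it could just as easily go to a lookup with outputlookup):

| dbxquery connection="ticket_db" query="SELECT COUNT(*) AS defects_open FROM defects WHERE date_created <= CURRENT_DATE AND (date_closed > CURRENT_DATE OR status = 'Open')"
| eval _time=relative_time(now(), "@d")
| collect index=ticket_summary

Scheduled once a day, this builds up exactly the open-tickets-per-day series the question asks for.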


somesoni2
Revered Legend

You can still use lookups to find the state of tickets for a historical period. All you need to do, after finalizing the search that will populate the lookup on a daily basis, is run that search over the historical dates. This can be done with Splunk's backfill process (generally used to backfill summary indexes, but the same simulated execution of a search over a historical period works in your case as well). See this link for more details on the backfill process:
http://docs.splunk.com/Documentation/Splunk/6.0.2/Knowledge/Managesummaryindexgapsandoverlaps#Use_th...
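
For the backfill itself, the fill_summary_index.py script that ships with Splunk can re-run the scheduled search over past days. A sketch of the invocation, where the app name, saved search name, time window and credentials are all placeholders:

$SPLUNK_HOME/bin/splunk cmd python fill_summary_index.py -app your_app -name "Open ticket daily snapshot" -et -90d@d -lt now -j 4 -dedup true -auth admin:changeme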
