Splunk Search

Calculate Ticket Counts for Each Shift

dtaylor
Path Finder

Good day,

 

I feel like this should be a simple problem, but I've looked at it too long and need some help.

 

I have a CSV lookup file that lists all the tickets closed by my SOC in the past month. It has various fields, such as when a ticket was created and when it was closed. Each ticket was closed during one of three 8-hour shifts, and I'm trying to work out ticket closure counts per shift, per week, over the past month or more.

 

This is what I've got so far:

| inputlookup shift_tickets.csv
| eval closed_date = strptime(closed,"%a, %d %b %Y %T %Z"), _time = strptime(occurred,"%a, %d %b %Y %T %Z")
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| bin closed_date span=8h aligntime=@d+6h+30m
| stats count by closed_date

 

This works for splitting out the count for each individual shift (which starts at 06:30). However, I don't actually have any field that assigns a specific time slot to a specific shift, so I can't use something like 'stats count by shift', and that makes it hard to get anything meaningful out of the numbers.


ITWhisperer
SplunkTrust

Your bin command gives you the start time of the shift in which the ticket was closed, so you could just use the hour and minute of that to denote the shift

| inputlookup shift_tickets.csv
| eval closed_date = strptime(closed,"%a, %d %b %Y %T %Z"), _time = strptime(occurred,"%a, %d %b %Y %T %Z")
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| bin closed_date span=8h aligntime=@d+6h+30m
| eval shift_start=strftime(closed_date,"%H:%M")
| stats count by shift_start
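If named shifts are easier to read than raw start times, the bucket start can be mapped to a label. A sketch, assuming three shifts starting at 06:30, 14:30, and 22:30 (adjust the names to your rota):

| eval shift=case(shift_start=="06:30", "Day (06:30-14:30)", shift_start=="14:30", "Evening (14:30-22:30)", true(), "Night (22:30-06:30)")
| stats count by shift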


tej57
Builder

Hey @dtaylor,

You can derive static shift times from the date field. You're already extracting the closed date with strptime to get its epoch time; from there, in increments of 8 hours, you can compute the other shift boundaries. Once you have the boundaries, use case (or if) to assign the shift in which the ticket was closed, and then do a stats by shift.

| inputlookup shift_tickets.csv
| eval closed_date = strptime(closed,"%a, %d %b %Y %T %Z"), _time = strptime(occurred,"%a, %d %b %Y %T %Z")
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| eval shift_time_1=relative_time(closed_date, "-24h@d")
| eval shift_time_1st=shift_time_1+86400+23400
| eval shift_time_2nd=shift_time_1st+28800
| eval shift_time_3rd=shift_time_2nd+28800
| eval closed_ticket_shift=case(closed_date>=shift_time_1st AND closed_date<shift_time_2nd, "1", closed_date>=shift_time_2nd AND closed_date<shift_time_3rd, "2", true(), "3")
| stats count by closed_ticket_shift

Let me know how this works for you and we can check further with the help of sample data. 

 

Thanks,
Tejas.

---
If the above solution helps, an upvote is appreciated..!!



livehybrid
SplunkTrust

Hi @dtaylor 

How about something like this?

| eval shift_label = strftime(closed_date,"%H:%M")
| stats sum(ticketsClosed) by shift_label

(screenshot of sample results omitted)

 

Full example:

| makeresults count=48
| streamstats count
| eval _time=_time-(count*60*60)
| eval ticketsClosed=random() % 9
| eval closed_date=_time
| bin closed_date span=8h aligntime=@d+6h+30m
| eval shift_label = strftime(closed_date,"%H:%M")
| stats sum(ticketsClosed) by shift_label


sainag_splunk
Splunk Employee

@dtaylor The issue is that your bin command creates time buckets but doesn't give them meaningful shift names.

 

Try adding shift logic after your existing time filtering:

| inputlookup shift_tickets.csv
| eval closed_date = strptime(closed,"%a, %d %b %Y %T %Z"), _time = strptime(occurred,"%a, %d %b %Y %T %Z")
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| eval hour_min = strftime(closed_date, "%H:%M")
| eval shift = case(
    hour_min >= "06:30" AND hour_min < "14:30", "Day Shift",
    hour_min >= "14:30" AND hour_min < "22:30", "Evening Shift",
    true(), "Night Shift")
| stats count by shift



For weekly trends:

| eval week = strftime(closed_date, "%Y-Week-%U")
| stats count by week, shift
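If a one-row-per-week matrix is easier to scan than the flat list, chart can pivot the same two fields (a sketch reusing the week and shift fields defined above):

| eval week = strftime(closed_date, "%Y-Week-%U")
| chart count over week by shift

Each shift becomes its own column, so week-over-week changes per shift line up side by side.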

This keeps your time filtering logic but adds the shift names you need. Much easier than trying to decode those binned timestamps!
Hope this helps.



If this Helps Please Upvote.

If this helps, Upvote!!!!
Together we make the Splunk Community stronger 
Got questions? Get answers!

Join the Splunk Community Slack to learn, troubleshoot, and make connections with fellow Splunk practitioners in real time!

Meet up IRL or virtually!

Join Splunk User Groups to connect and learn in-person by region or remotely by topic or industry.

Get Updates on the Splunk Community!

Index This | What travels the world but is also stuck in place?

April 2026 Edition  Hayyy Splunk Education Enthusiasts and the Eternally Curious!   We’re back with this ...

Discover New Use Cases: Unlock Greater Value from Your Existing Splunk Data

Realizing the full potential of your Splunk investment requires more than just understanding current usage; it ...

Continue Your Journey: Join Session 2 of the Data Management and Federation Bootcamp ...

As data volumes continue to grow and environments become more distributed, managing and optimizing data ...