How can I add empty time buckets to my table?

jedatt01
Builder

I have a dataset on which I can't use timechart because I'm splitting by two fields. Not all values of message have events in every time bucket. Is there a way to add zero-count time buckets for each value of message?

index=myindex msg_severity=* | bucket span=1m _time | stats count by message msg_severity _time

jedatt01
Builder

I just figured out the solution. Somesoni2's response gave me a hint. Since msg_severity is always the same for each message, I combined the two fields into one, ran that through the timechart command, then used untable to get back to rows, and voilà!

index=myindex msg_severity=*
| eval message_severity = message + "|" + msg_severity
| timechart span=1m count by message_severity
| untable _time message_severity count
| eval temp = split(message_severity,"|")
| eval message = mvindex(temp,0)
| eval msg_severity = mvindex(temp,1)
| table _time message msg_severity count


somesoni2
Revered Legend

If there could be more than 10 distinct messages, include limit=0 in your timechart to ensure every message-severity combination is listed (by default timechart caps the number of series and lumps the rest into OTHER).
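For reference, that would be a one-line change to the accepted search (a sketch only; limit=0 and useother=f are documented timechart options that lift the series cap and suppress the OTHER series):

index=myindex msg_severity=*
| eval message_severity = message + "|" + msg_severity
| timechart span=1m limit=0 useother=f count by message_severity
| untable _time message_severity count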


woodcock
Esteemed Legend

You need the fillnull command:

http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/fillnull

Use it like this:

index=myindex msg_severity=*
| fillnull value=NULL message msg_severity
| bucket span=1m _time
| stats count by message msg_severity _time

jedatt01
Builder

Not quite there yet. Let me try to explain a little better. Not all values of message have entries every minute, so they don't show up in all time buckets. In the resulting table, the rows for those time buckets are simply missing; example below:

_time  message   msg_severity  count
9:00   message1  error         10
9:01   message1  error         6
9:00   message2  warning       3
9:01   message2  warning       4
9:01   message3  notice        6

The 9:00 entry for message3 is missing. I want the table to look like this:

_time  message   msg_severity  count
9:00   message1  error         10
9:01   message1  error         6
9:00   message2  warning       3
9:01   message2  warning       4
9:00   message3  notice        0
9:01   message3  notice        6

somesoni2
Revered Legend

What's the max time range (considering the 1m time bucket), and what's the total number of message-msg_severity combinations? If either of them is small, we can try something with the chart command.
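One guess at the chart-based idea (an untested sketch; it reuses the combined message_severity field from the accepted answer and relies on chart zero-filling empty count cells):

index=myindex msg_severity=*
| eval message_severity = message + "|" + msg_severity
| bucket span=1m _time
| chart count over message_severity by _time
| untable message_severity _time count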


jedatt01
Builder

The time range is only 2 minutes. msg_severity is always the same for a given message, so each message maps to exactly one message-severity combination. Hope that helps.
