Splunk Search

Weird behavior with timechart - any way to workaround?

sideview
SplunkTrust

1) If I run a regular timechart command against normal event rows,

* | timechart span=1h count by sourcetype limit=500

then for time buckets and sourcetypes where no data existed, the timechart command fills in a "0" for me. All is well.

2) If, however, I use timechart with data that's already been aggregated,

i.e. (forgive the artificial example here)

* 
| bin _time span=1h
| stats count by _time sourcetype 
| timechart span=1h sum(count) by sourcetype limit=500

or

| tstats prestats=t count WHERE index=* GROUPBY sourcetype _time span=1h
| stats count by _time sourcetype 
| timechart span=1h sum(count) as count by sourcetype limit=500

Then it's weird. I get the same chart, obviously, but the timechart command fails to fill in the zeros. Instead I get null values everywhere there should be a zero.

This is throwing things off for me, and I'm curious if anyone knows the root cause or if there's any way to work around the problem so I get my zeros back.

Notes:

* I actually need the zeros back in the actual search result data, rather than just having them graphed as zeros in the charting layer.
* If I didn't have a split-by field I could just use fillnull, but I don't know all of the split-by field's values in advance.

1 Solution

daniel_splunk
Splunk Employee

How about this search?

index=_internal
| bin _time span=1h
| stats count by _time sourcetype
| timechart span=1h sum(count) by sourcetype limit=500
| fillnull
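
Note that fillnull with no arguments fills every field in the results, and its default fill value is 0, so the last pipe is equivalent to the explicit form:

| fillnull value=0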


sideview
SplunkTrust

It does work as a workaround. I had no idea that you could use fillnull with no arguments and have it find the null-valued columns in a split-by timechart. Good to know.

I think ultimately the sum() function in timechart is just being overly cautious. It makes sense of course that avg() would output null if there were no values in that bucket, and it makes sense that count() would output zero even if there were no values. But sum() seems to think it needs to be careful like avg(), when it would be perfectly reasonable for it to output zeros.
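
Here's a quick sketch of the distinction on fabricated data (bytes is just an invented field; makeresults manufactures two events two hours apart, leaving an empty hour in between):

| makeresults count=2
| streamstats count as n
| eval _time = relative_time(now(), "-" . (n * 2) . "h@h")
| eval bytes = 10
| timechart span=1h count, sum(bytes) as total

For the empty hour between the two events, count comes back as 0 while total comes back as null.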


sideview
SplunkTrust
SplunkTrust

linu1988: your search does indeed give zeros instead of nulls, but that's more a side effect of your "count(count)". That "count(count)" alters the meaning of the search quite a bit and in my case makes it meaningless: you're just asking stats to count the number of rows that have a field called count, rather than asking it to add up the counts. And even if that were the correct behavior, it doesn't explain why timechart does fill in the same zeros if you pass it unaggregated event rows.


martin_mueller
SplunkTrust

As a dirty fix you could write an enhanced fillnull that doesn't need to know the column names in advance.
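
One way to sketch that without writing a custom command is foreach, which applies a templated eval to every column, so the split-by values never need to be known in advance (assuming a Splunk version where foreach is available; the quotes around <<FIELD>> guard against sourcetype values with special characters):

| timechart span=1h sum(count) by sourcetype limit=500
| foreach * [ eval "<<FIELD>>" = coalesce('<<FIELD>>', 0) ]

Since coalesce returns the first non-null argument, columns that already have values pass through untouched.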


linu1988
Champion

It's the aggregate functions that aren't filling in zeros, which I think is correct.

I tested with:

index=_internal
| bin _time span=1h
| stats count by _time sourcetype
| makecontinuous _time span=1h
| timechart span=1h count(count) by sourcetype limit=500

It still gives zeros. For the other cases you need to use fillnull.


sideview
SplunkTrust

I'm afraid makecontinuous doesn't fix the issue. Try it yourself. With

index=_internal | timechart span=1h count by sourcetype limit=500

I get zeros, whereas with the following I get null wherever there should be a zero:

index=_internal
| bin _time span=1h
| stats count by _time sourcetype
| makecontinuous _time span=1h
| timechart span=1h sum(count) by sourcetype limit=500


linu1988
Champion

| stats count by _time sourcetype
| makecontinuous _time span=1m
| timechart span=1h sum(count) as count by sourcetype limit=500

Could you give this a try, then fill in the 0s?
