
timechart not showing correct values

nikhilpai
Explorer

Hi, I am new to Splunk and am trying to build a timechart.

We have the following timechart search query which is not giving the correct values in the Statistics tab, but when we browse the events behind the statistics, the required data seems to be there.

I am not able to figure out how timechart exactly works here. The query is below; I would appreciate help with / an explanation of this behavior. Filtering for a particular bizname, I select a time range of, say, 00:45 to 01:30 on a particular day.

I get the wrong "Percentage" value [say 60%] for the first block [00:45 to 01:00], but when I go to the events and check, it comes out to be 93%. What am I doing wrong here?

index=index1 sourcetype=*XYZ* 
| dedup col1, col2,col3 | search bizname="ABC"
| where completed in("Y","N")
| eval status=if(completed ="Y",100,0)
| timechart span=15m mean(status) as Percentage by bizname useother=false limit=100
| fillnull value=100

Thanks.

 

1 Solution

nikhilpai
Explorer

We figured this out. We are using dedup because some of the values get updated in the source at different intervals.

Hence, when we selected the window 00:45 - 01:30, the dedup was applied across that whole time period, and so the % dropped in the first 15-minute slot because those events had corresponding duplicate values in the third 15-minute slot.

But when we filtered on only the first 15-minute slot, it showed a higher % because there were no corresponding duplicates within that window.
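
For reference, if we ever want each 15-minute slot to be deduplicated independently (so that each slot matches what we see when filtering to that window alone), something along these lines should work; this is only a sketch and not tested against our data:

index=index1 sourcetype=*XYZ* 
| bin _time span=15m
| dedup col1, col2, col3, _time
| search bizname="ABC"
| where completed in("Y","N")
| eval status=if(completed="Y",100,0)
| timechart span=15m mean(status) as Percentage by bizname useother=false limit=100
| fillnull value=100

Binning _time before the dedup limits the duplicate removal to each 15-minute bucket instead of the whole selected range.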

So it is not an issue; it is just how the data is collected. Thanks.

 

 


richgalloway
SplunkTrust
SplunkTrust

The mean() function does not calculate a percentage.  It's just an average of the values it's seen.  Percentages have to be calculated manually using the eval command.

---
If this reply helps you, Karma would be appreciated.

nikhilpai
Explorer

Thanks for the reply @richgalloway.

In this case we are using 100 for success and 0 for failure, with no other values, so that would work more or less like a percentage in this case; I might be wrong.

Would you please advise how to calculate the percentage via eval in the query I am using? I have tried with count, but it gives the same kind of values as the mean function.

Thanks.


richgalloway
SplunkTrust
SplunkTrust

Yes, I suppose an average of 0's and 100's is the same as a percentage.
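
That said, if you ever want to make the percentage explicit rather than relying on the 0/100 encoding, something along these lines should do it (untested, and it drops the by clause since the search already filters to a single bizname):

index=index1 sourcetype=*XYZ* 
| dedup col1, col2, col3
| search bizname="ABC"
| where completed in("Y","N")
| timechart span=15m count(eval(completed="Y")) as completed_y count as total
| eval Percentage=round(100*completed_y/total, 2)

With your encoding it should give the same numbers as mean(status), but the calculation is explicit and you can also see the underlying counts.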

What data are you seeing where you are calculating the 60% figure?

---
If this reply helps you, Karma would be appreciated.

nikhilpai
Explorer

For example, in the case mentioned, if I drill down to the events it shows me 93% "Y" and 7% "N", with about 104 total events for the mentioned time frame [00:45 - 01:00].

But on the statistics page we are seeing 60% "Y", and if I put a count in the timechart as below, it shows a count of 14. Again, if I go into the events I see 104 events. So I am very confused about how timechart is interpreting the data.

timechart span=15m count(completed) as tot_cnt  by bizname useother=false limit=100


richgalloway
SplunkTrust
SplunkTrust

I am confused as well. Can you share screenshots showing where you see the 60% and 93%? I suspect you may be comparing an overall figure with one for a 15-minute interval.
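
One quick check (just a sketch; adjust the filters and field names to match your actual search) would be to compare, per 15-minute slot, how many events survive each step of the pipeline, for example before and after the dedup:

index=index1 sourcetype=*XYZ* bizname="ABC"
| bin _time span=15m
| eventstats count as raw_count by _time
| dedup col1, col2, col3
| stats count as deduped_count, max(raw_count) as raw_count by _time

If raw_count is around 104 while deduped_count is around 14 for the 00:45 slot, then it is the dedup (and where it sits in the pipeline) that is reducing the numbers, not timechart itself.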

---
If this reply helps you, Karma would be appreciated.

nikhilpai
Explorer

Thank you for your patience. This is the first screen. I have selected a time period from 00:45 to 01:30 with span=15m, as given in the query. I am taking the first entry, at 00:45, as an example. The count is shown as 14 and the mean as around 57%.

[Screenshot: 1-statistics]

For the first one, 00:45, if I go to the events I see 104 events and the status count as below [which is not the same as the count of 14].

[Screenshot: 2-status_count]

From the statistics page, if I narrow the time range down to just the 00:45 window, I get the expected values. So I am not sure what is happening here. I would appreciate any insights on this.

[Screenshot: 3-narrow_dwn]

Thanks.


richgalloway
SplunkTrust
SplunkTrust

Thanks for the pix.  I'm afraid I still can't explain the difference.  Perhaps someone else will have a suggestion.

---
If this reply helps you, Karma would be appreciated.

nikhilpai
Explorer

Thank you for your time. I hope someone else can help.

