Cumulative counters with "slice" logic

Nikita_Danilov
Path Finder

Hi all!

I am working on the following task: create a cumulative chart counting Success and Error entities in one-hour slice intervals, checking the latest [Status] value per [ID] (by [StatusDateTime]) for every [Slice].

"Slice logic" - for example, exist next Events:

ID   Status   StatusDateTime
------------------------------
1    Error    2014-04-23 10:55
2    Success  2014-04-23 10:55
1    Success  2014-04-23 11:55

I need to get the following result:

Slice              Success    Error
------------------------------------
2014-04-23 11:00   1          1
2014-04-23 12:00   2          0

I know how to calculate the counts separately for each one-hour period:

index="log_index"  
| eval GroupDate=strftime(relative_time(StatusDateTime, "+1h@h"), "%Y-%m-%d %H:%M")  
| stats latest(Status) as Status by ID, GroupDate  
| stats c(eval(Status="Success")) as SuccessCount, c(eval(Status="Error")) as ErrorCount by GroupDate

In SQL I could write a subquery for each period and calculate it there (passing the period boundary into the subquery as GroupDate). But as I understood, Splunk does not support passing parameters/values from the main search to a subsearch. Is that true?
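
For context: that understanding is essentially correct. A subsearch runs first and its results are substituted into the outer search, so values flow from the subsearch out to the main search, never per-row from the main search in. A minimal sketch of that direction, reusing the fields from the example above:

index="log_index" [ search index="log_index" Status="Error" | dedup ID | fields ID ]

Here the subsearch returns the set of IDs that ever logged an Error, and the outer search expands it into an (ID=... OR ID=...) filter.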

I have no idea how to implement the required cumulative logic.
Can anyone guide me on this, please?

Thanks!

1 Solution

Nikita_Danilov
Path Finder

In the end, we decided to create an internal summary index and accumulate the summary statistics there using a scheduled search (http://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Usesummaryindexing).
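
As a rough sketch of that approach (the index name cumulative_summary and the hourly schedule are illustrative assumptions, not details from the post), a search scheduled to run every hour can compute the totals as of the slice boundary and append one summary row:

index="log_index" earliest=0 latest=@h
| stats latest(Status) as Status by ID
| stats count(eval(Status="Success")) as SuccessCount, count(eval(Status="Error")) as ErrorCount
| eval Slice=strftime(relative_time(now(), "@h"), "%Y-%m-%d %H:%M")
| collect index=cumulative_summary

The cumulative chart then reads straight from the summary index:

index=cumulative_summary | table Slice SuccessCount ErrorCount

Scanning from earliest=0 on every run keeps the logic simple at the cost of re-reading history; an incremental variant would need to carry per-ID state in the summary index as well.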

Thank you all for your help!


somesoni2
Revered Legend

I guess the output should be like this for your example:

Slice   Success   Error
------------------------------------
1:00    1         1
2:00    2         1
3:00    2         2
4:00    3         1
5:00    4         1
6:00    5         1

Try this and see if it works:

index=log_index
| bucket span=1h _time as slice
| eval slice=slice+3600
| dedup ID slice
| sort _time, ID
| streamstats current=f window=2 first(Status) as fStatus by ID
| eval ErrorCount=case(Status="Success" AND isnotnull(fStatus),-1, Status="Error",1)
| stats count(eval(Status="Success")) as SuccessCount, sum(ErrorCount) as ErrorCount by slice
| convert ctime(slice) as slice
| streamstats sum(ErrorCount) as ErrorCount sum(SuccessCount) as SuccessCount
| fillnull value=0

Nikita_Danilov
Path Finder

This example works only for cases when the "error" has been corrected in adjacent periods. With a larger gap in time, it will not work. However, the example is useful for another task, thank you!

Nikita_Danilov
Path Finder

somesoni2, thanks for this question. I forgot to mention this case: "Success" is the final Status for each ID.
If an event with Status=Success and ID=1 already exists, then a new event with Status=Error and ID=1 will not be added.

somesoni2
Revered Legend

What should the output be if you have another entry like this:

ID Status StatusDateTime

1 Error 2014-04-23 12:55

Nikita_Danilov
Path Finder

A more detailed example.
Events in the index:

ID  Status      StatusDateTime
--------------------------------------
1   Error       0:30
2   Success     0:30
1   Success     1:30
3   Error       1:30
4   Error       2:30
3   Success     3:30
4   Success     4:30
5   Error       4:30
6   Success     5:30

Required overall result:

Slice   Success   Error
------------------------------------
1:00    1         1
2:00    2         1
3:00    2         2
4:00    3         1
5:00    4         1
6:00    5         1

The main issue is calculating the total Error count correctly.
When a new event arrives with Status "Success" for an ID that previously had "Error", the total Error count must decrease.
"Success" is the final Status for each ID: once an event with Status=Success and ID=1 exists, a new event with Status=Error and ID=1 will not be added.
I have to get this result in one search. Is that achievable?

Thanks!
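
For reference, one way to attempt this in a single search. This is a sketch only: it assumes ID and Status are extracted fields and relies on "Success" being the final Status per ID. The per-ID streamstats reads each ID's previous event regardless of how far back it was, so a late correction still decrements the Error total:

index="log_index"
| sort 0 _time
| bucket span=1h _time as slice
| eval slice=slice+3600
| streamstats current=f window=1 last(Status) as PrevStatus by ID
| eval SuccessDelta=if(Status=="Success", 1, 0)
| eval ErrorDelta=case(Status=="Error", 1, Status=="Success" AND PrevStatus=="Error", -1, true(), 0)
| stats sum(SuccessDelta) as SuccessDelta, sum(ErrorDelta) as ErrorDelta by slice
| streamstats sum(SuccessDelta) as Success, sum(ErrorDelta) as Error
| eval Slice=strftime(slice, "%H:%M")
| table Slice Success Error

On the events above this reproduces the required table (1:00 = 1/1 through 6:00 = 5/1). Hours with no events produce no row; a makecontinuous on slice, with fillnull of the deltas, before the cumulative streamstats would emit every slice.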

martin_mueller
SplunkTrust

Something like this?

index="log_index" | bucket span=1h _time as slice | dedup ID slice | timechart span=1h count by Status

The bucket will take care of your one-hour slices, and the dedup will discard all but the latest event per slice for every ID.

martin_mueller
SplunkTrust

I guess I don't understand what you'd like to happen differently from accum...

Nikita_Danilov
Path Finder

Unfortunately no... I need to calculate all slices in one search; "accum" will not give me the result shown in the example.

martin_mueller
SplunkTrust

Isn't that what accum does?

Nikita_Danilov
Path Finder

Hm... Sorry, I'm not sure it's really called "slice cumulative logic"; it's just my own name for it. That's why I tried to describe it in the first post.

Explanation: at every slice, the search must include all events from all previous slices. For example, if the earliest boundary is 8:00 AM, then:
1) At 9:00 AM - include events from 8:00 to 9:00 AM;
2) At 10:00 AM - include events from 8:00 to 10:00 AM;
3) At 11:00 AM - include events from 8:00 to 11:00 AM;
and so on.

martin_mueller
SplunkTrust

Googling that term yields this question as the top result for me: https://www.google.com/search?q=slice+cumulative+logic
I guess you'll have to explain what you mean by that...

Nikita_Danilov
Path Finder

That's why I called it "slice cumulative logic" 🙂
I understand it's unusual logic, but those are the requirements.

martin_mueller
SplunkTrust

In my mind a cumulative value cannot decrease, so maybe we're thinking of different things.

Nikita_Danilov
Path Finder

Martin, it's not quite what I need. I described the task in my first post. As you can see, in the second slice the total Success count increased, but the total Error count decreased.

martin_mueller
SplunkTrust

Append this after the timechart:

... | accum Error | accum Success

How does that not calculate your cumulative values?
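
For clarity, combining the two snippets, the complete search being suggested is:

index="log_index"
| bucket span=1h _time as slice
| dedup ID slice
| timechart span=1h count by Status
| accum Error
| accum Success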

Nikita_Danilov
Path Finder

Martin, great thanks!

But it is not cumulative in the required sense. With this search, in slice 2014-04-23 12:00 I would get 1 Success and 0 Error.

I know about the "accum" and "delta" commands, but they do not allow implementing the cumulative "slice logic by ID" fully.
