
Timechart duration, with values rolling into next span

ALXWBR
Path Finder

I am running the below search to get a sum of starvation per 15-minute period. The problem I am having is that duration is always attributed to the start time of the event, so if the starvation runs over more than one 15-minute period, it is all still attributed to the starting time slice. Ideally I need it to roll seconds over into the next span once they exceed 900.

Here is the search:

index=idx_sems source="sems_north" sourcetype=SEMSSmartLaneEvents Tag="XRY_STVD" EquipmentID=3
| transaction EquipmentID  startswith=TagBitData="1" endswith=TagBitData="0"
| timechart span=15m sum(duration) as total

Here is an example of the result

_time                 total
2019-07-22 02:15:00   98.893
2019-07-22 02:30:00   937.92
2019-07-22 02:45:00   1009.674
2019-07-22 03:00:00   2593.638
2019-07-22 03:15:00
2019-07-22 03:30:00
2019-07-22 03:45:00   706.153

I need it to show a maximum of 900 in the sum and roll any difference down into the following spans.
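For example, the 2593.638 seconds recorded at 03:00 should be reported as 900 at 03:00, 900 at 03:15, and the remaining 793.638 at 03:30 (2593.638 - 2 × 900 = 793.638).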

1 Solution

dmarling
Builder

You can use streamstats to carry over the overflow into the next 15-minute bucket, and then use some evals to make sure you are not overflowing the one after that. The query below assumes your overflow won't span more than two 15-minute buckets; if it does, you'll need to add another streamstats and another round of adjusted totals/overflows. Hopefully there's a better way to do this, but this is the only way I could think of. Here's a run-anywhere example using the sample data you provided:

| makeresults count=1 
| eval data=" 2019-07-22 02:15:00    98.893
 2019-07-22 02:30:00    937.92
 2019-07-22 02:45:00    1009.674
 2019-07-22 03:00:00    2593.638
 2019-07-22 03:15:00     
 2019-07-22 03:30:00     
 2019-07-22 03:45:00    706.153" 
| rex max_match=0 field=data "(?<data>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}[^\n]+)" 
| mvexpand data 
| rex field=data "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})" 
| rex field=data "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(?<total>[^\s]+)" 
| eval _time=strptime(time, "%Y-%m-%d %H:%M:%S") 
| fields - time data
| eval overflow=if(total>900, total-900, null())
| streamstats window=1 current=false values(overflow) as previousoverflow
| eval adjustedtotal=coalesce(previousoverflow,0)+coalesce(total, 0)
| eval adjustedoverflow=if(adjustedtotal>900, adjustedtotal-900, 0)
| streamstats window=1 current=false values(adjustedoverflow) as previousadjustedoverflow
| eval finalTotal=case(adjustedtotal=0, previousadjustedoverflow, adjustedtotal>900, 900, NOT adjustedtotal=0 AND NOT adjustedtotal>900, adjustedtotal)
| rename total as oldTotal
| table _time oldTotal overflow previousoverflow adjustedtotal adjustedoverflow  previousadjustedoverflow finalTotal
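
If the streamstats pattern above is unfamiliar: window=1 current=false makes each row see only the immediately preceding row's value, which is what turns overflow into previousoverflow. A minimal, self-contained illustration (the row and overflow fields are made up purely for the demo):

| makeresults count=4 
| streamstats count as row 
| eval overflow=row*100 
| streamstats window=1 current=false values(overflow) as previousoverflow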

Here it is with the query you provided:

index=idx_sems source="sems_north" sourcetype=SEMSSmartLaneEvents Tag="XRY_STVD" EquipmentID=3 
| transaction EquipmentID startswith=TagBitData="1" endswith=TagBitData="0" 
| timechart span=15m sum(duration) as total 
| eval overflow=if(total>900, total-900, null()) 
| streamstats window=1 current=false values(overflow) as previousoverflow 
| eval adjustedtotal=coalesce(previousoverflow,0)+coalesce(total, 0) 
| eval adjustedoverflow=if(adjustedtotal>900, adjustedtotal-900, 0) 
| streamstats window=1 current=false values(adjustedoverflow) as previousadjustedoverflow 
| eval finalTotal=case(adjustedtotal=0, previousadjustedoverflow, adjustedtotal>900, 900, NOT adjustedtotal=0 AND NOT adjustedtotal>900, adjustedtotal) 
| rename total as oldTotal 
| table _time oldTotal overflow previousoverflow adjustedtotal adjustedoverflow previousadjustedoverflow finalTotal

ALXWBR
Path Finder

Thanks for this. However, it's not quite doing the trick. It's only rolling over twice; if the number is in excess of 2700, it doesn't continue rolling down until the seconds have all been accounted for.


ALXWBR
Path Finder

oh yeah sorry! Well, the machines can be out of use for a few hours at times. For example, the day I am looking at right now has a transaction over 10,000 seconds long, which would roll over into 12 buckets.


dmarling
Builder

Okay, yeah, that's doable, but it gets irritatingly complicated the more buckets you go down. I'll need to find something that iterates the process automatically without needing 12 sets of streamstats, as that's not really sustainable.
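
In the meantime, here's an untested sketch of a different angle that sidesteps streamstats entirely: split each transaction's seconds across the wall-clock 15-minute buckets it actually spans, before the timechart, so no single transaction can put more than 900 seconds into one bucket. This assumes duration is in seconds and _time is the transaction start (both true for the transaction command), and note it attributes seconds to the buckets the event actually crossed rather than rolling forward from the start bucket:

index=idx_sems source="sems_north" sourcetype=SEMSSmartLaneEvents Tag="XRY_STVD" EquipmentID=3 
| transaction EquipmentID startswith=TagBitData="1" endswith=TagBitData="0" 
| eval start=_time, end=_time+duration 
| eval bucket=mvrange(floor(start/900)*900, end, 900) 
| mvexpand bucket 
| eval slice=min(end, bucket+900)-max(start, bucket) 
| eval _time=bucket 
| timechart span=15m sum(slice) as total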


ALXWBR
Path Finder

I've been racking my brain with streamstats > eval > if combos for two days now and just can't seem to find the right combination.


dmarling
Builder

Okay, I took this thing back to the drawing board. Because of concurrency, we really just need to identify groups where a duration spills over into further 15-minute buckets and determine how many buckets each needs. If something else spills over in between and needs fewer buckets than one of those 10k-second ones you found, we don't really care about it, as it'll show as 900 seconds in this timechart anyway due to that concurrency.

With that in mind, if we create a subsearch that assigns groups based on how many buckets are needed, and then build some rules around that, it should accomplish what we need. I took your example and added some data to it to test different use cases. Here's the run-anywhere example:
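
For clarity, the subsearch returns a string that becomes the argument list of the outer case(); with the sample data below it should expand to roughly:

(counter>=8 AND counter<=11), "1", (counter>=4 AND counter<=6), "2", (counter>=3 AND counter<=4), "3", (counter>=2 AND counter<=3), "4"

Because case() returns the first matching branch, and the subsearch sorts descending by future, overlapping ranges resolve in favor of the group that reaches furthest forward.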

| makeresults count=1 
| eval data=" 2019-07-22 02:00:00    
2019-07-22 02:15:00    98.893
  2019-07-22 02:30:00    937.92
  2019-07-22 02:45:00    1009.674
  2019-07-22 03:00:00    2593.638
  2019-07-22 03:15:00      
  2019-07-22 03:30:00     
  2019-07-22 03:45:00    706.153
  2019-07-22 04:00:00    2800.153
  2019-07-22 04:15:00     
  2019-07-22 04:30:00     
  2019-07-22 04:45:00     
  2019-07-22 05:00:00     
  2019-07-22 05:15:00     
  2019-07-22 05:30:00     " 
| rex max_match=0 field=data "(?<data>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}[^\n]+)" 
| mvexpand data 
| rex field=data "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})" 
| rex field=data "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(?<total>[^\s]+)" 
| eval _time=strptime(time, "%Y-%m-%d %H:%M:%S") 
| fields - time data 
| eval amount=coalesce(case((total/900)>1 AND round(total/900,0)>(total/900), round(total/900,0)-1, (total/900)>1 AND round(total/900,0)<(total/900), round(total/900,0)), 0) 
| streamstats current=f count as counter 
| eval carryover=if(total>900, total-900, 0) 
| eval future=if(carryover>0, counter+amount, 0) 
| eval group=case( 
    [| makeresults count=1 
    | eval data=" 2019-07-22 02:00:00   
2019-07-22 02:15:00    98.893
  2019-07-22 02:30:00    937.92
  2019-07-22 02:45:00    1009.674
  2019-07-22 03:00:00    2593.638
  2019-07-22 03:15:00      
  2019-07-22 03:30:00     
  2019-07-22 03:45:00    706.153
  2019-07-22 04:00:00    2800.153
  2019-07-22 04:15:00     
  2019-07-22 04:30:00     
  2019-07-22 04:45:00     
  2019-07-22 05:00:00     
  2019-07-22 05:15:00     
  2019-07-22 05:30:00     " 
    | rex max_match=0 field=data "(?<data>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}[^\n]+)" 
    | mvexpand data 
    | rex field=data "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})" 
    | rex field=data "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(?<total>[^\s]+)" 
    | eval _time=strptime(time, "%Y-%m-%d %H:%M:%S") 
    | fields - time data 
    | eval amount=coalesce(case((total/900)>1 AND round(total/900,0)>(total/900), round(total/900,0)-1, (total/900)>1 AND round(total/900,0)<(total/900), round(total/900,0)), 0) 
    | streamstats current=f count as counter 
    | eval carryover=if(total>900, total-900, 0) 
    | eval future=if(carryover>0, counter+amount, 0) 
    | search NOT future=0 
    | stats min(counter) as counter by future 
    | sort 0 - future 
    | eval future="counter<=".future 
    | eval counter="counter>=".counter 
    | streamstats count as group 
    | eval search="(".counter." AND ".future."), \"".group."\"" 
    | stats list(search) as search 
    | eval search=mvjoin(search, ", ")]) 
| eventstats max(amount) as maxamount max(future) as maxfuture max(total) as maxcarryover count as groupcount by group 
| eval finalTotal=case(isnotnull(group) AND (NOT counter=maxfuture OR groupcount=1), 900, isnotnull(group) AND counter=maxfuture, maxcarryover-(maxamount*900), isnull(group), total)
| rename total as oldTotal
| table _time oldTotal finalTotal

If you add a larger number in there it will automatically be accounted for, since the groups are mapped based on how many buckets the largest duration requires. Here it is mapped to your specific use case:

index=idx_sems source="sems_north" sourcetype=SEMSSmartLaneEvents Tag="XRY_STVD" EquipmentID=3 
| transaction EquipmentID startswith=TagBitData="1" endswith=TagBitData="0" 
| timechart span=15m sum(duration) as total 
| eval amount=coalesce(case((total/900)>1 AND round(total/900,0)>(total/900), round(total/900,0)-1, (total/900)>1 AND round(total/900,0)<(total/900), round(total/900,0)), 0) 
| streamstats current=f count as counter 
| eval carryover=if(total>900, total-900, 0) 
| eval future=if(carryover>0, counter+amount, 0) 
| eval group=case( 
    [ search index=idx_sems source="sems_north" sourcetype=SEMSSmartLaneEvents Tag="XRY_STVD" EquipmentID=3 
    | transaction EquipmentID startswith=TagBitData="1" endswith=TagBitData="0" 
    | timechart span=15m sum(duration) as total 
    | eval amount=coalesce(case((total/900)>1 AND round(total/900,0)>(total/900), round(total/900,0)-1, (total/900)>1 AND round(total/900,0)<(total/900), round(total/900,0)), 0) 
    | streamstats current=f count as counter 
    | eval carryover=if(total>900, total-900, 0) 
    | eval future=if(carryover>0, counter+amount, 0) 
    | search NOT future=0 
    | stats min(counter) as counter by future 
    | sort 0 - future 
    | eval future="counter<=".future 
    | eval counter="counter>=".counter 
    | streamstats count as group 
    | eval search="(".counter." AND ".future."), \"".group."\"" 
    | stats list(search) as search 
    | eval search=mvjoin(search, ", ")]) 
| eventstats max(amount) as maxamount max(future) as maxfuture max(total) as maxcarryover count as groupcount by group 
| eval finalTotal=case(isnotnull(group) AND (NOT counter=maxfuture OR groupcount=1), 900, isnotnull(group) AND counter=maxfuture, maxcarryover-(maxamount*900), isnull(group), total) 
| rename total as oldTotal 
| table _time oldTotal finalTotal

Your only problem at this point would be if the subsearch that generates the groups runs longer than 30 seconds, which will cause it to time out if you have the default timeouts in your confs.
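
If you do hit that and you control the search head, the relevant knobs live under the [subsearch] stanza of limits.conf. Roughly (defaults vary by version, so treat this as a sketch):

[subsearch]
maxtime = 120
maxout = 10000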


ALXWBR
Path Finder

Sorry, but it's still not there. The period between 03:00 and 04:00 isn't right.

_time                 oldTotal    finalTotal
2019-07-22 00:00:00
2019-07-22 00:15:00
2019-07-22 00:30:00
2019-07-22 00:45:00
2019-07-22 01:00:00
2019-07-22 01:15:00
2019-07-22 01:30:00
2019-07-22 01:45:00
2019-07-22 02:00:00
2019-07-22 02:15:00   98.893      98.893
2019-07-22 02:30:00   937.92      900
2019-07-22 02:45:00   1009.674    900
2019-07-22 03:00:00   2593.638    900
2019-07-22 03:15:00               900
2019-07-22 03:30:00               793.638
2019-07-22 03:45:00   706.153     706.153
2019-07-22 04:00:00   1015.469    900
2019-07-22 04:15:00   451.892     115.469
2019-07-22 04:30:00   329.411     329.411
2019-07-22 04:45:00   319.438     319.438

dmarling
Builder

Thanks for the additional use cases. I believe I have them accounted for now with this iteration:

| makeresults count=1 
| eval data=" 2019-07-22 00:00:00          
 2019-07-22 00:15:00          
 2019-07-22 00:30:00          
 2019-07-22 00:45:00          
 2019-07-22 01:00:00          
 2019-07-22 01:15:00          
 2019-07-22 01:30:00          
 2019-07-22 01:45:00          
 2019-07-22 02:00:00          
 2019-07-22 02:15:00    98.893
   2019-07-22 02:30:00    937.92
   2019-07-22 02:45:00    1009.674
   2019-07-22 03:00:00    2593.638
   2019-07-22 03:15:00      
   2019-07-22 03:30:00     
   2019-07-22 03:45:00    706.153
   2019-07-22 04:00:00    1015.469
   2019-07-22 04:15:00    451.892
   2019-07-22 04:30:00    329.411
   2019-07-22 04:45:00    319.438
   2019-07-22 05:00:00     
   2019-07-22 05:15:00     
   2019-07-22 05:30:00     " 
| rex max_match=0 field=data "(?<data>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}[^\n]+)" 
| mvexpand data 
| rex field=data "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})" 
| rex field=data "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(?<total>[^\s]+)" 
| eval _time=strptime(time, "%Y-%m-%d %H:%M:%S") 
| fields - time data 
| eval amount=coalesce(case((total/900)>1 AND round(total/900,0)>(total/900), round(total/900,0), (total/900)>1 AND round(total/900,0)<(total/900), round(total/900,0)+1), 0) 
| streamstats current=f count as counter 
| eval carryover=if(total>900, total-900, 0) 
| eval future=if(carryover>0, counter+amount, 0) 
| eval group=case( 
    [| makeresults count=1 
    | eval data=" 2019-07-22 00:00:00          
 2019-07-22 00:15:00          
 2019-07-22 00:30:00          
 2019-07-22 00:45:00          
 2019-07-22 01:00:00          
 2019-07-22 01:15:00          
 2019-07-22 01:30:00          
 2019-07-22 01:45:00          
 2019-07-22 02:00:00          
 2019-07-22 02:15:00    98.893
   2019-07-22 02:30:00    937.92
   2019-07-22 02:45:00    1009.674
   2019-07-22 03:00:00    2593.638
   2019-07-22 03:15:00      
   2019-07-22 03:30:00     
   2019-07-22 03:45:00    706.153
   2019-07-22 04:00:00    1015.469
   2019-07-22 04:15:00    451.892
   2019-07-22 04:30:00    329.411
   2019-07-22 04:45:00    319.438
   2019-07-22 05:00:00     
   2019-07-22 05:15:00     
   2019-07-22 05:30:00     " 
    | rex max_match=0 field=data "(?<data>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}[^\n]+)" 
    | mvexpand data 
    | rex field=data "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})" 
    | rex field=data "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(?<total>[^\s]+)" 
    | eval _time=strptime(time, "%Y-%m-%d %H:%M:%S") 
    | fields - time data 
    | eval amount=coalesce(case((total/900)>1 AND round(total/900,0)>(total/900), round(total/900,0), (total/900)>1 AND round(total/900,0)<(total/900), round(total/900,0)+1), 0) 
    | streamstats current=f count as counter 
    | eval carryover=if(total>900, total-900, 0) 
    | eval future=if(carryover>0, counter+amount, 0) 
    | search NOT future=0 
    | stats min(counter) as counter by future 
    | sort 0 - future 
    | eval future="counter<=".future 
    | eval counter="counter>=".counter 
    | streamstats count as group 
    | eval search="(".counter." AND ".future."), \"".group."\"" 
    | stats list(search) as search 
    | eval search=mvjoin(search, ", ")]) 
| eventstats max(amount) as maxamount max(future) as maxfuture max(total) as maxcarryover count as groupcount by group 
| eval finalTotal=case(isnotnull(group) AND (NOT counter=maxfuture OR groupcount=1), 900, isnotnull(group) AND counter=maxfuture, (maxcarryover-((maxamount-1)*900))+coalesce(total,0), isnull(group), total) 
| eval finalcarryover=if(finalTotal>900, finalTotal-900, null()) 
| streamstats window=1 current=f values(finalcarryover) as lastfinalcarryover 
| eval finaltotal=case(finalTotal>=900, 900, finalTotal<900 AND (lastfinalcarryover+finalTotal)<900, lastfinalcarryover+finalTotal, finalTotal<900 AND (lastfinalcarryover+finalTotal)>900, 900, finalTotal<900 AND NOT (lastfinalcarryover+finalTotal)<900 AND NOT (lastfinalcarryover+finalTotal)>900, finalTotal) 
| rename total as oldTotal 
| table _time oldTotal finaltotal
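
The finalcarryover/lastfinalcarryover block at the end is the new part: the tail bucket of a group now adds its own total on top of the remaining carryover (03:45 becomes 793.638 + 706.153 = 1499.791), and the final streamstats/case pass caps anything still over 900 and hands the excess to the next bucket where there's room, which is why 04:30 below shows 329.411 + 115.469 = 444.880.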

This returns these results:

_time                 oldTotal    finaltotal
2019-07-22 00:00:00
2019-07-22 00:15:00
2019-07-22 00:30:00
2019-07-22 00:45:00
2019-07-22 01:00:00
2019-07-22 01:15:00
2019-07-22 01:30:00
2019-07-22 01:45:00
2019-07-22 02:00:00
2019-07-22 02:15:00   98.893      98.893
2019-07-22 02:30:00   937.92      900
2019-07-22 02:45:00   1009.674    900
2019-07-22 03:00:00   2593.638    900
2019-07-22 03:15:00               900
2019-07-22 03:30:00               900
2019-07-22 03:45:00   706.153     900
2019-07-22 04:00:00   1015.469    900
2019-07-22 04:15:00   451.892     900
2019-07-22 04:30:00   329.411     444.880
2019-07-22 04:45:00   319.438     319.438
2019-07-22 05:00:00
2019-07-22 05:15:00
2019-07-22 05:30:00

ALXWBR
Path Finder

I'm getting an error in the | eval group=case(... step. It only works with makeresults, not with the real search. 😞


dmarling
Builder

What's the error?


ALXWBR
Path Finder

Error in 'eval' command: The expression is malformed. An unexpected character is reached at ') )'.

index=idx_sems source="sems_north" sourcetype=SEMSSmartLaneEvents Tag="XRY_STVD" EquipmentID=3 
 | transaction EquipmentID startswith=TagBitData="1" endswith=TagBitData="0" 
 | timechart span=15m sum(duration) as total 
 | eval amount=coalesce(case((total/900)>1 AND round(total/900,0)>(total/900), round(total/900,0), (total/900)>1 AND round(total/900,0)<(total/900), round(total/900,0)+1), 0) 
 | streamstats current=f count as counter 
 | eval carryover=if(total>900, total-900, 0) 
 | eval future=if(carryover>0, counter+amount, 0) 
 | eval group=case( 
     [| search index=idx_sems source="sems_north" sourcetype=SEMSSmartLaneEvents Tag="XRY_STVD" EquipmentID=3 
 | transaction EquipmentID startswith=TagBitData="1" endswith=TagBitData="0" 
 | timechart span=15m sum(duration) as total  
     | eval amount=coalesce(case((total/900)>1 AND round(total/900,0)>(total/900), round(total/900,0), (total/900)>1 AND round(total/900,0)<(total/900), round(total/900,0)+1), 0) 
     | streamstats current=f count as counter 
     | eval carryover=if(total>900, total-900, 0) 
     | eval future=if(carryover>0, counter+amount, 0) 
     | search NOT future=0 
     | stats min(counter) as counter by future 
     | sort 0 - future 
     | eval future="counter<=".future 
     | eval counter="counter>=".counter 
     | streamstats count as group 
     | eval search="(".counter." AND ".future."), \"".group."\"" 
     | stats list(search) as search 
     | eval search=mvjoin(search, ", ")]) 
 | eventstats max(amount) as maxamount max(future) as maxfuture max(total) as maxcarryover count as groupcount by group 
 | eval finalTotal=case(isnotnull(group) AND (NOT counter=maxfuture OR groupcount=1), 900, isnotnull(group) AND counter=maxfuture, (maxcarryover-((maxamount-1)*900))+coalesce(total,0), isnull(group), total) 
 | eval finalcarryover=if(finalTotal>900, finalTotal-900, null()) 
 | streamstats window=1 current=f values(finalcarryover) as lastfinalcarryover 
 | eval finaltotal=case(finalTotal>=900, 900, finalTotal<900 AND (lastfinalcarryover+finalTotal)<900, lastfinalcarryover+finalTotal, finalTotal<900 AND (lastfinalcarryover+finalTotal)>900, 900, finalTotal<900 AND NOT (lastfinalcarryover+finalTotal)<900 AND NOT (lastfinalcarryover+finalTotal)>900, finalTotal) 
 | rename total as oldTotal 
 | table _time oldTotal finaltotal

dmarling
Builder

Thank you. I recreated it on my install as well. This happens when the subsearch that generates the groups returns no results because nothing exceeds 900 seconds. This version accounts for that by making the group field null when that happens:

index=idx_sems source="sems_north" sourcetype=SEMSSmartLaneEvents Tag="XRY_STVD" EquipmentID=3 
| transaction EquipmentID startswith=TagBitData="1" endswith=TagBitData="0" 
| timechart span=15m sum(duration) as total 
| eval amount=coalesce(case((total/900)>1 AND round(total/900,0)>(total/900), round(total/900,0), (total/900)>1 AND round(total/900,0)<(total/900), round(total/900,0)+1), 0) 
| streamstats current=f count as counter 
| eval carryover=if(total>900, total-900, 0) 
| eval future=if(carryover>0, counter+amount, 0) 
| eval group=case( 
    [ search index=idx_sems source="sems_north" sourcetype=SEMSSmartLaneEvents Tag="XRY_STVD" EquipmentID=3 
    | transaction EquipmentID startswith=TagBitData="1" endswith=TagBitData="0" 
    | timechart span=15m sum(duration) as total 
    | eval amount=coalesce(case((total/900)>1 AND round(total/900,0)>(total/900), round(total/900,0), (total/900)>1 AND round(total/900,0)<(total/900), round(total/900,0)+1), 0) 
    | streamstats current=f count as counter 
    | eval carryover=if(total>900, total-900, 0) 
    | eval future=if(carryover>0, counter+amount, 0) 
    | search NOT future=0 
    | stats min(counter) as counter by future 
    | sort 0 - future 
    | eval future="counter<=".future 
    | eval counter="counter>=".counter 
    | streamstats count as group 
    | eval search="(".counter." AND ".future."), \"".group."\"" 
    | stats list(search) as search 
    | eval search=mvjoin(search, ", ") 
    | append 
        [| makeresults count=1 
        | eval search="\"ERROR\"=\"ERROR\", null()" 
        | table search] 
    | eventstats count 
    | search NOT (count>1 search="\"ERROR\"=\"ERROR\", null()") 
    | fields - count)]) 
| eventstats max(amount) as maxamount max(future) as maxfuture max(total) as maxcarryover count as groupcount by group 
| eval finalTotal=case(isnotnull(group) AND (NOT counter=maxfuture OR groupcount=1), 900, isnotnull(group) AND counter=maxfuture, (maxcarryover-((maxamount-1)*900))+coalesce(total,0), isnull(group), total) 
| eval finalcarryover=if(finalTotal>900, finalTotal-900, null()) 
| streamstats window=1 current=f values(finalcarryover) as lastfinalcarryover 
| eval finaltotal=case(finalTotal>=900, 900, finalTotal<900 AND (lastfinalcarryover+finalTotal)<900, lastfinalcarryover+finalTotal, finalTotal<900 AND (lastfinalcarryover+finalTotal)>900, 900, finalTotal<900 AND NOT (lastfinalcarryover+finalTotal)<900 AND NOT (lastfinalcarryover+finalTotal)>900, finalTotal) 
| rename total as oldTotal 
| table _time oldTotal finaltotal
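
The appended makeresults row is just a sentinel: it guarantees the subsearch always returns a syntactically valid case() fragment ("ERROR"="ERROR", null()) even when nothing exceeds 900 seconds, and the eventstats count / search filter removes that sentinel again whenever real groups exist.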

ALXWBR
Path Finder

Looks good, thank you 🙂


dmarling
Builder

Yeah, I called out that it's limited to two buckets. Can you identify the maximum duration you have observed? Knowing the maximum number of carryover buckets to deal with will help with what you need to do.


dmarling
Builder

If the query wasn't clear, the "finalTotal" column is what you would use in your report. I included all of the fields to show the work the evals are doing.


gcusello
SplunkTrust

Hi ALXWBR,
have you already seen the "keeporphans" option of the transaction command (https://docs.splunk.com/Documentation/Splunk/7.3.0/SearchReference/Transaction)?

Bye.
Giuseppe


ALXWBR
Path Finder

Hi Giuseppe

There are no orphans; there's only ever a start and an end event, which is very consistent.
