Can you track long-term service ceilings?

Path Finder

Is there a method of tracking a service ceiling over the long term?  I have daily transactions that are summarized over a suitable interval and written to a summary index.  I want to keep the maximum of the transaction fields (count, success, by category, etc.) at hourly and daily intervals, and then take the maximum of those maximums, i.e. peak(maximum), for each transaction field.  That value represents the service ceiling, or the maximum observed value for each field.  The maximum observed value will later be used to calculate the utilization of the service.

I am thinking the answer is probably a daily report that consumes the summary index, calculates the daily maximum observed values, and then writes the daily maximums back to a summary index as stash events.
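A minimal sketch of such a scheduled daily report, assuming the index and field names used elsewhere in this thread (`my_summary`, `summary_appservice`, `successful`), with `collect` writing the daily maximums to a secondary summary:

index=my_summary report=summary_appservice earliest=-1d@d latest=@d
| stats max(count) as count max(successful) as successful_transactions by category
| collect index=my_summary source=summary_appservice_ceiling

Scheduled once per day, this keeps one row of daily maximums per category; a later search only needs `stats max()` over the secondary summary to recover the all-time ceiling.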

I am having trouble approaching the problem and am looking for ideas and/or guidance.  Currently I am experimenting with streamstats and a window:


| search ...
| bin span=600s _time
| streamstats window=1 current=f sum(successful) AS previous_successful_transactions
| streamstats sum(successful) as successful_transactions
| fillnull value=0 previous_successful_transactions successful_transactions peak_transactions
| eval peak_transactions=if(successful_transactions>previous_successful_transactions, successful_transactions, peak_transactions)
| chart max(previous_successful_transactions) as previous_successful_transactions max(peak_transactions) as peak_transactions by _time
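As an aside, a running maximum can also be computed directly with `streamstats max()`, which avoids the window/compare bookkeeping above (field names follow the question and are assumptions):

| search ...
| bin span=600s _time
| stats sum(successful) as successful_transactions by _time
| streamstats max(successful_transactions) as peak_transactions

Here `peak_transactions` carries the highest 10-minute total seen so far in the time range, so the last row holds the observed ceiling.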




Path Finder

I came up with my own resolution.  My initial data is already summarized into neat 10-minute sample buckets.  I set up a similar second summary to capture just the maximums.  A subsearch against that secondary summary gave me the historic data for capacity measurements.

I hope this helps someone else.

index=my_summary report=summary_appservice source=summary_appservice earliest=-1d
| timechart span=10m sum(successful) as successful_transactions
| eval maximum_observed_transactions=[
    `comment("Historic maximum for capacity")`
    search index=my_summary report=summary_appservice_ceiling earliest=-3y latest=-1m
    | stats max(successful_transactions) as x
    | return $x ]
| eval capacity=maximum_observed_transactions
| eval utilization=successful_transactions/capacity*100
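One small hardening worth considering (my addition, not part of the original answer): guard against a zero or missing capacity before dividing, so the utilization field is null rather than an error when the ceiling summary is empty:

| eval utilization=if(capacity>0, round(successful_transactions/capacity*100, 2), null())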

