Splunk Search

How to count events in a time frame based on a time elapsed field

CarbonCriterium
Path Finder

What is the best approach to creating a field that shows the number of incomplete requests in a given period of time?  

  • For the machine in question, events are logged when it completes the Request-Response Loop.   
  • I have a field `time_taken` which shows, in milliseconds, how long the Request-Response Loop has taken. 
  • I have already done the following, now how do I evaluate the total number of `open_requests`  for each second?

 

| eval responded = _time
| eval requested = _time - time_taken

| eval responded = strftime(responded ,"%Y/%m/%d %H:%M:%S")
| eval requested = strftime(requested ,"%Y/%m/%d %H:%M:%S")

| eval open_requests = ??? 

| table _time open_requests
| sort - _time

 

 


yuanliu
SplunkTrust

It looks like the challenge is how to define the requirement, i.e., the difference between _time at the beginning of the pseudo code, which you use as a marker of "responded", and _time at the end, which you intend as a marker of the clock unit (one second).

I assume that the fields _time and time_taken (and therefore responded and requested as well) are numeric, i.e., they can be used in numeric comparisons.  Ignoring the strftime() calls, which are meant for display only, the following can give you something meaningful:

| eval responded = _time
| eval requested = _time - time_taken/1000 ``` time_taken is in milliseconds ```
| bin _time span=1s ``` chop _time into 1-second bins ```
| where requested < _time AND time_taken > 1000 ``` many ways to construct this, depending on interpretation and preference ```
| timechart count
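The binning and filtering above can be simulated outside Splunk. This is a minimal Python sketch of the same logic, using hypothetical epoch timestamps and time_taken values in milliseconds (none of these numbers come from the thread):

```python
import math

# Hypothetical sample events: (epoch time of response, time_taken in ms).
events = [
    (1700000000.25, 500),    # finished within the same second
    (1700000000.80, 2500),   # took ~2.5 s, so it was open in earlier seconds
    (1700000001.10, 1200),   # took ~1.2 s
]

open_per_bin = {}
for responded, time_taken_ms in events:
    requested = responded - time_taken_ms / 1000.0   # eval requested = _time - time_taken/1000
    bin_start = math.floor(responded)                # bin _time span=1s
    # where requested < _time AND time_taken > 1000 (longer than one second)
    if requested < bin_start and time_taken_ms > 1000:
        open_per_bin[bin_start] = open_per_bin.get(bin_start, 0) + 1

print(open_per_bin)
```

The first event is dropped by the time_taken > 1000 test; the other two each count once in the 1-second bin in which they completed.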

Hope this helps.

CarbonCriterium
Path Finder

Thanks, I eventually came to something similar!  I think this is the solution I am after, unless you can spot a hole in the logic.

 

| eval seconds_taken = time_taken/1000
| eval responded = _time, requested = _time - seconds_taken
| where requested <= responded AND seconds_taken > 0
``` | where requested <= responded AND seconds_taken >= 0 ```
| timechart count span=1s

 


yuanliu
SplunkTrust

As long as you test a variety of data manually and are satisfied with the results, there should be no concern.

That said, both conditions, "requested <= responded" and "seconds_taken > 0", will always be true, so the where clause filters nothing.  Shouldn't it be "seconds_taken > 1"?  At the bottom of this, any event in which time_taken > 1000 would be characterized as an "open request", because you wanted to count from the end of each second.

To make the results logically sound, you also want to shift the time axis to requested, something like

| eval seconds_taken = time_taken/1000
| eval responded = _time, requested = _time - seconds_taken
| where seconds_taken > 1
| rename requested AS _time
| timechart count span=1s

On the other hand, now that I look at it from this angle, there's another consideration that needs attention: if an event's seconds_taken is > 2 but < 3, the event should be counted as an "open request" in two 1-second bins; its "open" state will be concurrent with other "open" requests (older and newer) for its entire duration.  Effectively, you would be stacking Gantt charts.
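The Gantt-style stacking can be sketched in a few lines of Python: count each request in every 1-second bin its [requested, responded] interval touches, so overlapping requests stack up. The intervals below are hypothetical epoch values, chosen only to illustrate the overlap:

```python
import math
from collections import Counter

# Hypothetical request intervals: (requested_epoch, responded_epoch).
intervals = [
    (1700000000.2, 1700000002.7),  # spans three 1-second bins
    (1700000001.5, 1700000001.9),  # fits inside one bin
    (1700000001.8, 1700000003.1),  # spans three bins, overlapping both above
]

concurrency = Counter()
for requested, responded in intervals:
    # Count the request as "open" in every 1-second bin it overlaps,
    # stacking concurrent intervals like rows of a Gantt chart.
    for sec in range(math.floor(requested), math.floor(responded) + 1):
        concurrency[sec] += 1

for sec in sorted(concurrency):
    print(sec, concurrency[sec])
```

With these intervals, the second starting at 1700000001 has all three requests open at once, which a per-response timechart count would never show.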

I faced a very similar problem years ago that somesoni2 helped solve.  You can see the answer in https://community.splunk.com/t5/Splunk-Search/How-to-compute-concurrent-members-in-events/m-p/112163...
