Splunk Search

Creating Transactions to Calculate Duration Based on Measurement Value

ltemple1
Engager

I have a piece of machinery with PLC tags that record either 1 if it is running or 0 if it is down. I am trying to use this to calculate runtime, not just cumulatively but also as a series of up and downtimes such as 3.4 hrs running, 2.4 down, 2.2 running, etc.

I have tried using the transaction command but it does not seem to be grouping things properly. I would like to have transactions where the measurement value is all 1 and then once the first 0 appears a new transaction is formed and goes on until the next 1 appears and so on and so forth so I can get the duration for each transaction.

The posted code is very basic, but it is the basis for my search. I have tried using startswith, endswith, maxpause, etc., and they don't seem to group things properly.

index="index" AppName="Appname" ItemName="Machine1"
| transaction Measurement
| stats sum(duration) as duration by Measurement

1 Solution

woodcock
Esteemed Legend
| makeresults
| eval state="0 0 0 0 0 1 1 1 0 1 1 1 1 1 1 0 0 0 0 1 1 1"
| makemv state
| mvexpand state
| streamstats count AS serial
| eval _time = _time - serial
| eval Measurement=if((serial%2=0), "M1", "M2")
| fields - serial
| sort 0 - _time

| rename COMMENT AS "Everything above fakes event data; everything below is your solution"

| reverse
| streamstats current=f last(state) AS prev_state BY Measurement
| reverse
| streamstats count(eval(state!=prev_state)) AS sessionID BY Measurement
| stats range(_time) AS duration first(state) AS state BY Measurement sessionID


DalJeanis
SplunkTrust

Much simpler than mine. Good to realize that with current=f, the window does not include the current record...
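A minimal sketch of that behavior (fabricated with makeresults, so nothing here comes from the asker's data): with current=f, last() only looks at earlier rows, so the first row gets no previous value at all.

| makeresults count=3
| streamstats count AS n
| streamstats current=f last(n) AS prev_n

prev_n is null on the first row, then 1 and 2 on the following rows, i.e. always the value from the previous record, never the current one.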

For anyone comparing the various techniques, please note that in woodcock's code, Measurement is the original post's ItemName (the machine name), and state is the original post's Measurement (the 1 or 0).

Oops. Nope, this doesn't match the requirements. duration in this calculation is the time between the start of the first event in the session and the start of the last event in the session, not the time between the start of the first event in the session and the start of the first event in the next session.
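To make the gap concrete (hypothetical timestamps): if a session has readings at t=0, 60, and 120 seconds, and the next session starts at t=180, range() over those times reports only 120 seconds, even though the state actually held for 180. A minimal run:

| makeresults count=1
| eval t="0 60 120"
| makemv t
| mvexpand t
| eval t=tonumber(t)
| stats range(t) AS range_t

range_t comes out as 120, not the 180 seconds the state was actually in effect.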


DalJeanis
SplunkTrust

This version works. I've changed the field names to match the original post.

We use the _time of the next reading to calculate the duration of the current state, then sum them rather than use range.

At the end, I'm passing _time through the stats command so that, together with the duration and the readingcount, you can see that it's calculating correctly.

 | makeresults 
 | eval Measurement="0 0 0 0 0 1 1 1 0 1 1 1 1 1 1 0 0 0 0 1 1 1"
 | makemv Measurement
 | mvexpand Measurement
 | streamstats count AS serial
 | eval _time = _time - serial
 | eval Itemname=if((serial%3=0), "M1", "M2")
 | fields - serial
 | rename COMMENT AS "Everything above fakes event data; everything below is your solution"

 | sort 0 _time
 | streamstats current=f last(Measurement) AS prev_Measurement BY Itemname
 | reverse 
 | streamstats current=f last(_time) AS next_time  BY Itemname
 | eval duration=next_time-_time
 | reverse 
 | streamstats count(eval(Measurement!=prev_Measurement)) AS sessionID  BY Itemname
 | stats min(_time) as _time, sum(duration) as sumduration, count as readingcount BY Itemname sessionID

woodcock
Esteemed Legend

Thanks for the adjustment.


DalJeanis
SplunkTrust

This part creates test data with Itemname, Measurement, and _time. The recno field is just for testing and has no effect on the logic.

| makeresults count=20 | eval Itemname="Machine1" | eval duration=17*(random()%250)+1 
| eval Measurement=(random()%107+random()%6)%2 
| streamstats count as recno, sum(duration) as nettime 
| eval _time=strptime("2017-01-01","%Y-%m-%d")+nettime 
| table _time Itemname recno Measurement
| appendpipe 
    [| stats min(_time) as mintime 
     | eval _time = strptime("2017-01-01","%Y-%m-%d") 
     | eval duration = mintime - _time 
     | eval Itemname="Machine1" 
     | eval Measurement=(random()%107+random()%6)%2 
     | eval recno=0 
     | table _time Itemname recno Measurement]

This part calculates the duration of each step based on _time, and copies the prior Measurement onto each record. Only the first record of each run of the same Measurement value will end up being retained, so the records where the prior value was different are the ones that matter. Since we want the sum of all the future event durations in the sequence, and the duration of each event is ended by the next event, we have to reverse the order here.

| sort 0 _time
| autoregress Measurement as LastMeasure p=1
| reverse 
| delta _time as duration 
| eval duration=round(-coalesce(duration,0))

This part adds up the durations in the sequence, and resets when the Measurement has just changed. The coalesce is because we need to retain the first record, and there was no prior record, so LastMeasure is null on that record.

| streamstats count as streamcount sum(duration) as sameduration reset_after="Measurement!=coalesce(LastMeasure,2)"
| where Measurement!=coalesce(LastMeasure,2)

That's it. The following are just for presentation. Calculate the end time of each sequence, using the duration, and calculate one overall record to show that the total duration matches the duration of the individual sequences. Then present in _time order.

| eval endtime = strftime(_time + sameduration,"%Y-%m-%d %H:%M:%S")
| appendpipe [| stats sum(streamcount) as streamcount min(_time) as mintime sum(sameduration) as sameduration by Itemname | eval _time=mintime+sameduration | eval endtime = strftime(_time,"%Y-%m-%d %H:%M:%S")| eval duration=0]
| sort 0 _time

DalJeanis
SplunkTrust

It would be helpful to post some actual test data (with confidential items masked).


cmerriman
Super Champion

Did you have endswith=eval(match(Measurement,"0")) in your transaction? It can be picky about how you enter the endswith/startswith options.

Have you tried using streamstats? It's a great alternative to transaction.
http://docs.splunk.com/Documentation/Splunk/6.5.3/SearchReference/Streamstats

Maybe something like this:

index="index" AppName="Appname" ItemName="Machine1" 
| stats count by Measurement _time
| sort 0 + _time
| streamstats window=1 current=f values(Measurement) as prevMeasurement values(_time) as prevTime
| eval duration=_time-prevTime
| stats sum(duration) as duration by Measurement

ltemple1
Engager

Thanks, cmerriman!

This works to get the overall duration of each Measurement over the length of the search, but it does not break things into segments. I would like to see how long the machine was down each time, or how long it runs before it goes down, not just the overall sum for the day.


cmerriman
Super Champion

You might just need to tweak the streamstats command. There are lots of options for it, depending on what you're doing and what your data looks like. Have you looked through the options in the doc I linked?
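For the segment-by-segment breakdown asked about above, one sketch (not tested against the asker's data; field names follow the original post, and the index/app filters are the asker's) is to combine this streamstats approach with the change-counter technique from the accepted answer:

index="index" AppName="Appname" ItemName="Machine1"
| sort 0 + _time
| streamstats current=f last(Measurement) AS prevMeasurement
| reverse
| streamstats current=f last(_time) AS nextTime
| eval duration = nextTime - _time
| reverse
| streamstats count(eval(Measurement!=prevMeasurement)) AS sessionID
| stats min(_time) AS _time sum(duration) AS duration first(Measurement) AS Measurement BY sessionID

Each output row should then be one contiguous run of 1s or 0s with its total duration, which is the "3.4 hrs running, 2.4 down" breakdown from the original question.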
