
Hi Splunk people.
I am trying to map the number of concurrent transactions.
This is not exactly the same as the concurrency command, which shows the concurrency at the beginning of each transaction. I want to show the concurrency over 5-minute spans, like a timechart.
Base search for the transactions:
id | transaction id startswith="start" endswith="stop" maxpause=3600 | table _time id duration
Sample :
# simple single transaction (A)
2012-12-01 10:00:00 id=A start
2012-12-01 10:10:00 id=A whatever
2012-12-01 10:30:00 id=A stop
# 2 overlapping transactions (B and C)
2012-12-01 11:00:00 id=B start
2012-12-01 11:05:00 id=B whatever
2012-12-01 11:10:00 id=C start
2012-12-01 11:15:00 id=C whatever
2012-12-01 11:20:00 id=C stop
2012-12-01 11:35:00 id=B stop
#same transactions restarting several time (D)
2012-12-01 12:00:00 id=D start
2012-12-01 12:10:00 id=D start
2012-12-01 12:15:00 id=D whatever
2012-12-01 12:20:00 id=D stop
the result should look like:
2012-12-01 10:00:00 concurrency=1
2012-12-01 10:05:00 concurrency=1
2012-12-01 10:10:00 concurrency=1
2012-12-01 10:15:00 concurrency=1
2012-12-01 10:20:00 concurrency=0
0
0
...
0
2012-12-01 11:00:00 concurrency=1
2012-12-01 11:05:00 concurrency=1
2012-12-01 11:10:00 concurrency=2
2012-12-01 11:15:00 concurrency=2
2012-12-01 11:20:00 concurrency=1
2012-12-01 11:25:00 concurrency=1
2012-12-01 11:30:00 concurrency=1
2012-12-01 11:35:00 concurrency=0
0
0
...
0
2012-12-01 12:00:00 concurrency=1
2012-12-01 12:05:00 concurrency=1
2012-12-01 12:10:00 concurrency=1
2012-12-01 12:15:00 concurrency=1
2012-12-01 12:20:00 concurrency=0
Currently the result has gaps:
id | transaction id startswith="start" endswith="stop" maxpause=3600 | table _time id duration | concurrency duration=duration | timechart span=5m max(concurrency)

As per direct advice from Gerald, here is the grail of the concurrency search.
There were 2 paths:
- fill the holes with artificial events, in order to have an event every 5 minutes: use the command | gentimes increment=5 in a subsearch, append it to the previous results, then do some magic. However, the gentimes scripted command does not exist on Storm, and this is for a Storm search.
- decompose each transaction into a start and a stop event, add a counter that increments (plus one for a start, minus one for a stop), fill the gaps with makecontinuous, and finally streamstats the running sum of the counter as the concurrent_counter. Of course, some split/mvexpand magic is still required to turn a single transaction into 2 events (start and stop).
id | eval mytime=_time | transaction id startswith="start" endswith="stop" | eval transactionid=id._time | stats min(mytime) AS start max(mytime) AS stop values(id) AS id values(duration) AS duration by transactionid | eval mytimeconcat="1_".start." -1_".stop | eval mytimemv=split(mytimeconcat," ") | mvexpand mytimemv | rex field=mytimemv "(?<counter>(1|\-1))_(?<_time>\d+)" | table _time id counter | sort _time | bucket _time span=5m | makecontinuous _time span=5m | streamstats sum(counter) AS concurrent_counter | table _time concurrent_counter
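The plus-one/minus-one counter sweep at the heart of this search can be sanity-checked outside Splunk. A minimal Python sketch of the same algorithm (bucket the start/stop deltas, make the time axis continuous, take a running sum); the sample intervals encode transactions B and C from the question as made-up seconds-of-day values:

```python
from collections import defaultdict

def concurrency_by_bucket(transactions, span=300):
    """Sweep-line concurrency: +1 at each start, -1 at each stop,
    then a running sum over span-second buckets. This mirrors the
    mvexpand + makecontinuous + streamstats steps of the SPL search."""
    deltas = defaultdict(int)
    for start, stop in transactions:
        deltas[start - start % span] += 1   # like: bucket _time span=5m
        deltas[stop - stop % span] -= 1
    if not deltas:
        return {}
    lo, hi = min(deltas), max(deltas)
    running, out = 0, {}
    t = lo
    while t <= hi:                          # like: makecontinuous _time span=5m
        running += deltas[t]                # like: streamstats sum(counter)
        out[t] = running
        t += span
    return out

# B runs 11:00-11:35 (39600-41700), C runs 11:10-11:20 (40200-40800)
txns = [(39600, 41700), (40200, 40800)]
print(concurrency_by_bucket(txns))
```

The output reproduces the expected result from the question for B and C: concurrency 2 in the 11:10 and 11:15 buckets, dropping back to 1 at 11:20 and 0 at 11:35.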
see result
Proposal for the gentimes approach:
sourcetype=mysourcetype
| transaction callid maxspan=60m startswith=(event=ENTERQUEUE)
| append [| gentimes start=0 end=1 increment=1m | eval _time=starttime | eval duration=60 | fields _time, duration]
| concurrency duration=duration output=concurrency
| timechart span=1m max(concurrency) AS operators
| eval operators=operators-1
Caution: this approach doesn't work in real-time mode because of the append command.
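The same gap-filling idea can be sketched in Python to see why the subtract-1 step is needed. This assumes Splunk-style concurrency semantics (an event's concurrency = number of events whose [start, start+duration) interval covers its start time); the epoch-second sample values are made up for illustration:

```python
def concurrency_at(t, events):
    """Splunk-style concurrency: count events whose [start, start+duration)
    interval covers time t."""
    return sum(1 for start, dur in events if start <= t < start + dur)

def gap_filled_concurrency(transactions, t0, t1, span=300):
    """The gentimes trick: append one synthetic event of duration `span`
    per bucket so every bucket contains at least one event start, take
    the concurrency at each synthetic start, then subtract 1 to discount
    the synthetic event itself (the eval operators=operators-1 step)."""
    events = list(transactions) + [(t, span) for t in range(t0, t1, span)]
    return {t: concurrency_at(t, events) - 1 for t in range(t0, t1, span)}

# (start, duration) pairs: B at 39600 for 2100s, C at 40200 for 600s
txns = [(39600, 2100), (40200, 600)]
print(gap_filled_concurrency(txns, 39600, 42000))
```

Without the synthetic events there would be no data point in empty buckets; with them, every bucket reports a value, and the idle buckets correctly come out as 0.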
Can we get details on the code that supports the "counter" field?
Also, has anyone developed the query using "gentimes"?
Regards.
Can someone please translate the <em> portions of this syntax from @yannK:
id | eval mytime=<em>time
| transaction id startswith="start" endswith="stop"
| eval transactionid=id./<em>time
| stats min(mytime) AS start max(mytime) AS stop values(id) AS id values(duration) AS duration by transactionid
| eval mytimeconcat="1</em>".start." -1</em>".stop
| eval mytimemv=split(mytimeconcat," ")
| mvexpand mytimemv
| rex field=mytimemv "(?(1|-1))_(?<_time>\d+)"
| table _time id counter
| sort _time
| bucket _time span=5m
| makecontinuous _time span=5m
| streamstats sum(counter) AS concurrent_counter
| table _time concurrent_counter

The <em> portions are just underscores. The search would be as follows, with the underscores restored and the missing counter capture group fixed:
id | eval mytime=_time
| transaction id startswith="start" endswith="stop"
| eval transactionid=id._time
| stats min(mytime) AS start max(mytime) AS stop values(id) AS id values(duration) AS duration by transactionid
| eval mytimeconcat="1_".start." -1_".stop
| eval mytimemv=split(mytimeconcat," ")
| mvexpand mytimemv
| rex field=mytimemv "(?<counter>(1|-1))_(?<_time>\d+)"
| table _time id counter
| sort _time
| bucket _time span=5m
| makecontinuous _time span=5m
| streamstats sum(counter) AS concurrent_counter
| table _time concurrent_counter
Cheers!!!
Fantastic question and answer
Hi @yannK,
I understand this is a pretty old thread, but I'm having similar problem and trying to follow your query. Could you share some idea how to calculate the counter
field?
Thanks!

The counter field is extracted by the named capture group in the regex; the group name was lost in the original post's formatting. It should read:
| rex field=mytimemv "(?<counter>(1|-1))_(?<_time>\d+)"
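For anyone double-checking that rex: the named groups pull a sign and an epoch timestamp out of strings like 1_1354352400. A quick sanity check with an equivalent pattern in Python's named-group syntax (the epoch values are made-up examples):

```python
import re

# Python equivalent of the SPL:
#   rex field=mytimemv "(?<counter>(1|-1))_(?<_time>\d+)"
pattern = re.compile(r"(?P<counter>(1|-1))_(?P<_time>\d+)")

start = pattern.match("1_1354352400")    # a start marker: counter = "1"
stop = pattern.match("-1_1354356000")    # a stop marker:  counter = "-1"
print(start.group("counter"), stop.group("counter"))
print(stop.group("_time"))
```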

My suggestion would be:
- use the gentimes command to generate a set of events every 5 minutes over the relevant span
- use eval to set the duration of each of those events to 5 minutes (300 seconds)
- append those generated events to the results of your transaction search
- use the concurrency command to get the concurrency at the start of every one of the combined set of events
- subtract 1 from every concurrency value
Hi Yann, I am running into problems when following the instructions.
I get an error when makecontinuous is executed: "Unexpected duplicate values in field '_time' have been detected." I tried selecting just one user_id and running the query for that one. To avoid the duplicate _time value I was getting for this user_id, I tried adjusting the stop time when the duration is 0. After the mvexpand for this user, only 6 events are returned, yet after adding the last step (makecontinuous), it came back (I tried more than once) with the error "The specified span would result in too many (>50000) rows."
thx

Perfect, see the detailed answer below; here are your 100 karma points, Sir.
An example would help a lot to digest this.

Yannk,
Try This:
id | transaction id startswith="start" endswith="stop" maxpause=3600 |concurrency duration=duration|timechart span=5m max(concurrency) as concurrency|fillnull value=0 concurrency

Not enough; the gaps between the transaction starts are still null.
See the screenshot added to the question.
It looks to me like you can do something like that by adding
| bucket _time span=5m | chart count AS concurrency by _time
though you don't get the exact k=v formatting in your example output.
You are incorrectly assuming that you are working with the result set. The actual challenge is to generate what you see as the result set (at the bottom) from the transaction set you see at the top.
Once you generate the result set, the rest is easy.
