Is it possible to get the first and last concurrent events by a field? I'm trying to use this with NAT translations to determine how long each user uses the network at any one given time. Events occur with the following interesting information:
However, a translation is done for each session of a computer by port. I don't really care about the port information, as I already have a search for that. I'm trying to get the first and last concurrent event in which the nat_inside_local_ip and nat_inside_global_ip match into a stats table. Here's what I've got so far, but it's returning translation times that overlap.
<search string> |
stats earliest(_time) as First latest(_time) as Last by nat_inside_local_ip,nat_inside_global_ip |
eval Length=round((Last-First)/60) |
eval First=strftime(First, "%D %H%M") |
eval Last=strftime(Last, "%D %H%M") |
eval Length=Length." Min"
I was thinking the localize/concurrency command(s), or a subsearch, might help, but I'm rather new to Splunk and self-taught. I wish there were the ability to do something like this:
stats earliest(_time) as First latest(_time) as Last by concurrent nat_inside_local_ip,nat_inside_global_ip
I also want to include some sort of span to truncate concurrent events that are too far apart - but then again, beggars/choosers. Any ideas?
The concurrency command is cool, but it's a bit of a toy: it isn't useful in cases where you need to calculate concurrency split up by the distinct values of some field foo. However, the same logic can be done generically.
First, though, we need duration. Since you want to separate events that are too far apart in time (to avoid one user's activity across two weeks becoming one "session"), you'll be better off using transaction; transaction is great at that sort of fuzziness.
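Step 1 on its own would look something like the following sketch. The `maxpause=5min` threshold is just an assumption (tune it to whatever gap should end a "session"), and the trailing `table` is only there so you can eyeball the `duration` field that transaction creates:

```spl
<search string>
| transaction nat_inside_local_ip nat_inside_global_ip maxpause=5min
| table _time duration nat_inside_local_ip nat_inside_global_ip
```

Each row is now one session, with `_time` as its start and `duration` as its length in seconds.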
Step 2: now we add the logic to get concurrency split by user. No, I am not insane.
<search string>
| transaction nat_inside_local_ip nat_inside_global_ip maxpause=5min
| eval increment = mvappend("1","-1")
| mvexpand increment
| eval _time = if(increment==1, _time, _time + duration)
| sort 0 + _time
| fillnull user value="NULL"
| streamstats sum(increment) as post_concurrency by user
| eval concurrency = if(increment==-1, post_concurrency+1, post_concurrency)
| timechart bins=400 max(concurrency) as max_concurrency last(post_concurrency) as last_concurrency by user
| filldown last_concurrency*
| foreach "max_concurrency: *" [eval <<MATCHSTR>>=coalesce('max_concurrency: <<MATCHSTR>>','last_concurrency: <<MATCHSTR>>')]
| fields - last_concurrency* max_concurrency*
Fully describing why this needs to be so complicated could fill a book. It begins simply enough: we split every event into two, one at the start time and one at the end time, each carrying an "increment" field of +1 or -1 respectively. Then there's some sleight of hand regarding "concurrency" vs. "post_concurrency", which is critical for the answer to be entirely accurate: streamstats reports the running sum as of each row, so on a -1 (end) row the decrement has already been applied, and the true concurrency at that instant is post_concurrency+1. Then, because we need some value for every bucket of time considered, regardless of whether any starts or ends actually occurred there, we appeal to timechart. After that it gets a bit weird. You can pick it apart pipe by pipe to see what it's doing, but you might wish you hadn't.
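You can watch the +1/-1 sweep in isolation on synthetic data. This is a toy sketch (the makeresults setup, timings, and 600-second duration are all made up for illustration) that runs the event-splitting and streamstats portion without the timechart machinery:

```spl
| makeresults count=3
| streamstats count as n
| eval _time = _time - 3600 + (n-1)*300, duration = 600
| eval increment = mvappend("1","-1")
| mvexpand increment
| eval _time = if(increment==1, _time, _time + duration)
| sort 0 + _time
| streamstats sum(increment) as post_concurrency
| eval concurrency = if(increment==-1, post_concurrency+1, post_concurrency)
```

Three overlapping 10-minute sessions starting 5 minutes apart become six +1/-1 rows, and you can read the concurrency rising and falling down the column.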
It works well. We use the technique in our apps as the basis for a lot of critical concurrency analysis.