I'm new to Splunk and trying to figure out how to find all events of type X that do NOT have an event of type Y within 1 minute (before or after) of them. I found http://answers.splunk.com/answers/137069/find-all-events-not-having-a-corresponding-event-matched-by... , but in my case the events have nothing to correlate them except for time, and I haven't been able to adapt the answer for that question to my case. Any suggestions about the best way to accomplish such a search?
Hi,
How about finding the different time ranges for every occurrence of X, and then using those to find all the events which do not have an occurrence of Y within the specified time range (i.e. 1 min in your case)? The following query can give you some idea:
index=_internal
    [ search index=_internal log_level="ERROR"
    | eval latestTime = (strptime(strftime(_time,"%m/%d/%Y:%H:%M:%S"),"%m/%d/%Y:%H:%M:%S") + (1 * 60))
    | eval earliestTime = (latestTime - (1 * 60))
    | table latestTime earliestTime
    | eval QueryToken = "(earliest=".earliestTime." latest=".latestTime.") OR"
    | stats values(QueryToken) as QueryValues
    | makemv delim="||" QueryValues
    | eval QueryFilter = substr(QueryValues, 1, len(QueryValues)-3)
    | return $QueryFilter ]
log_level!="WARNING"
| chart count over _time by log_level usenull=f
Following is the logic: the subsearch finds every ERROR event, builds a one-minute time window around each one as an "(earliest=... latest=...) OR" token, stitches the tokens together (dropping the trailing OR), and returns the combined filter to the outer search.
Hope this will help to solve the problem.
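The token-building step in that subsearch can be sketched in plain Python to show what the returned filter looks like (the timestamps here are hypothetical, not from any real index):

```python
# Hypothetical epoch times of ERROR events; illustration only.
error_times = [100, 250]
window = 60  # 1 minute, as in the query

tokens = []
for t in error_times:
    latest = t + window          # latestTime = _time + (1 * 60)
    earliest = latest - window   # earliestTime = latestTime - (1 * 60)
    tokens.append(f"(earliest={earliest} latest={latest}) OR")

# join and drop the trailing " OR", like the substr(...) step
query_filter = " ".join(tokens)[:-3]
print(query_filter)  # (earliest=100 latest=160) OR (earliest=250 latest=310)
```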
Thanks!!
This isn't going to work how you think it is. Namely, returning multiple earliest and latest values from a subsearch doesn't cause the parent search to look at multiple earliest and latest segments. Searches span a single time range each. Therefore only one earliest and latest winds up being effective.
OK, I realized that I forgot to discriminate one more time once I find values in the map. This will work for sure:
eventtype=type_y
| streamstats current=f window=1 max(_time) as prevTime
| eval myTime=_time
| eval delta=myTime-prevTime
| where delta>120
| map maxsearches=10000 search="search eventtype=type_x earliest=$prevTime$ latest=$myTime$
| eval lowDelta=_time-$prevTime$
| eval highDelta=$myTime$-_time
| where lowDelta>60 AND highDelta>60"
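The overall map logic can be sketched outside Splunk; here is a minimal Python illustration (with hypothetical epoch timestamps) of "find X events in a Y-to-Y gap that sit more than 60 s from either Y":

```python
# Hypothetical epoch timestamps; illustration only.
prev_time, my_time = 1000, 1500   # two consecutive type-Y events
x_times = [1010, 1200, 1460]      # type-X events between them

delta = my_time - prev_time
assert delta > 120                # mirrors `where delta>120`

# mirrors the inner map search: keep X events inside the gap that are
# more than 60 s away from both surrounding Y events
orphans = [t for t in x_times
           if prev_time < t < my_time
           and t - prev_time > 60       # lowDelta > 60
           and my_time - t > 60]        # highDelta > 60
print(orphans)  # [1200]
```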
When I run the query above I get the following error:
"Error in 'map': Did not find value for required attribute 'prevTime'."
You must paste it in exactly as written; something is getting modified along the way.
Here is my query as I am looking for users who have logged in more than once:
eventtype="sremote_login_succeeded"
| streamstats max(_time) as prevTime
| eval myTime=_time
| eval delta=myTime-prevTime
| where delta>120
| map search="search eventtype=sremote_login_succeeded earliest=$prevTime$ latest=$myTime$
| eval lowDelta=_time-$prevTime$
| eval highDelta=$myTime$-_time
| where lowDelta>60 AND highDelta>60"
The first problem is that you are using the same value for eventtype both inside and outside the map, which is either wrong, or means that you can use a FAR simpler method. Which is it?
Trying to determine concurrent logins by user. Is there a different (simpler) method for doing so?
Thx
Check out the concurrency command:
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Concurrency
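Roughly speaking, concurrency counts, for each event, how many events' durations overlap that event's start time. A small Python sketch of that idea (the sessions below are hypothetical, and this is only an approximation of what the command reports):

```python
# Hypothetical login sessions as (start_epoch, duration_seconds).
sessions = [(100, 50), (120, 30), (200, 10)]

def concurrency(sessions):
    """For each session, count sessions (itself included) whose
    [start, start+duration) interval contains this session's start --
    roughly what `| concurrency duration=duration` reports per event."""
    return [sum(1 for s, d in sessions if s <= start < s + d)
            for start, _ in sessions]

print(concurrency(sessions))  # [1, 2, 1]
```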
Thx for the pointer to the command/doc. I'll take a look...
Try this:
eventtype="sremote_login_succeeded"
| sort _time
| streamstats current=f window=1 max(_time) as prevTime
| eval myTime=_time
| eval delta=myTime-prevTime
| where delta>120
| map search="search eventtype=sremote_login_succeeded earliest=$prevTime$ latest=$myTime$
| eval lowDelta=_time-$prevTime$
| eval highDelta=$myTime$-_time
| where lowDelta>60 AND highDelta>60"
Looking at the earlier queries you listed, I modified the following line to try to get rid of the error about the search result count exceeding the maximum:
| map maxsearches=10000 search="search eventtype=sremote_login_succeeded earliest=$prevTime$ latest=$myTime$
and I get a new error message under Job:
Unable to run query 'search eventtype=sremote_login_succeeded earliest=1467642228 latest=1467642422 | eval lowDelta=_time-1467642228 | eval highDelta=1467642422-_time | where lowDelta>60 highDelta>60'.
Thx for the new query - I ran it for a 24 hour time period (7 day and 30 day as well) and no results found (which is good), but the Job has the yellow exclamation point and states the following:
The search result count (161) exceeds maximum (10), using max. To override it, set maxsearches appropriately.
Unable to run query 'search eventtype=sremote_login_succeeded earliest=1467642228 latest=1467642422 | eval lowDelta=_time-1467642228 | eval highDelta=1467642422-_time | where lowDelta>60 highDelta>60'.
Now do it without the map and I will accept the answer 😉
OK - After playing with it for a while I came up with this, which seems to work:
eventtype=type_y
| sort _time
| streamstats current=f window=1 max(_time) as prevTime
| eval myTime=_time
| eval delta=myTime-prevTime
| where delta>120
| map search="search eventtype=type_x earliest=$prevTime$ latest=$myTime$
| eval lowDelta=_time-$prevTime$
| eval highDelta=$myTime$-_time
| where lowDelta>60 AND highDelta>60"
Pretty close to the above, obviously, although the sort _time clause and the current=f window=1 params for the streamstats clause are critical to make sure that delta actually turns out as expected. Thanks for all of the help, everyone - this turned out to be much more complicated than I originally thought!
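To see why those streamstats options matter: without current=f window=1, the running max(_time) includes the current event, so prevTime equals the event's own time and delta is always 0. A small Python sketch of the two behaviors (hypothetical, sorted timestamps):

```python
times = [100, 400, 420]  # hypothetical sorted event times

# with current=f window=1: prevTime is strictly the previous event's time
prev_excl = [None] + times[:-1]
deltas_excl = [t - p for t, p in zip(times, prev_excl) if p is not None]

# without current=f: the running max includes the current event,
# so prevTime == _time and every delta is 0
deltas_incl = [t - max(times[:i + 1]) for i, t in enumerate(times)]

print(deltas_excl)  # [300, 20] -- only the 300 s gap passes `where delta>120`
print(deltas_incl)  # [0, 0, 0] -- `where delta>120` filters everything out
```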
I am glad it was close enough for you to adjust to perfection without too much hassle.
Also, you are missing the maxsearches=10000 part, which is very important.
I think the choice is between map and subsearch and map is better so why bother?
Yes, it makes my point, not yours: non-subsearch (non-join) options such as stats + map are generally preferable. Does my answer work or not?