Hi @vikas_gopal,

The previous response provides searches to calculate time differences between known notable time values. Original event time values may not be available. For example, the Expected Host Not Reporting rule uses the metadata command to identify hosts with a lastTime value between 2 and 30 days ago. The lastTime field is stored in the notable, so we can calculate time-to-detect by subtracting the lastTime value from the _time value.

In an example closer to your description, the Excessive Failed Logins rule does not store the original event time(s). We could evaluate the notable action definition for the rule to find and execute a drill-down search, which would in turn give us one or more _time values, but as with the rules themselves, success depends on how the action and drill-down search were implemented.

When developing rules, understanding event lag is usually a prerequisite. We typically calculate lag by subtracting event _time from event _indextime, then use the lag value as a lookback in rule definitions. For example, a 90th percentile lag of 5 minutes may suggest a lookback of 5 minutes. A rule scheduled to search the last 20 minutes of events would then search between the last 25 and the last 5 minutes.

Your mean time-to-detect should be approximately equal to your mean lag time plus rule queuing and execution time. You'll need to adjust your lookback threshold relative to your tolerance for missed detections (false negatives), but this is generally how I would approach the problem. As an alternative, you could enforce design constraints within your rules and require all notables to include original event _time values.
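As a sketch of the lastTime approach, assuming notables are written to the default notable index and the rule persists a lastTime field on each notable (adjust the index and search_name to your environment):

```
index=notable search_name="Expected Host Not Reporting"
| eval time_to_detect = _time - lastTime
| stats avg(time_to_detect) AS avg_ttd, perc90(time_to_detect) AS p90_ttd
```

Here _time is the notable creation time, so time_to_detect is the gap in seconds between the host last reporting and the notable firing.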
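The lag calculation might look like the following, with your_index standing in for a real index name. Note that _indextime is not returned as a field by default, which is why it is referenced inside eval:

```
index=your_index earliest=-24h
| eval lag = _indextime - _time
| stats avg(lag) AS avg_lag, perc90(lag) AS p90_lag, max(lag) AS max_lag
```

If p90_lag came back near 300 seconds, that would suggest the 5-minute lookback described above, i.e. scheduling a rule over the last 20 minutes of events with earliest=-25m latest=-5m rather than earliest=-20m latest=now.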