Hi @TMaltizo !
Fun question. It's probably easier for me to just show an example of something similar, since I don't have enough specifics to make it perfect for your data. I didn't do the 30 day count adding to the 60 day count adding to the 90 day count. Maybe I'll come back and figure that out. If you provide some sample data, it might be a bit easier. In the meantime, maybe the example will help you see how to change it for your data.
I'll try to mimic your data by looking for web errors within a nine-hour period, then calling each error "non_compliant". I'll count up errors in the first 3 hours, the first 6 hours, and the total 9 hours, similar to what you want to do with the months. I want them to occur randomly for the sake of demonstration:
sourcetype=access_combined earliest=-9h
| eval non_compliant = if(random() % 123 == 0, 1, 0)
^ So for the above, you'd just have your root search constraints and an indicator field. For your data, something like:
index=forescout sourcetype=fs_av_compliance
| eval non_compliant = if(status="non-compliant", 1, 0)
After that is where things get more fun.
First we'll bin by _time, making sure we choose our granularity intentionally - e.g. if you only need a date (no hour), then it's fine to choose a span of 1d. For my example, I'll just use an hour.
| bin _time span=1h
Next, I just want to know whether a violation occurred within a given hour (or day or whatever), and will use the max function to get a 1 if a violation occurred and a 0 otherwise:
| stats max(non_compliant) as non_compliant by _time host
So at this point, my table looks like so:
Pretty chill. Let's create our time chunks - 3-hour segments, or 30 days, or whatever:
| eval time_chunk = case(
_time > relative_time(now(), "-3h"), "first_chunk",
_time > relative_time(now(), "-6h"), "second_chunk",
_time > relative_time(now(), "-9h"), "third_chunk")
Here we use the case function for eval to check in order - is the time within the last 3 hours? If so, label it first_chunk, and so on. You can modify these to be 30, 60, 90, or whatever you want. The point is that we can now group by this new field.
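For instance, a 30/60/90-day version of that same eval (assuming your root search uses earliest=-90d and a bin span of 1d) would look like:

| eval time_chunk = case(
    _time > relative_time(now(), "-30d"), "first_chunk",
    _time > relative_time(now(), "-60d"), "second_chunk",
    _time > relative_time(now(), "-90d"), "third_chunk")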
If there is a non-compliant day/hour, let's copy the _time over to a new field.
| eval non_compliant_time = if(non_compliant == 1, _time, null())
Now our table looks like so:
Now we're just a single stats and eval away!
| stats
sum(non_compliant) as total_violations
min(non_compliant_time) as earliest_violation_time
by host, time_chunk
| eval earliest_violation_time = strftime(earliest_violation_time, "%F %T")
The eval is just for formatting the time, converting it from an epoch integer to a readable datetime string.
The best part about this is that when you want to do this for multiple sourcetypes, all you have to do is change your root search to include them, then add sourcetype to each by clause. Chill, right? The last thing would be to visualize this, which we could do, but I'm not really sure what you're going for there. Hopefully the above is useful in getting your compliance reports all squared away!
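As a quick sketch of that multi-sourcetype version (the second sourcetype name here is made up just for illustration - swap in your real ones):

index=forescout sourcetype IN ("fs_av_compliance", "some_other_sourcetype")
| eval non_compliant = if(status="non-compliant", 1, 0)
| bin _time span=1d
| stats max(non_compliant) as non_compliant by _time host sourcetype
| eval time_chunk = case(
    _time > relative_time(now(), "-30d"), "first_chunk",
    _time > relative_time(now(), "-60d"), "second_chunk",
    _time > relative_time(now(), "-90d"), "third_chunk")
| eval non_compliant_time = if(non_compliant == 1, _time, null())
| stats sum(non_compliant) as total_violations min(non_compliant_time) as earliest_violation_time by host, sourcetype, time_chunk

Same search as before - the only changes are the sourcetype IN (...) in the root search and sourcetype in each by clause.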
For completeness' sake, here is the full search I used to create this:
sourcetype=access_combined earliest=-9h
| eval non_compliant = if(random() % 123 == 0, 1, 0)
| bin _time span=1h
| stats
max(non_compliant) as non_compliant
by _time host
| eval time_chunk = case(
_time > relative_time(now(), "-3h"), "first_chunk",
_time > relative_time(now(), "-6h"), "second_chunk",
_time > relative_time(now(), "-9h"), "third_chunk")
| eval non_compliant_time = if(non_compliant == 1, _time, null())
| stats
sum(non_compliant) as total_violations
min(non_compliant_time) as earliest_violation_time
by host, time_chunk
| eval earliest_violation_time = strftime(earliest_violation_time, "%F %T")
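And about that 30-adds-to-60-adds-to-90 idea I mentioned up top - since first_chunk, second_chunk, and third_chunk happen to sort alphabetically in time order, one (untested) way to sketch a running total is to tack a sort and streamstats onto the end of the search above:

| sort 0 host, time_chunk
| streamstats sum(total_violations) as running_violations by host

Here streamstats keeps a separate running sum per host, so for each host the second_chunk row's running_violations would be first_chunk + second_chunk, and the third_chunk row would be the total across all three. Just be aware this leans on the chunk names sorting correctly - if you rename them, keep them in alphabetical order.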
Edit: Oh yeah, you mentioned "first time being seen by the system", which I'm assuming means Splunk. If you want the earliest-seen occurrence of each host, you could do a simple join at the end like this:
| join host [|metadata type=hosts index=main | fields host firstTime | rename firstTime as earliest_ever_seen]
| eval earliest_ever_seen = strftime(earliest_ever_seen, "%F %T")
🙂