I'm trying to create a report that shows unpatched vulnerabilities broken down into buckets of time by days.
For example, I'd like to see the number of unpatched vulnerabilities based on their first seen date that are placed into the following buckets: <10d, 10-30d, >30d
Nessus has two fields: 1) firstSeen and 2) lastSeen that are in epoch time. I convert the fields via eval commands:
| eval firstSeen=strftime(firstSeen, "%m/%d/%Y") | eval lastSeen=strftime(lastSeen, "%m/%d/%Y")
How would I calculate the delta between those two fields and then assign them to a duration (<10d, 10-30d, >30d) bucket?
Calculating the delta is very easy with the epoch timestamps, so you won't need the evals you posted.
| eval vuln_age=lastSeen-firstSeen will give you the difference between firstSeen and lastSeen.
Then you can compare the value of vuln_age to:
- 10 days * 24 hours/day * 60 minutes/hour * 60 seconds/minute = 864000 seconds
- 30 days * 24 hours/day * 60 minutes/hour * 60 seconds/minute = 2592000 seconds
You can use a case() statement to find the bucket. (Note: field names in eval can't start with a digit, so use names like ten_days rather than 10d.)
| eval ten_days=864000, thirty_days=2592000 | eval duration_bucket=case(vuln_age<ten_days, "<10d", vuln_age<=thirty_days, "10-30d", vuln_age>thirty_days, ">30d")
So in total, the search will be this:
[ your base search ] | eval vuln_age=lastSeen-firstSeen | eval ten_days=864000, thirty_days=2592000 | eval duration_bucket=case(vuln_age<ten_days, "<10d", vuln_age<=thirty_days, "10-30d", vuln_age>thirty_days, ">30d")
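Outside of Splunk, the same seconds-based bucketing can be sketched in Python (purely illustrative; the function name and example values are hypothetical):

```python
# Illustrative Python equivalent of the SPL case() bucketing above.
# firstSeen/lastSeen are epoch seconds, as Nessus provides them.

DAY = 86400  # seconds per day

def duration_bucket(first_seen: int, last_seen: int) -> str:
    """Assign a vulnerability's age (lastSeen - firstSeen) to a bucket."""
    vuln_age = last_seen - first_seen
    if vuln_age < 10 * DAY:        # < 864000 seconds
        return "<10d"
    elif vuln_age <= 30 * DAY:     # <= 2592000 seconds
        return "10-30d"
    else:
        return ">30d"

# Example: a vuln first seen 12 days before it was last seen
print(duration_bucket(0, 12 * DAY))  # "10-30d"
```

Like the if/elif chain here, SPL's case() evaluates its clauses in order and returns the value for the first condition that matches.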
Thx for the reply - greatly appreciated
After digging around a bit, I came up with the following:
index=nessus severity!=informational exploitAvailable=Yes | lookup ip_cidr.csv cidr_range as ip | eval firewall=coalesce(firewall,"null") | eval context=coalesce(context,"null") | eval zone=coalesce(zone,"null") | eval StartTime=strptime(firstSeen, "%Y-%m-%d %H:%M:%S") | eval EndTime=strptime(lastSeen, "%Y-%m-%d %H:%M:%S") | eval duration=strftime(lastSeen-firstSeen,"%d") | eval duration_group = case(duration < 10, "<10d", duration = 10 AND duration <30, "10-30d", duration < 30, ">30d") | chart count by duration_group
Two ways to skin a cat as they say?
I found that:
| eval duration=strftime(lastSeen-firstSeen,"%d")
Would always return the day of the month (1 through 31) and not the duration. I ended up using:
| eval duration_in_days=(lastSeen-firstSeen)/86400
instead, which I believe returns the number of days elapsed.
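A quick Python sketch (illustrative, with made-up timestamps) shows why dividing by 86400 works while formatting the delta with %d does not:

```python
import time

first_seen = 1700000000          # hypothetical epoch timestamp
last_seen = first_seen + 15 * 86400

# Dividing the delta by 86400 gives elapsed days, as in the SPL above
duration_in_days = (last_seen - first_seen) / 86400
print(duration_in_days)  # 15.0

# The strftime approach fails because %d formats a *timestamp* as a
# day-of-month (01-31): a 15-day delta is treated as an absolute time
# shortly after the Unix epoch (mid-January 1970), not as a duration.
print(time.strftime("%d", time.gmtime(last_seen - first_seen)))  # "16"
```

(Python's gmtime is used here so the result is timezone-independent; Splunk's strftime renders in the search-head timezone, but the day-of-month problem is the same.)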
And if you also want to catch events outside of your defined sets of durations, just add a catch-all clause at the end of the case(), like this:
case(duration < 10, "<10d", duration = 10 AND duration <30, "10-30d", duration < 30, ">30d", 1=1, "older")
Wait, that actually shouldn't be catching anything, if you fix another typo. Right now, your second case statement should be duration>=10 AND duration<30, and your third case statement should be duration>30. Your original code has the third case as duration<30, which doesn't match your needs.
Thx again for catching another typo! I now have the following eval statement:
| eval duration_group = case(duration < 10, "<10d", duration >= 10 AND duration <30, "10-30d", duration > 30, ">30d")
Great! And I think your case is actually still missing one more thing (sorry for not catching it earlier!) - it won't return any bucket when the duration is exactly 30. So the case statement for "10-30d" should probably be duration >= 10 AND duration <= 30.
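To see why the <= matters: case() returns NULL when no clause matches, so a duration of exactly 30 would silently drop out of the chart. A hypothetical Python sketch of the two variants:

```python
def bucket_exclusive(d):
    # mirrors: case(d<10, "<10d", d>=10 AND d<30, "10-30d", d>30, ">30d")
    if d < 10:
        return "<10d"
    if 10 <= d < 30:
        return "10-30d"
    if d > 30:
        return ">30d"
    return None  # SPL case() yields NULL when no condition matches

def bucket_inclusive(d):
    # mirrors: case(d<10, "<10d", d>=10 AND d<=30, "10-30d", d>30, ">30d")
    if d < 10:
        return "<10d"
    if 10 <= d <= 30:
        return "10-30d"
    return ">30d"

print(bucket_exclusive(30))  # None -- exactly 30 days is silently dropped
print(bucket_inclusive(30))  # "10-30d"
```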
3rd time is the charm and I should be typo free (I hope).
Another two questions if you don't mind:
1) How could I create a list for each bucket group by host? For example, I'd like the list to look as follows:

duration_group   Host
10-30d           system 1, system 5, system 11
<10d             system 2, system 4
>30d             system 3, system 6
2) How could I show the average amount of days it takes before a vulnerability is mitigated/patched by host?
1) Assuming each relevant event has the field Host, it's very straightforward:
...| stats values(Host) AS Host BY duration_group
2) Calculating the average is also very easy:
...| stats avg(duration) AS average_duration BY Host
Note that you can't stack those two stats commands one after the other, because stats will discard the original duration field. It wouldn't make a very nice looking table (in my opinion), but you could combine them like this if you wanted:
...| stats avg(duration) AS average_duration BY Host duration_group
But really, I think you'd be better served by two separate tables, so hopefully that's what you plan to do. 🙂
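For illustration only, here's roughly what those two stats aggregations compute, sketched in Python over a few made-up events (host names and durations are invented):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical events: (host, duration_group, duration_in_days)
events = [
    ("system 1", "10-30d", 12), ("system 2", "<10d", 3),
    ("system 3", ">30d", 45),   ("system 1", "10-30d", 20),
]

# stats values(Host) AS Host BY duration_group -> unique hosts per bucket
hosts_by_bucket = defaultdict(set)
for host, bucket, _ in events:
    hosts_by_bucket[bucket].add(host)

# stats avg(duration) AS average_duration BY Host -> mean days per host
durations_by_host = defaultdict(list)
for host, _, days in events:
    durations_by_host[host].append(days)
avg_by_host = {h: mean(d) for h, d in durations_by_host.items()}

print(dict(hosts_by_bucket))  # unique hosts grouped by bucket
print(avg_by_host)            # average mitigation time per host
```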
TYVM - very helpful and illuminating.
Thinking more about when I calculate the avg: ...| stats avg(duration) AS average_duration BY Host is calculating the avg time to mitigate across all vulnerabilities for a host, correct? For example, say Host A has 10 vulnerabilities and vuln 1 is mitigated in two days, vuln 2 is mitigated in 5 days, etc. - the calculation is looking at the durations of all 10 vulnerabilities combined.
And I agree, I'm better served by two separate tables.
Glad to help. And yes, that's the average that will be calculated by that command. So if each host has its own administrator, you could draw conclusions like, "The owner of Host A is reacting more quickly and addressing vulnerabilities in a more timely manner than the owner of Host B."
If there are other averages you are looking to calculate, the structure will remain the same; you'd just sub in different field names instead of Host. For example, if each event also has a field called severity, you could calculate the average time to mitigate vulnerabilities by severity:
...| stats avg(duration) AS average_duration BY severity
Absolutely. However, if that's a direct cut & paste of your search, I'll note that you have a typo/logic error in your case statement. The second option, duration = 10 AND duration <30, will only evaluate to true when duration is exactly 10. I think you're looking for duration >= 10 AND duration <30. Cheers!