A few questions for you to think about; I am teaching you to fish:

1. How are your scans set up, and at what frequency (active, passive, agent)?
2. What scan window are you looking to search and roll up numbers for?
3. Which connector is used to bring scan results in?
   (a) Splunk Add-on for Tenable (per-scan data is brought in; it lacks some fields, such as scan name; it has been 18 months since I used/tested it)
   (b) Tenable Add-on for Splunk (analysis data is brought in; it lacks some fields; again, 18 months since I used/tested it)
   (c) A variant of the first, which some US Gov customers use; all fields would still need to roll up cleanly, depending on the example.

If there is any chance an asset is scanned more than once within the time window of the search to be run, you need to be able to find the latest scan per asset within that window and compute only against the latest scan. I am going to show you how I do this with my connector; notice that the field aliases will differ from your example.

Here is my code to extract variables for hostname and domainName. I used assetName so I can see the new names alongside the existing fields and validate the effectiveness of my grouping. I also make sure everything is lowercase, since stats needs exact matches in the by clause; I take nothing to chance. I used your example as the base search.

One aside: if you run STIG/compliance scans in addition to vulnerability scans, you need to treat them as a separate bucket. The base search should be further refined to exclude them, and a separate search with its own scoring methodology should track them. (If that is not an issue, forget I said it; no need to overwhelm you.)

index=nessus plugin_family_type="active" (severity="High" OR severity="Critical") (state="Open" OR state="Reopened")
| rex field=dnsName "^(?<dns_short>[a-z\-0-9A-Z_]+)|\.(?<domain_short>[a-z\-0-9A-Z_]+)\."
| rex field=netbiosName "(?<netbdomain_short>[a-z\-0-9A-Z_]+)[\\\](?<netbname_short>[a-z\-0-9A-Z_]+)"
| eval assetName = lower( coalesce(netbname_short, dns_short))
| eval domainName = lower( coalesce(netbdomain_short, domain_short))

You mentioned falling back to IP; unless you have statics, avoid that wherever possible. Get all the IoT crap into DNS, and use DHCP reservations or static IPs for that stuff; otherwise it is going to skew your numbers. Consider ignoring it until you can figure out how best to track it. Also, if you have some cloud assets, they are going to be named in IP dash notation, e.g. ip-10-2-4-2 for 10.2.4.2. If you must deal with raw IPs, convert them to dash notation for a name rather than putting an IP in the assetName field; at least then cloud names and bad scans do not produce two entries for the same asset (if you are fortunate enough to have static IPs or very long lease times and the asset is scanned more than once). You would set assetName with the IP fallback as a last resort:

| eval ipHostname = "ip-" . replace(ip, "\.", "-")
| eval assetName = lower( coalesce(netbname_short, dns_short, ipHostname))

Then the question of more than one scan comes in. You need a timestamp (lastSeen / last_seen); it is unique per asset, per scan:

| eval dtVULN=strftime(lastSeen,"%Y-%m-%d")
| eventstats latest(dtVULN) as dtLatest by assetName
| where dtVULN=dtLatest

(The eventstats only tags every event with its asset's latest scan date; the where clause is what actually drops the events from older scans.)

I then push through stats to score totals per host. I do this to table out a report, say the top 100 most vulnerable hosts. What good is a total number of vulns without some context and something actionable (a place to start)? What is your High count, your Low count? How does your average compare to some threshold?

| stats dedup_splitvals=true values(dnsName) as hostFQDN, values(ip) as ip, latest(ScanName) as ScanName, latest(lastSeen) as lastSeen, sum(cvssV3BaseScore) as scoreCVSS, count(plugin_id) as TotalVulns, count(eval(LIKE(severityName,"Critical"))) as iCritical, count(eval(LIKE(severityName,"High"))) as iHigh by assetName
| table hostFQDN, assetName, ip, ScanName, lastSeen, scoreCVSS, TotalVulns, iCritical, iHigh
| sort -scoreCVSS

As you requested: totals across the latest scan per asset. This overall roll-up runs against the same base search, in place of the per-host stats/table above. If you do not have overlapping scans, then lastSeen and the eventstats on assetName/dtVULN (last_seen) are not necessary; otherwise devices can end up counted twice.

| stats dedup_splitvals=true count(plugin_id) as totalVuln, dc(plugin_id) as distinctVulnTotal, dc(assetName) as hostTotal, sum(eval(LIKE(severityName,"Critical"))) as totalCritical, sum(eval(LIKE(severityName,"High"))) as totalHigh
| eval avgPerHost = ( totalVuln / hostTotal )
| table totalVuln, distinctVulnTotal, hostTotal, totalCritical, totalHigh, avgPerHost
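On the STIG/compliance aside above: one way to keep those results out of the vulnerability bucket is to exclude their plugin family in the base search. This is only a sketch; it assumes your connector populates a plugin_family field and that the family is named "Policy Compliance" as in stock Nessus feeds, so verify both against your own data:

index=nessus plugin_family_type="active" NOT plugin_family="Policy Compliance" (severity="High" OR severity="Critical") (state="Open" OR state="Reopened")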
This works great. Very interesting to know how the sum works (I had actually gotten as far as the Boolean values when declaring all my fields), but I didn't know that was how the sum feature worked.
Now, is there a way to show this final value alongside (for instance) the percentage value from the first example I provided, so it would look like:
Group | Percent | Percent w/ No Failures?
I updated my syntax and almost got to the end result. I can now get it to display the compliance and the weighted compliance; however, I cannot seem to get the third part (weighted compliance with 0 high_failed_checks).
I get accurate results provided my stats counts by both computer_dns_name and Checklist; however, in the table I only want to reflect the compliance values of the checklists, not by dns_name and checklist. Removing the dns_name portion from the stats query, though, ensures that weighted_compliance_with_no_high will always be 0 as soon as there is a single high_failed_check, since it is then based on the checklist as a whole rather than on each individual machine within that checklist.
Thoughts are appreciated:
My Search Query
| stats count(eval(compliance_result="passed" OR compliance_result="excepted_passed")) AS Passed, count(eval(compliance_result="failed")) AS Failed, count(eval(source_severity="high" AND (compliance_result="passed" OR compliance_result="excepted_passed"))) AS HPassed, count(eval(source_severity="high" AND compliance_result="failed")) AS HFailed, count(eval(source_severity="medium" AND (compliance_result="passed" OR compliance_result="excepted_passed"))) AS MPassed, count(eval(source_severity="medium" AND compliance_result="failed")) AS MFailed, count(eval(source_severity="low" AND (compliance_result="passed" OR compliance_result="excepted_passed"))) AS LPassed, count(eval(source_severity="low" AND compliance_result="failed")) AS LFailed by check_checklist_name computer_dns_name
| eval Compliance=(100-((Failed/(Passed+Failed))*100))
| eval Compliance=round(Compliance,1)
| eval WP_High=HPassed*10, WP_Med=MPassed*5, WP_Low=LPassed*1, WF_High=HFailed*10, WF_Med=MFailed*5, WF_Low=LFailed*1
| eval Weighted_Passed=WP_High+WP_Med+WP_Low, Weighted_Failed=WF_High+WF_Med+WF_Low
| eval WC_Perc=(100-((Weighted_Failed/(Weighted_Passed+Weighted_Failed))*100))
| eval WC_Perc=round(WC_Perc,1)
| eval Weighted_Comp_Passed=if((HFailed=0 AND WC_Perc>90), "1", "0")
| eval Weighted_Comp_Failed=if((HFailed!=0 OR WC_Perc<=90), "1", "0")
| eval Weighted_Compliance_With_No_High=(100-((Weighted_Comp_Failed/(Weighted_Comp_Passed+Weighted_Comp_Failed))*100))
| eval Weighted_Compliance_With_No_High=round(Weighted_Compliance_With_No_High,1)
| rename computer_dns_name AS Computer, check_checklist_name AS Checklist, Compliance AS All_Compliance, Failed AS All_Failed_Checks, HFailed AS High_Failed_Checks, MFailed AS Medium_Failed_Checks, LFailed AS Low_Failed_Checks, WC_Perc AS Weighted_Compliance
| table Computer Checklist All_Compliance Weighted_Compliance High_Failed_Checks Weighted_Compliance_With_No_High
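Conceptually, what I think I need is two stages: first compute HFailed and WC_Perc per machine per checklist (as the stats above does), and then roll a per-machine pass/fail flag up per checklist. A rough sketch of that second stage, which I have not managed to fit into the query above:

| eval Machine_Compliant=if(HFailed=0 AND WC_Perc>90, 1, 0)
| stats sum(Machine_Compliant) as Compliant_Machines, count as Total_Machines by check_checklist_name
| eval Weighted_Compliance_With_No_High=round((Compliant_Machines/Total_Machines)*100,1)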
Sorry, it looks like the difference between dedup and not is super negligible so I can simply remove the dedup and the bin.
However, when I change the syntax to:
index=bigfix sourcetype="bigfix:compliance" (check_checklist_name="Windows 2008" OR check_checklist_name="Windows 2012" OR check_checklist_name="Windows 7" OR check_checklist_name="Windows 10" OR check_checklist_name="RHEL 6" OR check_checklist_name="RHEL 7") (state="passed" OR state="failed")
| timechart span=1mon count(eval(state="passed")) AS Passed, count(eval(state="failed")) AS Failed by check_checklist_name
| eval Percent_Compliance=(100-((Failed/(Passed+Failed))*100))
| eval Percent_Compliance=round(Percent_Compliance,1)
It shows the chart like before, with 14 total columns for Passed and Failed for each OS (such as Failed: RHEL 6, Failed: RHEL 7, Passed: RHEL 6, etc.), broken down into six months' worth of rows. It does not compute the percentage, though.
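I suspect the eval fails because, after the split by check_checklist_name, the results no longer contain plain Passed and Failed fields, only the per-OS columns like "Passed: RHEL 6". A sketch of what may work instead, computing the percentage before pivoting (untested on my data):

| bin _time span=1mon
| stats count(eval(state="passed")) AS Passed, count(eval(state="failed")) AS Failed by _time, check_checklist_name
| eval Percent_Compliance=round(100-((Failed/(Passed+Failed))*100),1)
| xyseries _time check_checklist_name Percent_Compliance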
Thanks again for all of your help.