Hello,
Every 7 days I get a vulnerability scan log in Splunk from all hosts in our infrastructure - in the future the scan should run every day.
Now I want to filter which of the vulnerability findings are really new and which ones are the same as in the last scan; the latter are not new anymore, there is already a reason why they are still open, and they should be excluded from the search output.
If the scan output is the same, the CVE number and the message are identical and only the date is different.
My output should only show scan event messages that appear a single time in the logs. When the same scan log (same CVE number) appears twice in the logs, it should not be shown in the output. Ideally I would also see in the statistics which of the extracted_Host values are new in the logs.
I can see in the statistics which of the extracted_Host values are new together with the CVE number, but in the main event list I still see duplicate logs that are not new anymore. I also tried dedup, but that only removes the older event: I can exclude the old event log, while the newest one is still there.
Right now my filter looks like this:
index=nessus Risk=Critical
| stats count as event_count by CVE, extracted_Host
| where event_count=1
| rename extracted_Host as Host
| table CVE, Host
Thanks for the help.
This is my filter now and it seems to be working:
index=nessus Risk=Critical
| transaction CVE, extracted_Host
| table CVE, extracted_Host
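One idea would be to additionally filter on the eventcount field that transaction adds, so that only CVE/host pairs with a single event remain - just a sketch on top of the search above, I have not fully tested it:
index=nessus Risk=Critical
| transaction CVE, extracted_Host
| where eventcount=1
| table CVE, extracted_Host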
Here is an example of the event log output. Both events are the same log, only with a different date. I see both event logs in the Splunk output, but I don't want to see either of them when the search contains two identical event logs. That means: if I filter for 7 days and there is only one event log with CVE-2023-21554, I want to see it because it is "new"; but when I filter for 30 days and then find two identical event logs, I don't want to see it in the output because it is not new - right now I still see it. A sketch of what I mean follows after the two sample events below.
16/10/2023
04:00:03.000
"175373","CVE-2023-21554","10.0","Critical","10.56.93.133","tcp","1801","Microsoft Message Queuing RCE (CVE-2023-21554, QueueJumper)","A message queuing application is affected a remote code execution vulnerability.","The Microsoft Message Queuing running on the remote host is affected
by a remote code execution vulnerability. An unauthenticated remote
attacker can exploit this, via a specially crafted message, to
execute arbitrary code on the remote host.","Apply updates in accordance with the vendor advisory.","https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-21554
http://www.nessus.org/u?383fb650","Nessus was able to detect the issue by sending a specially crafted message to remote TCP port 1801."
CVE = CVE-2023-21554 Risk = Critical extracted_Host = 192.168.0.1 sourcetype = csv
09/10/2023
04:00:03.000
"175373","CVE-2023-21554","10.0","Critical","10.56.93.133","tcp","1801","Microsoft Message Queuing RCE (CVE-2023-21554, QueueJumper)","A message queuing application is affected a remote code execution vulnerability.","The Microsoft Message Queuing running on the remote host is affected
by a remote code execution vulnerability. An unauthenticated remote
attacker can exploit this, via a specially crafted message, to
execute arbitrary code on the remote host.","Apply updates in accordance with the vendor advisory.","https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-21554
http://www.nessus.org/u?383fb650","Nessus was able to detect the issue by sending a specially crafted message to remote TCP port 1801."
CVE = CVE-2023-21554 Risk = Critical extracted_Host = 192.168.0.1 sourcetype = csv
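What I have in mind for the raw events is something like this eventstats variant, which keeps the events themselves and only drops the ones whose CVE/extracted_Host combination occurs more than once in the selected time range - again only a sketch, I am not sure it is the right approach:
index=nessus Risk=Critical
| eventstats count AS occurrences BY CVE, extracted_Host
| where occurrences=1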
Hi @LionSplunk,
you should identify the period using eval.
So, if you run the scan every day, you could try something like this:
index=nessus Risk=Critical
| eval period=if(_time>=now()-86400,"Last","Previous")
| stats
dc(period) AS period_count
values(period) AS period
BY CVE extracted_Host
| where period_count=1 AND period="Last"
| rename extracted_Host as Host
| table CVE Host
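If for now the scan still runs only every 7 days, the same idea should work with a 7 day boundary (604800 seconds) instead of 24 hours; this is just the search above with the threshold changed:
index=nessus Risk=Critical
| eval period=if(_time>=now()-604800,"Last","Previous")
| stats dc(period) AS period_count values(period) AS period BY CVE extracted_Host
| where period_count=1 AND period="Last"
| rename extracted_Host AS Host
| table CVE Host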
Ciao.
Giuseppe