I have set up the Splunk Add-on for Nessus and both the scan data and plugin data are coming through as expected. Strangely, though, I'm not seeing any "port" field (or similar) within the scan data.
For example, if Nessus discovers an OpenSSH-related vulnerability, chances are the port it reports will be "22 / tcp". I'm not seeing this field anywhere within the scan data that Splunk is pulling in. Even in the raw data view (which is the converted JSON data), there is no port information for any vulnerability. Checking the respective reports in Nessus, however, confirms that the port information is there.
This could well be an issue with the Nessus REST API, in that it might not be divulging the port information when it should be, or maybe the Splunk Add-on is dropping the port field in the conversion process. As soon as I get some time I'll query the Nessus REST API manually to see what it returns; assuming the port information is there, it will then become a debugging exercise on the Splunk TA.
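For reference, the manual check I have in mind is roughly the following (just a sketch I haven't run yet; the endpoint path and X-ApiKeys header are from my reading of the Nessus 6.x API docs, and the URL, keys and scan ID are placeholders):

# Rough sketch of the manual API check -- not tested yet.
import json
import requests

NESSUS_URL = "https://nessus.example.local:8834"  # placeholder
ACCESS_KEY = "..."                                # placeholder
SECRET_KEY = "..."                                # placeholder
SCAN_ID = 42                                      # placeholder

headers = {
    "X-ApiKeys": "accessKey=%s; secretKey=%s" % (ACCESS_KEY, SECRET_KEY)
}

# Pull the scan details and eyeball the JSON for any port fields.
resp = requests.get("%s/scans/%s" % (NESSUS_URL, SCAN_ID),
                    headers=headers, verify=False)  # self-signed cert
resp.raise_for_status()

data = resp.json()
print(json.dumps(data, indent=2))

# Quick-and-dirty check: does "port" appear anywhere in the response?
print("'port' mentioned: %s" % ("port" in json.dumps(data).lower()))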
Has anyone else had this problem?
Nessus version is 6.5.3 (#40)
Splunk Add-on for Nessus version is 4.0.0
Splunk version is Enterprise 6.1.2
Any help would be greatly appreciated.
---- UPDATE ----
I've done some more investigation and found that the TA's Python scripts initially pull data out of Nessus in the .nessus XML format, and this data does indeed have port information. Below I have grepped a few lines out of a .nessus file as an example...
<ReportItem port="0" svc_name="general" protocol="tcp" severity="0" pluginID="25220" pluginName="TCP/IP Timestamps Supported" pluginFamily="General">
<ReportItem port="1720" svc_name="h323hostcall?" protocol="tcp" severity="0" pluginID="10335" pluginName="Nessus TCP scanner" pluginFamily="Port scanners">
<ReportItem port="5269" svc_name="jabber-server?" protocol="tcp" severity="0" pluginID="10335" pluginName="Nessus TCP scanner" pluginFamily="Port scanners">
<ReportItem port="5061" svc_name="sip" protocol="tcp" severity="0" pluginID="10335" pluginName="Nessus TCP scanner" pluginFamily="Port scanners">
Then, within /opt/Splunk/etc/apps/Splunk_TA_nessus/bin/nessusclienthandler2.py, I found the following code, which indicates that the port information should be parsed out of the elements above...
elif name == "ReportItem":
self.isReportItemElement = 1
self.reportItem['Port'] = attributes.get("svc_name") + " (" + self.replaceUnknown(attributes.get("port")) + "/" + self.replaceUnknown(attributes.get("protocol")) + ")"
self.reportItem['Severity'] = self.replaceUnknown(attributes.get("severity"))
self.reportItem['PluginFamily'] = self.replaceUnknown(attributes.get("pluginFamily"))
self.reportItem['PluginID'] = self.replaceUnknown(attributes.get("pluginID"))
self.reportItem['PluginName'] = self.replaceUnknown(attributes.get("pluginName"))
... but this isn't working. I get all the other fields in the Splunk data, such as Severity, PluginFamily and so on, but not the Port.
There appears to be something wrong within the Python scripts included in the TA, but there are several scripts all tied to each other and I have no idea where to start looking. Help!
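The only lead I have so far: in the snippet above, svc_name is the only attribute not wrapped in replaceUnknown(), so presumably a ReportItem with no svc_name attribute would make that whole Port line raise. I haven't confirmed that's what's happening; the throwaway script below is how I'd try to isolate it (the file path and my stand-in for replaceUnknown are assumptions, not the TA's actual code):

# Run the same 'Port' expression over every ReportItem in the export
# and report any element where it blows up.
import xml.sax

class PortCheckHandler(xml.sax.ContentHandler):
    def replaceUnknown(self, value):
        # stand-in for the TA's helper; assuming it maps None to a string
        return value if value is not None else "unknown"

    def startElement(self, name, attributes):
        if name == "ReportItem":
            try:
                port = attributes.get("svc_name") + " (" + \
                    self.replaceUnknown(attributes.get("port")) + "/" + \
                    self.replaceUnknown(attributes.get("protocol")) + ")"
                print("OK: " + port)
            except Exception as err:
                print("FAILED on pluginID %s: %r" %
                      (attributes.get("pluginID"), err))

xml.sax.parse("/tmp/example_scan.nessus", PortCheckHandler())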
I just fixed this issue. The answer was so simple, and I found it in a three-year-old post!
Exactly the same problem here, so I'm keen to get an answer on this. The only difference is that I'm running on Ubuntu 14.04. After applying a bunch of OS patches last night, this is the only thing that isn't working. My jbridge.log file is showing errors identical to those originally posted here.
I'll post back if I get anywhere with this as I need to get this working ASAP.
Hi everyone. I've got an instance of Splunk 5.0.1 running with a large amount of firewall data coming into it daily (roughly 15 GB). I created a relatively simple dashboard with 5 panels, with the intent of scheduling the view for PDF delivery once per week. The view itself is fine; I tested it using a relatively short timespan on my search (e.g. the last 60 minutes of data). The problem is that when I generate the view based on a week's worth of data, it always fails, and I suspect it has something to do with the large amount of data it's trying to search through.
Some further points to add context to the problem...
Each of the 5 panels in the view runs its own search, even though the base search is the same, e.g. index=firewall type=opsec attack="*" | ... After the base search, the results are piped to things like "top src_ip", "top des_ip", and stats. Since each panel uses the same base search, I thought about using post-processing to make things more efficient, but I read in the documentation somewhere that you can't post-process if the base search returns more than 10,000 events. My base search returns close to 2 million matching events over the course of a week. 😞 (I've put a rough sketch of the post-processing approach I was considering at the end of this post.)
So that left me with 5 saved searches, one for each dashboard panel. To try to speed things up, I turned acceleration on for the searches and specified the summary period as 7 days (since I need to run this view to produce a PDF on a weekly basis). The acceleration doesn't appear to have had much (if any) effect.
I've also tried opening the view, going to the Job Manager, and clicking Save on each of the jobs that the view kicked off, thinking that once they're done I could reopen the view and it would load the cached results. This doesn't work, but I did learn that the searches take roughly 10 hours to complete. 😞
Now I'm pretty sure I'm doing this in a way that's highly inefficient. I know there must be a better way. Please help me with any ideas; I'm more than happy to provide more technical detail if need be.
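To show what I mean about restructuring around post-processing: the direction I was considering is to make the base search a transforming search (stats), so it hands a relatively small set of aggregated rows to the post-process searches rather than ~2 million raw events. My understanding from the docs is that the event-count limit applies to the raw events a non-transforming base search returns, so this should stay well under it, but I haven't verified that. Field names are from my data; treat this as a sketch:

Base search (scheduled once per week, transforming):
index=firewall type=opsec attack="*" | stats count by src_ip, des_ip, attack

Post-process for the "top sources" panel:
| stats sum(count) as count by src_ip | sort - count | head 10

Post-process for the "top destinations" panel:
| stats sum(count) as count by des_ip | sort - count | head 10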