We have installed and configured the Splunk Add-on for Tenable to talk to a Tenable Security Center appliance, but we are not seeing any results. We ran tcpdump on the Splunk server to check for traffic between Splunk and Security Center, and there was no traffic at all.
We found the issue in the Python code. This TA was installed on a search head that was configured to use an index cluster as its search peers. The search head itself is not part of a search head cluster; however, the Nessus TA thinks it is.
In the app directory, under bin/splunk_ta_nessus/splunktaucclib/data_collection, there is a script called ta_mod_input.py. In the run() function of that script, there is a section that resembles the following:
if tconfig.is_shc_but_not_captain():
    # In SHC env, only captain is able to collect data
    stulog.logger.debug("This search header is not captain, will exit.")
    return
Once we commented out this code, we began seeing communications with the Security Center server.
We investigated this further to determine why it was happening. Part of the script checks the server roles of the Splunk instance by calling the REST endpoint at https://SPLUNKSERVER:8089/services/server/info. If one of the roles is "cluster_search_head", it then checks whether this search head is the captain of the cluster, and only if it is does it allow the input to run.
We did some testing and found that because this search head uses an index cluster for its search peers, this role does appear.
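If you want to confirm which roles your own search head reports, you can query the same endpoint directly. Below is a minimal sketch (not part of the TA) that mirrors the check; the hostname and credentials are placeholders, and certificate verification is disabled only because the management port commonly uses a self-signed certificate:
import requests

# Placeholders - replace with your own management URI and credentials.
SPLUNK_MGMT = "https://SPLUNKSERVER:8089"
AUTH = ("admin", "changeme")

# Ask the management port for server info in JSON form.
resp = requests.get(
    SPLUNK_MGMT + "/services/server/info",
    params={"output_mode": "json"},
    auth=AUTH,
    verify=False,  # management port often uses a self-signed certificate
)
resp.raise_for_status()

# server_roles is a list such as ["search_head", "cluster_search_head", ...]
roles = resp.json()["entry"][0]["content"].get("server_roles", [])
print("server_roles:", roles)

if "cluster_search_head" in roles:
    print("cluster_search_head is reported, so the TA will expect an SHC captain.")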
SOLUTION
1. Comment out this code. (Not recommended, as it will be overwritten by future upgrades; see the sketch after this list.)
2. Do not install this TA on any search heads. Instead, install it on a forwarder or indexer to collect the data.
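For reference, if you do go with option 1 anyway, the change in the run() function of ta_mod_input.py amounts to neutralizing the check, roughly as sketched below (illustrative only; the edit is lost on upgrade):
# Disabled SHC-captain check (sketch of option 1; will be overwritten on upgrade):
# if tconfig.is_shc_but_not_captain():
#     # In SHC env, only captain is able to collect data
#     stulog.logger.debug("This search header is not captain, will exit.")
#     return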

Hello,
Are you using a proxy?

This is also an issue with TA-crowdstrike, for the exact same reason.
The proposed solutions are interesting, though realistically only #2 is acceptable. Commenting out the code is inadvisable not just because it will be overwritten by future upgrades, but also because you really shouldn't be doing data collection on a search head.
Regarding the CrowdStrike TA: the installation details state that you shouldn't configure inputs when deploying the TA to a search head. That's the only way the TA would throw this error.
Thanks. I just downloaded the 5.1.1 release and I can see that this code has changed. We are getting ready to load the new version and begin testing. Thanks for the quick turnaround on this improvement.
I am seeing issues when trying to index data on 6.5.2. I put it on a search head and am getting the following; see if you can assist. It looks like other Python scripts are causing issues as well.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/splunktaucclib/data_collection/ta_data_collector.py", line 115, in index_data
    self._do_safe_index()
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/splunktaucclib/data_collection/ta_data_collector.py", line 148, in _do_safe_index
    self._client = self._create_data_client()
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/splunktaucclib/data_collection/ta_data_collector.py", line 95, in _create_data_client
    self._checkpoint_manager)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/splunktaucclib/data_collection/ta_data_client.py", line 55, in __init__
    self._ckpt)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/ta_tenable_sc_data_collector.py", line 18, in do_job_one_time
    return _do_job_one_time(all_conf_contents, task_config, ckpt)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/ta_tenable_sc_data_collector.py", line 57, in _do_job_one_time
    logger_prefix=logger_prefix)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/security_center.py", line 219, in get_security_center
    sc.login(username, password)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/security_center.py", line 45, in login
    result = self.perform_request('POST', 'token', data)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/security_center.py", line 140, in perform_request
    self._error_check(response, result)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/security_center.py", line 189, in _error_check
    result['error_msg'])
APIError: 'status=403, error_code=163, error_msg=Account locked.\n'

Seeing this issue as well. If there is any update, or a change is needed because of a Splunk upgrade or something similar, it would be fantastic to know. Thanks!
Having the same problem here with SecurityCenter 5.6.0, Splunk Enterprise 7.0.0, and Tenable App 5.1.2. The Nessus inputs (both Nessus and Tenable.io) are working, but the SecurityCenter input is not.
We are seeing a similar issue. Has anyone resolved it?

Thanks. Let me know if anything goes wrong while you are testing.

Thanks for the feedback. This is a bug and we will fix it in the next release.
