
Splunk Add-on For Tenable connection reset by peer

craigwilkinson
Path Finder

Hi All,

I'm having issues with the Splunk Add-on for Tenable — it's failing with a "Connection reset by peer" error. Hoping you guys can help!

Splunk Version: 6.55
Tenable version: 5.12

Error

2018-05-24 06:09:25,812 +0000 log_level=ERROR, pid=19741, tid=Thread-4, file=ta_data_collector.py, func_name=index_data, code_line_no=118 | [stanza_name="TSC_INPUT" data="sc_vulnerability" server="TNS_VM_SC"] Failed to index data
Traceback (most recent call last):
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/splunktaucclib/data_collection/ta_data_collector.py", line 115, in index_data
    self._do_safe_index()
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/splunktaucclib/data_collection/ta_data_collector.py", line 148, in _do_safe_index
    self._client = self._create_data_client()
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/splunktaucclib/data_collection/ta_data_collector.py", line 95, in _create_data_client
    self._checkpoint_manager)
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/splunktaucclib/data_collection/ta_data_client.py", line 55, in __init__
    self._ckpt)
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/ta_tenable_sc_data_collector.py", line 18, in do_job_one_time
    return _do_job_one_time(all_conf_contents, task_config, ckpt)
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/ta_tenable_sc_data_collector.py", line 53, in _do_job_one_time
    logger_prefix=logger_prefix)
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/security_center.py", line 219, in get_security_center
    sc.login(username, password)
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/security_center.py", line 45, in login
    result = self.perform_request('POST', 'token', data)
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/security_center.py", line 133, in perform_request
    self._uri(path), method, data, headers)
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/httplib2/__init__.py", line 1609, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/httplib2/__init__.py", line 1351, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/httplib2/__init__.py", line 1272, in _conn_request
    conn.connect()
  File "/apps/pcehr/splunk/etc/apps/Splunk_TA_nessus/bin/splunk_ta_nessus/httplib2/__init__.py", line 1075, in connect
    raise socket.error, msg
error: [Errno 104] Connection reset by peer

xpac
SplunkTrust

It looks as if the TA is trying to log in (via HTTP POST), but the connection is being reset, which usually means no service is listening at the address it's connecting to.
I'd double-check the connection details you entered, such as the URL/IP/port, because this looks like the Tenable service not being reachable where the add-on expects it.
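One way to verify that independently of the add-on is a quick reachability check from the Splunk server itself. This is a minimal sketch; the host and port are placeholders — substitute whatever you configured in the add-on's input stanza:

```python
import socket
import ssl

def check_sc_endpoint(host, port, timeout=10):
    """Try a TCP connect, then a TLS handshake, against the
    SecurityCenter endpoint. Returns (True, detail) on success,
    (False, error message) on failure."""
    try:
        # A reset at this stage means nothing is listening on the
        # port (or a firewall is actively rejecting the connection).
        with socket.create_connection((host, port), timeout=timeout) as raw:
            # Resets during the handshake often mean the service
            # expects plain HTTP but got TLS, or vice versa.
            ctx = ssl.create_default_context()
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                return True, "TLS OK (%s)" % tls.version()
    except OSError as e:
        return False, str(e)

# Hypothetical example -- use the host/port from your input stanza:
# ok, detail = check_sc_endpoint("sc.example.com", 443)
```

If this fails the same way, the problem is network-level (firewall, proxy, wrong port) rather than anything in the TA configuration.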

Hope that helps.


craigwilkinson
Path Finder

Thanks for the reply xpac.

From the SecurityCenter point of view, the logs show the user/TA app logging in successfully, but there's a delay between that successful login and the error appearing on the Splunk TA side.

The service is running too 😕
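One thing worth noting from the traceback: the reset happens inside `conn.connect()`, i.e. before the POST is ever sent, so the successful login in the SecurityCenter logs may belong to an earlier collection run. Timing the raw connect attempt can hint at an intermediate device — an instant reset suggests a firewall REJECT rule, while a reset after a consistent number of seconds suggests a proxy or load balancer timing the connection out. A small sketch (host/port are placeholders):

```python
import socket
import time

def time_connect(host, port, timeout=30):
    """Attempt a TCP connect and report the outcome plus how long
    it took -- useful for spotting proxy/firewall timeout patterns."""
    start = time.monotonic()
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        outcome = "connected"
    except OSError as e:
        outcome = "failed: %s" % e
    return outcome, time.monotonic() - start

# Hypothetical example -- run this from the Splunk server:
# outcome, elapsed = time_connect("sc.example.com", 443)
# print(outcome, "after %.1fs" % elapsed)
```

Running this repeatedly from the Splunk host during the collection window should show whether the resets are immediate or delayed, which narrows down where in the path they originate.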
