Hi,
We have deployed the message tracking app to one of our heavy forwarders to pull the logs, and we also have the TA on our clustered search heads. The logs are coming in fine, but we see the following error in splunkd.log on the search heads. Any idea what could be causing this?
02-19-2019 17:11:36.742 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace.py" HTTPError: HTTP 500 Internal Server Error -- {"messages":[{"type":"ERROR","text":"Unexpected error \"\" from python handler: \"REST Error [500]: Internal Server Error --
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/splunktaucclib/rest_handler/handler.py", line 113, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/splunktaucclib/rest_handler/handler.py", line 348, in _format_all_response
    self._encrypt_raw_credentials(cont['entry'])
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/splunktaucclib/rest_handler/handler.py", line 382, in _encrypt_raw_credentials
    change_list = rest_credentials.decrypt_all(data)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/splunktaucclib/rest_handler/credentials.py", line 286, in decrypt_all
    all_passwords = credential_manager._get_all_passwords()
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/solnlib/utils.py", line 154, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/solnlib/credentials.py", line 272, in _get_all_passwords
    clear_password += field_clear[index]
TypeError: cannot concatenate 'str' and 'NoneType' objects
". See splunkd.log for more details."}]}
Thanks
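For anyone hitting the same traceback: the failing step is solnlib's _get_all_passwords(), which appears to rebuild a clear-text credential by concatenating chunks it reads back from Splunk's storage/passwords endpoint, and one of those chunks comes back as None instead of a string. A minimal sketch of that failure mode (not the add-on's actual code; the values are hypothetical):

# Minimal sketch of the failure pattern in the traceback, not the add-on's code.
# One credential chunk is None (hypothetical values), which reproduces the TypeError.
field_clear = {0: "first-chunk-of-credential", 1: None}  # None = chunk this instance cannot resolve

clear_password = ""
try:
    for index in sorted(field_clear):
        clear_password += field_clear[index]  # fails as soon as a chunk is None
except TypeError as err:
    # Under Python 2, which the log's wording indicates this add-on runs on, the message reads:
    #   cannot concatenate 'str' and 'NoneType' objects
    print("TypeError: {}".format(err))

In other words, the REST handler is trying to decrypt stored credentials that this instance cannot read, which lines up with the splunk.secret mismatch described in the resolution below.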
Resolved the issue. Because we have a different splunk.secret for each of our Splunk tiers, we had to create a custom app for the heavy forwarder that sits alongside the main add-on. Our errors were the result of mistakenly deploying that custom app to the search head cluster. Removing it resolved the issue.
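If the heavy forwarder gets its apps from a deployment server, one way to avoid pushing the credential app to the wrong tier again is to pin it to a server class that only matches the forwarder. A sketch along those lines, with hypothetical server class, app, and host names (a search head cluster receives apps from the deployer instead, so the other half of the fix is simply not staging the custom app under shcluster/apps):

# serverclass.conf on the deployment server -- names below are hypothetical
[serverClass:o365_heavy_forwarders]
# only clients whose host name matches this pattern receive apps from this class
whitelist.0 = hf-o365-*

[serverClass:o365_heavy_forwarders:app:custom_o365_credentials]
stateOnClient = enabled
restartSplunkd = true

Keeping the credential app in its own server class also makes it obvious at a glance which tier is supposed to hold the encrypted account settings.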
