TA doesn't appear to be using configured proxy

Path Finder

I've installed the TA on our Heavy Forwarder, and configured it with the details needed to connect to the Event Hub, as well as the settings for our proxy that it needs to use.

Despite this, we're not seeing any traffic on our proxy from the Heavy Forwarder, even though the TA appears to have tried and failed to connect:

2020-03-30 10:37:53,193 ERROR pid=11042 tid=MainThread | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/modinput_wrapper/", line 127, in stream_events
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/", line 92, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/", line 112, in collect_events
    partition_ids = client.get_partition_ids()
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/", line 163, in get_partition_ids
    return self.get_properties()['partition_ids']
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/", line 146, in get_properties
    response = self._management_request(mgmt_msg, op_type=b'')
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/", line 127, in _management_request
    self._handle_exception(exception, retry_count, max_retries)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/", line 105, in _handle_exception
    _handle_exception(exception, retry_count, max_retries, self)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/", line 196, in _handle_exception
    raise error
ConnectError: Unable to open management session. Please confirm URI namespace exists.
Unable to open management session. Please confirm URI namespace exists.

I've configured the proxy and can see it showing up in ta_ms_aad_settings.conf as:

proxy_enabled = 1
proxy_port = <proxyport>
proxy_url = <proxyurl>

And when searching for sourcetype=ta:ms:aad:log, I can see the message:

2020-03-30 10:37:58,475 DEBUG pid=12915 tid=MainThread | _Splunk_ Proxy is enabled: <proxyurl>:<proxyport>
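If it helps, my understanding (an assumption on my part, not the add-on's actual code) is that the TA translates these settings into the http_proxy dictionary the Azure Event Hub SDK expects, with key names matching what shows up in the debug log, along these lines:

```python
# Hypothetical sketch of how the TA's proxy settings might map to the
# http_proxy dictionary used by the Azure Event Hub SDK. Key names follow
# the uamqp http_proxy convention; values here are placeholders.

def build_http_proxy(settings):
    """Build the SDK proxy dict from ta_ms_aad_settings.conf values."""
    if settings.get("proxy_enabled") != "1":
        return None
    return {
        "proxy_hostname": settings["proxy_url"],
        "proxy_port": int(settings["proxy_port"]),
        "username": settings.get("proxy_username", ""),
        "password": settings.get("proxy_password", ""),
    }

proxy = build_http_proxy({
    "proxy_enabled": "1",
    "proxy_url": "proxy.example.com",   # placeholder for <proxyurl>
    "proxy_port": "9090",               # placeholder for <proxyport>
})
print(proxy)
```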

However, when I run tcpdump on the host, I can see it making DNS requests to resolve the host in the Event Hub connection string I provided, and then connecting directly to that host, without using the proxy:

2020-03-27 15:34:38.750364 IP (tos 0x0, ttl 64, id 26550, offset 0, flags [DF], proto TCP (6), length 60)
 <SplunkHeavyForwarder> > <AzureEventHub>: Flags [S], cksum 0xf2b6 (incorrect -> 0xbf5a), seq 330547664, win 29200, options [mss 1460,sackOK,TS val 3370165862 ecr 0,nop,wscale 7], length 0
2020-03-27 15:34:39.752354 IP (tos 0x0, ttl 64, id 26551, offset 0, flags [DF], proto TCP (6), length 60)
 <SplunkHeavyForwarder> > <AzureEventHub>: Flags [S], cksum 0xf2b6 (incorrect -> 0xbb70), seq 330547664, win 29200, options [mss 1460,sackOK,TS val 3370166864 ecr 0,nop,wscale 7], length 0
2020-03-27 15:34:41.757349 IP (tos 0x0, ttl 64, id 26552, offset 0, flags [DF], proto TCP (6), length 60)
 <SplunkHeavyForwarder> > <AzureEventHub>: Flags [S], cksum 0xf2b6 (incorrect -> 0xb39b), seq 330547664, win 29200, options [mss 1460,sackOK,TS val 3370168869 ecr 0,nop,wscale 7], length 0
2020-03-27 15:34:45.768341 IP (tos 0x0, ttl 64, id 26553, offset 0, flags [DF], proto TCP (6), length 60)
 <SplunkHeavyForwarder> > <AzureEventHub>: Flags [S], cksum 0xf2b6 (incorrect -> 0xa3f0), seq 330547664, win 29200, options [mss 1460,sackOK,TS val 3370172880 ecr 0,nop,wscale 7], length 0

I'm out of ideas on where this is failing - has anyone had a similar issue, or can you see something I've missed?

I don't need to edit ta_ms_aad_settings.conf.spec, do I? I assume it's like a template for the ta_ms_aad_settings.conf that has been populated with the proxy config. Currently the spec file contains only empty keys:

proxy_enabled =
proxy_type =
proxy_url =
proxy_port =
proxy_username =
proxy_password =
proxy_rdns =

loglevel =
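For what it's worth, my understanding (an assumption based on how other Splunk add-ons document their settings) is that .conf.spec files are documentation/validation stubs rather than templates that need populating, so a spec would typically just annotate the expected value types, e.g.:

```
proxy_enabled = <bool>
proxy_type = <string>
proxy_url = <string>
proxy_port = <integer>
proxy_username = <string>
proxy_password = <string>
proxy_rdns = <bool>

loglevel = <string>
```

If that's right, leaving the spec file as-is shouldn't affect whether the proxy settings are picked up.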

Splunk Employee

The add-on uses the Azure Python SDK to get Event Hub data. There are a few things we can do to troubleshoot:

Set the logging level to DEBUG in the add-on

  • From the add-on UI, select Configuration -> Logging -> Log Level

Add a debug log to the TA to print out the HTTP proxy

  • Add the following line after HTTP_PROXY = get_proxy(helper, "event hub") :

    helper.log_debug("_Splunk_ HTTP_PROXY: %s" % str(HTTP_PROXY))

  • Run this search to display the proxy used:

    index=_internal _Splunk_ HTTP_PROXY

Test the connection outside of Splunk


Path Finder

Thanks for your suggestions.

The debug modifications resulted in the following output:

2020-04-03 15:20:33,490 DEBUG pid=3750 tid=MainThread | _Splunk_ HTTP_PROXY: {'username': '', 'proxy_hostname': u'', 'password': '', 'proxy_port': 9090}

This all looks as expected except for the "u" character sitting between the proxy_hostname field and its value. Could this be an issue? No idea where that came from, and checking the inputs tab of the TA, the proxy is clearly configured as "".
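That said, a quick check suggests the "u" is just how Python 2 displays unicode strings in repr() output, so it's probably cosmetic rather than part of the configured value (hostname below is a placeholder):

```python
# Python 2 displays unicode strings with a u prefix in repr() output,
# but the prefix is not part of the value itself.
host = u"proxy.example.com"   # placeholder hostname

# Python 2 prints: u'proxy.example.com' -- Python 3 prints: 'proxy.example.com'
print(repr(host))

# The value compares equal to the plain string either way.
print(host == "proxy.example.com")
```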

The proxy test script worked using the same Connection String and Event Hub Name provided to the TA, and we were able to receive logs.

To me this means the issue isn't in the Azure variables we're using, and there's clearly some problematic difference between how the Python test script polls the Event Hub and how the TA does. Exactly what it is, I haven't been able to figure out.

Any further advice you could provide would be greatly appreciated.



Can you post $SPLUNK_HOME/etc/apps/TA-MS-AAD/local/ta_ms_aad_settings.conf?


Path Finder

It's in the original post:

 proxy_enabled = 1
 proxy_port = <proxyport>
 proxy_url = <proxyurl>

(our proxy details omitted for obvious reasons)



The first log indicates a namespace problem, not a proxy one. Can you check this post:


Path Finder

Yeah, I've seen that post and made sure I'm using the Event Hub name in the inputs tab, not the namespace.

As mentioned, a tcpdump of the relevant traffic shows Splunk failing at the SYN stage of the TCP three-way handshake, apparently because it attempted to go directly to the public IP instead of through the proxy. I believe the packets were dropped by a network device they attempted to transit, hence the absence of a RST or any response from Azure.

That being the case, I believe the URI namespace error simply means the resource couldn't be reached because the connection attempts were dropped en route, not that the namespace doesn't exist.
