Hi, I think this may be a bug of some sort. I recently installed both the Splunk add-on/app for AWS and the PagerDuty app.
AWS for Splunk has a built-in proxy function 🙂
PagerDuty does not, and relies on the proxy being defined in splunk-launch.conf.
We have to go through a proxy to get outside.
The problem is that when a proxy is defined in splunk-launch.conf, it breaks the AWS app: any accounts that have been configured are lost and the app stops working. Remove the proxy from splunk-launch.conf, and the AWS accounts reappear.
The AWS app will not use the proxy defined in splunk-launch.conf.
So we are stuck.
I've been experiencing this issue as well, and after searching through the code for all the apps involved I can see why the AWS Add-on has its own dedicated proxy settings. Even so, the AWS Add-on does check the splunk-launch/environment proxy settings. When adding an AWS account via the Add-on with those proxy settings in place, you're likely to see an error unless the proxy is set up to handle loop-back traffic (i.e. 127.0.0.1) - which it probably won't be because, why would it!? To get past this error, add
no_proxy=127.0.0.1 to splunk-launch.conf.
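Concretely, the workaround amounts to something like this in $SPLUNK_HOME/etc/splunk-launch.conf (the proxy host and port below are placeholders, not real values - use whatever your environment requires):

```
# splunk-launch.conf - exported as environment variables at splunkd startup
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
# Let the AWS Add-on's loop-back REST calls bypass the proxy:
no_proxy=127.0.0.1
```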
The only remaining problem then is for the webhook approach to be updated to account for proxy settings. It would be nice if Splunk had an API to make this easier and avoid the need to pull in the proxy settings and establish the connection manually.
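In the meantime, a proxy-aware webhook call could be sketched roughly like this. This is just an illustration under my own assumptions - the function names, timeout, and request shape are mine, not the PagerDuty app's actual code; it simply reads the http_proxy/https_proxy variables that splunk-launch.conf exports:

```python
import json
import os
import urllib.request

def proxies_from_env():
    """Collect proxy URLs from the environment variables Splunk exports
    via splunk-launch.conf (http_proxy / https_proxy, either case)."""
    proxies = {}
    for scheme in ("http", "https"):
        url = (os.environ.get(scheme + "_proxy")
               or os.environ.get(scheme.upper() + "_PROXY"))
        if url:
            proxies[scheme] = url
    return proxies

def send_webhook(url, payload, timeout=30):
    """POST a JSON payload through whatever proxy the environment defines."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler(proxies_from_env()))
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # On Unix, urllib's ProxyHandler still honours no_proxy, so the
    # 127.0.0.1 bypass above keeps working for loop-back requests.
    return opener.open(req, timeout=timeout)
```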
Hi, which version of the AWS app did you install? Proxy support is one of the new features in version 4.1, which was released two months ago.
Hi, thanks for the reply. These are the versions in use:
Splunk Add-on for AWS - 3.0.0
Splunk App for AWS - 4.1.1
Splunk Enterprise - 6.3.0
PagerDuty Incidents - 1.0