Has anyone run into and/or resolved this with 6.2 -->
"Forbidden: Strict SSO Mode View more information about your request (request ID = XX) in Search"
I get this message when attempting to load ANY Splunk URL except the login page, which instead comes up as a blank grey page.
Not my blog, but this site has some more detail about the issue: http://translate.google.com.au/translate?hl=en&sl=ja&u=http://snickerjp.blogspot.com/2014/10/splunk-...
From what I read on http://docs.splunk.com/Documentation/Splunk/6.2.0/admin/Webconf, setting 'SSOMode = permissive' in /opt/splunk/etc/system/default/web.conf should have gotten me past this.
That said, I don't believe it should be needed anyway, as I haven't set up SSO on this instance (nor can I properly, since it's the free version).
Running on Ubuntu, if it makes any difference (an upgrade I did on RHEL didn't hit this issue).
It works fine if I set appServerPorts to 0 so it goes into legacy mode.
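For reference, this is roughly the shape of the workaround; a minimal sketch, assuming the override goes in /opt/splunk/etc/system/local/web.conf rather than the default file (both settings are documented on the Webconf page linked above):

# /opt/splunk/etc/system/local/web.conf (sketch; local overrides default)
[settings]
# Fall back to the legacy app server so the new SSO check path is skipped
appServerPorts = 0
# Documented alternative for relaxing SSO enforcement
#SSOMode = permissive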
Thanks,
Carson.
Mine was much more complicated, and I got an answer from Splunk support.
The very short story is that I had some misconfigured iptables rules that were masquerading traffic from the loopback so that it appeared to come from my eth0 IP address... since Splunk was seeing a source IP other than 127.0.0.1, it was freaking out.
Fixing the iptables rule resolved my issue.
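If you want to check whether you're hitting the same thing, something like this should list any NAT rules that rewrite loopback traffic (a sketch; rule numbers, counters, and interfaces will differ on your box):

# Show the NAT POSTROUTING chain with packet counters and rule numbers
iptables -t nat -L POSTROUTING -v -n --line-numbers

Look for a MASQUERADE (or SNAT) rule broad enough to match traffic destined for 127.0.0.1.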
To add some detail to this: I had this error until I set all traffic destined for 127.0.0.1 to skip MASQUERADE. The new iptables rule fixed the error immediately, no service restart required.
iptables -I POSTROUTING 1 -t nat -d 127.0.0.1 -j ACCEPT
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target      prot opt in  out  source     destination
   10   600 ACCEPT      all  --  *   *    0.0.0.0/0  127.0.0.1
5710K  580M MASQUERADE  all  --  *   *    0.0.0.0/0  0.0.0.0/0
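If you want the rule to survive a reboot, one option (a sketch, assuming Ubuntu with the iptables-persistent package installed) is to save the running ruleset:

# Persist the current IPv4 ruleset so it is restored at boot
iptables-save > /etc/iptables/rules.v4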
I had this issue after the upgrade as well. I think the correct fix is to set "tools.proxy.on" to "false". I know this setting was required, or at least made things easier, when running Splunk behind a (reverse) proxy. Now the setting is only needed when using SSO or very old Apache proxies.
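For what it's worth, here is where that setting lives; a minimal sketch of the web.conf stanza I mean (in etc/system/local):

# etc/system/local/web.conf (sketch)
[settings]
# Only needed for SSO or very old Apache proxies; false is the default
tools.proxy.on = false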
That didn't seem to work for me. By default tools.proxy.on is false, and I hadn't overridden it...
I think the key part is that I'm not actually running SSO, so it shouldn't be showing this at all. I am currently running behind an Apache proxy, but purely to make the URL nicer, not for any SSO functionality. I was getting the same error when going directly to Splunk on port 8000.
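For context, the proxy is nothing fancy; a rough sketch of the sort of Apache (mod_proxy) config I mean, with illustrative hostnames and paths:

# Apache reverse proxy sketch (hostnames/paths are examples only)
ProxyPass        /splunk http://localhost:8000/splunk
ProxyPassReverse /splunk http://localhost:8000/splunk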
What settings are in your web.conf? There are a few other settings that may enable/force SSO unintentionally.
[settings]
x_frame_options_sameorigin = False
root_endpoint = /splunk
#SSOMode = permissive
#trustedIP = 1.0.0.0/23, 127.0.0.1
#http://docs.splunk.com/Documentation/Splunk/6.2.0/Admin/Webconf appServerPorts
# This is my workaround to get it working
appServerPorts = 0
I'm pretty sure I've tried it without the first two settings, and had the same outcome.
It's a bit unclear from the docs, but it seems like trustedIP doesn't support ranges unless appServerPorts is set to something other than 0. I've made a few requests to the docs team on this topic today, so hopefully we can all benefit.
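To illustrate the distinction (a sketch based on my reading of the docs, not something I've verified end to end):

# With the legacy app server (appServerPorts = 0), list individual addresses:
#trustedIP = 127.0.0.1, 10.0.0.5
# With a non-zero appServerPorts (for example 8065), ranges appear to be accepted:
#appServerPorts = 8065
#trustedIP = 1.0.0.0/23, 127.0.0.1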