
Splunk Search Head giving 500 internal server error after upgrading to Splunk 6.4

Contributor

Hi,

I just upgraded my Splunk Deployment from 6.3 to 6.4.

While I am still able to authenticate to the search head, I am getting a 500 Internal Server Error, which is preventing me from doing anything on the search head.

Has anyone else had the same issue? I'm not sure why this is happening.

I filed a support case as well, but thought I would also ask here in case anyone knows the cause.

This is my first time having an issue during a Splunk Upgrade.

Any help would be much appreciated. Let me know.

Thanks.
Brian


Re: Splunk Search Head giving 500 internal server error after upgrading to Splunk 6.4

Contributor

This turned out to be an issue with proxy environment variables set on the system side.

To mitigate the issue, I had to unset http_proxy and https_proxy and set the following:

NO_PROXY=127.0.0.1,localhost

This has resolved my issue.
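For reference, the workaround above might look like this in a bash-like shell (a sketch; it assumes both the lower- and upper-case forms of the proxy variables may be set, and that your tools honor both cases of NO_PROXY):

```shell
# Clear the system-wide proxy variables that were breaking the search head.
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY

# Let loopback traffic bypass any proxy, so splunkweb can reach splunkd
# on the management port (8089 by default).
export NO_PROXY=127.0.0.1,localhost
export no_proxy=127.0.0.1,localhost
```

Restart Splunk afterward so splunkd and splunkweb pick up the new environment.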

Now I just need to figure out how to configure Splunk to go out through the proxy so I can access Splunkbase apps.

Thanks
Brian


Re: Splunk Search Head giving 500 internal server error after upgrading to Splunk 6.4

SplunkTrust

You're not alone; this is the #1 highlighted known issue at http://docs.splunk.com/Documentation/Splunk/6.4.0/ReleaseNotes/Knownissues


Re: Splunk Search Head giving 500 internal server error after upgrading to Splunk 6.4

Esteemed Legend

This happens when you accidentally turn off the management interface. You can check for this with either of these two commands:

find /opt/splunk/etc/ -type f -name server.conf -exec grep -il disableDefaultPort {} \;
/opt/splunk/bin/splunk btool server list --debug | grep disableDefaultPort

To brute force a quick-fix until you sort out your configuration files, just put this in /opt/splunk/etc/system/local/server.conf:

[httpServer]
disableDefaultPort = false

Then restart Splunk.
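One way to drop that stanza in place from the command line (a sketch; it assumes the default /opt/splunk install location, overridable via SPLUNK_HOME):

```shell
# Append the quick-fix stanza to the local server.conf. SPLUNK_HOME is an
# assumption here -- override it if Splunk is installed somewhere else.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"

mkdir -p "$SPLUNK_HOME/etc/system/local"
cat >> "$SPLUNK_HOME/etc/system/local/server.conf" <<'EOF'
[httpServer]
disableDefaultPort = false
EOF
```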
To add insult to injury, neither the splunkd logs nor the dead page served to you gives any indication that this is the situation, and either could (and BOTH SHOULD). Even when we turned on debug with /opt/splunk/bin/splunk start --debug, we STILL did not get any log entry telling us that this setting had explicitly disabled this core function. The ONLY place that you see this, and the only reason that we figured it out, is that it IS logged to STDOUT when you start Splunk. You will see this somewhat casual note:

$ /opt/splunk/bin/splunk start

Splunk> All batbelt. No tights.

Checking prerequisites...
        Management port has been set disabled; the web UI cannot work.
        Checking http port [8000]: open
        Management port has been set disabled; cli support for this configuration is currently incomplete.

I opened a P4/ER to have this logged as a WARN, but who knows if it will ever get implemented. Hopefully this answer will save somebody the day that I wasted on this. To be fair, it was my own fault: I was hardening UFs and did not have my blacklist correct for my server class, so it hit a few of my Search Heads. DOH!
