
SPLUNK_BINDIP, web and daemon don't agree

grijhwani
Motivator

I've quickly skimmed through the answers already here and not found a corresponding one, although there is a question from four years ago that touches on the subject. I have a server on 6.2.4 (which will go to 6.3.0 as soon as I fix this particular issue).

It has two interfaces, one of which potentially faces the big bad world. That interface is currently shut down, but as a precaution, in case someone enables it again, I want to bind Splunk to the internal interface only. The problem is that when I set SPLUNK_BINDIP to the internal address (RFC 1918 private address space), the initial web UI comes up prompting me to download an update, but as soon as I go beyond that it gives a server 500 error, with a footnote that it is trying to connect to the daemon on 127.0.0.1:8089. The daemon, however, is bound only to the same RFC 1918 address, not to 127.0.0.1.
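For context, SPLUNK_BINDIP is set in splunk-launch.conf and applies to all of Splunk's listeners at once (web UI, management port, and so on), which is why binding the daemon away from loopback affects the web UI's connection to 8089. A minimal sketch, with the internal address as a placeholder:

```ini
# $SPLUNK_HOME/etc/splunk-launch.conf
# Bind all Splunk listeners (web UI, management port 8089, etc.)
# to the internal interface only.
# 10.0.0.5 is a placeholder for your internal RFC 1918 address.
SPLUNK_BINDIP=10.0.0.5
```

Note that this takes effect on restart and, because it is global, anything that still expects to reach the daemon on 127.0.0.1 must be reconfigured to the bound address.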

1) There seems to be no mechanism for binding to all interfaces/addresses except specified exceptions (in order to exclude the external interface)
2) There seems to be no mechanism for specifying multiple explicit address/interface bindings (in order to list the internal interface and loopback explicitly)
3) Despite setting mgmtHostPort in ~splunk/etc/system/local/web.conf to the bound IP address, btool shows that Splunk is still picking it up from ~splunk/etc/system/default/web.conf as 127.0.0.1.

What gives?

(It's running on Debian, and yes, ownership and permissions on web.conf are correct - or at least they match those on default. This is a root-run process, and all ownership is by root. Still one of my biggest gripes with Splunk architecture on *ix.)

1 Solution

grijhwani
Motivator

Well that was a facepalm moment - I forgot the stanza heading...
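For anyone landing here later: mgmtHostPort must sit under the [settings] stanza in web.conf, otherwise btool ignores the local value and falls back to the default. A minimal sketch (the address is a placeholder for the internal IP used in SPLUNK_BINDIP):

```ini
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
# Point Splunk Web at the management port on the bound address.
# 10.0.0.5 is a placeholder for your internal RFC 1918 address.
mgmtHostPort = 10.0.0.5:8089
```

You can confirm which file the setting is resolved from with btool, e.g. $SPLUNK_HOME/bin/splunk btool web list settings --debug.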

Now there's another problem.


napomokoetle
Communicator

I had a similar issue. In my case the mgmtHostPort parameter in web.conf was misconfigured, and unfortunately the logs were as vague as can be. Your post made me realize that mgmtHostPort was pointing at a different Splunk instance's IP. Once I changed the value to the local IP, all worked again. I had been working very late the previous night, was fatigued, and changed the IP without realizing I was not on the right instance.
Thank you!


