Changing License Manager serverName affects SHC
I have a clustered Splunk environment where the License Manager has a non-existent serverName configured (it cannot be resolved via name lookup) in etc/system/local/server.conf -> serverName. It has been running like this for some time.
This is causing issues with license monitoring in the Monitoring Console. To eliminate the issue and bring this instance in line with the other existing instances, I simply changed the serverName in server.conf to the hostname and restarted the Splunk service.
The Splunk service starts without complaint, but the Monitoring Console suddenly reports all the search heads as unreachable.
Querying the search heads for shcluster-status results in errors.
Reverting to the old name and restarting fixes the search head unreachable status.
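For reference, the change was essentially just this (the hostname below is a placeholder, not the real one), applied with a restart and verified with btool:

```
# etc/system/local/server.conf on the License Manager / Cluster Manager
[general]
serverName = lm01.example.com

# restart, then check the effective setting
$SPLUNK_HOME/bin/splunk restart
$SPLUNK_HOME/bin/splunk btool server list general --debug
```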
This License Manager server has the following roles:
* License manager
* (Monitoring Console)
* Manager Node
I do not see why this change would affect the search heads. The indexers are fine, and the deployer is a different server. I found documented issues (for this kind of change) for indexers and the Monitoring Console itself, and notes that it can have side effects for the Deployment Server, but no real hit on search heads/SHC.
As I do not have permanent access to this instance, I have to prepare some kind of remediation plan, or at least an analysis.
I'm searching for hints on where to start my investigation. Maybe someone has successfully changed a License Manager name. I'm hoping I'm missing something obvious.
Thanks

Hi
Since you have the Cluster Manager as your LM too, this serverName is actually your CM's name, not only your LM's name.
I assume you are also using indexer discovery and other such features in your environment?
I suppose that when you changed that serverName, all the entities connected to your CM could have issues, because the serverName they expect no longer exists.
It's hard to say what all of those are without a deeper look into your environment.
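For example (hypothetical names, and assuming a fairly standard setup), peers, search heads and discovery-enabled forwarders all reference the CM, so a changed identity on the CM side can surface in any of them:

```
# server.conf on an indexer (cluster peer)
[clustering]
mode = peer
manager_uri = https://10.0.0.10:8089

# server.conf on a search head that searches the indexer cluster
[clustering]
mode = searchhead
manager_uri = https://10.0.0.10:8089

# outputs.conf on a forwarder using indexer discovery
# (manager_uri is called master_uri on older releases)
[indexer_discovery:cm]
manager_uri = https://10.0.0.10:8089
```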
r. Ismo
You are correct. I had prepared for and found steps for fixing the indexers, but they seemed fine.
The configuration for manager_uri and the like is largely based on IP addresses (which is another topic of its own), and the IP did not change. So endpoints should still be able to reach the "modified" server (but may expect a different response).
I have to dig into indexer_discovery (and the like). I did not prepare for that; according to my documentation, it is not configured.
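What I plan to check, roughly (placeholders, and assuming shell access to the instances):

```
# On the CM/LM: is indexer discovery enabled at all?
$SPLUNK_HOME/bin/splunk btool server list indexer_discovery --debug

# On a forwarder: does any output group rely on discovery?
$SPLUNK_HOME/bin/splunk btool outputs list --debug | grep -i indexer_discovery

# On a search head: how is the CM referenced (IP vs. name)?
$SPLUNK_HOME/bin/splunk btool server list clustering --debug | grep -i manager_uri
```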

Hi @stei-f
It's very odd that this would only affect the SHs, especially as any outbound connection from the Monitoring Console shouldn't be impacted by the change to the MC server name.
From the Monitoring Console, if you go to Settings -> General Setup, what does this screen look like? Do you see the remote SHs in there?
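If the UI is ambiguous, the same information should be retrievable from the MC instance itself via REST, e.g. (host and credentials below are placeholders):

```
# List the distributed search peers the MC knows about, including their status
curl -k -u admin:changeme "https://mc-host:8089/services/search/distributed/peers?count=0"
```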
I'm pretty sure the SHs were in "Settings -> General Setup" (listed as remote instances), as I wanted to apply the config so that the name change would be applied to the apps' lookups (splunk_ apps). At that time I was still thinking the unreachable status was a timing/communication thing. So to verify my point, I checked shcluster-status (CLI) on the SHs, only to discover that the SHC had failed (I was not able to query the state). That's when I chickened out and reverted the configuration.
I will add this to my checklist.
In reflection, I messed up. I failed to take evidence of the situation (e.g. screenshots and error messages), focusing instead on restoring the service.
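For the next attempt I intend to capture at least something like this before reverting (a rough checklist, host-level access assumed):

```
# On a SHC member: captain/member view of the cluster
$SPLUNK_HOME/bin/splunk show shcluster-status

# On the LM/CM and on a search head: errors around the restart window
grep -iE "WARN|ERROR" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -n 200
```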
