We've run into a minor problem when cloning virtual machines with a forwarder installed. Unfortunately, the guid (globally unique identifier) in server.conf was not cleared prior to cloning, and we've ended up with several forwarders sharing the same guid. This does not seem to affect anything except some graphs in the Deployment Monitor, e.g. the Forwarder Connections graph on the main page, which is based on a distinct count of guids and therefore reports fewer active forwarders than there really are. (NB: the "all forwarders" view in DM shows the correct number of forwarders.) There might, however, be other consequences not yet discovered.
What I want to do is manually change the guid values on the cloned hosts, but first I'd like to know whether the guid is just a random string of hex, whether it actually encodes something (OS version, IP address, forwarder build number, hostname, etc.), or whether there is some sort of checksum to be taken into account.
Can I just change the guid value in server.conf of an already-installed forwarder, restart it, and expect it to work fine?
EDIT: Or even simpler: can I just remove the guid value altogether and restart the forwarder, hoping that it will generate a new one?
Any help appreciated.
It doesn't mean anything, but yes, you can just delete it and it will be regenerated.
I tried this in my lab on cloned indexers (version 7.3.0) that had the same GUID, and deleting the GUID from instance.cfg worked.
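The delete-and-restart fix can be sketched roughly as below. The file path, the sed pattern, and the sample guid are assumptions for illustration; on a real host the file would be $SPLUNK_HOME/etc/instance.cfg (or server.conf on older versions), and you would finish with "$SPLUNK_HOME/bin/splunk restart" rather than the grep check. This demo uses a temp file and GNU sed so nothing real is touched.

```shell
# Sketch: strip the stale guid line so Splunk mints a fresh one at
# the next restart. CFG here is a throwaway demo file, not a real
# Splunk install.
CFG=$(mktemp)
printf '[general]\nguid = 5C9358F8-0000-0000-0000-000000000000\n' > "$CFG"

# Delete the "guid = ..." line in place (GNU sed assumed)
sed -i '/^guid[[:space:]]*=/d' "$CFG"

# Show that the stale guid is gone (prints 0 matches)
grep -c '^guid' "$CFG" || true
```

On a real forwarder the restart step is what triggers regeneration; the edit alone only removes the old value.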
I have been through this. Just delete the GUID and restart; it will generate another one.
I'm not sure exactly how it is generated, though. From what I can recall, the hash has something to do with the license type.
With license pools it affects your daily indexing volume too. If two instances run with the same GUID (for example, because you cloned an indexer), you will see double the indexed volume.
This happened to me when I didn't change the GUID after cloning an instance: even though I started with a clean index, it still doubled the daily indexing volume in the license manager.
Hope this helps.
I ran into this issue as well, but on Splunk 5.0.2 the file was not server.conf; it was instance.cfg under $SPLUNK_HOME/etc. Removing the guid and restarting Splunk still fixed the issue.
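Since the file holding the guid has moved between versions, a small helper like the sketch below could check both locations before editing anything. The function name, the candidate paths, and the demo tree are all assumptions for illustration; on a real host you would pass your actual SPLUNK_HOME.

```shell
# Sketch: report which config file holds the guid, checking
# instance.cfg first (newer releases) and falling back to
# server.conf (older ones).
find_guid_file() {
  root=$1
  for f in etc/instance.cfg etc/system/local/server.conf; do
    if [ -f "$root/$f" ] && grep -q '^guid' "$root/$f"; then
      echo "$root/$f"
      return 0
    fi
  done
  return 1
}

# Demo against a fake install tree (a real run would use $SPLUNK_HOME)
demo=$(mktemp -d)
mkdir -p "$demo/etc"
printf '[general]\nguid = AAAA-BBBB\n' > "$demo/etc/instance.cfg"
find_guid_file "$demo"
```

Whichever file the helper reports is the one to remove the guid line from before restarting.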