We recently upgraded Splunk from 8.0.2 to 9.0.4. The SH cluster members are showing the message: "KV Store is running an old version, service(36). See the Troubleshooting Manual on KV Store upgrades for more information."
We have followed the steps defined in the KV store migration documentation - https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/MigrateKVstore?ref=hk - but the version doesn't seem to upgrade. It still shows the existing version.
Current serverVersion : 3.6.17
storageEngine : wiredTiger
There are two required changes to the KV store when upgrading to Splunk 9: change the storage engine to wiredTiger and upgrade the KV store server version to 4.2.
See https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/MigrateKVstore for details.
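On a search head cluster, that comes down to roughly these two commands (a sketch based on that docs page; the server1-3 URIs are placeholders for your own members, and both commands support -isDryRun true for a trial run first):
splunk start-shcluster-migration kvstore -storageEngine wiredTiger -peersList "https://server1:8089,https://server2:8089,https://server3:8089"
splunk start-shcluster-upgrade kvstore -version 4.2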
Hi,
As I mentioned in my first comment, we have followed the steps defined in that link.
Changing the storage engine to wiredTiger is done.
Changing the server version to 4.2 is not working.
The upgrade doesn't always work. What we did was export the data and completely delete the KV store after stopping Splunk. Then, when it came up, it recreated a blank KV store of the upgraded type. Then we reimported the data.
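On a standalone instance the stop/delete/recreate part looks roughly like this (a sketch; export the collection data first with whatever KV store backup/export tool you use, and moving the directory aside instead of deleting it keeps a fallback copy):
# export/back up the KV store collection data first, then:
splunk stop
mv $SPLUNK_HOME/var/lib/splunk/kvstore $SPLUNK_HOME/var/lib/splunk/kvstore.old
splunk start
splunk show kvstore-status --verbose   # the recreated, empty KV store should report the upgraded engine/version
# finally, reimport the exported collection data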
Hi
Thanks for the response. By "reimport the data", do you mean the data from the $SPLUNK_HOME/var/lib/splunk/kvstore directory? If we restore that, wouldn't it still contain the older-version files/data?
Regards
NO. I would use this:
https://splunkbase.splunk.com/app/3519
"Is not working" is not a problem description. Please provide the exact steps followed and the results of them. Tell us about your environment (standalone, clustered, etc) so we know if you're using the right instructions.
Hi
The issue is in an SH clustered environment. The KV store server version migration didn't happen as part of the Splunk upgrade, hence we followed the steps below.
Migrate the storage engine to wiredTiger (dry run via REST, then the actual migration):
curl -k -u admin:changeme https://localhost:8089/services/shcluster/captain/kvmigrate/start -d storageEngine=wiredTiger -d isDryRun=true
splunk start-shcluster-migration kvstore -storageEngine wiredTiger -peersList "https://server1:8089,https://server2:8089,https://server3:8089"
Upgrade the KV store server to version 4.2 (dry run, then the actual upgrade):
splunk start-shcluster-upgrade kvstore -version 4.2 -isDryRun true
splunk start-shcluster-upgrade kvstore -version 4.2
However, when running the command splunk show kvstore-status --verbose, it still shows the old server version.
KV store members:
np-sh-1:8191
configVersion : 14
electionDate : Wed Apr 12 06:02:53 2023
electionDateSec : 1681275773
hostAndPort : np-sh-1:8191
lastHeartbeat : Wed Apr 12 13:40:44 2023
lastHeartbeatRecv : Wed Apr 12 13:40:44 2023
lastHeartbeatRecvSec : 1681303244.627
lastHeartbeatSec : 1681303244.586
optimeDate : Wed Apr 12 13:40:32 2023
optimeDateSec : 1681303232
pingMs : 1
replicationStatus : KV store captain
serverVersion : 3.6.17
uptime : 27479
np-sh-2:8191
configVersion : 14
hostAndPort : np-sh-2:8191
lastHeartbeat : Wed Apr 12 13:40:44 2023
lastHeartbeatRecv : Wed Apr 12 13:40:45 2023
lastHeartbeatRecvSec : 1681303245.54
lastHeartbeatSec : 1681303244.565
optimeDate : Wed Apr 12 13:40:32 2023
optimeDateSec : 1681303232
pingMs : 0
replicationStatus : Non-captain KV store member
serverVersion : 3.6.17
uptime : 27475
np-sh-3:8191
configVersion : 14
hostAndPort : np-sh-3:8191
optimeDate : Wed Apr 12 13:40:32 2023
optimeDateSec : 1681303232
replicationStatus : Non-captain KV store member
serverVersion : 3.6.17
uptime : 27483
It looks like you've run the right commands. Was there any output from the splunk start-shcluster-upgrade kvstore -version 4.2 -isDryRun true command? If it reported issues, were they resolved before re-running the command without the dry run? Are all nodes in the cluster reporting the same version? Is the cluster otherwise in an error-free state?
Consider contacting Splunk Support for assistance with this problem.
Hi
Thanks for the response. We will try recreating the KV store, and we are also getting in touch with the support team.
Regards
You should check $SPLUNK_HOME/var/log/splunk/splunkd.log for ERROR entries around the time these commands are executed. I've seen cases in the past where KV store upgrades/migrations wouldn't run due to things like insufficient storage, but you only find that out by looking at splunkd.log.
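For example, something along these lines (a quick sketch; mongod.log is the KV store's own log and is often worth checking as well):
grep -i "ERROR" $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -iE "kvstore|mongo"
tail -n 100 $SPLUNK_HOME/var/log/splunk/mongod.log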
Hi,
We were able to resolve the issue by removing one SH member at a time from the cluster and upgrading that member's KV store as a standalone instance. We then repeated the same steps for every member.
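The per-member flow was along these lines (a sketch, assuming CLI removal/re-add; np-sh-2 is just an example of a remaining member to rejoin through, and the standalone 4.2 upgrade step is the one documented on the same MigrateKVstore page):
splunk remove shcluster-member
splunk migrate kvstore-storage-engine --target-engine wiredTiger   # skip if the engine is already wiredTiger
# run the standalone KV store server 4.2 upgrade step from the docs page here
splunk show kvstore-status --verbose   # confirm serverVersion reports 4.2 before rejoining
splunk add shcluster-member -current_member_uri https://np-sh-2:8089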
Regards