I have a single-instance deployment of Splunk 8.2.3 on Fedora Linux 35, and it keeps prompting me to migrate the KV store storageEngine from mmapv1 to wiredTiger. However, when I follow the single-instance instructions at https://docs.splunk.com/Documentation/Splunk/8.2.3/Admin/MigrateKVstore#Migrate_the_KV_store_after_a... , the migration command always fails. The entire output is:
Starting KV Store storage engine upgrade:
Phase 1 (dump) of 2:
.....ERROR: Failed to migrate to storage engine wiredTiger, reason=
where "reason" is blank. I haven't found anyone else reporting this error with an empty reason. How can I complete the migration, or at least troubleshoot further?
Had this issue in a fairly large environment (over 40 Splunk servers): the KV store upgrade would not work on any host.
After a lot of troubleshooting we never found the root cause and had to give up, but we did find a reliable and easy enough workaround. Sharing it here in case someone else stumbles on this. It was tested with v8.1.9, so it should work on later versions as well. The stanza involved is in $SPLUNK_HOME/etc/system/local/server.conf:
[kvstore]
storageEngine=wiredTiger
Hope it helps some of you.
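For anyone trying to reconstruct the full sequence around that stanza: the generally documented route is to back up the KV store, switch the engine, rebuild, and restore. The sketch below is my assumption of that route, not necessarily the poster's exact steps; the archive name pre_wiredtiger is a placeholder.

```shell
# Sketch of the documented backup/rebuild route (an assumption, not the
# poster's verbatim steps). "pre_wiredtiger" is a placeholder archive name.
$SPLUNK_HOME/bin/splunk backup kvstore -archiveName pre_wiredtiger

$SPLUNK_HOME/bin/splunk stop

# While Splunk is stopped, add to $SPLUNK_HOME/etc/system/local/server.conf:
#   [kvstore]
#   storageEngine=wiredTiger

# Destructive: removes the existing mmapv1 KV store files on this node
$SPLUNK_HOME/bin/splunk clean kvstore -local

$SPLUNK_HOME/bin/splunk start    # KV store is recreated on wiredTiger

$SPLUNK_HOME/bin/splunk restore kvstore -archiveName pre_wiredtiger
```

By default the backup archive should land under $SPLUNK_HOME/var/lib/splunk/kvstorebackup; verify the archive exists before running the destructive clean step.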
Same issue, same blank result, and same sad and unhelpful log files. Pretty much business as usual for Splunk. Not even sure why I'm surprised anymore. Miss the good ol' days when Splunk was worth the hefty price tag.
I had the same issue.
For me it was because I didn't have any data in my KV store to migrate.
What worked for me was creating a collection and inserting some dummy data. When I tried the migration again, it succeeded.
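For anyone wanting to reproduce this fix, a collection with a dummy record can be created over the Splunk REST API. The collection name test_migration is a placeholder of mine, and this assumes the default management port 8089 and the search app context:

```shell
# Create a KV store collection (placeholder name: test_migration)
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/storage/collections/config \
  -d name=test_migration

# Insert one dummy record so the KV store is no longer empty
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/storage/collections/data/test_migration \
  -H 'Content-Type: application/json' \
  -d '{"note": "dummy record for migration"}'
```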
Unfortunately, my KV store isn't empty. It's only about 733 KB, which isn't much, but I don't want to lose it in any case.
Hi @kserverman, it looks like there have been a lot of wiredTiger migration failures in the last few weeks, and as you know, troubleshooting the KV store is complex as well.
Three questions:
1. Is there anything in $SPLUNK_HOME/var/log/splunk/mongod.log?
2. Do you use SSL certs (default or custom)?
3. What are the permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key? Are they 400?
Hi @inventsekar ,
I had upgraded Splunk to 8.2.4 earlier, so I tried the migration again to see if anything changed. In $SPLUNK_HOME/var/log/splunk/mongod.log I saw an error (with a couple of lines before and after for context):
2022-01-03T01:10:30.062Z I ACCESS [conn1] Successfully authenticated as principal __system on local from client 127.0.0.1:58652
2022-01-03T01:10:30.062Z I NETWORK [conn1] end connection 127.0.0.1:58652 (0 connections now open)
mongodump fatal error: unrecognized DWARF version in .debug_info at 6
mongodump runtime stack:
mongodump panic during panic
mongodump runtime stack:
mongodump stack trace unavailable
2022-01-03T01:10:31.095Z I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2022-01-03T01:10:31.095Z I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
...
It looks like someone else has hit this error before: https://community.splunk.com/t5/Knowledge-Management/KV-Store-backup-migration-fail/m-p/573870#M8570 . I see the same errors in splunkd.log as listed in that post.
Answering your other questions:
- I don't use SSL/TLS certs except for web access; those are valid custom certs stored in $SPLUNK_HOME/splunkweb-certs/ .
- $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/splunk.key has permissions 600.
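A side note on the permission question: as far as I know, mongod only rejects a key file that is group- or world-readable, and both 400 and 600 satisfy that, so 600 by itself is unlikely to be the culprit. Still, here is a quick way to check and tighten the mode, demonstrated on a stand-in file rather than the real key:

```shell
# Demonstrated on a stand-in file; in practice point KEYFILE at the real
# $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/splunk.key
KEYFILE=./splunk.key
touch "$KEYFILE"
chmod 600 "$KEYFILE"          # the state reported above
stat -c '%a' "$KEYFILE"       # prints 600
chmod 400 "$KEYFILE"          # tighten to owner read-only
stat -c '%a' "$KEYFILE"       # prints 400
```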
Same here.
$SPLUNK_HOME/var/log/splunk/mongod.log shows the following:
mongodump linux-vdso.so.1 errno 13
mongodump fatal error: linux-vdso.so.1
mongodump runtime stack:
mongodump linux-vdso.so.1 errno 13
mongodump panic during panic
mongodump runtime stack:
mongodump linux-vdso.so.1 errno 13
mongodump stack trace unavailable
I also have the exact same error.
I have 1 search head and 4 indexers, not clustered.
Search Head and Indexer #1 migrated fine. Indexers 2, 3, and 4 have this error.
I am also getting the same error.
I found this in splunkd.log:
ERROR KVStoreConfigurationProvider [63967 MainThread] - Failed to run mongodump, shutting down mongod