Splunk Enterprise

KVstore unable to start after upgrade to Splunk Enterprise 9.4

gloom
Loves-to-Learn

Hi,

After completing the upgrade from Splunk Enterprise 9.3.2 to 9.4, the KV store no longer starts. Splunk has not yet attempted the KV store upgrade to server version 7 because the KV store cannot start at all. We were already on wiredTiger with KV store server version 4.2.

There is no [kvstore] stanza in server.conf, so everything should be at defaults.
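To confirm what the KV store is actually running with, something like the following should list the effective (default) kvstore settings and the files they come from:

splunk btool server list kvstore --debug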

The relevant lines from splunkd.log are:

 

 

INFO  KVStoreConfigurationProvider [9192 MainThread] - Since x509 is not enabled - using a default config from [sslConfig] for Mongod mTLS authentication
WARN  KVStoreConfigurationProvider [9192 MainThread] - Action scheduled, but event loop is not ready yet
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Starting mongod with executable name=mongod-4.2.exe version=kvstore version 4.2
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --dbpath C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo 
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --storageEngine wiredTiger
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using cacheSize=1.65GB
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --port 8191
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --timeStampFormat iso8601-utc
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --oplogSize 200
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --keyFile C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --setParameter enableLocalhostAuthBypass=0
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --setParameter oplogFetcherSteadyStateMaxFetcherRestarts=0
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --replSet 4EA2F2AF-2584-4BB0-A2C4-414E7CB68BC2
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --bind_ip=0.0.0.0 (all ipv4 addresses)
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCAFile C:\Program Files\Splunk\etc\auth\cacert.pem
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --tlsAllowConnectionsWithoutCertificates for version 4.2
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslMode requireSSL
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslAllowInvalidHostnames
WARN  KVStoreConfigurationProvider [9192 MainThread] - Action scheduled, but event loop is not ready yet
INFO  KVStoreConfigurationProvider [9192 MainThread] - "SAML cert db" registration with KVStore successful
INFO  KVStoreConfigurationProvider [9192 MainThread] - "Auth cert db" registration with KVStore successful
INFO  KVStoreConfigurationProvider [9192 MainThread] - "JsonWebToken Manager" registration with KVStore successful
INFO  KVStoreBackupRestore [1436 KVStoreBackupThread] - thread started.
INFO  KVStoreConfigurationProvider [9192 MainThread] - "Certificate Manager" registration with KVStore successful
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Found an existing PFX certificate
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCertificateSelector subject=SplunkServerDefaultCert
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslAllowInvalidCertificates
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --tlsDisabledProtocols noTLS1_0,noTLS1_1
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCipherConfig ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256
INFO  MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --noscripting
WARN  MongoClient [7668 KVStoreConfigurationThread] - Disabling TLS hostname validation for localhost
ERROR MongodRunner [5692 MongodLogThread] - mongod exited abnormally (exit code 14, status: exited with code 14) - look at mongod.log to investigate.
ERROR KVStoreBulletinBoardManager [5692 MongodLogThread] - KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.
WARN  KVStoreConfigurationProvider [5692 MongodLogThread] - Action scheduled, but event loop is not ready yet
ERROR KVStoreBulletinBoardManager [5692 MongodLogThread] - KV Store changed status to failed. KVStore process terminated..
ERROR KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Failed to start mongod on first attempt reason=KVStore service will not start because kvstore process terminated
ERROR KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Could not start mongo instance. Initialization failed.
ERROR KVStoreBulletinBoardManager [7668 KVStoreConfigurationThread] - Failed to start KV Store process. See mongod.log and splunkd.log for details.
INFO  KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Mongod service shutting down

 

 

mongod.log contains the following:

 

W  CONTROL  [main] Option: sslMode is deprecated. Please use tlsMode instead.
W  CONTROL  [main] Option: sslCAFile is deprecated. Please use tlsCAFile instead.
W  CONTROL  [main] Option: sslCipherConfig is deprecated. Please use tlsCipherConfig instead.
W  CONTROL  [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
W  CONTROL  [main] Option: sslAllowInvalidCertificates is deprecated. Please use tlsAllowInvalidCertificates instead.
W  CONTROL  [main] Option: sslCertificateSelector is deprecated. Please use tlsCertificateSelector instead.
W  CONTROL  [main] net.tls.tlsCipherConfig is deprecated. It will be removed in a future release.
W  NETWORK  [main] Mixing certs from the system certificate store and PEM files. This may produced unexpected results.
W  NETWORK  [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W  NETWORK  [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W  NETWORK  [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
W  ASIO     [main] No TransportLayer configured during NetworkInterface startup
W  NETWORK  [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W  ASIO     [main] No TransportLayer configured during NetworkInterface startup
W  NETWORK  [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
I  CONTROL  [initandlisten] MongoDB starting : pid=4640 port=8191 dbpath=C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo 64-bit host=[redacted]
I  CONTROL  [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
I  CONTROL  [initandlisten] db version v4.2.24
I  CONTROL  [initandlisten] git version: 5e4ec1d24431fcdd28b579a024c5c801b8cde4e2
I  CONTROL  [initandlisten] allocator: tcmalloc
I  CONTROL  [initandlisten] modules: enterprise 
I  CONTROL  [initandlisten] build environment:
I  CONTROL  [initandlisten]     distmod: windows-64
I  CONTROL  [initandlisten]     distarch: x86_64
I  CONTROL  [initandlisten]     target_arch: x86_64
I  CONTROL  [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, tls: { CAFile: "C:\Program Files\Splunk\etc\auth\cacert.pem", allowConnectionsWithoutCertificates: true, allowInvalidCertificates: true, allowInvalidHostnames: true, certificateSelector: "subject=SplunkServerDefaultCert", disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireTLS", tlsCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." } }, replication: { oplogSizeMB: 200, replSet: "4EA2F2AF-2584-4BB0-A2C4-414E7CB68BC2" }, security: { javascriptEnabled: false, keyFile: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo", engine: "wiredTiger", wiredTiger: { engineConfig: { cacheSizeGB: 1.65 } } }, systemLog: { timeStampFormat: "iso8601-utc" } }
W  NETWORK  [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W  NETWORK  [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
I  STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1689M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
W  STORAGE  [initandlisten] Failed to start up WiredTiger under any compatibility version.
F  STORAGE  [initandlisten] Reason: 129: Operation not supported
F  -        [initandlisten] Fatal Assertion 28595 at src\mongo\db\storage\wiredtiger\wiredtiger_kv_engine.cpp 928
F  -        [initandlisten] \n\n***aborting after fassert() failure\n\n

 

 Does anyone have any idea how to resolve this?

Thanks,


morganfw
Path Finder

In our test environment we downgraded back to 9.3.2, and the KV Store still would not start, with the same error message in the log file; perhaps the MongoDB data was corrupted.

As reported in the Splunk docs here: https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/MigrateKVstore the MongoDB server needs to be on version 4.2.x:

You must upgrade to server version 4.2.x before upgrading to Splunk Enterprise 9.4.x or higher. For instructions and information about updating to KV store server version 4.2.x in Splunk Enterprise versions 9.0.x through 9.3.x, see Migrate the KV store storage engine in the Splunk Enterprise 9.3.0 documentation.
 

So, to check this, it is strongly suggested to follow the Splunk guide: https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/MigrateKVstore
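As a quick pre-check before going to 9.4.x, something like this shows the current KV store server version and (in the verbose output) the storage engine; they should read 4.2.x and wiredTiger beforehand:

splunk show kvstore-status --verbose | grep -iE "serverversion|storageengine"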

After that, we stopped Splunk and issued

splunk clean kvstore --local

After restarting Splunk, everything was working again.
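In short, the recovery sequence on that test instance was roughly the following (note that clean kvstore --local wipes the local KV store data, so only do this if the data is expendable or backed up):

splunk stop
splunk clean kvstore --local
splunk start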

We upgraded again to 9.4.0; after a few seconds it started upgrading MongoDB from 4.2 to 7.0, stepping through 4.4, 5.0 and 6.0, with a message in the GUI that the KV store is updating. You need to wait until the update is finished.

After a few minutes MongoDB had been successfully updated, with a confirmation message in the Splunk GUI:

[screenshot: KV store upgrade completed message]

and the Splunk version:

[screenshot: Splunk version after the upgrade]

It is strongly suggested to tail $SPLUNK_HOME/var/log/splunk/mongod_upgrade.log and not to operate on Splunk until the update is finished.
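For example:

tail -f $SPLUNK_HOME/var/log/splunk/mongod_upgrade.log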

Below is the mongod_upgrade.log:

2025-01-28T08:36:25.567Z INFO [mongod_upgrade] Mongod Upgrader Logs
2025-01-28T08:36:25.568Z DEBUG [mongod_upgrade] mongod_upgrade arguments: Args { verbose: Verbosity { verbose: 1, quiet: 0, phantom: PhantomData<clap_verbosity::InfoLevel> }, uri: "mongodb://__system@127.0.0.1:8191/?replicaSet=B99AB2AA-EE93-405A-95CD-89EAC0FCA551&retryWrites=true&authSource=local&ssl=true&connectTimeoutMS=10000&socketTimeoutMS=300000&readPreference=nearest&readPreferenceTags=instance:B99AB2AA-EE93-405A-95CD-89EAC0FCA551&readPreferenceTags=all:all&tlsAllowInvalidCertificates=true", nodes: 1, local_uri: "mongodb://__system@127.0.0.1:8191/?w=majority&journal=true&retryWrites=true&authSource=local&ssl=true&connectTimeoutMS=10000&socketTimeoutMS=300000&tlsAllowInvalidCertificates=true&directConnection=true", backup_disabled: false, ld_linker: "lib/ld-2.26.so", data_directory: "/opt/splunk/var/lib/splunk/kvstore/mongo", backup_path: "/opt/splunk/var/lib/splunk/kvstore/mongo_backup", shadow_mount_dir: "C:\\mongo_shadow\\", backup_volume: "NOT_SPECIFIED", logpath: "/opt/splunk/var/log/splunk/mongod.log", keep_metadata: false, drop_metadata: false, pre_drop_metadata: false, keep_backups: true, metadata_database: "migration_metadata", metadata_collection: "migration_metadata", rsync_retries: 5, max_start_retries: 10, shutdown_mongod: true, rsync_path: "/opt/splunk/bin/rsync", keyfile_path: Some("/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key"), max_command_time_ms: 60000, time_duration_block_s: 1, polling_interval_ms: 100, polling_max_wait_ms: 26460000, polling_version_max_wait_ms: 4800000, max_retries: 4, health_check_max_retries: 60, use_ld: false, mongod_args: ["--dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo", "--storageEngine=wiredTiger", "--wiredTigerCacheSizeGB=1.050000", "--port=8191", "--timeStampFormat=iso8601-utc", "--oplogSize=200", "--keyFile=/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key", "--setParameter=enableLocalhostAuthBypass=0", "--setParameter=oplogFetcherSteadyStateMaxFetcherRestarts=0", "--replSet=B99AB2AA-EE93-405A-95CD-89EAC0FCA551", "--bind_ip=0.0.0.0", "--sslCAFile=/opt/splunk/etc/auth/cacert.pem", "--tlsAllowConnectionsWithoutCertificates", "--sslMode=requireSSL", "--sslAllowInvalidHostnames", "--sslPEMKeyFile=/opt/splunk/etc/auth/server.pem", "--sslPEMKeyPassword=password", "--tlsDisabledProtocols=noTLS1_0,noTLS1_1", "--sslCipherConfig=ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256", "--nounixsocket", "--noscripting"] }
2025-01-28T08:36:25.568Z INFO [mongod_upgrade] Executing Preflight Checks
2025-01-28T08:36:25.568Z DEBUG [mongod_upgrade] client_options_primary: ClientOptions { hosts: [Tcp { host: "127.0.0.1", port: Some(8191) }], app_name: None, compressors: None, connect_timeout: Some(10s), credential: Some(Credential("REDACTED")), direct_connection: None, driver_info: None, heartbeat_freq: None, load_balanced: None, local_threshold: None, max_idle_time: None, max_pool_size: None, min_pool_size: None, max_connecting: None, read_concern: None, repl_set_name: Some("B99AB2AA-EE93-405A-95CD-89EAC0FCA551"), retry_reads: None, retry_writes: Some(true), selection_criteria: Some(ReadPreference(Nearest { options: ReadPreferenceOptions { tag_sets: Some([{"instance": "B99AB2AA-EE93-405A-95CD-89EAC0FCA551"}, {"all": "all"}]), max_staleness: None, hedge: None } })), server_api: None, server_selection_timeout: None, default_database: None, tls: Some(Enabled(TlsOptions { allow_invalid_certificates: Some(true), ca_file_path: None, cert_key_file_path: None, allow_invalid_hostnames: None })), write_concern: None, srv_max_hosts: None }
2025-01-28T08:36:25.587Z INFO [mongod_upgrade] Checking intial FCV
2025-01-28T08:36:25.587Z INFO [mongod_upgrade] Feature Compatibility Version is: 4.2
2025-01-28T08:36:25.587Z DEBUG [mongod_upgrade] Hostname set to "127.0.0.1:8191"
2025-01-28T08:36:25.588Z INFO [mongod_upgrade] Preflight completed successfully
2025-01-28T08:36:25.589Z INFO [mongod_upgrade] Executing backup before upgrade
2025-01-28T08:36:25.613Z DEBUG [mongod_upgrade] Backup update doc: Document({"$set": Document({"backup.location": String("/opt/splunk/var/lib/splunk/kvstore/mongo_backup"), "backup.start": DateTime(2025-01-28 8:36:25.613 +00:00:00), "backup.phase1_start": DateTime(2025-01-28 8:36:25.613 +00:00:00)})})
2025-01-28T08:36:25.662Z DEBUG [mongod_upgrade] Backup update result: UpdateResult { matched_count: 0, modified_count: 0, upserted_id: Some(String("BACKUP_127.0.0.1:8191")) }
2025-01-28T08:36:26.426Z DEBUG [mongod_upgrade] Rsync returned successfully.
2025-01-28T08:36:26.426Z DEBUG [mongod_upgrade] Backup update doc: Document({"$set": Document({"backup.phase1_end": DateTime(2025-01-28 8:36:26.426 +00:00:00), "backup.phase2_start": DateTime(2025-01-28 8:36:26.426 +00:00:00)})})
2025-01-28T08:36:26.428Z DEBUG [mongod_upgrade] Backup update result: UpdateResult { matched_count: 1, modified_count: 1, upserted_id: None }
2025-01-28T08:36:26.429Z DEBUG [mongod_upgrade::conditions] "phase1" complete count: 1
2025-01-28T08:36:26.433Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(61), "optime": Document({"ts": Timestamp { time: 1738053386, increment: 1 }, "t": Int64(2)}), "optimeDate": DateTime(2025-01-28 8:36:26.0 +00:00:00), "syncingTo": String(""), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("could not find member to sync from"), "electionTime": Timestamp { time: 1738053327, increment: 1 }, "electionDate": DateTime(2025-01-28 8:35:27.0 +00:00:00), "configVersion": Int32(1), "self": Boolean(true), "lastHeartbeatMessage": String("")})
2025-01-28T08:36:26.434Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191"
2025-01-28T08:36:26.434Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191"
2025-01-28T08:36:26.434Z INFO [mongod_upgrade] Node identified as Primary, issuing fsyncLock and pausing writes to node
2025-01-28T08:36:26.578Z INFO [mongod_upgrade] Document({"info": String("now locked against writes, use db.fsyncUnlock() to unlock"), "lockCount": Int64(1), "seeAlso": String("http://dochub.mongodb.org/core/fsynccommand"), "ok": Double(1.0), "$clusterTime": Document({"clusterTime": Timestamp { time: 1738053386, increment: 1 }, "signature": Document({"hash": Binary { subtype: Generic, bytes: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] }, "keyId": Int64(0)})}), "operationTime": Timestamp { time: 1738053386, increment: 1 }})
2025-01-28T08:36:26.578Z INFO [mongod_upgrade] Waiting for replication lag to be 0 on all secondary nodes
2025-01-28T08:36:27.759Z INFO [mongod_upgrade] unpausing writes to node
2025-01-28T08:36:27.764Z INFO [mongod_upgrade] Document({"info": String("fsyncUnlock completed"), "lockCount": Int64(0), "ok": Double(1.0), "$clusterTime": Document({"clusterTime": Timestamp { time: 1738053386, increment: 1 }, "signature": Document({"hash": Binary { subtype: Generic, bytes: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] }, "keyId": Int64(0)})}), "operationTime": Timestamp { time: 1738053386, increment: 1 }})
2025-01-28T08:36:27.764Z DEBUG [mongod_upgrade] Second rsync returned successfully.
2025-01-28T08:36:27.764Z DEBUG [mongod_upgrade] Backup update doc: Document({"$set": Document({"backup.phase2_end": DateTime(2025-01-28 8:36:27.764 +00:00:00), "backup.end": DateTime(2025-01-28 8:36:27.764 +00:00:00)})})
2025-01-28T08:36:27.765Z DEBUG [mongod_upgrade] Backup update result: UpdateResult { matched_count: 1, modified_count: 1, upserted_id: None }
2025-01-28T08:36:27.766Z DEBUG [mongod_upgrade::conditions] "phase2" complete count: 1
2025-01-28T08:36:27.766Z INFO [mongod_upgrade] Backup completed successfully
2025-01-28T08:36:27.766Z INFO [mongod_upgrade] Starting rolling update
2025-01-28T08:36:27.785Z INFO [mongod_upgrade::commands] Init results: InsertOneResult { inserted_id: String("127.0.0.1:8191") }
2025-01-28T08:36:27.785Z INFO [mongod_upgrade] Waiting for initialization
2025-01-28T08:36:27.786Z DEBUG [mongod_upgrade::conditions] Init count: 1
2025-01-28T08:36:27.786Z INFO [mongod_upgrade] All initialized
2025-01-28T08:36:27.786Z INFO [mongod_upgrade] Upgrading to 4.4
2025-01-28T08:36:27.787Z INFO [mongod_upgrade] Waiting if primary
2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(62), "optime": Document({"ts": Timestamp { time: 1738053387, increment: 2 }, "t": Int64(2)}), "optimeDate": DateTime(2025-01-28 8:36:27.0 +00:00:00), "syncingTo": String(""), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("could not find member to sync from"), "electionTime": Timestamp { time: 1738053327, increment: 1 }, "electionDate": DateTime(2025-01-28 8:35:27.0 +00:00:00), "configVersion": Int32(1), "self": Boolean(true), "lastHeartbeatMessage": String("")})
2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191"
2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191"
2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0
2025-01-28T08:36:27.788Z INFO [mongod_upgrade] Getting lock
2025-01-28T08:36:27.789Z DEBUG [mongod_upgrade::conditions] Upserting lock
2025-01-28T08:36:27.790Z INFO [mongod_upgrade::conditions] locked
2025-01-28T08:36:27.790Z INFO [mongod_upgrade] Got lock: true
2025-01-28T08:36:27.790Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 4.4
2025-01-28T08:36:27.790Z INFO [mongod_upgrade::commands] In update for 4.4
2025-01-28T08:36:29.202Z INFO [mongod_upgrade::commands] Shutting down the database
2025-01-28T08:36:29.736Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(8), source: None }
2025-01-28T08:36:35.754Z INFO [mongod_upgrade::commands] Checking if mongod is online
2025-01-28T08:37:05.755Z INFO [mongod_upgrade::commands] mongod is offline
2025-01-28T08:37:05.755Z INFO [mongod_upgrade::commands] Shutdown output: Document({})
2025-01-28T08:37:07.813Z INFO [mongod_upgrade::commands] UPGRADE_TO_4.4_SUCCESSFUL
2025-01-28T08:37:09.813Z INFO [mongod_upgrade::commands] Attempting to update status
2025-01-28T08:37:09.817Z INFO [mongod_upgrade::commands] Status updated successfully
2025-01-28T08:37:09.823Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade
2025-01-28T08:37:09.824Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1
2025-01-28T08:37:09.824Z INFO [mongod_upgrade] All upgraded to 4.4, proceeding.
2025-01-28T08:37:09.824Z INFO [mongod_upgrade] Setting new FCV Version: 4.4
2025-01-28T08:37:09.838Z INFO [mongod_upgrade] FCV change successful: ()
2025-01-28T08:37:24.838Z INFO [mongod_upgrade] Upgrading to 5.0
2025-01-28T08:37:24.840Z INFO [mongod_upgrade] Waiting if primary
2025-01-28T08:37:24.841Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(18), "optime": Document({"ts": Timestamp { time: 1738053429, increment: 4 }, "t": Int64(3)}), "optimeDate": DateTime(2025-01-28 8:37:09.0 +00:00:00), "lastAppliedWallTime": DateTime(2025-01-28 8:37:09.834 +00:00:00), "lastDurableWallTime": DateTime(2025-01-28 8:37:09.834 +00:00:00), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("Could not find member to sync from"), "electionTime": Timestamp { time: 1738053427, increment: 1 }, "electionDate": DateTime(2025-01-28 8:37:07.0 +00:00:00), "configVersion": Int32(2), "configTerm": Int32(3), "self": Boolean(true), "lastHeartbeatMessage": String("")})
2025-01-28T08:37:24.841Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191"
2025-01-28T08:37:24.841Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191"
2025-01-28T08:37:24.842Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0
2025-01-28T08:37:24.842Z INFO [mongod_upgrade] Getting lock
2025-01-28T08:37:24.842Z DEBUG [mongod_upgrade::conditions] Upserting lock
2025-01-28T08:37:24.843Z INFO [mongod_upgrade::conditions] locked
2025-01-28T08:37:24.843Z INFO [mongod_upgrade] Got lock: true
2025-01-28T08:37:24.843Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 5.0
2025-01-28T08:37:24.843Z INFO [mongod_upgrade::commands] In update for 5.0
2025-01-28T08:37:26.825Z INFO [mongod_upgrade::commands] Shutting down the database
2025-01-28T08:37:27.994Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(9), source: None }
2025-01-28T08:37:34.004Z INFO [mongod_upgrade::commands] Checking if mongod is online
2025-01-28T08:38:04.006Z INFO [mongod_upgrade::commands] mongod is offline
2025-01-28T08:38:04.006Z INFO [mongod_upgrade::commands] Shutdown output: Document({})
2025-01-28T08:38:06.710Z INFO [mongod_upgrade::commands] UPGRADE_TO_5.0_SUCCESSFUL
2025-01-28T08:38:08.710Z INFO [mongod_upgrade::commands] Attempting to update status
2025-01-28T08:38:08.717Z INFO [mongod_upgrade::commands] Status updated successfully
2025-01-28T08:38:08.725Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade
2025-01-28T08:38:08.732Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1
2025-01-28T08:38:08.732Z INFO [mongod_upgrade] All upgraded to 5.0, proceeding.
2025-01-28T08:38:08.732Z INFO [mongod_upgrade] Setting new FCV Version: 5.0
2025-01-28T08:38:08.748Z INFO [mongod_upgrade] FCV change successful: ()
2025-01-28T08:38:23.748Z INFO [mongod_upgrade] Upgrading to 6.0
2025-01-28T08:38:23.751Z INFO [mongod_upgrade] Waiting if primary
2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(19), "optime": Document({"ts": Timestamp { time: 1738053488, increment: 5 }, "t": Int64(4)}), "optimeDate": DateTime(2025-01-28 8:38:08.0 +00:00:00), "lastAppliedWallTime": DateTime(2025-01-28 8:38:08.745 +00:00:00), "lastDurableWallTime": DateTime(2025-01-28 8:38:08.745 +00:00:00), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("Could not find member to sync from"), "electionTime": Timestamp { time: 1738053486, increment: 1 }, "electionDate": DateTime(2025-01-28 8:38:06.0 +00:00:00), "configVersion": Int32(3), "configTerm": Int32(4), "self": Boolean(true), "lastHeartbeatMessage": String("")})
2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191"
2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191"
2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0
2025-01-28T08:38:23.752Z INFO [mongod_upgrade] Getting lock
2025-01-28T08:38:23.753Z DEBUG [mongod_upgrade::conditions] Upserting lock
2025-01-28T08:38:23.757Z INFO [mongod_upgrade::conditions] locked
2025-01-28T08:38:23.757Z INFO [mongod_upgrade] Got lock: true
2025-01-28T08:38:23.758Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 6.0
2025-01-28T08:38:23.758Z INFO [mongod_upgrade::commands] In update for 6.0
2025-01-28T08:38:25.544Z INFO [mongod_upgrade::commands] Shutting down the database
2025-01-28T08:38:25.845Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(13), source: None }
2025-01-28T08:38:31.854Z INFO [mongod_upgrade::commands] Checking if mongod is online
2025-01-28T08:39:01.856Z INFO [mongod_upgrade::commands] mongod is offline
2025-01-28T08:39:01.856Z INFO [mongod_upgrade::commands] Shutdown output: Document({})
2025-01-28T08:39:03.281Z INFO [mongod_upgrade::commands] UPGRADE_TO_6.0_SUCCESSFUL
2025-01-28T08:39:05.281Z INFO [mongod_upgrade::commands] Attempting to update status
2025-01-28T08:39:05.285Z INFO [mongod_upgrade::commands] Status updated successfully
2025-01-28T08:39:05.297Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade
2025-01-28T08:39:05.299Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1
2025-01-28T08:39:05.299Z INFO [mongod_upgrade] All upgraded to 6.0, proceeding.
2025-01-28T08:39:05.299Z INFO [mongod_upgrade] Setting new FCV Version: 6.0
2025-01-28T08:39:05.460Z INFO [mongod_upgrade] FCV change successful: ()
2025-01-28T08:39:20.460Z INFO [mongod_upgrade] Upgrading to 7.0
2025-01-28T08:39:20.462Z INFO [mongod_upgrade] Waiting if primary
2025-01-28T08:39:20.464Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(19), "optime": Document({"ts": Timestamp { time: 1738053545, increment: 10 }, "t": Int64(5)}), "optimeDate": DateTime(2025-01-28 8:39:05.0 +00:00:00), "lastAppliedWallTime": DateTime(2025-01-28 8:39:05.451 +00:00:00), "lastDurableWallTime": DateTime(2025-01-28 8:39:05.451 +00:00:00), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("Could not find member to sync from"), "electionTime": Timestamp { time: 1738053543, increment: 1 }, "electionDate": DateTime(2025-01-28 8:39:03.0 +00:00:00), "configVersion": Int32(3), "configTerm": Int32(5), "self": Boolean(true), "lastHeartbeatMessage": String("")})
2025-01-28T08:39:20.464Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191"
2025-01-28T08:39:20.464Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191"
2025-01-28T08:39:20.465Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0
2025-01-28T08:39:20.465Z INFO [mongod_upgrade] Getting lock
2025-01-28T08:39:20.466Z DEBUG [mongod_upgrade::conditions] Upserting lock
2025-01-28T08:39:20.469Z INFO [mongod_upgrade::conditions] locked
2025-01-28T08:39:20.469Z INFO [mongod_upgrade] Got lock: true
2025-01-28T08:39:20.470Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 7.0
2025-01-28T08:39:20.470Z INFO [mongod_upgrade::commands] In update for 7.0
2025-01-28T08:39:21.724Z INFO [mongod_upgrade::commands] Shutting down the database
2025-01-28T08:39:22.519Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(17), source: None }
2025-01-28T08:39:28.529Z INFO [mongod_upgrade::commands] Checking if mongod is online
2025-01-28T08:39:58.531Z INFO [mongod_upgrade::commands] mongod is offline
2025-01-28T08:39:58.531Z INFO [mongod_upgrade::commands] Shutdown output: Document({})
2025-01-28T08:40:00.234Z INFO [mongod_upgrade::commands] UPGRADE_TO_7.0_SUCCESSFUL
2025-01-28T08:40:02.234Z INFO [mongod_upgrade::commands] Attempting to update status
2025-01-28T08:40:02.240Z INFO [mongod_upgrade::commands] Status updated successfully
2025-01-28T08:40:02.253Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade
2025-01-28T08:40:02.258Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1
2025-01-28T08:40:02.258Z INFO [mongod_upgrade] All upgraded to 7.0, proceeding.
2025-01-28T08:40:02.259Z INFO [mongod_upgrade] Setting new FCV Version: 7.0
2025-01-28T08:40:02.281Z INFO [mongod_upgrade] FCV change successful: ()
2025-01-28T08:40:17.281Z INFO [mongod_upgrade] Upgrades completed
2025-01-28T08:40:17.287Z INFO [mongod_upgrade] Waiting for completion
2025-01-28T08:40:17.289Z DEBUG [mongod_upgrade::conditions] Completed count: 1
2025-01-28T08:40:17.289Z DEBUG [mongod_upgrade::conditions] Hostname count: 1
2025-01-28T08:40:17.289Z INFO [mongod_upgrade] All completed
2025-01-28T08:40:17.394Z INFO [mongod_upgrade] Dropped migration metadata database.
2025-01-28T08:40:17.403Z INFO [mongod_upgrade::commands] Shutting down the database
2025-01-28T08:40:18.363Z WARN [mongod_upgrade] mongod failed to shut down before exiting: Kind: I/O error: unexpected end of file, labels: {}

And below, the correct MongoDB version:

splunk show kvstore-status --verbose | grep -i serverversion
serverVersion : 7.0.14

Since ours was a test server, and apart from the default Splunk server nothing we use relies on the KV Store, we didn't restore the backed-up KV Store.

Splunk backs up the KV Store automatically when upgrading the version, but it is recommended to take a backup manually as well, so you can restore the KV Store in case something goes wrong.
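For example, on a standalone instance a manual backup can be taken before the upgrade with something like:

splunk backup kvstore -archiveName pre94_upgrade   # "pre94_upgrade" is just an example archive name

By default the archive should end up under $SPLUNK_HOME/var/lib/splunk/kvstorebackup, unless a different backup path is configured.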

Furthermore, it is never recommended to upgrade Splunk to a .0 release; wait at least for a later maintenance release, for example a .3 release (e.g. 9.4.3), because many of the bugs that are often present in the .0 release will have been fixed.

isoutamo
SplunkTrust
SplunkTrust

Exactly that way.

Never use any x.0.0 version, and avoid x.y.0 as well if you can! If you cannot, then first test everything you have and need in a test environment and fix whatever you find.

Then take a full backup of all nodes, KV stores, etc. Prefer an offline backup if possible.

Be sure that you have a rollback plan and resources in place for when something goes wrong and you cannot use your new version.

Check in the logs that the update has finished before starting any components, and make sure the logs don't contain any errors; if they do, those must be fixed before you start.
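For example, something like this over the upgrade and mongod logs before bringing components back up:

grep -iE "error|fatal|fassert" $SPLUNK_HOME/var/log/splunk/mongod_upgrade.log $SPLUNK_HOME/var/log/splunk/mongod.log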

Also join the Splunk community Slack and look at what others have already found and whether there are fixes for those issues. Here is a direct link to the splunk_9_upgrade_issues channel: https://splunk-usergroups.slack.com/archives/C03M9ENE6AD


morganfw
Path Finder

Same issue here on a VM with Ubuntu 22.04 LTS: after upgrading from Splunk 9.3.2 to Splunk 9.4.0, mongod fails to start.

The "avx" and "avx2" vCPU flags are present:

$ lscpu | grep -i avx
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap xsaveopt arat md_clear flush_l1d arch_capabilities

Below are the latest mongod_upgrade.log lines:

2025-01-27T08:38:58.162Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191"
2025-01-27T08:38:58.162Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191"
2025-01-27T08:38:58.163Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0
2025-01-27T08:38:58.163Z INFO [mongod_upgrade] Getting lock
2025-01-27T08:38:58.163Z DEBUG [mongod_upgrade::conditions] Upserting lock
2025-01-27T08:38:58.164Z INFO [mongod_upgrade::conditions] locked
2025-01-27T08:38:58.164Z INFO [mongod_upgrade] Got lock: true
2025-01-27T08:38:58.164Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 4.4
2025-01-27T08:38:58.164Z INFO [mongod_upgrade::commands] In update for 4.4
2025-01-27T08:38:59.046Z INFO [mongod_upgrade::commands] Shutting down the database
2025-01-27T08:38:59.822Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(8), source: None }
2025-01-27T08:39:05.832Z INFO [mongod_upgrade::commands] Checking if mongod is online
2025-01-27T08:39:35.834Z INFO [mongod_upgrade::commands] mongod is offline
2025-01-27T08:39:35.834Z INFO [mongod_upgrade::commands] Shutdown output: Document({})
2025-01-27T08:39:37.263Z INFO [mongod_upgrade::commands] UPGRADE_TO_4.4_SUCCESSFUL
2025-01-27T08:39:39.263Z INFO [mongod_upgrade::commands] Attempting to update status
2025-01-27T08:39:39.265Z INFO [mongod_upgrade::commands] Status updated successfully
2025-01-27T08:39:39.271Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade
2025-01-27T08:39:39.272Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1
2025-01-27T08:39:39.272Z INFO [mongod_upgrade] All upgraded to 4.4, proceeding.
2025-01-27T08:39:39.272Z INFO [mongod_upgrade] Setting new FCV Version: 4.4
2025-01-27T08:39:39.284Z INFO [mongod_upgrade] FCV change successful: ()
2025-01-27T08:39:54.284Z INFO [mongod_upgrade] Upgrading to 5.0
2025-01-27T08:39:54.286Z INFO [mongod_upgrade] Waiting if primary
2025-01-27T08:39:54.287Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(19), "optime": Document({"ts": Timestamp { time: 1737967179, increment: 4 }, "t": Int64(58)}), "optimeDate": DateTime(2025-01-27 8:39:39.0 +00:00:00), "lastAppliedWallTime": DateTime(2025-01-27 8:39:39.278 +00:00:00), "lastDurableWallTime": DateTime(2025-01-27 8:39:39.278 +00:00:00), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("Could not find member to sync from"), "electionTime": Timestamp { time: 1737967177, increment: 1 }, "electionDate": DateTime(2025-01-27 8:39:37.0 +00:00:00), "configVersion": Int32(2), "configTerm": Int32(58), "self": Boolean(true), "lastHeartbeatMessage": String("")})
2025-01-27T08:39:54.287Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191"
2025-01-27T08:39:54.287Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191"
2025-01-27T08:39:54.287Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0
2025-01-27T08:39:54.287Z INFO [mongod_upgrade] Getting lock
2025-01-27T08:39:54.288Z DEBUG [mongod_upgrade::conditions] Upserting lock
2025-01-27T08:39:54.288Z INFO [mongod_upgrade::conditions] locked
2025-01-27T08:39:54.288Z INFO [mongod_upgrade] Got lock: true
2025-01-27T08:39:54.289Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 5.0
2025-01-27T08:39:54.289Z INFO [mongod_upgrade::commands] In update for 5.0
2025-01-27T08:39:54.555Z INFO [mongod_upgrade::commands] Shutting down the database
2025-01-27T08:39:55.409Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(9), source: None }

And below, the mongod.log error after a Splunk start/restart:

2025-01-27T15:56:52.142Z I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2025-01-27T15:56:52.142Z I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2025-01-27T15:56:52.142Z I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1075M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2025-01-27T15:56:53.003Z E STORAGE [initandlisten] WiredTiger error (-31802) [1737993413:3955][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1737993413:3955][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
2025-01-27T15:56:53.027Z E STORAGE [initandlisten] WiredTiger error (-31802) [1737993413:27009][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1737993413:27009][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
2025-01-27T15:56:53.049Z E STORAGE [initandlisten] WiredTiger error (-31802) [1737993413:49077][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1737993413:49077][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
2025-01-27T15:56:53.080Z E STORAGE [initandlisten] WiredTiger error (-31802) [1737993413:80951][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1737993413:80951][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
2025-01-27T15:56:53.096Z E STORAGE [initandlisten] WiredTiger error (-31802) [1737993413:96579][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1737993413:96579][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
2025-01-27T15:56:53.103Z W STORAGE [initandlisten] Failed to start up WiredTiger under any compatibility version.
2025-01-27T15:56:53.103Z F STORAGE [initandlisten] Reason: -31802: WT_ERROR: non-specific WiredTiger error
2025-01-27T15:56:53.103Z F - [initandlisten] Fatal Assertion 28595 at src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 928
2025-01-27T15:56:53.103Z F - [initandlisten] \n\n***aborting after fassert() failure\n\n

It seems it got stuck on the MongoDB 5.0 upgrade step after the Splunk 9.4.0 upgrade.

Any suggestions on how to resolve the issue?

Regards.



apietersen
Contributor

Hi,

We have the same issue here. Upgraded from Splunk Enterprise v9.3.2 to v9.4.0, running Windows Server 2019. The KV store process not running also affects Splunk Secure Gateway (SSG/Splunk Mobile), Dashboard Studio (and, I think, Edge Hub etc.). 😞
Yes, I looked in mongod.log and splunkd.log but am not a bit wiser!


See below some lines from my mongod.log; these two in particular puzzle me:

targetMinOS: Windows 7/Windows Server 2008 R2 - ???
this build only supports versions up to 4, and the file is version 5: - ??

 

 2025-01-15T14:45:22.046Z I  CONTROL  [initandlisten] MongoDB starting : pid=2224 port=8191 dbpath=D:\Program Files\Splunk\var\lib\splunk\kvstore\mongo 64-bit host=Gozer2
 2025-01-15T14:45:22.046Z I  CONTROL  [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
 2025-01-15T14:45:22.046Z I  CONTROL  [initandlisten] db version v4.2.24
 2025-01-15T14:45:22.046Z I  CONTROL  [initandlisten] git version: 5e4ec1d24431fcdd28b579a024c5c801b8cde4e2
 2025-01-15T14:45:22.046Z I  CONTROL  [initandlisten] allocator: tcmalloc
 2025-01-15T14:45:22.046Z I  CONTROL  [initandlisten] modules: enterprise 
 2025-01-15T14:45:22.046Z I  CONTROL  [initandlisten] build environment:
 2025-01-15T14:45:22.046Z I  CONTROL  [initandlisten]     distmod: windows-64
 2025-01-15T14:45:22.046Z I  CONTROL  [initandlisten]     distarch: x86_64
 2025-01-15T14:45:22.046Z I  CONTROL  [initandlisten]     target_arch: x86_64
 2025-01-15T14:45:22.047Z I  CONTROL  [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, tls: { CAFile: "D:\Program Files\Splunk\etc\auth\cacert.pem", allowConnectionsWithoutCertificates: true, allowInvalidCertificates: true, allowInvalidHostnames: true, certificateSelector: "subject=SplunkServerDefaultCert", disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireTLS", tlsCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." } }, replication: { oplogSizeMB: 200, replSet: "102D93C2-E5B9-4347-88CA-59FB829D92E1" }, security: { javascriptEnabled: false, keyFile: "D:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "D:\Program Files\Splunk\var\lib\splunk\kvstore\mongo", engine: "wiredTiger", wiredTiger: { engineConfig: { cacheSizeGB: 4.65 } } }, systemLog: { timeStampFormat: "iso8601-utc" } }
 2025-01-15T14:45:22.048Z W  NETWORK  [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
 2025-01-15T14:45:22.048Z W  NETWORK  [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
 2025-01-15T14:45:22.049Z I  STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4761M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
 2025-01-15T14:45:22.083Z E  STORAGE  [initandlisten] WiredTiger error (-31802) [1736952322:82769][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1736952322:82769][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
 2025-01-15T14:45:22.100Z E  STORAGE  [initandlisten] WiredTiger error (-31802) [1736952322:100690][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1736952322:100690][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
 2025-01-15T14:45:22.116Z E  STORAGE  [initandlisten] WiredTiger error (-31802) [1736952322:115624][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1736952322:115624][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
 2025-01-15T14:45:22.150Z E  STORAGE  [initandlisten] WiredTiger error (-31802) [1736952322:149476][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1736952322:149476][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
 2025-01-15T14:45:22.175Z E  STORAGE  [initandlisten] WiredTiger error (-31802) [1736952322:175362][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1736952322:175362][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
 2025-01-15T14:45:22.179Z W  STORAGE  [initandlisten] Failed to start up WiredTiger under any compatibility version.
 2025-01-15T14:45:22.179Z F  STORAGE  [initandlisten] Reason: -31802: WT_ERROR: non-specific WiredTiger error
 2025-01-15T14:45:22.179Z F  -        [initandlisten] Fatal Assertion 28595 at src\mongo\db\storage\wiredtiger\wiredtiger_kv_engine.cpp 928
 2025-01-15T14:45:22.179Z F  -        [initandlisten] \n\n***aborting after fassert() failure\n\n

 

Some lines from my splunkd.log:

01-15-2025 15:57:57.139 +0100 INFO  TailReader [7248 tailreader0] - Batch input finished reading file='D:\Program Files\Splunk\var\spool\splunk\tracker.log'
01-15-2025 15:57:57.467 +0100 ERROR KVStorageProvider [5552 TcpChannelThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:57:57.467 +0100 ERROR KVStoreAdminHandler [5552 TcpChannelThread] - An error occurred.
01-15-2025 15:58:03.592 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:10.645 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:17.723 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:24.745 +0100 WARN  ExecProcessor [10156 ExecProcessor] - message from ""D:\Program Files\Splunk\bin\splunk-regmon.exe""  BundlesUtil - D:\Program Files\Splunk\etc\system\metadata\local.meta already exists but with different casing: D:\Program Files\splunk\etc\system\metadata\local.meta
01-15-2025 15:58:24.792 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:27.307 +0100 INFO  TailReader [7248 tailreader0] - Batch input finished reading file='D:\Program Files\Splunk\var\spool\splunk\tracker.log'
01-15-2025 15:58:31.865 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:38.929 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:46.000 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:53.049 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:56.617 +0100 INFO  TailReader [7248 tailreader0] - Batch input finished reading file='D:\Program Files\Splunk\var\spool\splunk\tracker.log'
01-15-2025 15:59:00.117 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:59:01.460 +0100 ERROR KVStorageProvider [5608 TcpChannelThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:59:01.460 +0100 ERROR KVStoreAdminHandler [5608 TcpChannelThread] - An error occurred.

MaverickT
Communicator

gloom
Loves-to-Learn

This seems confusing, as Splunk hasn't attempted the MongoDB upgrade yet; if that were the cause, I would expect it to fail after the upgrade, not before?

 

Edit: I ran HWiNFO on the box; it shows AVX, AVX2 and AVX-512 as supported, so I don't think this is the issue.
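For what it's worth, the same check can be done from the command line if the Sysinternals Coreinfo tool is on the box:

coreinfo | findstr /i avx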


isoutamo
SplunkTrust
SplunkTrust

Hi

Based on the log, you are running an unsupported OS.

 

 CONTROL  [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2

 

On Windows operating systems, the oldest supported versions are Windows Server 2019 and Windows 10.
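To double-check which Windows version the instance is actually running on, something like:

systeminfo | findstr /B /C:"OS Name" /C:"OS Version"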

r. Ismo 


gloom
Loves-to-Learn

That's incorrect; it's a Server 2022 box.
