All Posts


In our test environment we downgraded to 9.3.2 and the KV Store did not start, with the same error message in the log file; perhaps the MongoDB data was corrupted. As reported in the Splunk docs at https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/MigrateKVstore, the MongoDB server needs to be on version 4.2.x:

"You must upgrade to server version 4.2.x before upgrading to Splunk Enterprise 9.4.x or higher. For instructions and information about updating to KV store server version 4.2.x in Splunk Enterprise versions 9.0.x through 9.3.x, see Migrate the KV store storage engine in the Splunk Enterprise 9.3.0 documentation."

So, to check this, it is strongly suggested to follow the Splunk guide at https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/MigrateKVstore

After that, we stopped Splunk and issued:

splunk clean kvstore --local

After restarting Splunk, everything was back to working. We upgraded again to 9.4.0; after a few seconds it started to upgrade MongoDB from 4.2 to 7.0 through 4.4, 5.0 and 6.0, with a message in the GUI that the KV store is updating and that we need to wait until the update is finished. After a few minutes MongoDB had been successfully updated, with a success message in the Splunk GUI. It is strongly suggested to tail $SPLUNK_HOME/var/log/splunk/mongodb_upgrade.log and not to operate on Splunk until the update is finished.
below the mongodb_upgrade.log 2025-01-28T08:36:25.567Z INFO [mongod_upgrade] Mongod Upgrader Logs 2025-01-28T08:36:25.568Z DEBUG [mongod_upgrade] mongod_upgrade arguments: Args { verbose: Verbosity { verbose: 1, quiet: 0, phantom: PhantomData<clap_verbosity::InfoLevel> }, uri: "mongodb://__system@127.0.0.1:8191/?replicaSet=B99AB2AA-EE93-405A-95CD-89EAC0FCA551&retryWrites=true&authSource=local&ssl=true&connectTimeoutMS=10000&socketTimeoutMS=300000&readPreference=nearest&readPreferenceTags=instance:B99AB2AA-EE93-405A-95CD-89EAC0FCA551&readPreferenceTags=all:all&tlsAllowInvalidCertificates=true", nodes: 1, local_uri: "mongodb://__system@127.0.0.1:8191/?w=majority&journal=true&retryWrites=true&authSource=local&ssl=true&connectTimeoutMS=10000&socketTimeoutMS=300000&tlsAllowInvalidCertificates=true&directConnection=true", backup_disabled: false, ld_linker: "lib/ld-2.26.so", data_directory: "/opt/splunk/var/lib/splunk/kvstore/mongo", backup_path: "/opt/splunk/var/lib/splunk/kvstore/mongo_backup", shadow_mount_dir: "C:\\mongo_shadow\\", backup_volume: "NOT_SPECIFIED", logpath: "/opt/splunk/var/log/splunk/mongod.log", keep_metadata: false, drop_metadata: false, pre_drop_metadata: false, keep_backups: true, metadata_database: "migration_metadata", metadata_collection: "migration_metadata", rsync_retries: 5, max_start_retries: 10, shutdown_mongod: true, rsync_path: "/opt/splunk/bin/rsync", keyfile_path: Some("/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key"), max_command_time_ms: 60000, time_duration_block_s: 1, polling_interval_ms: 100, polling_max_wait_ms: 26460000, polling_version_max_wait_ms: 4800000, max_retries: 4, health_check_max_retries: 60, use_ld: false, mongod_args: ["--dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo", "--storageEngine=wiredTiger", "--wiredTigerCacheSizeGB=1.050000", "--port=8191", "--timeStampFormat=iso8601-utc", "--oplogSize=200", "--keyFile=/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key", "--setParameter=enableLocalhostAuthBypass=0", 
"--setParameter=oplogFetcherSteadyStateMaxFetcherRestarts=0", "--replSet=B99AB2AA-EE93-405A-95CD-89EAC0FCA551", "--bind_ip=0.0.0.0", "--sslCAFile=/opt/splunk/etc/auth/cacert.pem", "--tlsAllowConnectionsWithoutCertificates", "--sslMode=requireSSL", "--sslAllowInvalidHostnames", "--sslPEMKeyFile=/opt/splunk/etc/auth/server.pem", "--sslPEMKeyPassword=password", "--tlsDisabledProtocols=noTLS1_0,noTLS1_1", "--sslCipherConfig=ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256", "--nounixsocket", "--noscripting"] } 2025-01-28T08:36:25.568Z INFO [mongod_upgrade] Executing Preflight Checks 2025-01-28T08:36:25.568Z DEBUG [mongod_upgrade] client_options_primary: ClientOptions { hosts: [Tcp { host: "127.0.0.1", port: Some(8191) }], app_name: None, compressors: None, connect_timeout: Some(10s), credential: Some(Credential("REDACTED")), direct_connection: None, driver_info: None, heartbeat_freq: None, load_balanced: None, local_threshold: None, max_idle_time: None, max_pool_size: None, min_pool_size: None, max_connecting: None, read_concern: None, repl_set_name: Some("B99AB2AA-EE93-405A-95CD-89EAC0FCA551"), retry_reads: None, retry_writes: Some(true), selection_criteria: Some(ReadPreference(Nearest { options: ReadPreferenceOptions { tag_sets: Some([{"instance": "B99AB2AA-EE93-405A-95CD-89EAC0FCA551"}, {"all": "all"}]), max_staleness: None, hedge: None } })), server_api: None, server_selection_timeout: None, default_database: None, tls: Some(Enabled(TlsOptions { allow_invalid_certificates: Some(true), ca_file_path: None, cert_key_file_path: None, allow_invalid_hostnames: None })), write_concern: None, srv_max_hosts: None } 2025-01-28T08:36:25.587Z INFO [mongod_upgrade] Checking intial 
FCV 2025-01-28T08:36:25.587Z INFO [mongod_upgrade] Feature Compatibility Version is: 4.2 2025-01-28T08:36:25.587Z DEBUG [mongod_upgrade] Hostname set to "127.0.0.1:8191" 2025-01-28T08:36:25.588Z INFO [mongod_upgrade] Preflight completed successfully 2025-01-28T08:36:25.589Z INFO [mongod_upgrade] Executing backup before upgrade 2025-01-28T08:36:25.613Z DEBUG [mongod_upgrade] Backup update doc: Document({"$set": Document({"backup.location": String("/opt/splunk/var/lib/splunk/kvstore/mongo_backup"), "backup.start": DateTime(2025-01-28 8:36:25.613 +00:00:00), "backup.phase1_start": DateTime(2025-01-28 8:36:25.613 +00:00:00)})}) 2025-01-28T08:36:25.662Z DEBUG [mongod_upgrade] Backup update result: UpdateResult { matched_count: 0, modified_count: 0, upserted_id: Some(String("BACKUP_127.0.0.1:8191")) } 2025-01-28T08:36:26.426Z DEBUG [mongod_upgrade] Rsync returned successfully. 2025-01-28T08:36:26.426Z DEBUG [mongod_upgrade] Backup update doc: Document({"$set": Document({"backup.phase1_end": DateTime(2025-01-28 8:36:26.426 +00:00:00), "backup.phase2_start": DateTime(2025-01-28 8:36:26.426 +00:00:00)})}) 2025-01-28T08:36:26.428Z DEBUG [mongod_upgrade] Backup update result: UpdateResult { matched_count: 1, modified_count: 1, upserted_id: None } 2025-01-28T08:36:26.429Z DEBUG [mongod_upgrade::conditions] "phase1" complete count: 1 2025-01-28T08:36:26.433Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(61), "optime": Document({"ts": Timestamp { time: 1738053386, increment: 1 }, "t": Int64(2)}), "optimeDate": DateTime(2025-01-28 8:36:26.0 +00:00:00), "syncingTo": String(""), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("could not find member to sync from"), "electionTime": Timestamp { time: 1738053327, increment: 1 }, "electionDate": DateTime(2025-01-28 8:35:27.0 +00:00:00), 
"configVersion": Int32(1), "self": Boolean(true), "lastHeartbeatMessage": String("")}) 2025-01-28T08:36:26.434Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-28T08:36:26.434Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191" 2025-01-28T08:36:26.434Z INFO [mongod_upgrade] Node identified as Primary, issuing fsyncLock and pausing writes to node 2025-01-28T08:36:26.578Z INFO [mongod_upgrade] Document({"info": String("now locked against writes, use db.fsyncUnlock() to unlock"), "lockCount": Int64(1), "seeAlso": String("http://dochub.mongodb.org/core/fsynccommand"), "ok": Double(1.0), "$clusterTime": Document({"clusterTime": Timestamp { time: 1738053386, increment: 1 }, "signature": Document({"hash": Binary { subtype: Generic, bytes: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] }, "keyId": Int64(0)})}), "operationTime": Timestamp { time: 1738053386, increment: 1 }}) 2025-01-28T08:36:26.578Z INFO [mongod_upgrade] Waiting for replication lag to be 0 on all secondary nodes 2025-01-28T08:36:27.759Z INFO [mongod_upgrade] unpausing writes to node 2025-01-28T08:36:27.764Z INFO [mongod_upgrade] Document({"info": String("fsyncUnlock completed"), "lockCount": Int64(0), "ok": Double(1.0), "$clusterTime": Document({"clusterTime": Timestamp { time: 1738053386, increment: 1 }, "signature": Document({"hash": Binary { subtype: Generic, bytes: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] }, "keyId": Int64(0)})}), "operationTime": Timestamp { time: 1738053386, increment: 1 }}) 2025-01-28T08:36:27.764Z DEBUG [mongod_upgrade] Second rsync returned successfully. 
2025-01-28T08:36:27.764Z DEBUG [mongod_upgrade] Backup update doc: Document({"$set": Document({"backup.phase2_end": DateTime(2025-01-28 8:36:27.764 +00:00:00), "backup.end": DateTime(2025-01-28 8:36:27.764 +00:00:00)})}) 2025-01-28T08:36:27.765Z DEBUG [mongod_upgrade] Backup update result: UpdateResult { matched_count: 1, modified_count: 1, upserted_id: None } 2025-01-28T08:36:27.766Z DEBUG [mongod_upgrade::conditions] "phase2" complete count: 1 2025-01-28T08:36:27.766Z INFO [mongod_upgrade] Backup completed successfully 2025-01-28T08:36:27.766Z INFO [mongod_upgrade] Starting rolling update 2025-01-28T08:36:27.785Z INFO [mongod_upgrade::commands] Init results: InsertOneResult { inserted_id: String("127.0.0.1:8191") } 2025-01-28T08:36:27.785Z INFO [mongod_upgrade] Waiting for initialization 2025-01-28T08:36:27.786Z DEBUG [mongod_upgrade::conditions] Init count: 1 2025-01-28T08:36:27.786Z INFO [mongod_upgrade] All initialized 2025-01-28T08:36:27.786Z INFO [mongod_upgrade] Upgrading to 4.4 2025-01-28T08:36:27.787Z INFO [mongod_upgrade] Waiting if primary 2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(62), "optime": Document({"ts": Timestamp { time: 1738053387, increment: 2 }, "t": Int64(2)}), "optimeDate": DateTime(2025-01-28 8:36:27.0 +00:00:00), "syncingTo": String(""), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("could not find member to sync from"), "electionTime": Timestamp { time: 1738053327, increment: 1 }, "electionDate": DateTime(2025-01-28 8:35:27.0 +00:00:00), "configVersion": Int32(1), "self": Boolean(true), "lastHeartbeatMessage": String("")}) 2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Hostname from 
preflight: "127.0.0.1:8191" 2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0 2025-01-28T08:36:27.788Z INFO [mongod_upgrade] Getting lock 2025-01-28T08:36:27.789Z DEBUG [mongod_upgrade::conditions] Upserting lock 2025-01-28T08:36:27.790Z INFO [mongod_upgrade::conditions] locked 2025-01-28T08:36:27.790Z INFO [mongod_upgrade] Got lock: true 2025-01-28T08:36:27.790Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 4.4 2025-01-28T08:36:27.790Z INFO [mongod_upgrade::commands] In update for 4.4 2025-01-28T08:36:29.202Z INFO [mongod_upgrade::commands] Shutting down the database 2025-01-28T08:36:29.736Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(8), source: None } 2025-01-28T08:36:35.754Z INFO [mongod_upgrade::commands] Checking if mongod is online 2025-01-28T08:37:05.755Z INFO [mongod_upgrade::commands] mongod is offline 2025-01-28T08:37:05.755Z INFO [mongod_upgrade::commands] Shutdown output: Document({}) 2025-01-28T08:37:07.813Z INFO [mongod_upgrade::commands] UPGRADE_TO_4.4_SUCCESSFUL 2025-01-28T08:37:09.813Z INFO [mongod_upgrade::commands] Attempting to update status 2025-01-28T08:37:09.817Z INFO [mongod_upgrade::commands] Status updated successfully 2025-01-28T08:37:09.823Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade 2025-01-28T08:37:09.824Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1 2025-01-28T08:37:09.824Z INFO [mongod_upgrade] All upgraded to 4.4, proceeding. 
2025-01-28T08:37:09.824Z INFO [mongod_upgrade] Setting new FCV Version: 4.4 2025-01-28T08:37:09.838Z INFO [mongod_upgrade] FCV change successful: () 2025-01-28T08:37:24.838Z INFO [mongod_upgrade] Upgrading to 5.0 2025-01-28T08:37:24.840Z INFO [mongod_upgrade] Waiting if primary 2025-01-28T08:37:24.841Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(18), "optime": Document({"ts": Timestamp { time: 1738053429, increment: 4 }, "t": Int64(3)}), "optimeDate": DateTime(2025-01-28 8:37:09.0 +00:00:00), "lastAppliedWallTime": DateTime(2025-01-28 8:37:09.834 +00:00:00), "lastDurableWallTime": DateTime(2025-01-28 8:37:09.834 +00:00:00), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("Could not find member to sync from"), "electionTime": Timestamp { time: 1738053427, increment: 1 }, "electionDate": DateTime(2025-01-28 8:37:07.0 +00:00:00), "configVersion": Int32(2), "configTerm": Int32(3), "self": Boolean(true), "lastHeartbeatMessage": String("")}) 2025-01-28T08:37:24.841Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-28T08:37:24.841Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191" 2025-01-28T08:37:24.842Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0 2025-01-28T08:37:24.842Z INFO [mongod_upgrade] Getting lock 2025-01-28T08:37:24.842Z DEBUG [mongod_upgrade::conditions] Upserting lock 2025-01-28T08:37:24.843Z INFO [mongod_upgrade::conditions] locked 2025-01-28T08:37:24.843Z INFO [mongod_upgrade] Got lock: true 2025-01-28T08:37:24.843Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 5.0 2025-01-28T08:37:24.843Z INFO [mongod_upgrade::commands] In update for 5.0 2025-01-28T08:37:26.825Z INFO [mongod_upgrade::commands] Shutting down the database 2025-01-28T08:37:27.994Z WARN 
[mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(9), source: None } 2025-01-28T08:37:34.004Z INFO [mongod_upgrade::commands] Checking if mongod is online 2025-01-28T08:38:04.006Z INFO [mongod_upgrade::commands] mongod is offline 2025-01-28T08:38:04.006Z INFO [mongod_upgrade::commands] Shutdown output: Document({}) 2025-01-28T08:38:06.710Z INFO [mongod_upgrade::commands] UPGRADE_TO_5.0_SUCCESSFUL 2025-01-28T08:38:08.710Z INFO [mongod_upgrade::commands] Attempting to update status 2025-01-28T08:38:08.717Z INFO [mongod_upgrade::commands] Status updated successfully 2025-01-28T08:38:08.725Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade 2025-01-28T08:38:08.732Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1 2025-01-28T08:38:08.732Z INFO [mongod_upgrade] All upgraded to 5.0, proceeding. 2025-01-28T08:38:08.732Z INFO [mongod_upgrade] Setting new FCV Version: 5.0 2025-01-28T08:38:08.748Z INFO [mongod_upgrade] FCV change successful: () 2025-01-28T08:38:23.748Z INFO [mongod_upgrade] Upgrading to 6.0 2025-01-28T08:38:23.751Z INFO [mongod_upgrade] Waiting if primary 2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(19), "optime": Document({"ts": Timestamp { time: 1738053488, increment: 5 }, "t": Int64(4)}), "optimeDate": DateTime(2025-01-28 8:38:08.0 +00:00:00), "lastAppliedWallTime": DateTime(2025-01-28 8:38:08.745 +00:00:00), "lastDurableWallTime": DateTime(2025-01-28 8:38:08.745 +00:00:00), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("Could not find member to sync from"), "electionTime": Timestamp { time: 1738053486, increment: 1 }, "electionDate": DateTime(2025-01-28 8:38:06.0 +00:00:00), "configVersion": Int32(3), 
"configTerm": Int32(4), "self": Boolean(true), "lastHeartbeatMessage": String("")}) 2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191" 2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0 2025-01-28T08:38:23.752Z INFO [mongod_upgrade] Getting lock 2025-01-28T08:38:23.753Z DEBUG [mongod_upgrade::conditions] Upserting lock 2025-01-28T08:38:23.757Z INFO [mongod_upgrade::conditions] locked 2025-01-28T08:38:23.757Z INFO [mongod_upgrade] Got lock: true 2025-01-28T08:38:23.758Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 6.0 2025-01-28T08:38:23.758Z INFO [mongod_upgrade::commands] In update for 6.0 2025-01-28T08:38:25.544Z INFO [mongod_upgrade::commands] Shutting down the database 2025-01-28T08:38:25.845Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(13), source: None } 2025-01-28T08:38:31.854Z INFO [mongod_upgrade::commands] Checking if mongod is online 2025-01-28T08:39:01.856Z INFO [mongod_upgrade::commands] mongod is offline 2025-01-28T08:39:01.856Z INFO [mongod_upgrade::commands] Shutdown output: Document({}) 2025-01-28T08:39:03.281Z INFO [mongod_upgrade::commands] UPGRADE_TO_6.0_SUCCESSFUL 2025-01-28T08:39:05.281Z INFO [mongod_upgrade::commands] Attempting to update status 2025-01-28T08:39:05.285Z INFO [mongod_upgrade::commands] Status updated successfully 2025-01-28T08:39:05.297Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade 2025-01-28T08:39:05.299Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1 2025-01-28T08:39:05.299Z INFO [mongod_upgrade] All upgraded to 6.0, proceeding. 
2025-01-28T08:39:05.299Z INFO [mongod_upgrade] Setting new FCV Version: 6.0 2025-01-28T08:39:05.460Z INFO [mongod_upgrade] FCV change successful: () 2025-01-28T08:39:20.460Z INFO [mongod_upgrade] Upgrading to 7.0 2025-01-28T08:39:20.462Z INFO [mongod_upgrade] Waiting if primary 2025-01-28T08:39:20.464Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(19), "optime": Document({"ts": Timestamp { time: 1738053545, increment: 10 }, "t": Int64(5)}), "optimeDate": DateTime(2025-01-28 8:39:05.0 +00:00:00), "lastAppliedWallTime": DateTime(2025-01-28 8:39:05.451 +00:00:00), "lastDurableWallTime": DateTime(2025-01-28 8:39:05.451 +00:00:00), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("Could not find member to sync from"), "electionTime": Timestamp { time: 1738053543, increment: 1 }, "electionDate": DateTime(2025-01-28 8:39:03.0 +00:00:00), "configVersion": Int32(3), "configTerm": Int32(5), "self": Boolean(true), "lastHeartbeatMessage": String("")}) 2025-01-28T08:39:20.464Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-28T08:39:20.464Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191" 2025-01-28T08:39:20.465Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0 2025-01-28T08:39:20.465Z INFO [mongod_upgrade] Getting lock 2025-01-28T08:39:20.466Z DEBUG [mongod_upgrade::conditions] Upserting lock 2025-01-28T08:39:20.469Z INFO [mongod_upgrade::conditions] locked 2025-01-28T08:39:20.469Z INFO [mongod_upgrade] Got lock: true 2025-01-28T08:39:20.470Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 7.0 2025-01-28T08:39:20.470Z INFO [mongod_upgrade::commands] In update for 7.0 2025-01-28T08:39:21.724Z INFO [mongod_upgrade::commands] Shutting down the database 2025-01-28T08:39:22.519Z WARN 
[mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(17), source: None } 2025-01-28T08:39:28.529Z INFO [mongod_upgrade::commands] Checking if mongod is online 2025-01-28T08:39:58.531Z INFO [mongod_upgrade::commands] mongod is offline 2025-01-28T08:39:58.531Z INFO [mongod_upgrade::commands] Shutdown output: Document({}) 2025-01-28T08:40:00.234Z INFO [mongod_upgrade::commands] UPGRADE_TO_7.0_SUCCESSFUL 2025-01-28T08:40:02.234Z INFO [mongod_upgrade::commands] Attempting to update status 2025-01-28T08:40:02.240Z INFO [mongod_upgrade::commands] Status updated successfully 2025-01-28T08:40:02.253Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade 2025-01-28T08:40:02.258Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1 2025-01-28T08:40:02.258Z INFO [mongod_upgrade] All upgraded to 7.0, proceeding. 2025-01-28T08:40:02.259Z INFO [mongod_upgrade] Setting new FCV Version: 7.0 2025-01-28T08:40:02.281Z INFO [mongod_upgrade] FCV change successful: () 2025-01-28T08:40:17.281Z INFO [mongod_upgrade] Upgrades completed 2025-01-28T08:40:17.287Z INFO [mongod_upgrade] Waiting for completion 2025-01-28T08:40:17.289Z DEBUG [mongod_upgrade::conditions] Completed count: 1 2025-01-28T08:40:17.289Z DEBUG [mongod_upgrade::conditions] Hostname count: 1 2025-01-28T08:40:17.289Z INFO [mongod_upgrade] All completed 2025-01-28T08:40:17.394Z INFO [mongod_upgrade] Dropped migration metadata database. 
2025-01-28T08:40:17.403Z INFO [mongod_upgrade::commands] Shutting down the database 2025-01-28T08:40:18.363Z WARN [mongod_upgrade] mongod failed to shut down before exiting: Kind: I/O error: unexpected end of file, labels: {}

And below, the correct MongoDB version:

splunk show kvstore-status --verbose | grep -i serverversion
serverVersion : 7.0.14

Since ours was a test server, and other than the default Splunk content we don't use anything that relies on the KV Store, we didn't restore the backed-up KV Store. Splunk backs it up automatically when upgrading the version, but it is recommended to take a backup manually as well, so you can restore the KV Store in case something goes wrong. Furthermore, it is never recommended to upgrade Splunk to a .0 release; it is better to wait for a later maintenance release, for example a .3 (e.g. 9.4.3), because it fixes many bugs that are often present in the .0 release.
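For long upgrades it can help to script a quick progress check against the upgrade log while you wait. Below is a minimal sketch (plain Python; the helper name is my own) that pulls the per-version success markers the upgrader writes, matching the UPGRADE_TO_x.y_SUCCESSFUL lines shown in the log above:

```python
import re

def completed_versions(log_text: str) -> list[str]:
    # Pull the version out of each UPGRADE_TO_<ver>_SUCCESSFUL marker.
    return re.findall(r"UPGRADE_TO_([\d.]+)_SUCCESSFUL", log_text)

# Sample lines taken from the mongodb_upgrade.log above.
sample = """\
2025-01-28T08:37:07.813Z INFO [mongod_upgrade::commands] UPGRADE_TO_4.4_SUCCESSFUL
2025-01-28T08:38:06.710Z INFO [mongod_upgrade::commands] UPGRADE_TO_5.0_SUCCESSFUL
2025-01-28T08:39:03.281Z INFO [mongod_upgrade::commands] UPGRADE_TO_6.0_SUCCESSFUL
2025-01-28T08:40:00.234Z INFO [mongod_upgrade::commands] UPGRADE_TO_7.0_SUCCESSFUL
"""
print(completed_versions(sample))  # ['4.4', '5.0', '6.0', '7.0']
```

In practice you would feed it the contents of $SPLUNK_HOME/var/log/splunk/mongodb_upgrade.log; once 7.0 appears, the stepwise upgrade is done.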
Hi @gcusello

Thanks for the quick response.

Can you guide me to any official documentation that explains ES migration? I assume we have to create a custom app for search and ES, move all the configs related to the app into it, and then, once ES and the cluster are built, copy the configs over. Am I on the right track?
Given that a transaction_id would either not exist if a user never calls service 1, or wouldn't matter to your problem, service1_status is superfluous.  Is this correct?  I also do not see the service URL as part of the required output.  As such, @gcusello's solution can be further simplified.  More than that, I'm not sure whether service is an existing field.  On the other hand, the two services are probably logged in different sources, different sourcetypes, or both.  I will assume that service 1 logs into source1 and service 2 logs into source2.

source IN (source1, source2)
| stats dc(source) AS service_count BY transaction_id
| eval status1 = "yes", status2=if(service_count > 1,"yes","no")
| table transaction_id status1 status2
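For readers who want to sanity-check this logic outside Splunk, here is a rough Python analogue of the dc(source) BY transaction_id idea above (toy data; the source names are the same assumptions as in the search):

```python
from collections import defaultdict

# Toy events as (transaction_id, source); source1 ~ service 1, source2 ~ service 2.
events = [
    ("t1", "source1"),
    ("t1", "source2"),
    ("t2", "source1"),
]

# Equivalent of: stats dc(source) BY transaction_id.
seen = defaultdict(set)
for txn, src in events:
    seen[txn].add(src)

# status1 is always "yes" (a transaction only exists once service 1 ran);
# status2 is "yes" only when both sources were seen, i.e. dc(source) > 1.
rows = {txn: {"status1": "yes", "status2": "yes" if len(s) > 1 else "no"}
        for txn, s in seen.items()}
print(rows)
```

Transaction t1 touches both sources, so its status2 comes out "yes"; t2 only touches source1, so status2 is "no".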
I would like to be able to keep the top 5 peaks per day of the last x days.

Be careful.  I suspect that you really mean to keep the top 5 peaks-per-day of the last x days (based on your use of dedup Day).  Something like:

_time                MaxMIPSParMinute
2025-01-15 00:27:00  2583
2025-01-07 23:08:00  2129
2025-01-25 22:15:00  2069
2025-01-22 13:58:00  1222
2025-01-18 08:35:00  990

Is this correct?  The basic solution is the same as @gcusello suggested; just add by Day Hour to eventstats.

index=myindex
| bin span=1m _time
| stats sum(MIPS) as MIPSParMinute by _time
| eval Hour = strftime(_time, "%H"), Day = strftime(_time, "%F")
| eventstats max(MIPSParMinute) as MaxMIPSParMinute by Day Hour
| where MIPSParMinute == MaxMIPSParMinute
| sort - MaxMIPSParMinute Day
| dedup Day
| head 5

I will leave formatting to you.  Here is an emulation you can play with and compare with real data:

index=_internal earliest=-25d@d latest=-0d@d
| bin span=1m _time
| stats count as MIPSParMinute by _time
``` the above emulates index=myindex | bin span=1m _time | stats sum(MIPS) as MIPSParMinute by _time ```
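The "one peak per day, then top 5 days" logic can be hard to see through the eventstats/dedup chain, so here is the same idea in plain Python (invented sample values, assuming the interpretation above is the intended one):

```python
# (timestamp, MIPSParMinute) samples; the day is the date prefix of the timestamp.
samples = [
    ("2025-01-15 00:27:00", 2583),
    ("2025-01-15 03:00:00", 1100),
    ("2025-01-07 23:08:00", 2129),
    ("2025-01-25 22:15:00", 2069),
    ("2025-01-22 13:58:00", 1222),
    ("2025-01-18 08:35:00", 990),
    ("2025-01-02 10:00:00", 400),
]

# Keep each day's maximum (the "dedup Day" step) ...
peak = {}  # day -> (value, timestamp of that day's maximum)
for ts, mips in samples:
    day = ts[:10]
    if day not in peak or mips > peak[day][0]:
        peak[day] = (mips, ts)

# ... then take the 5 largest daily peaks (the "sort - ... | head 5" step).
top5 = sorted(peak.values(), reverse=True)[:5]
for mips, ts in top5:
    print(ts, mips)
```

With these sample values the 400-MIPS day drops out, and the five remaining daily peaks come back in descending order, matching the table sketched above.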
Please share the event for which this is not working
Hi @krishna63032,

good for you, see next time!

Ciao and happy splunking.

Giuseppe

P.S.: Karma Points are appreciated
Hi @kzjbry1,

good for you, see next time!

Let me know if I can help you more, or, please, accept one answer for the other people of the Community.

Ciao and happy splunking.

Giuseppe

P.S.: Karma Points are appreciated
Hi @krishna63032,

where did you locate the apps to deploy? It seems that you placed them in two folders.

They must be located only in manager-apps and not in master-apps; the latter location is deprecated and no longer present in the latest versions.

Ciao.

Giuseppe
Hi @rahulkumar,

adapt my hint to your requirements.

In props.conf:

[source::http:logstash]
TRANSFORMS-00 = securelog_set_default_metadata
TRANSFORMS-01 = securelog_override_raw

In transforms.conf:

[securelog_set_default_metadata]
INGEST_EVAL = host := json_extract(_raw, "host.name")

[securelog_override_raw]
INGEST_EVAL = _raw := json_extract(_raw, "message")

Ciao.

Giuseppe
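As a rough illustration of what the two INGEST_EVAL expressions do (the sample payload below is hypothetical), here is the same extraction in plain Python. Note that the host must be read from the event before _raw is overwritten with the message, which is why the metadata transform (TRANSFORMS-00) is listed before the raw override (TRANSFORMS-01):

```python
import json

# A toy Logstash-style payload; "host.name" and "message" mirror the
# JSON paths used by json_extract in the transforms above.
raw = '{"host": {"name": "web01"}, "message": "Accepted password for admin"}'

doc = json.loads(raw)
host = doc["host"]["name"]   # ~ json_extract(_raw, "host.name") -> host metadata field
new_raw = doc["message"]     # ~ json_extract(_raw, "message")   -> replacement _raw
print(host, "|", new_raw)
```

After both steps the indexed event carries host=web01 and its _raw is just the inner message text, not the whole JSON envelope.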
When I push the configuration bundle through the cluster master, I get the error below. Please suggest a fix.
Hi @onthakur,

you can use something like this:

<your_search>
| stats dc(service) AS service_count values(service) AS service values(url) AS url BY transaction_id
| eval status1=if(service_count=2 OR (service_count=1 AND service="service1"),"yes","no"), status2=if(service_count=2 OR (service_count=1 AND service="service2"),"yes","no")
| table transaction_id url status1 status2

Ciao.

Giuseppe
Hi @arunkuriakose,

to migrate standalone SHs to a cluster you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/Migratefromstandalonesearchheads

My special hint is to pay close attention to ES, because it requires a special installation on an SH cluster: install and configure the Deployer, take all the apps from the SHs and put them on the Deployer, install ES on the Deployer, configure the SHs as a cluster, and deploy the apps from the Deployer.

The best approach is to have made all your ES configurations in a dedicated custom app, not in the ES apps themselves, so you can install ES from scratch on the Deployer and then deploy all the customizations contained in the custom app.

Ciao.

Giuseppe
Currently I am using the same deployer for two different search head clusters and would like to remove it from one of the clusters. However, I cannot find any official documentation related to this. Could anyone tell me how to do it? Thank you so much.
I think I understand the essence of the challenge.  Data analytics solutions all depend on data characteristics.  Can you describe the data further?  For example, the alternative field names: do they appear in the two different sources?  In other words, is there a relationship like this?

index=email source=/var/logs/esa_0.log -> sender, recipient, subject, ...
index=cyber source=/varlogs/fe01.log -> suser, duser, msg, ...

Such a relationship can improve the search by avoiding too many ORs, which usually decreases efficiency.  On the other hand, even if such relationships exist, if suser, duser, subject, ... do not always exist in the same event, your search will not satisfy all filters.  As @PickleRick says, in that case you will have to sacrifice efficiency and fetch all events, then filter.

However, you have already clarified that, except for attachments, sender, recipient, subject, etc. always exist, and so do suser, duser, msg, and so on.  This means you can take advantage of those always-on fields.

Now, to the bottom of the challenge.  Yes, you can do that.  But you need to change the token strategy a little.  For this, we will single out the token for attachments from the rest.  Just to distinguish this token, I call it attachments_tok, and set up Name-Value pairs (Label-Value in Dashboard Studio parlance) like these:

Name       Value
Any        *
filename1  attachments = filename1
filename2  attachments = filename2
...

Once attachments_tok is set up, reorganize the search like this:

(index=email source=/var/logs/esa_0.log ($attachments_tok$) sha256=$hash$ sender="$sender$" recipient="$recipient$" subject="$subject$" message-id="$email_id$" from-header="$reply_add$") OR
(index=cyber source=/varlogs/fe01.log suser="$sender$" duser="$recipient$" msg="$subject$" id="'<$email_id$>'" ReplyAddress="$reply_add$")

Hope this helps.
Hello, I am currently trying to deploy a single deployer across two different search head clusters but am having trouble finding detailed steps on how to do this. I have used the same cluster label and secret for both clusters. To differentiate the clusters, I attempted to assign different captains as follows:

For Cluster A:

bootstrap shcluster-captain -servers_list "https://cluster_A_IP:8089, https://cluster_A_IP:8089, https://cluster_A_IP:8089"

For Cluster B:

bootstrap shcluster-captain -servers_list "https://cluster_B_IP:8089, https://cluster_B_IP:8089, https://cluster_B_IP:8089"

I am unsure if this setup correctly separates the two clusters while using the same deployer. Could you provide guidance on whether this approach is effective or suggest an alternative method? Thank you so much.
Hi Team, we have a deployment with 3 standalone search heads. One of them has ES running on it. We are planning to introduce a new server as a deployer and make these 3 search heads into a cluster.

Question: Is it possible to add these existing search heads to a cluster, or should we create new search heads and copy the configs to all of them? If that is the only possibility, what are the recommendations and challenges? Can we take a backup of the full /etc/apps, then deploy new search heads, add them to the cluster, and replicate /etc/apps? Is this the right approach?

Any heads-up will be appreciated.
We tried the configurations below. Navigate to $SPLUNK_HOME/etc/system/local/ and edit (or create) server.conf:

[general]
http_proxy = http://myinternetserver01.mydomain.com:4443
https_proxy = https://myinternetserver01.mydomain.com:4443
proxy_user = username
proxy_password = mysecurepassword

We also tried the following:

[general]
http_proxy = http://username:mysecurepassword@myinternetserver01.mydomain.com:4443
https_proxy = https://username:mysecurepassword@myinternetserver01.mydomain.com:4443

But neither is working.
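One pitfall worth ruling out when embedding credentials directly in the proxy URL (an assumption about the cause, not confirmed by the post): characters such as `@` or `:` inside the username or password must be percent-encoded, otherwise the URL parses incorrectly. The credentials below are made up for illustration; a quick Python check of the encoding:

```python
from urllib.parse import quote

# Hypothetical credentials: ':' and '@' inside a password must be
# percent-encoded before being embedded in a proxy URL.
user = "username"
password = "my:secure@password"

proxy_url = (
    f"http://{quote(user, safe='')}:{quote(password, safe='')}"
    "@myinternetserver01.mydomain.com:4443"
)
print(proxy_url)
# http://username:my%3Asecure%40password@myinternetserver01.mydomain.com:4443
```

If the real password contains any reserved characters, the encoded form is what belongs in the `http_proxy`/`https_proxy` value.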
I have tried this in the following way:

index="index1"
| search "slot"
| rex field=msg "(?<action>added|removed)"
| eval added_time=if(action="added", strftime(_time, "%H:%M:%S"), null())
| eval removed_time=if(action="removed", strftime(_time, "%H:%M:%S"), null())
| sort 0 _time
| streamstats max(added_time) as added_time latest(removed_time) as removed_time by host slot
| eval downtime=if(isnotnull(added_time) AND isnotnull(removed_time), strptime(removed_time, "%H:%M:%S") - strptime(added_time, "%H:%M:%S"), 0)

The issue is that downtime is not getting calculated; it always prints 0. I need help fixing this.
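The arithmetic the SPL above is aiming for, pairing each slot's "added" and "removed" events per host and subtracting the timestamps, can be sketched in Python to check the expected result. This is a hypothetical sketch of the intended logic (event data and helper names are made up), not a fix for the SPL itself; note it keeps full epoch timestamps rather than "%H:%M:%S" strings:

```python
from datetime import datetime

# Hypothetical sample events: (timestamp, host, slot, action),
# mirroring the added/removed actions extracted by the rex above.
events = [
    ("2025-01-28 08:00:00", "hostA", "slot1", "added"),
    ("2025-01-28 08:05:30", "hostA", "slot1", "removed"),
]

pending = {}   # (host, slot) -> epoch time of the last "added" event
downtime = {}  # (host, slot) -> removed - added, in seconds, as in the SPL
for ts, host, slot, action in events:
    epoch = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").timestamp()
    key = (host, slot)
    if action == "added":
        pending[key] = epoch
    elif action == "removed" and key in pending:
        downtime[key] = epoch - pending.pop(key)

print(downtime)
# {('hostA', 'slot1'): 330.0}
```

Working in epoch seconds end-to-end avoids round-tripping through time-only strings, which discards the date portion of each event.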
Thanks @ITWhisperer for the reply. The downtime field is simply not getting populated. I tried converting it to epoch time, and it is still the same. Can you please look into it once more?
Did anyone ever come up with an answer to this question? In our installation, the app names show up in the "Apps" pull-down list. Some of those apps have custom icons, which also show up in both the "Apps" pull-down list and the app navigation bar. But under no circumstances does an app name ever appear in the app navigation bar. Custom icons do, for those apps that have one. For everything else, we get the default green-and-white "App" icon on the far right side of the app navigation bar, but never any app name text. Something is broken in our environment that's preventing app names from showing in the app navigation bar. It has been that way for years, and I'd really like to know how to fix it.