All Posts

Hi @Nawab , did you install the SQL Server Add-On https://splunkbase.splunk.com/app/2648 on the Search Heads and on the Indexers, or (if present) on the Heavy Forwarders? Ciao. Giuseppe
Keeping this post since it may help others: it appears to me that, lately, the field hostname has to be selected under the fields list.
And how do we renew a Splunk license with the license number and GUID provided by the Splunk team? We are unable to log in with the license number.
The MSSQL Add-On has installation and configuration docs. Did you read them? https://docs.splunk.com/Documentation/AddOns/released/MSSQLServer/About
We need to integrate MSSQL Standard Edition with Splunk, so we tried sending logs to the Windows Event Viewer application channel. We are now receiving logs, but they are not parsed and we are getting all of the logs. My question: has anyone integrated MSSQL Standard Edition with Splunk? How did you do it, and is the data parsed?
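One commonly used approach is to scope the Windows event log input to the SQL Server provider on the forwarder and let the Splunk Add-on for Microsoft SQL Server handle sourcetyping. A minimal sketch, assuming classic (non-XML) event rendering and that your provider name starts with MSSQL — verify both against your instance before using it:

[WinEventLog://Application]
disabled = 0
# keep only events written by the SQL Server provider
# (the regex is an assumption; check the actual SourceName, e.g. MSSQLSERVER)
whitelist = SourceName=%^MSSQL%
# classic rendering; set to true only if you handle XML events downstream
renderXml = false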
Hi @richgalloway , thanks for the reply. What about this: do we need to push it to all the other nodes, or is it already configured? Where can we check whether it is configured or not? Please clarify.
Hi @Sankar , this is covered only in ES training: you must define a search to extract assets and identities from AD logs or from ServiceNow. These items must be formatted (field names) using the names that you can find in the assets and identities management in ES. Once you have created this search, you can schedule it, adding the information about priority (e.g. Domain Controllers have a critical priority, the PCs of the CEO and managers have a critical priority, and if you are an eCommerce company, payment services are critical, and so on, based on your Business Impact Analysis). Ciao. Giuseppe
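A minimal sketch of such a scheduled assets search (the column names follow the standard ES assets lookup header; the index, sourcetype, source fields, DNS suffix, and priority logic here are all assumptions to adapt to your environment):

index=ad_logs sourcetype=ActiveDirectory
| stats latest(src_ip) AS ip BY nt_host
| eval dns=nt_host.".mycompany.local"
| eval priority=if(match(nt_host, "(?i)^dc\d"), "critical", "medium")
| eval is_expected="true"
| table ip, mac, nt_host, dns, owner, priority, lat, long, city, country, bunit, category, pci_domain, is_expected, should_timesync, should_update, requires_av
| outputlookup ad_assets.csv

Columns you don't populate stay empty, but ES expects the full header to be present in the lookup.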
Hello all, I have been asked to create a sample dashboard with the data present. Hence I have created the following panels, with dropdowns available:

- Total Traffic vs Attack Traffic: | stats count as "Total Traffic" count(eval(isnotnull(attack_type))) as "Attack Traffic"
- Top 10 Hostnames / FQDN Targeted: | stats count by fqdn
- No. of Error logs: | search severity=Error | stats count
- No. of Critical logs: | search severity=Critical | stats count
- Attack Classification by % (Num of Attacks): | top limit=10 attack_type
- Top 10 IP Addresses: | top ip_client limit=10
- Daily Attack Trend: | timechart count(attack_type) as count span=1d
- Weekly Attack Trend: | timechart count(attack_type) as count span=1w
- Status Codes Trend: | stats count by response_code
- HTTP Method Used: | stats count by method
- Log Details: | table _time, ip_client, method, policy_name, response_code, support_id, severity, violations, sub_violations, violation_rating, uri

All searches follow a base search. Please let me know if any panel needs to be modified or made more detailed than these basic ones, suggest any new panels that could be added, and please suggest any drilldowns as well.
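As one small consolidation idea: since these are post-process searches off the same base search, the two severity panels could be merged into a single panel (a sketch, assuming severity carries exactly the values Error and Critical):

| stats count(eval(severity=="Error")) AS "Error Logs", count(eval(severity=="Critical")) AS "Critical Logs"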
Exactly that way. Never use any x.0.0 version, and avoid x.y.0 too if you can! If you cannot, then test everything you have and need in a test environment first and fix your findings. Then take a full backup of all nodes, KV stores, etc.; prefer an offline backup if possible. Be sure that you have a rollback plan and resources in place in case something goes wrong and you cannot use the new version. Check the logs to confirm the update has finished before starting any components, and ensure the logs don't contain any errors; if they do, they must be fixed before starting. Also join Splunk Slack and look at what others have already found and whether there is a fix for it. Here is a direct link to the splunk_9_upgrade_issues channel: https://splunk-usergroups.slack.com/archives/C03M9ENE6AD
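For the offline backup step, a minimal per-node sketch (the backup destination is a placeholder; the KV store lives under var/lib/splunk/kvstore by default, and you may have additional index paths to include):

$SPLUNK_HOME/bin/splunk stop
# configuration, apps and KV store in one archive
tar -czf /backups/splunk_$(hostname)_pre_upgrade.tgz \
    $SPLUNK_HOME/etc \
    $SPLUNK_HOME/var/lib/splunk/kvstore
$SPLUNK_HOME/bin/splunk start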
Hi @gcusello , do we have any reference guide from Splunk or from ServiceNow?
I have data that contains the LOGINDate, UserName, and USERID. I need to use the MLTK to detect user behavior individually, not for all users together. The goal is to use Machine Learning to detect the normal behavior of each user based on hours and days. Additionally, I need to detect:

- If the user logs in on off days, to determine whether it's normal behavior.
- If the user logs in at abnormal hours on any day, this should also be detected.

I successfully implemented this using Python, but the customer requires it to be done using Splunk MLTK without any static values.
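One way to approach this in MLTK without static thresholds is DensityFunction with a by clause, so a separate distribution is fit per user and per day of week. A minimal sketch, assuming the index/sourcetype names are placeholders and that _time reflects LOGINDate; the threshold value is an assumption to tune:

index=auth_logs sourcetype=login_events
| eval HourOfDay=tonumber(strftime(_time, "%H")), DayOfWeek=strftime(_time, "%a")
| fit DensityFunction HourOfDay by "UserName,DayOfWeek" into user_login_baseline threshold=0.005

Afterwards, a scheduled search with | apply user_login_baseline flags logins outside each user's learned distribution via the IsOutlier output field, which covers both the off-day and abnormal-hour cases without hard-coded values.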
Hi @arunkuriakose , it's always a best practice to keep all the ES customizations in a custom app; that way it's easier to migrate them. In your case, the best approach I can suggest is to move all of them into a custom app. Otherwise, you could copy all the folders of the ES installation onto the Deployer, but I'm not so sure that's the correct approach; I'd prefer to use the custom app. Anyway, the migration process should be:

- back up the ES Search Head,
- configure the Deployer,
- move all custom configurations (Correlation Searches, Reports, Dashboards, field extractions, custom eventtypes, etc.) into a custom app called e.g. SA-SOC (where SA means Supporting Add-on),
- install ES on the Deployer,
- copy the SA-SOC app to the Deployer,
- configure the Search Heads in the Cluster,
- deploy the apps,
- test your environment.

This is a long job, so it would be best to build the cluster on new machines rather than reusing the ES Search Head, keeping the stand-alone ES SH running while you migrate your environment; then, at the end, you can disable it. Ciao. Giuseppe
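On the file-system level, the staging and push steps look roughly like this (a sketch; the hostname and credentials are placeholders, and SA-SOC is the example app name from the steps above):

# on the Deployer: stage ES and the custom app under the SHC staging directory
cp -r SA-SOC $SPLUNK_HOME/etc/shcluster/apps/
# push the bundle to the Search Head Cluster members
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme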
In our test environment we downgraded to 9.3.2 and the KV Store still did not start, with the same error message in the log file; possibly the mongodb data was corrupted. As reported in the Splunk docs here: https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/MigrateKVstore the mongodb server needs to be on version 4.2.x:

You must upgrade to server version 4.2.x before upgrading to Splunk Enterprise 9.4.x or higher. For instructions and information about updating to KV store server version 4.2.x in Splunk Enterprise versions 9.0.x through 9.3.x, see Migrate the KV store storage engine in the Splunk Enterprise 9.3.0 documentation.

So, to check this, it is strongly suggested to follow this Splunk guide: https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/MigrateKVstore

After that, we stopped Splunk and issued a splunk clean kvstore --local; after restarting Splunk, everything was working again. We upgraded again to 9.4.0, and after some seconds it started to upgrade mongodb from 4.2 to 7.0 through 4.4, 5.0 and 6.0, with a message in the GUI that the KV Store is updating; you need to wait until the update is finished. After some minutes mongodb had been successfully updated, which the Splunk GUI confirms with a message and the new Splunk version. It is strongly suggested to tail $SPLUNK_HOME/var/log/splunk/mongodb_upgrade.log and not to operate on Splunk until the update is finished. Below is the mongodb_upgrade.log:

2025-01-28T08:36:25.567Z INFO [mongod_upgrade] Mongod Upgrader Logs
2025-01-28T08:36:25.568Z DEBUG [mongod_upgrade] mongod_upgrade arguments: Args { verbose: Verbosity { verbose: 1, quiet: 0, phantom: PhantomData<clap_verbosity::InfoLevel> }, uri: "mongodb://__system@127.0.0.1:8191/?replicaSet=B99AB2AA-EE93-405A-95CD-89EAC0FCA551&retryWrites=true&authSource=local&ssl=true&connectTimeoutMS=10000&socketTimeoutMS=300000&readPreference=nearest&readPreferenceTags=instance:B99AB2AA-EE93-405A-95CD-89EAC0FCA551&readPreferenceTags=all:all&tlsAllowInvalidCertificates=true", nodes: 1, local_uri: "mongodb://__system@127.0.0.1:8191/?w=majority&journal=true&retryWrites=true&authSource=local&ssl=true&connectTimeoutMS=10000&socketTimeoutMS=300000&tlsAllowInvalidCertificates=true&directConnection=true", backup_disabled: false, ld_linker: "lib/ld-2.26.so", data_directory: "/opt/splunk/var/lib/splunk/kvstore/mongo", backup_path: "/opt/splunk/var/lib/splunk/kvstore/mongo_backup", shadow_mount_dir: "C:\\mongo_shadow\\", backup_volume: "NOT_SPECIFIED", logpath: "/opt/splunk/var/log/splunk/mongod.log", keep_metadata: false, drop_metadata: false, pre_drop_metadata: false, keep_backups: true, metadata_database: "migration_metadata", metadata_collection: "migration_metadata", rsync_retries: 5, max_start_retries: 10, shutdown_mongod: true, rsync_path: "/opt/splunk/bin/rsync", keyfile_path: Some("/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key"), max_command_time_ms: 60000, time_duration_block_s: 1, polling_interval_ms: 100, polling_max_wait_ms: 26460000, polling_version_max_wait_ms: 4800000, max_retries: 4, health_check_max_retries: 60, use_ld: false, mongod_args: ["--dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo", "--storageEngine=wiredTiger", "--wiredTigerCacheSizeGB=1.050000", "--port=8191", "--timeStampFormat=iso8601-utc", "--oplogSize=200", "--keyFile=/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key", "--setParameter=enableLocalhostAuthBypass=0", "--setParameter=oplogFetcherSteadyStateMaxFetcherRestarts=0", "--replSet=B99AB2AA-EE93-405A-95CD-89EAC0FCA551", "--bind_ip=0.0.0.0", "--sslCAFile=/opt/splunk/etc/auth/cacert.pem",
"--tlsAllowConnectionsWithoutCertificates", "--sslMode=requireSSL", "--sslAllowInvalidHostnames", "--sslPEMKeyFile=/opt/splunk/etc/auth/server.pem", "--sslPEMKeyPassword=password", "--tlsDisabledProtocols=noTLS1_0,noTLS1_1", "--sslCipherConfig=ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256", "--nounixsocket", "--noscripting"] } 2025-01-28T08:36:25.568Z INFO [mongod_upgrade] Executing Preflight Checks 2025-01-28T08:36:25.568Z DEBUG [mongod_upgrade] client_options_primary: ClientOptions { hosts: [Tcp { host: "127.0.0.1", port: Some(8191) }], app_name: None, compressors: None, connect_timeout: Some(10s), credential: Some(Credential("REDACTED")), direct_connection: None, driver_info: None, heartbeat_freq: None, load_balanced: None, local_threshold: None, max_idle_time: None, max_pool_size: None, min_pool_size: None, max_connecting: None, read_concern: None, repl_set_name: Some("B99AB2AA-EE93-405A-95CD-89EAC0FCA551"), retry_reads: None, retry_writes: Some(true), selection_criteria: Some(ReadPreference(Nearest { options: ReadPreferenceOptions { tag_sets: Some([{"instance": "B99AB2AA-EE93-405A-95CD-89EAC0FCA551"}, {"all": "all"}]), max_staleness: None, hedge: None } })), server_api: None, server_selection_timeout: None, default_database: None, tls: Some(Enabled(TlsOptions { allow_invalid_certificates: Some(true), ca_file_path: None, cert_key_file_path: None, allow_invalid_hostnames: None })), write_concern: None, srv_max_hosts: None } 2025-01-28T08:36:25.587Z INFO [mongod_upgrade] Checking intial FCV 2025-01-28T08:36:25.587Z INFO [mongod_upgrade] Feature Compatibility Version is: 4.2 2025-01-28T08:36:25.587Z DEBUG [mongod_upgrade] Hostname set to "127.0.0.1:8191" 2025-01-28T08:36:25.588Z INFO [mongod_upgrade] Preflight completed successfully 2025-01-28T08:36:25.589Z INFO [mongod_upgrade] Executing backup before upgrade 2025-01-28T08:36:25.613Z DEBUG [mongod_upgrade] Backup update doc: Document({"$set": Document({"backup.location": String("/opt/splunk/var/lib/splunk/kvstore/mongo_backup"), "backup.start": DateTime(2025-01-28 8:36:25.613 +00:00:00), "backup.phase1_start": DateTime(2025-01-28 8:36:25.613 +00:00:00)})}) 2025-01-28T08:36:25.662Z DEBUG [mongod_upgrade] Backup update result: UpdateResult { matched_count: 0, modified_count: 0, upserted_id: Some(String("BACKUP_127.0.0.1:8191")) } 2025-01-28T08:36:26.426Z DEBUG [mongod_upgrade] Rsync returned successfully. 
2025-01-28T08:36:26.426Z DEBUG [mongod_upgrade] Backup update doc: Document({"$set": Document({"backup.phase1_end": DateTime(2025-01-28 8:36:26.426 +00:00:00), "backup.phase2_start": DateTime(2025-01-28 8:36:26.426 +00:00:00)})}) 2025-01-28T08:36:26.428Z DEBUG [mongod_upgrade] Backup update result: UpdateResult { matched_count: 1, modified_count: 1, upserted_id: None } 2025-01-28T08:36:26.429Z DEBUG [mongod_upgrade::conditions] "phase1" complete count: 1 2025-01-28T08:36:26.433Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(61), "optime": Document({"ts": Timestamp { time: 1738053386, increment: 1 }, "t": Int64(2)}), "optimeDate": DateTime(2025-01-28 8:36:26.0 +00:00:00), "syncingTo": String(""), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("could not find member to sync from"), "electionTime": Timestamp { time: 1738053327, increment: 1 }, "electionDate": DateTime(2025-01-28 8:35:27.0 +00:00:00), "configVersion": Int32(1), "self": Boolean(true), "lastHeartbeatMessage": String("")}) 2025-01-28T08:36:26.434Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-28T08:36:26.434Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191" 2025-01-28T08:36:26.434Z INFO [mongod_upgrade] Node identified as Primary, issuing fsyncLock and pausing writes to node 2025-01-28T08:36:26.578Z INFO [mongod_upgrade] Document({"info": String("now locked against writes, use db.fsyncUnlock() to unlock"), "lockCount": Int64(1), "seeAlso": String("http://dochub.mongodb.org/core/fsynccommand"), "ok": Double(1.0), "$clusterTime": Document({"clusterTime": Timestamp { time: 1738053386, increment: 1 }, "signature": Document({"hash": Binary { subtype: Generic, bytes: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] }, "keyId": Int64(0)})}), "operationTime": Timestamp { time: 1738053386, increment: 1 }}) 2025-01-28T08:36:26.578Z INFO [mongod_upgrade] Waiting for replication lag to be 0 on all secondary nodes 2025-01-28T08:36:27.759Z INFO [mongod_upgrade] unpausing writes to node 2025-01-28T08:36:27.764Z INFO [mongod_upgrade] Document({"info": String("fsyncUnlock completed"), "lockCount": Int64(0), "ok": Double(1.0), "$clusterTime": Document({"clusterTime": Timestamp { time: 1738053386, increment: 1 }, "signature": Document({"hash": Binary { subtype: Generic, bytes: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] }, "keyId": Int64(0)})}), "operationTime": Timestamp { time: 1738053386, increment: 1 }}) 2025-01-28T08:36:27.764Z DEBUG [mongod_upgrade] Second rsync returned successfully. 
2025-01-28T08:36:27.764Z DEBUG [mongod_upgrade] Backup update doc: Document({"$set": Document({"backup.phase2_end": DateTime(2025-01-28 8:36:27.764 +00:00:00), "backup.end": DateTime(2025-01-28 8:36:27.764 +00:00:00)})}) 2025-01-28T08:36:27.765Z DEBUG [mongod_upgrade] Backup update result: UpdateResult { matched_count: 1, modified_count: 1, upserted_id: None } 2025-01-28T08:36:27.766Z DEBUG [mongod_upgrade::conditions] "phase2" complete count: 1 2025-01-28T08:36:27.766Z INFO [mongod_upgrade] Backup completed successfully 2025-01-28T08:36:27.766Z INFO [mongod_upgrade] Starting rolling update 2025-01-28T08:36:27.785Z INFO [mongod_upgrade::commands] Init results: InsertOneResult { inserted_id: String("127.0.0.1:8191") } 2025-01-28T08:36:27.785Z INFO [mongod_upgrade] Waiting for initialization 2025-01-28T08:36:27.786Z DEBUG [mongod_upgrade::conditions] Init count: 1 2025-01-28T08:36:27.786Z INFO [mongod_upgrade] All initialized 2025-01-28T08:36:27.786Z INFO [mongod_upgrade] Upgrading to 4.4 2025-01-28T08:36:27.787Z INFO [mongod_upgrade] Waiting if primary 2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(62), "optime": Document({"ts": Timestamp { time: 1738053387, increment: 2 }, "t": Int64(2)}), "optimeDate": DateTime(2025-01-28 8:36:27.0 +00:00:00), "syncingTo": String(""), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("could not find member to sync from"), "electionTime": Timestamp { time: 1738053327, increment: 1 }, "electionDate": DateTime(2025-01-28 8:35:27.0 +00:00:00), "configVersion": Int32(1), "self": Boolean(true), "lastHeartbeatMessage": String("")}) 2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191" 2025-01-28T08:36:27.788Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0 2025-01-28T08:36:27.788Z INFO [mongod_upgrade] Getting lock 2025-01-28T08:36:27.789Z DEBUG [mongod_upgrade::conditions] Upserting lock 2025-01-28T08:36:27.790Z INFO [mongod_upgrade::conditions] locked 2025-01-28T08:36:27.790Z INFO [mongod_upgrade] Got lock: true 2025-01-28T08:36:27.790Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 4.4 2025-01-28T08:36:27.790Z INFO [mongod_upgrade::commands] In update for 4.4 2025-01-28T08:36:29.202Z INFO [mongod_upgrade::commands] Shutting down the database 2025-01-28T08:36:29.736Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(8), source: None } 2025-01-28T08:36:35.754Z INFO [mongod_upgrade::commands] Checking if mongod is online 2025-01-28T08:37:05.755Z INFO [mongod_upgrade::commands] mongod is offline 2025-01-28T08:37:05.755Z INFO [mongod_upgrade::commands] Shutdown output: Document({}) 2025-01-28T08:37:07.813Z INFO [mongod_upgrade::commands] UPGRADE_TO_4.4_SUCCESSFUL 2025-01-28T08:37:09.813Z INFO [mongod_upgrade::commands] Attempting to update status 2025-01-28T08:37:09.817Z INFO [mongod_upgrade::commands] Status updated successfully 2025-01-28T08:37:09.823Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade 2025-01-28T08:37:09.824Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1 2025-01-28T08:37:09.824Z INFO [mongod_upgrade] All upgraded to 4.4, 
proceeding. 2025-01-28T08:37:09.824Z INFO [mongod_upgrade] Setting new FCV Version: 4.4 2025-01-28T08:37:09.838Z INFO [mongod_upgrade] FCV change successful: () 2025-01-28T08:37:24.838Z INFO [mongod_upgrade] Upgrading to 5.0 2025-01-28T08:37:24.840Z INFO [mongod_upgrade] Waiting if primary 2025-01-28T08:37:24.841Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(18), "optime": Document({"ts": Timestamp { time: 1738053429, increment: 4 }, "t": Int64(3)}), "optimeDate": DateTime(2025-01-28 8:37:09.0 +00:00:00), "lastAppliedWallTime": DateTime(2025-01-28 8:37:09.834 +00:00:00), "lastDurableWallTime": DateTime(2025-01-28 8:37:09.834 +00:00:00), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("Could not find member to sync from"), "electionTime": Timestamp { time: 1738053427, increment: 1 }, "electionDate": DateTime(2025-01-28 8:37:07.0 +00:00:00), "configVersion": Int32(2), "configTerm": Int32(3), "self": Boolean(true), "lastHeartbeatMessage": String("")}) 2025-01-28T08:37:24.841Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-28T08:37:24.841Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191" 2025-01-28T08:37:24.842Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0 2025-01-28T08:37:24.842Z INFO [mongod_upgrade] Getting lock 2025-01-28T08:37:24.842Z DEBUG [mongod_upgrade::conditions] Upserting lock 2025-01-28T08:37:24.843Z INFO [mongod_upgrade::conditions] locked 2025-01-28T08:37:24.843Z INFO [mongod_upgrade] Got lock: true 2025-01-28T08:37:24.843Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 5.0 2025-01-28T08:37:24.843Z INFO [mongod_upgrade::commands] In update for 5.0 2025-01-28T08:37:26.825Z INFO [mongod_upgrade::commands] Shutting down the database 2025-01-28T08:37:27.994Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(9), source: None } 2025-01-28T08:37:34.004Z INFO [mongod_upgrade::commands] Checking if mongod is online 2025-01-28T08:38:04.006Z INFO [mongod_upgrade::commands] mongod is offline 2025-01-28T08:38:04.006Z INFO [mongod_upgrade::commands] Shutdown output: Document({}) 2025-01-28T08:38:06.710Z INFO [mongod_upgrade::commands] UPGRADE_TO_5.0_SUCCESSFUL 2025-01-28T08:38:08.710Z INFO [mongod_upgrade::commands] Attempting to update status 2025-01-28T08:38:08.717Z INFO [mongod_upgrade::commands] Status updated successfully 2025-01-28T08:38:08.725Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade 2025-01-28T08:38:08.732Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1 2025-01-28T08:38:08.732Z INFO [mongod_upgrade] All upgraded to 5.0, proceeding. 
2025-01-28T08:38:08.732Z INFO [mongod_upgrade] Setting new FCV Version: 5.0 2025-01-28T08:38:08.748Z INFO [mongod_upgrade] FCV change successful: () 2025-01-28T08:38:23.748Z INFO [mongod_upgrade] Upgrading to 6.0 2025-01-28T08:38:23.751Z INFO [mongod_upgrade] Waiting if primary 2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(19), "optime": Document({"ts": Timestamp { time: 1738053488, increment: 5 }, "t": Int64(4)}), "optimeDate": DateTime(2025-01-28 8:38:08.0 +00:00:00), "lastAppliedWallTime": DateTime(2025-01-28 8:38:08.745 +00:00:00), "lastDurableWallTime": DateTime(2025-01-28 8:38:08.745 +00:00:00), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("Could not find member to sync from"), "electionTime": Timestamp { time: 1738053486, increment: 1 }, "electionDate": DateTime(2025-01-28 8:38:06.0 +00:00:00), "configVersion": Int32(3), "configTerm": Int32(4), "self": Boolean(true), "lastHeartbeatMessage": String("")}) 2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191" 2025-01-28T08:38:23.752Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0 2025-01-28T08:38:23.752Z INFO [mongod_upgrade] Getting lock 2025-01-28T08:38:23.753Z DEBUG [mongod_upgrade::conditions] Upserting lock 2025-01-28T08:38:23.757Z INFO [mongod_upgrade::conditions] locked 2025-01-28T08:38:23.757Z INFO [mongod_upgrade] Got lock: true 2025-01-28T08:38:23.758Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 6.0 2025-01-28T08:38:23.758Z INFO [mongod_upgrade::commands] In update for 6.0 2025-01-28T08:38:25.544Z INFO [mongod_upgrade::commands] Shutting down the database 2025-01-28T08:38:25.845Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(13), source: None } 2025-01-28T08:38:31.854Z INFO [mongod_upgrade::commands] Checking if mongod is online 2025-01-28T08:39:01.856Z INFO [mongod_upgrade::commands] mongod is offline 2025-01-28T08:39:01.856Z INFO [mongod_upgrade::commands] Shutdown output: Document({}) 2025-01-28T08:39:03.281Z INFO [mongod_upgrade::commands] UPGRADE_TO_6.0_SUCCESSFUL 2025-01-28T08:39:05.281Z INFO [mongod_upgrade::commands] Attempting to update status 2025-01-28T08:39:05.285Z INFO [mongod_upgrade::commands] Status updated successfully 2025-01-28T08:39:05.297Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade 2025-01-28T08:39:05.299Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1 2025-01-28T08:39:05.299Z INFO [mongod_upgrade] All upgraded to 6.0, proceeding. 
2025-01-28T08:39:05.299Z INFO [mongod_upgrade] Setting new FCV Version: 6.0 2025-01-28T08:39:05.460Z INFO [mongod_upgrade] FCV change successful: () 2025-01-28T08:39:20.460Z INFO [mongod_upgrade] Upgrading to 7.0 2025-01-28T08:39:20.462Z INFO [mongod_upgrade] Waiting if primary 2025-01-28T08:39:20.464Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(19), "optime": Document({"ts": Timestamp { time: 1738053545, increment: 10 }, "t": Int64(5)}), "optimeDate": DateTime(2025-01-28 8:39:05.0 +00:00:00), "lastAppliedWallTime": DateTime(2025-01-28 8:39:05.451 +00:00:00), "lastDurableWallTime": DateTime(2025-01-28 8:39:05.451 +00:00:00), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("Could not find member to sync from"), "electionTime": Timestamp { time: 1738053543, increment: 1 }, "electionDate": DateTime(2025-01-28 8:39:03.0 +00:00:00), "configVersion": Int32(3), "configTerm": Int32(5), "self": Boolean(true), "lastHeartbeatMessage": String("")}) 2025-01-28T08:39:20.464Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-28T08:39:20.464Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191" 2025-01-28T08:39:20.465Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0 2025-01-28T08:39:20.465Z INFO [mongod_upgrade] Getting lock 2025-01-28T08:39:20.466Z DEBUG [mongod_upgrade::conditions] Upserting lock 2025-01-28T08:39:20.469Z INFO [mongod_upgrade::conditions] locked 2025-01-28T08:39:20.469Z INFO [mongod_upgrade] Got lock: true 2025-01-28T08:39:20.470Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 7.0 2025-01-28T08:39:20.470Z INFO [mongod_upgrade::commands] In update for 7.0 2025-01-28T08:39:21.724Z INFO [mongod_upgrade::commands] Shutting down the database 2025-01-28T08:39:22.519Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(17), source: None } 2025-01-28T08:39:28.529Z INFO [mongod_upgrade::commands] Checking if mongod is online 2025-01-28T08:39:58.531Z INFO [mongod_upgrade::commands] mongod is offline 2025-01-28T08:39:58.531Z INFO [mongod_upgrade::commands] Shutdown output: Document({}) 2025-01-28T08:40:00.234Z INFO [mongod_upgrade::commands] UPGRADE_TO_7.0_SUCCESSFUL 2025-01-28T08:40:02.234Z INFO [mongod_upgrade::commands] Attempting to update status 2025-01-28T08:40:02.240Z INFO [mongod_upgrade::commands] Status updated successfully 2025-01-28T08:40:02.253Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade 2025-01-28T08:40:02.258Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1 2025-01-28T08:40:02.258Z INFO [mongod_upgrade] All upgraded to 7.0, proceeding. 2025-01-28T08:40:02.259Z INFO [mongod_upgrade] Setting new FCV Version: 7.0 2025-01-28T08:40:02.281Z INFO [mongod_upgrade] FCV change successful: () 2025-01-28T08:40:17.281Z INFO [mongod_upgrade] Upgrades completed 2025-01-28T08:40:17.287Z INFO [mongod_upgrade] Waiting for completion 2025-01-28T08:40:17.289Z DEBUG [mongod_upgrade::conditions] Completed count: 1 2025-01-28T08:40:17.289Z DEBUG [mongod_upgrade::conditions] Hostname count: 1 2025-01-28T08:40:17.289Z INFO [mongod_upgrade] All completed 2025-01-28T08:40:17.394Z INFO [mongod_upgrade] Dropped migration metadata database. 
2025-01-28T08:40:17.403Z INFO [mongod_upgrade::commands] Shutting down the database
2025-01-28T08:40:18.363Z WARN [mongod_upgrade] mongod failed to shut down before exiting: Kind: I/O error: unexpected end of file, labels: {}

And below, the correct mongodb version:

splunk show kvstore-status --verbose | grep -i serverversion
serverVersion : 7.0.14

Since ours was a test server, and apart from the default Splunk server we don't use anything that relies on the KV Store, we didn't restore the backed-up KV Store. Splunk backs it up automatically when upgrading, but it is recommended to take a backup manually as well, so you can restore the KV Store in case something goes wrong. Furthermore, it is never recommended to upgrade Splunk to a .0 version; wait at least for a later maintenance release (e.g. 9.4.3), because it fixes many bugs that are often present in the .0 release.
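Summarized as commands, the recovery sequence we used on 9.3.2 looked like this (note that clean drops the local KV store data, so only run it with a backup in hand):

$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clean kvstore --local
$SPLUNK_HOME/bin/splunk start
# after the 9.4.0 upgrade completes, verify the engine version:
$SPLUNK_HOME/bin/splunk show kvstore-status --verbose | grep -i serverversion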
Hi @gcusello , thanks for the quick response. Can you guide me to any official documentation that explains the ES migration? I assume we have to create a custom app for search/ES, move all the configs related to the app, and then, once ES and the cluster are built, copy the configs. Am I on the right track?
Given that a transaction_id would either not exist if a user never calls service 1, or it doesn't matter to your problem, service1_status is superfluous. Is this correct? I also do not see the service URL as part of the required output. As such, @gcusello's solution can be further simplified. More than that, I'm not sure whether service is an existing field. On the other hand, the two services are probably logged in different sources, different sourcetypes, or both. I will assume that service 1 logs to source1 and service 2 logs to source2.

source IN (source1, source2)
| stats dc(source) AS service_count BY transaction_id
| eval status1 = "yes", status2=if(service_count > 1,"yes","no")
| table transaction_id status1 status2
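If you later need per-service flags without assuming that every transaction_id implies a service 1 call, a variant on the same idea (a sketch under the same source-name assumptions; mvfind tests multivalue membership by regex):

source IN (source1, source2)
| stats values(source) AS sources BY transaction_id
| eval status1=if(isnotnull(mvfind(sources, "^source1$")), "yes", "no")
| eval status2=if(isnotnull(mvfind(sources, "^source2$")), "yes", "no")
| table transaction_id status1 status2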
I would like to be able to keep the top 5 peaks per day of the last x days.

Be careful. I suspect that you really mean to keep the top 5 of the peak-per-day values of the last x days (based on your use of dedup Day). Something like

_time                 MaxMIPSParMinute
2025-01-15 00:27:00   2583
2025-01-07 23:08:00   2129
2025-01-25 22:15:00   2069
2025-01-22 13:58:00   1222
2025-01-18 08:35:00   990

Is this correct? The basic solution is the same as @gcusello suggested; just add by Day Hour to eventstats.

index=myindex
| bin span=1m _time
| stats sum(MIPS) as MIPSParMinute by _time
| eval Hour = strftime(_time, "%H"), Day = strftime(_time, "%F")
| eventstats max(MIPSParMinute) as MaxMIPSParMinute by Day Hour
| where MIPSParMinute == MaxMIPSParMinute
| sort - MaxMIPSParMinute Day
| dedup Day
| head 5

I will leave formatting to you. Here is an emulation you can play with and compare with real data:

index=_internal earliest=-25d@d latest=-0d@d
| bin span=1m _time
| stats count as MIPSParMinute by _time
``` the above emulates index=myindex | bin span=1m _time | stats sum(MIPS) as MIPSParMinute by _time ```
Please share the event for which this is not working
Hi @krishna63032 , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @kzjbry1 , good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @krishna63032 , where did you put the apps to deploy? It seems that you placed them in two folders. They must be located only in manager-apps and not in master-apps; the latter location is deprecated and no longer present in the latest versions. Ciao. Giuseppe
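For reference, the expected layout and push on a recent version looks like this (a sketch; the app name is a placeholder):

# on the Cluster Manager
$SPLUNK_HOME/etc/manager-apps/<your_app>/
# validate, then push the configuration bundle to the peers
$SPLUNK_HOME/bin/splunk validate cluster-bundle
$SPLUNK_HOME/bin/splunk apply cluster-bundle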