All Posts



So basically I need the total number of files I uploaded in a 24 hour period, once I get that figure extracted.
So I have an index search: index=xxxxxx "Stopping iteration". I have a rex for getting the unique id. Event sample: Stopping iteration - 1900000000: 2000 Files accepted. My current rex is rex "Stopping\siteration[\s\-]+(?<stop_reg_id>[^:\s]+)" and it extracts the 1900000000. I want to also extract the 2000 and then do a count over 24 hours. Any help would be great.
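A quick way to check an extended capture outside Splunk is plain Python re. This is only a sketch assuming the event text is exactly as in the sample above; the second named group (`files_accepted`) is my own addition, not from the original rex:

```python
import re

# Sample event from the post
event = "Stopping iteration - 1900000000: 2000 Files accepted"

# The posted rex, extended with a second capture group for the file count
pattern = (r"Stopping\siteration[\s\-]+(?P<stop_reg_id>[^:\s]+):"
           r"\s+(?P<files_accepted>\d+)\s+Files accepted")

m = re.search(pattern, event)
print(m.group("stop_reg_id"))     # 1900000000
print(m.group("files_accepted"))  # 2000
```

In SPL the same extension would look something like rex "Stopping\siteration[\s\-]+(?<stop_reg_id>[^:\s]+):\s+(?<files_accepted>\d+)" followed by something like | timechart span=1d sum(files_accepted) — untested, adapt to your data.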
On HEC - I tried the following: moving the TIME definitions under the source (for all 3 sources) in props.conf and removing them from the sourcetype. Restarted Splunk, but it still did not work.

[source::http:aws-lblogs]
EXTRACT-elb = ^\s*(?P<type>\S+)(\s+(?P<timestamp>\S+))(\s+(?P<elb>\S+))(\s+(?P<client_ip>[\d.]+):(?P<client_port>\d+))(\s+(?P<target>\S+))(\s+(?P<request_processing_time>\S+))(\s+(?P<target_processing_time>\S+))(\s+(?P<response_processing_time>\S+))(\s+(?P<elb_status_code>\S+))(\s+(?P<target_status_code>\S+))(\s+(?P<received_bytes>\d+))(\s+(?P<sent_bytes>\d+))(\s+"(?P<request>[^"]+)")(\s+"(?P<user_agent>[^"]+)")(\s+(?P<ssl_cipher>\S+))(\s+(?P<ssl_protocol>\S+))(\s+(?P<target_group_arn>\S+))(\s+"(?P<trace_id>[^"]+)")(\s+"(?P<domain_name>[^"]+)")?(\s+"(?P<chosen_cert_arn>[^"]+)")?(\s+(?P<matched_rule_priority>\S+))?(\s+(?P<request_creation_time>\S+))?(\s+"(?P<actions_executed>[^"]+)")?(\s+"(?P<redirect_url>[^"]+)")?(\s+"(?P<error_reason>[^"]+)")?
EVAL-rtt = request_processing_time + target_processing_time + response_processing_time
priority = 1
SHOULD_LINEMERGE = false
TIME_PREFIX = ^.*?(?=20\d\d-\d\d)
TIME_FORMAT =
MAX_TIMESTAMP_LOOKAHEAD = 28

[aws:elb:accesslogs]
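A long EXTRACT regex like the one above is easiest to debug piece by piece outside Splunk. Here is a minimal Python sketch that exercises just the first five named groups of the posted regex, unchanged, against a made-up ALB-style line (the sample line is synthetic, not from the post):

```python
import re

# First five groups of the posted EXTRACT-elb regex, copied verbatim
prefix = (r"^\s*(?P<type>\S+)(\s+(?P<timestamp>\S+))(\s+(?P<elb>\S+))"
          r"(\s+(?P<client_ip>[\d.]+):(?P<client_port>\d+))")

# Synthetic log line for illustration only
line = "https 2025-01-27T18:45:51.546Z app/my-elb/50dc6c 10.0.0.1:54321 ..."

m = re.match(prefix, line)
print(m.group("type"), m.group("client_ip"), m.group("client_port"))
```

If the prefix matches but the full regex does not, keep appending one group at a time until the match breaks; that pinpoints which field in the real events differs from the pattern.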
Hi, this: https://docs.splunk.com/Documentation/Splunk/9.4.0/DashStudio/trellisLayout. Specifically like this sample, with each month's cost and a trend line comparing to the previous month.
I don't like that this add-on uses INDEXED_EXTRACTIONS by default, with seemingly no easy way to switch away from them given how the scripted input works... Hopefully this will be improved now that Cisco owns Splunk...
Hi @Uma.Boppana, Thanks for asking your question on Community. Did you happen to find any new information on your question or even a solution you can share here? If you're still looking for help, you can contact AppDynamics Support: How do I open a case with AppDynamics Support? 
"Trellis"? Where did that come from? Please clarify what you are actually trying to do?
Keep getting "Select a valid trellis split by field"; I used "_time" and tried "monthly_cost".
| chart count by transaction_id, url — where the count for the url is greater than zero, the service has been called.
Team, I have a situation where a user calls service1 and then service1 calls service2 using the same transaction_id. Sometimes the user calls service1 but it does not call service2, and vice versa. I need a query that shows the result in table format, with yes/No indicating whether service 1/2 was called or not:

transaction_id, service1_status, service2_status
1234, yes, yes
5678, yes, No

Ex:
log of service 1: <timestamp> <transaction_id> <service1 URL>
log of service 2: <timestamp> <transaction_id> <service2 URL>
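The yes/No pivot being asked for can be sketched in a few lines of plain Python, assuming each event has already been parsed into a (transaction_id, service) pair as in the example logs above; the sample data mirrors the 1234/5678 rows from the question:

```python
from collections import defaultdict

# (transaction_id, service) pairs as parsed from the two logs (illustrative)
events = [
    ("1234", "service1"), ("1234", "service2"),
    ("5678", "service1"),
]

# Collect which services were seen per transaction
seen = defaultdict(set)
for txn, svc in events:
    seen[txn].add(svc)

for txn in sorted(seen):
    s1 = "yes" if "service1" in seen[txn] else "No"
    s2 = "yes" if "service2" in seen[txn] else "No"
    print(txn, s1, s2)
```

In SPL the same idea would be roughly | stats values(service) as services by transaction_id followed by eval/if to turn membership into yes/No — a sketch, not a tested query.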
{"input":{"type":"container"},"message":"2025-01-27 18:45:51.546GMT+0000 gm [com.bootserver.runtim] DEBUG Stability run result : com.bootserver.runtime.internal.api.RStability@6373","kubernetes":{"namespace":"mg-prd","labels":{"service_app_my_com/gm-mongodb":"client","app_kubernetes_io/name":"gm-core","app_kubernetes_io/instance":"mg-prd-release-gm-core","service_app_my_com/gm-external-https":"server","service_app_my_com/gm-internal-https":"both","pod-template-hash":"5889b666","app_my_com/chart":"gm-core-0.1.1","app_kubernetes_io/part-of":"my-JKT","app_my_com/service":"mela","app_kubernetes_io/managed-by":"mela","app_kubernetes_io/component":"my-race","app_kubernetes_io/version":"2.0.002333","app_my_com/release":"mg-prod-release","service_app_my_com/fm-internal-bridge":"client","app_my_com/name":"gm-core"},"container":{"name":"gm-core"}},"host":{"name":"mg-prd"},"@timestamp":"2025-01-:25:31","environment":"pr","event":{"original":"2025-01-27 18:25:31.426GMT+0000 FM [com.my.bootserver.runtim] DEBUG Stability run result : com.my.bootserver.runtime.internal.api.RStability@6373"}}
| bin span=1mon _time | chart sum(cost) as monthly_cost over _time
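What `bin span=1mon` does — snapping each timestamp to the start of its month before summing — can be mimicked for sanity-checking with a short Python sketch (the rows and cost values are made up for illustration):

```python
from collections import defaultdict
from datetime import datetime

# (timestamp, cost) rows; values are illustrative only
rows = [(datetime(2025, 1, 5), 10.0), (datetime(2025, 1, 20), 5.0),
        (datetime(2025, 2, 3), 7.5)]

monthly_cost = defaultdict(float)
for ts, cost in rows:
    # bin span=1mon: snap to the first of the month
    bucket = ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    monthly_cost[bucket] += cost  # chart sum(cost) over _time

for bucket in sorted(monthly_cost):
    print(bucket.strftime("%Y-%m"), monthly_cost[bucket])
```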
Same issue here on a VM with Ubuntu 22.04 LTS: after upgrading from Splunk 9.3.2 to Splunk 9.4.0, mongodb fails to start. The "avx" and "avx2" vCPU flags are OK:

$ lscpu | grep -i avx
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap xsaveopt arat md_clear flush_l1d arch_capabilities

Below are the latest mongod_upgrade.log lines:

2025-01-27T08:38:58.162Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-27T08:38:58.162Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191" 2025-01-27T08:38:58.163Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0 2025-01-27T08:38:58.163Z INFO [mongod_upgrade] Getting lock 2025-01-27T08:38:58.163Z DEBUG [mongod_upgrade::conditions] Upserting lock 2025-01-27T08:38:58.164Z INFO [mongod_upgrade::conditions] locked 2025-01-27T08:38:58.164Z INFO [mongod_upgrade] Got lock: true 2025-01-27T08:38:58.164Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 4.4 2025-01-27T08:38:58.164Z INFO [mongod_upgrade::commands] In update for 4.4 2025-01-27T08:38:59.046Z INFO [mongod_upgrade::commands] Shutting down the database 2025-01-27T08:38:59.822Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(8), source: None } 2025-01-27T08:39:05.832Z INFO [mongod_upgrade::commands] Checking if mongod is online 2025-01-27T08:39:35.834Z INFO [mongod_upgrade::commands] mongod is offline 2025-01-27T08:39:35.834Z INFO [mongod_upgrade::commands] Shutdown output: Document({})
2025-01-27T08:39:37.263Z INFO [mongod_upgrade::commands] UPGRADE_TO_4.4_SUCCESSFUL 2025-01-27T08:39:39.263Z INFO [mongod_upgrade::commands] Attempting to update status 2025-01-27T08:39:39.265Z INFO [mongod_upgrade::commands] Status updated successfully 2025-01-27T08:39:39.271Z INFO [mongod_upgrade] Waiting for other nodes in replica set to upgrade 2025-01-27T08:39:39.272Z DEBUG [mongod_upgrade::conditions] Upgraded count: 1 2025-01-27T08:39:39.272Z INFO [mongod_upgrade] All upgraded to 4.4, proceeding. 2025-01-27T08:39:39.272Z INFO [mongod_upgrade] Setting new FCV Version: 4.4 2025-01-27T08:39:39.284Z INFO [mongod_upgrade] FCV change successful: () 2025-01-27T08:39:54.284Z INFO [mongod_upgrade] Upgrading to 5.0 2025-01-27T08:39:54.286Z INFO [mongod_upgrade] Waiting if primary 2025-01-27T08:39:54.287Z DEBUG [mongod_upgrade::conditions] Replication status doc for node: Document({"_id": Int32(0), "name": String("127.0.0.1:8191"), "health": Double(1.0), "state": Int32(1), "stateStr": String("PRIMARY"), "uptime": Int32(19), "optime": Document({"ts": Timestamp { time: 1737967179, increment: 4 }, "t": Int64(58)}), "optimeDate": DateTime(2025-01-27 8:39:39.0 +00:00:00), "lastAppliedWallTime": DateTime(2025-01-27 8:39:39.278 +00:00:00), "lastDurableWallTime": DateTime(2025-01-27 8:39:39.278 +00:00:00), "syncSourceHost": String(""), "syncSourceId": Int32(-1), "infoMessage": String("Could not find member to sync from"), "electionTime": Timestamp { time: 1737967177, increment: 1 }, "electionDate": DateTime(2025-01-27 8:39:37.0 +00:00:00), "configVersion": Int32(2), "configTerm": Int32(58), "self": Boolean(true), "lastHeartbeatMessage": String("")}) 2025-01-27T08:39:54.287Z DEBUG [mongod_upgrade::conditions] Hostname from replSetGetStatus: "127.0.0.1:8191" 2025-01-27T08:39:54.287Z DEBUG [mongod_upgrade::conditions] Hostname from preflight: "127.0.0.1:8191" 2025-01-27T08:39:54.287Z DEBUG [mongod_upgrade::conditions] Upgraded count: 0 2025-01-27T08:39:54.287Z INFO 
[mongod_upgrade] Getting lock 2025-01-27T08:39:54.288Z DEBUG [mongod_upgrade::conditions] Upserting lock 2025-01-27T08:39:54.288Z INFO [mongod_upgrade::conditions] locked 2025-01-27T08:39:54.288Z INFO [mongod_upgrade] Got lock: true 2025-01-27T08:39:54.289Z INFO [mongod_upgrade::commands] Updating 127.0.0.1:8191 to 5.0 2025-01-27T08:39:54.289Z INFO [mongod_upgrade::commands] In update for 5.0 2025-01-27T08:39:54.555Z INFO [mongod_upgrade::commands] Shutting down the database 2025-01-27T08:39:55.409Z WARN [mongod_upgrade::commands] Attempting with force:true due to shutdown failure: Error { kind: Io(Kind(UnexpectedEof)), labels: {}, wire_version: Some(9), source: None }  and below the mongod.log error after a splunk start/restart 2025-01-27T15:56:52.142Z I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine 2025-01-27T15:56:52.142Z I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem 2025-01-27T15:56:52.142Z I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1075M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress], 2025-01-27T15:56:53.003Z E STORAGE [initandlisten] WiredTiger error (-31802) [1737993413:3955][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1737993413:3955][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error 2025-01-27T15:56:53.027Z E 
STORAGE [initandlisten] WiredTiger error (-31802) [1737993413:27009][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1737993413:27009][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error 2025-01-27T15:56:53.049Z E STORAGE [initandlisten] WiredTiger error (-31802) [1737993413:49077][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1737993413:49077][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error 2025-01-27T15:56:53.080Z E STORAGE [initandlisten] WiredTiger error (-31802) [1737993413:80951][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1737993413:80951][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error 2025-01-27T15:56:53.096Z E STORAGE [initandlisten] WiredTiger error (-31802) [1737993413:96579][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1737993413:96579][338955:0x7f36dba58b40], connection: __log_open_verify, 925: unsupported WiredTiger file version: 
this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error 2025-01-27T15:56:53.103Z W STORAGE [initandlisten] Failed to start up WiredTiger under any compatibility version. 2025-01-27T15:56:53.103Z F STORAGE [initandlisten] Reason: -31802: WT_ERROR: non-specific WiredTiger error 2025-01-27T15:56:53.103Z F - [initandlisten] Fatal Assertion 28595 at src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 928 2025-01-27T15:56:53.103Z F - [initandlisten] \n\n***aborting after fassert() failure\n\n

It seems the mongodb upgrade got stuck at 5.0 after the Splunk 9.4.0 upgrade. Any suggestion on how to resolve the issue? Regards.
You could set in the alert e.g. ..... earliest=-1h@m-5m latest=@m-5m and run this alert once an hour. Just run it as a separate job, update those earliest + latest, and use e.g. a 6h span and run it 4 times per day. Of course this depends on your alerts and needs.
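The `-1h@m-5m` / `@m-5m` modifiers snap to the start of the minute and then shift. A rough Python equivalent of that arithmetic (purely an illustration of the snapping logic, not Splunk's actual time parser):

```python
from datetime import datetime, timedelta

now = datetime(2025, 1, 27, 12, 34, 56)

# @m snaps to the start of the current minute
snap_m = now.replace(second=0, microsecond=0)

# -1h@m-5m: go back 1h, snap to the minute, then go back 5m more
# (shifting whole hours before or after the minute snap gives the same result)
earliest = snap_m - timedelta(hours=1, minutes=5)
latest = snap_m - timedelta(minutes=5)   # @m-5m

print(earliest, latest)
```

So the alert always searches a fixed one-hour window ending five minutes before the current minute, which leaves headroom for late-arriving events.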
Hi gcusello! That worked just the way I wanted to. Thanks for the support. Steve
Hi @gcusello, Thank you, this is a start. Indeed, I get the time, but only 1 value is displayed. I would like to keep the top 5 peaks per day over the last x days. Thanks!
@isoutamo This worked perfectly!  Thank you for your input.  Seems the `source` monitor stanza was the way to go.  Here is my final configuration for future Splunkers that want to accomplish the same.

[source::.../var/log/splunk/splunkd*]
SEDCMD-url = s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g
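A SEDCMD like the one above is an ordinary sed-style s/// substitution, so the same expression can be exercised in Python before deploying it (the log line and URL here are made-up samples, not from the post):

```python
import re

# Synthetic splunkd-style line containing a URL to be masked
line = "GET https://www.domain.com/user/12345/profile HTTP/1.1"

# Same substitution as the SEDCMD-url stanza above
masked = re.sub(r"https?://www\.domain\.com/(.*)",
                "https://www.domain.com/XXXX-XXXX-XXXX", line)
print(masked)
```

Note that the greedy `(.*)` runs to the end of the line, so anything after the URL (here, "HTTP/1.1") is masked too; that matches the posted SEDCMD's behavior and is fine when the URL is the last field on the line.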
Hi @Splunked_Kid, you could try something like this:

index=myindex
| bin span=1m _time
| stats sum(MIPS) as MIPSParMinute by _time
| eventstats max(MIPSParMinute) AS max_MIPS
| where MIPSParMinute=max_MIPS
| eval Day=strftime(_time,"%Y/%m/%d")
| eval Hour=strftime(_time,"%H:%M")
| table Day Hour MIPSParMinute

Ciao. Giuseppe
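The stats/eventstats pattern in that answer — sum per minute, then keep only the minute(s) whose total equals the overall max — can be sketched in plain Python; the sample data is synthetic and only illustrates the mechanics:

```python
from collections import defaultdict
from datetime import datetime

# (timestamp, MIPS) samples; values are illustrative only
samples = [(datetime(2025, 1, 27, 10, 0, 5), 3.0),
           (datetime(2025, 1, 27, 10, 0, 40), 4.0),
           (datetime(2025, 1, 27, 10, 1, 10), 2.0)]

# bin span=1m + stats sum(MIPS) by _time
per_minute = defaultdict(float)
for ts, mips in samples:
    per_minute[ts.replace(second=0, microsecond=0)] += mips

# eventstats max(...) then the where clause
peak = max(per_minute.values())
for minute, total in sorted(per_minute.items()):
    if total == peak:
        print(minute.strftime("%Y/%m/%d %H:%M"), total)
```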