All Topics


Hey guys, so I was wondering if anyone had any idea how to optimize this query to minimize the sub searches.  My brain hurts just looking at it honestly, for all the SPL Pros please lend a hand if possible.    index=efg* * | search EVENT_TYPE=FG_EVENTATTR AND ((NAME=ConsumerName AND VALUE=OneStream) OR NAME=ProducerFilename OR NAME=OneStreamSubmissionID OR NAME=ConsumerFileSize OR NAME=RouteID) | search | where trim(VALUE)!="" | eval keyValuePair=mvzip(NAME,VALUE,"=") | eval efgTime=min(MODIFYTS) ```We need to convert EDT/EST timestamps to UTC time.``` | eval EST_time=strptime(efgTime,"%Y-%m-%d %H:%M:%S.%N") ```IMPORTANT STEP: During EDT you add 14400 to convert to UTC; during EST you add 18000. (We need to automate this step in the code.)``` | eval tempTime = EST_time | eval UTC_time=strftime(tempTime, "%Y-%m-%d %H:%M:%S.%1N") | stats values(*) as * by ARRIVEDFILE_KEY | eval temptime3=min(UTC_time) | eval keyValuePair=mvappend("EFG_Delivery_Time=".temptime3, keyValuePair) | eval keyValuePair=mvsort(keyValuePair) ```Let's extract our values now.``` | eval tempStr_1 = mvfilter(LIKE(keyValuePair, "%ConsumerFileSize=%")) | eval tempStr_2 = mvfilter(LIKE(keyValuePair, "%EFG_Delivery_Time=%")) | eval tempStr_3 = mvfilter(LIKE(keyValuePair, "%OneStreamSubmissionID=%")) | eval tempStr_4 = mvfilter(LIKE(keyValuePair, "%ProducerFilename=%")) | eval tempStr_5 = mvfilter(LIKE(keyValuePair, "%RouteID=%")) ```Now, let's assign the values to the right field name.``` | eval "File Size"=ltrim(tempStr_1,"ConsumerFileSize=") | eval "EFG Delivery Time"=ltrim(tempStr_2,"EFG_Delivery_Time=") | eval "Submission ID"=substr(tempStr_3, -38) | eval "Source File Name"=ltrim(tempStr_4,"ProducerFilename=") | eval "Route ID"=ltrim(tempStr_5,"RouteID=") ```Bring it all together! 
(Join EFG data to the data in the OS lookup table.``` | search keyValuePair="*OneStreamSubmissionID*" | rename "Submission ID" as Submission_ID | rename "Source File Name" as Source_File_Name | join type=left max=0 Source_File_Name [ search index=asvsdp* source=Watcher_Delivery_Status sourcetype=c1_json event_code=SINK_DELIVERY_COMPLETION (sink_name=onelake-delta-table-sink OR sink_name=onelake-table-sink OR onelake-direct-sink) | eval test0=session_id | eval test1=substr(test0, 6) | eval o=len(test1) | eval Quick_Check=substr(test1, o-33, o) | eval p=if(like(Quick_Check, "%-%"), 35, 33) | eval File_Name_From_Session_ID=substr(test1, 1, o-p) | rename File_Name_From_Session_ID as Source_File_Name ```| lookup DFS-EFG-SDP-lookup_table_03.csv local=true Source_File_Name AS Source_File_Name OUTPUT Submission_ID, OS_time, BAP, Status``` | join type=left max=0 Source_File_Name [ search index=asvexternalfilegateway_summary * | table Source_File_Name, Submission_ID, Processed_time, OS_time, BAP, Status ] | table event_code, event_timestamp, session_id, sink_name, _time, Source_File_Name, Submission_ID, OS_time, BAP, Status | search "Source_File_Name" IN (*OS.AIS.COF.DataOne.PROD*, *fgmulti_985440_GHR.COF.PROD.USPS.CARD*, *COF-DFS*) ] ```| lookup DFS-EFG-SDP-lookup_table_03.csv Submission_ID AS Submission_ID OUTPUT Processed_time, OS_time, BAP, Status``` | join type=left max=0 Submission_ID [ search index=asvexternalfilegateway_summary * | table Submission_ID, Processed_time, OS_time, BAP, Status ] | eval "Delivery Status"=if(event_code="SINK_DELIVERY_COMPLETION","DELIVERED","FAILED") | eval BAP = upper(BAP) ```| rename Processed_time as "OL Delivery Time" | eval "OL Delivery Time"=if('Delivery Status'="FAILED","Failed at OneStream",'OL Delivery Time')``` | rename OS_time as "OS Delivery Time" ```Display consolidated data in tabular format.``` | eval "OL Delivery Time"=strftime(event_timestamp/1000, "%Y-%m-%d %H:%M:%S.%3N") ``` Convert OS timestamp from UTC EST/EDT ``` | eval OS_TC='OS Delivery Time' | eval OS_UTC_time=strptime(OS_TC,"%Y-%m-%d %H:%M:%S.%3N") ```IMPORTANT STEP: During EDT you add 14400 to convert to UTC; during EST you add 18000. (We need to automate this step in the code.)``` | eval tempTime_2 = OS_UTC_time - 18000 ```| eval tempTime = EST_time``` | eval "OS Delivery Time"=strftime(tempTime_2, "%Y-%m-%d %H:%M:%S.%3N") ``` Convert OL timestamp from UTC EST/EDT ``` | eval OL_UTC_time=strptime('OL Delivery Time',"%Y-%m-%d %H:%M:%S.%3N") ```IMPORTANT STEP: During EDT you add 14400 to convert to UTC; during EST you add 18000. 
(We need to automate this step in the code.)``` | eval tempTime_3 = OL_UTC_time - 18000 ```| eval tempTime = EST_time``` | eval "OL Delivery Time"=strftime(tempTime_3, "%Y-%m-%d %H:%M:%S.%3N") | rename Source_File_Name as "Source File Name" | rename Submission_ID as "Submission ID" | fields BAP "Route ID" "Source File Name" "File Size" "EFG Delivery Time" "OS Delivery Time" "OL Delivery Time" "Delivery Status" "Submission ID" ``` | search Source_File_Name IN (*COF-DFS*)``` | append [ search index=efg* source=efg_prod_summary sourcetype=stash STATUS_MESSAGE=Failed ConsumerName=OneStream | eval BAP=upper("badiscoverdatasupport") | eval "Delivery Status"="FAILED", "Submission ID"="--" | rename RouteID as "Route ID", SourceFilename as "Source File Name", FILE_SIZE as "File Size", ArrivalTime as "EFG Delivery Time" | table BAP "Route ID" "Source File Name" "File Size" "EFG Delivery Time" "OS Delivery Time" "OL Delivery Time" "Delivery Status" "Submission ID" | search "Source File Name" IN (*OS.AIS.COF.DataOne.PROD*, *fgmulti_985440_GHR.COF.PROD.USPS.CARD*, *COF-DFS*) ] | sort -"EFG Delivery Time" | search "Source File Name" IN (*OS.AIS.COF.DataOne.PROD*, *fgmulti_985440_GHR.COF.PROD.USPS.CARD*, *COF-DFS*) | dedup "Submission ID"
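One hedged way to stop hand-editing the 14400/18000 offset called out in the ```IMPORTANT STEP``` comments is to derive the shift from the event's own date. A minimal sketch, reusing the EST_time field from the query above; it only approximates the EDT window by month, so the March/November changeover weeks would still need the exact second-Sunday/first-Sunday rule:

| eval event_month = tonumber(strftime(EST_time, "%m"))
``` rough DST window: April through October treated as EDT (UTC-4), everything else as EST (UTC-5) ```
| eval tz_shift = if(event_month > 3 AND event_month < 11, 14400, 18000)
| eval UTC_time = strftime(EST_time + tz_shift, "%Y-%m-%d %H:%M:%S.%3N")

If the search head's own timezone happens to be America/New_York, strftime(EST_time, "%z") could be used to read the live offset instead, but that is an assumption about the server clock rather than about the data.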
Hi all, Do any of you run into issues where bundle replication keeps timing out and splunkd.log recommends increasing the sendRcvTimeout parameter? In a previous ticket with support, they supplied a Golden Configuration that says this value should be around 180. Based on https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/Distsearchconf, under the 'classic' REPLICATION-SPECIFIC SETTINGS:

connectionTimeout = <integer>
* The maximum amount of time to wait, in seconds, before a search head's initial connection to a peer times out.
* Default: 60

sendRcvTimeout = <integer>
* The maximum amount of time to wait, in seconds, when a search head is sending a full replication to a peer.
* Default: 60

Should these two values be adjusted and kept in sync? I am considering adding another 30 seconds to each. Or, if there is something else I should verify first, it would be helpful to get some direction here.
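For concreteness, a minimal sketch of what the change might look like in distsearch.conf on the search head; the [replicationSettings] stanza name is taken from the distsearch.conf spec quoted above, and 180 is simply the value support quoted, not a recommendation:

# $SPLUNK_HOME/etc/system/local/distsearch.conf
[replicationSettings]
connectionTimeout = 180
sendRcvTimeout = 180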
Hi all, I was wondering if there is a way to manually grab the threat intelligence updates for Splunk ES (we are on 7.3.1). Specifically: the intelligence download of "mitre_attack" (a threatlist download). Our Splunk environment is on-prem and air-gapped, so there is not really any way to create an external connection to the internet. Any ideas or advice would be appreciated.
index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound" | chart count by "message.backendCalls{}.responseCode", "message.incomingRequest.lob"

The issue is that the field "message.incomingRequest.lob" should never be empty, yet the result above still shows a NULL column. Any idea, or any instructions for debugging so that we can find the root cause? Let me know if more details are needed.
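A couple of hedged ways to chase this down, using the field names from the search above: chart produces a NULL column whenever an event matches the base search but carries no value for the split-by field, so either inspect those events directly or label them explicitly:

``` find the raw events that have no lob value (these feed the NULL column) ```
index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound"
| where isnull('message.incomingRequest.lob')

``` or label the missing values so the column is no longer an anonymous NULL ```
... | eval lob=coalesce('message.incomingRequest.lob', "missing_lob")
    | chart count by "message.backendCalls{}.responseCode", lob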
Hi, I am using a search:

Mysearch
| eval Guest=if(sid=22,"BOT","Others")
| convert timeformat="%Y-%m-%d" ctime(_time) AS date
| chart count over Guest by date

And the results look like this:

Guest        2024-12-18    2024-12-19
BOT                  10            20
Others               90            80

Now I want to display the percentage of activity by Guest over date, maybe something like this:

Guest        2024-12-18    2024-12-19
BOT             10 (10%)      20 (20%)
Others          90 (90%)      80 (80%)

Could someone possibly help here? Many thanks
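A minimal sketch of one way to get there, building on the chart output above: unpivot it with untable, compute per-date totals, format each cell, then pivot back with xyseries (field names match the search above):

Mysearch
| eval Guest=if(sid=22,"BOT","Others")
| eval date=strftime(_time, "%Y-%m-%d")
| chart count over Guest by date
| untable Guest date count
| eventstats sum(count) as total by date
| eval count=count." (".round(100*count/total,0)."%)"
| fields - total
| xyseries Guest date count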
Hi All, I have designed a Splunk query:

| inputlookup Expiry_details_list.csv
| lookup SupportTeamEmails.csv Application_name OUTPUT Owner_Email_address Ops_Leads_Email_address Escalation_Contacts_Email_address
| eval Expiry_Date = strptime(Expiry_date, "%m/%d/%Y")
| eval Current_Time = now()
| eval Expiry_Date_Timestamp = strftime(Expiry_Date, "%Y/%m/%d %H:%M:%S")
| eval Days_until_expiry = round((Expiry_Date - Current_Time) / 86400, 0)
| eval alert_type = case( Days_until_expiry <= 7, "Owner", Days_until_expiry <= 15, "Support", Days_until_expiry < 1, "Expired", Days_until_expiry > 15, "Others", true(), "None")
| search alert_type != "None"
| eval email_list = case( alert_type == "Owner", Escalation_Contacts_Email_address, alert_type == "Support", Ops_Leads_Email_address, alert_type == "Expired", mvappend(Owner_Email_address, Ops_Leads_Email_address, Escalation_Contacts_Email_address), true(), "None")
| eval email_list = split(mvjoin(email_list, ","), ",")
| eval cc_email_list = case( alert_type == "Owner", Owner_Email_address, alert_type == "Support", Owner_Email_address, true(), "None")
| eval cc_email_list = split(mvjoin(cc_email_list, ","), ",")
| dedup Application_name Environment email_list
| eval email_recipient = mvdedup(email_list)
| eval email_recipient = mvjoin(email_recipient, ",")
| eval email_cc = mvdedup(cc_email_list)
| eval email_cc = mvjoin(email_cc, ",")
| table Application_name, Environment, Type, Sub_Type, Expiry_Date_Timestamp, Days_until_expiry, email_recipient, email_cc
| fields - alert_type, Owner_Email_address, Ops_Leads_Email_address, Escalation_Contacts_Email_address

This is returning output as shown in the attached file. What I expect is that, when alert_type == "Expired", email_list should contain only Escalation_Contacts_Email_address, and cc_email_list should merge the addresses from Owner_Email_address and Ops_Leads_Email_address separated by a comma. How do I get this with a Splunk query?
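A minimal sketch of just the three case() blocks, based on that description: case() returns the first matching branch, so the "Expired" test (Days_until_expiry < 1) has to come before the <= 7 test or it can never fire, and the Expired recipients are then split the way described above:

| eval alert_type = case(
    Days_until_expiry < 1, "Expired",
    Days_until_expiry <= 7, "Owner",
    Days_until_expiry <= 15, "Support",
    Days_until_expiry > 15, "Others",
    true(), "None")
| eval email_list = case(
    alert_type == "Expired", Escalation_Contacts_Email_address,
    alert_type == "Owner", Escalation_Contacts_Email_address,
    alert_type == "Support", Ops_Leads_Email_address,
    true(), "None")
| eval cc_email_list = case(
    alert_type == "Expired", mvappend(Owner_Email_address, Ops_Leads_Email_address),
    alert_type == "Owner", Owner_Email_address,
    alert_type == "Support", Owner_Email_address,
    true(), "None")

The rest of the query (the split/mvjoin/mvdedup steps) can stay as it is.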
We are utilizing Splunk Ingest actions to copy data to an S3 bucket. After reviewing various articles and conducting some tests, I've successfully forwarded data to the S3 bucket, where it's currently being stored with the Sourcetype name. However, there's a requirement to store these logs using the hostname instead of the Sourcetype for improved visibility and operational efficiency. Although there isn't a direct method to accomplish this through the Ingest actions GUI, I believe it can be achieved using props and transforms. Can someone assist me with this?
Hello Splunk Community, After configuring SSL, when I execute the following command:

openssl s_client -showcerts -connect host:port

I am encountering the following error:

803BEC33F07F0000:error:0A000126:SSL routines:ssl3_read_n:unexpected eof while reading:../ssl/record/rec_layer_s3.c:317:

Could anyone help me understand why I am seeing this error and assist me in resolving it? Thank you in advance for your help. Best regards,
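That particular OpenSSL error usually just means the far end closed the connection without ever speaking TLS, so one thing worth confirming is that the port being probed actually has SSL enabled. A minimal sketch of the settings to cross-check, assuming a default layout (splunkd management port governed by server.conf, Splunk Web by web.conf); the values shown are only examples of what to look for:

# server.conf - splunkd management port (8089)
[sslConfig]
enableSplunkdSSL = true

# web.conf - Splunk Web (port 8000 serves plain HTTP unless this is enabled)
[settings]
enableSplunkWebSSL = true

If the target is an HEC or splunktcp-ssl input instead, the equivalent SSL settings live in inputs.conf for that input.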
We have deployed Splunk Enterprise on Huawei Cloud. After conducting baseline checking, we have discovered several risk items targeting MongoDB, namely:

Rule: Use a Secure TLS Version
Rule: Disable Listening on the Unix Socket
Rule: Set the Background Startup Mode
Rule: Disable the HTTP Status Interface
Rule: Configure bind_ip
Rule: Disable Internal Command Test
Rule: Do Not Omit Server Name Verification
Rule: Enable the Log Appending Mode
Rule: Restrict the Permission on the Home Directory of MongoDB
Rule: Restrict the Permission on the Bin Directory of MongoDB
Rule: Check the FIPS Mode Option

I have looked for related documentation but cannot find any. I am wondering if I should create a mongodb.conf for it. Thanks
Hi, I have an app that is used for all the configurations that we have in Splunk Cloud. Quite a lot of users on our instance are admin (for good reasons that I don't want to get into). Because not all of those users are really "developer enthusiasts", they sometimes make configuration changes through the GUI, for example disabling a search in the GUI instead of doing it properly in the app (with the pipeline etc.) when they no longer need it. To try to make this impossible I changed the default.meta to:

[]
access = read : [ * ], write : []
export = system

But this doesn't seem to work and people can still disable savedsearches (and many other things). Is there any way to disable write entirely for all content in the app?
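For what it's worth, a minimal sketch of the usual shape of such a restriction; the role name is a placeholder, and the caveat is that any role carrying the admin_all_objects capability (which the built-in admin role does) can edit objects regardless of these ACLs, so metadata alone may not stop users who are full admins:

# metadata/default.meta (or local.meta) in the app
[]
access = read : [ * ], write : [ app_config_owners ]
export = system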
I tried to use the add-on Simple SNMP Getter, but the response looks like the picture; please advise. This is the input:
Hi folks, I am trying to schedule a dashboard to send an email with its details periodically. I am able to do it, but the output is not very legible: the attachment, in either .pdf or .png format, is blurry when zoomed in even slightly to read things like table data and graph points. The dashboard contains multiple panels such as tables, bar graphs, pie charts and other kinds of graphs. Does increasing the font size of the data help? I don't want to alter the dashboard and make a mess unless it will surely solve the matter. I am not very good at dashboarding and new to Splunk as well. Please advise.
Hey team, I have a requirement to create a Splunk dashboard that reports the # of logins and # of logouts. The inputs for the report should be: a time picker, Customer, and Host Name dropdowns. The data can be identified either from probe data or from Splunk metrics. The output should be shown as a time graph of # of logins and # of logouts, broken down by which host and which customer we are using, and the query should use the dashboard tokens. Can you give me the search query for this requirement? I have tried multiple queries but am not getting the exact data. Can you help me with the query? Thanks.
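A minimal sketch of the kind of search that could sit behind such a panel; the index, sourcetype, and the action/customer field names are placeholders to adapt, and $host_tok$ / $customer_tok$ are assumed to be the dashboard's dropdown tokens:

index=<your_index> sourcetype=<your_sourcetype> host=$host_tok$ customer=$customer_tok$ action IN ("login","logout")
| eval series=customer.":".host
| timechart span=1h count(eval(action="login")) as logins count(eval(action="logout")) as logouts by series

With the by clause, timechart produces one logins/logouts pair per customer:host combination over time, which matches the "time, host, customer" breakdown described above.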
I am sending syslog data to Splunk from Cisco FMC. I am using UCAPL compliance and therefore cannot use eStreamer. The data is being ingested into Splunk and the dashboard is showing some basic events, like connection events, volume file events and malware events. When I try to learn more about these events it doesn't drill down into more info. For example, when I click on the 14 Malware Events and choose "open in search", it just shows the number of events; there is no information regarding the events themselves. When I click on inspect, it shows command.tstats at 13 and command.tstats.execute_output at 1, which doesn't provide further clarity regarding the malware events. When I view the Malware Files dashboard on the FMC, it shows no data for malware threats. So based on the FMC, it seems that the data in the Splunk dashboard is incorrect, or at least interprets malware events differently from the FMC dashboard.
I am posting this to maybe save you a few hours of troubleshooting like the ones I went through. I did a clean install of Splunk 9.4 in a small customer environment with a virtualized all-in-one instance. After the installation there was an error notifying that the KV Store could not start and that the mongo log should be checked. The following error was logged:

ERROR KVStoreConfigurationProvider [4755 KVStoreConfigurationThread] - Could not start mongo instance. Initialization failed.

Mongod.log was completely empty, so there were no clues in the log files about what was wrong and what I could do to make the KV Store operational. Time to start Googling. The solution will be posted in the next post.
I have this search, where I get the duration, and I need to convert it to a whole number of minutes (rounded up). Example (Min:Sec to whole):

00:02  to  1
00:16  to  1
01:53  to  2
09:20  to  10
...etc.

Script:

index="cdr"
| search "Call.TermParty.TrunkGroup.TrunkGroupId"="2811" OR "Call.TermParty.TrunkGroup.TrunkGroupId"="2810" "Call.ConnectTime"=* "Call.DisconnectTime"=*
| lookup Pais Call.RoutingInfo.DestAddr OUTPUT Countrie
| eval Disctime=strftime('Call.DisconnectTime'/1000,"%m/%d/%Y %H:%M:%S %Q")
| eval Conntime=strftime('Call.ConnectTime'/1000, "%m/%d/%Y %H:%M:%S%Q")
| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')
| eval Duracion=strftime(diffTime/1000, "%M:%S")
| table Countrie, Duracion

Current output:

Spain      00:02
Spain      00:16
Argentina  00:53
Spain      09:20
Spain      02:54
Spain      28:30
Spain      01:18
Spain      00:28
Spain      16:40
Spain      00:03
Chile      00:25
Uruguay    01:54
Spain      01:54

Regards.
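Given the examples above (00:02 -> 1, 01:53 -> 2, 09:20 -> 10), this looks like the duration in minutes rounded up, which can be computed directly from the millisecond difference rather than from the formatted string. A minimal sketch, reusing the fields from the search above:

| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')
| eval Duracion_min=ceiling((diffTime/1000)/60)
| table Countrie, Duracion_min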
Dear experts, based on the following search:

<search id="subsearch_results">
  <query>
    search index="iii" search_name="nnn" Umgebung="uuu" isbName="isb" status IN ("ALREADY*", "NO_NOTIF*", "UNCONF*", "NOTIF*") zbpIdentifier NOT 453-8888 stoerCodeGruppe NOT ("GUT*")
    | eval importZeit_unixF = strptime(importZeit, "%Y-%m-%dT%H:%M:%S.%N%Z")
    | eval importZeit_humanF = strftime(importZeit_unixF, "%Y-%m-%d %H:%M:%S")
    | table importZeit_humanF importZeit_unixF zbpIdentifier status stoerCode stoerCodeGruppe
  </query>
  <earliest>$t_time.earliest$</earliest>
  <latest>$t_time.latest$@d</latest>
  <done>
    <condition>
      <set token="stoermeldungen_sid">$job.sid$</set>
    </condition>
  </done>
</search>

I try to load some data with:

<query>
  | loadjob $stoermeldungen_sid$
  | where stoerCode IN ("S00")
  | where [ | loadjob $stoermeldungen_sid$
            | where stoerCode IN ("S00")
            | addinfo
            | where importZeit_unixF &gt;= relative_time(info_max_time,"-d@d") AND importZeit_unixF &lt;= relative_time(info_max_time,"@d")
            | stats count as dayCount by zbpIdentifier
            | sort -dayCount
            | head 10
            | table zbpIdentifier ]
  | addinfo
  | where ....

Basic idea: the subsearch first derives the top 10 elements based on the number of yesterday's error messages; based on the subsearch result, the 7-day history is then read and displayed (not fully shown in the example above). All works fine except when the subsearch finds no messages. If no error messages of the given type were recorded yesterday, the subsearch returns an empty result, which causes the following error message in the dashboard: Error in 'where' command: The expression is malformed. An unexpected character is reached at ')'. The where command in question is the one that should take the result of the subsearch (third line of the second query). The error message is just not nice for the end user; it would be better to get an empty chart when no data is found. The question is: how can the result of the subsearch be fixed so that the main search still runs and returns a proper empty result, and therefore an empty graph instead of the "not nice" error message? Thank you for your help.
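A minimal sketch of one common workaround, under the assumption that the problem is exactly the empty expression the subsearch generates: append a dummy row inside the subsearch so the generated condition is never empty, using a value that cannot match any real zbpIdentifier:

| where [ | loadjob $stoermeldungen_sid$
          | where stoerCode IN ("S00")
          | addinfo
          | where importZeit_unixF &gt;= relative_time(info_max_time,"-d@d") AND importZeit_unixF &lt;= relative_time(info_max_time,"@d")
          | stats count as dayCount by zbpIdentifier
          | sort -dayCount
          | head 10
          | table zbpIdentifier
          | append [ | makeresults | eval zbpIdentifier="__no_match__" | table zbpIdentifier ] ]

With at least one row always present, the outer where receives a well-formed expression; the dummy value simply matches nothing, so the panel comes back empty instead of erroring.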
Hello All! I am trying to discard a certain event before the indexers ingest it, using the keyword envoy. Below is an example:

timestamp vcenter envoy-access 2024-12-29T23:53:56.632Z info envoy[139855859431232] [Originator@6876 sub=Default] 2024-12-29T23:53:50.392Z POST /sdk HTTP/1.1 200 via_upstream

I tried creating props and transforms conf files in $SPLUNK_HOME/etc/system/local but it's not working. My questions are whether my stanzas are correct and whether I should put them in the local directory. I appreciate any assistance you can provide, thank you.

props.conf

[nullQueue]
queue = nullQueue

[host::vcenter]
TRANSFORMS-null = setnull

[source::/var/log/remote/catchall/(IPAddress of Vcenter)/*.log]
TRANSFORMS-null = setnull

transforms.conf

[setnull]
REGEX = envoy
DEST_KEY = queue
FORMAT = nullQueue
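For comparison, a minimal sketch of how this routing is usually laid out; the standalone [nullQueue] stanza is not a valid props.conf stanza and can be dropped, and the two files need to live on whichever instance first parses the data (the indexers here, or a heavy forwarder if one sits in front of them), for example in $SPLUNK_HOME/etc/system/local:

# props.conf
[host::vcenter]
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
REGEX = envoy
DEST_KEY = queue
FORMAT = nullQueue

If the events arrive via a universal forwarder that is fine (UFs do not parse), but if the host value at parse time is not literally "vcenter", the [host::vcenter] stanza will not match, which is a common reason the discard appears to do nothing.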
I'm trying to create a search that does the following:

- look for a user creation event (ID 4720)
- then look (for the same user) for a follow-up group-add event (4728) into privileged groups (512, 516, etc.)

My SPL so far is:

index=lalala source=lalala EventID=4720 OR 4728 PrimaryGroupId IN (512,516,517,518,519)

But that way I only look for either a user creation OR a user being added to a privileged group, and I want to link both. I understand that I need to somehow connect those two searches, but I don't know how exactly.
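A minimal sketch of one way to correlate the two events without a join; the field names are assumptions (with the Windows TA extractions, the created account in 4720 is usually TargetUserName, while the member added in 4728 is MemberName or MemberSid), so adjust them to whatever your events actually carry:

index=lalala source=lalala (EventID=4720 OR (EventID=4728 AND PrimaryGroupId IN (512,516,517,518,519)))
| eval account=case(EventID=4720, TargetUserName, EventID=4728, MemberName)
| stats values(EventID) as event_ids earliest(_time) as first_seen latest(_time) as last_seen by account
| where mvcount(event_ids) >= 2

Since only those two event IDs can reach the stats, any account with two distinct values in event_ids saw both the creation and the privileged group add within the search window.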
Hi, After completing the upgrade from Splunk Enterprise version 9.3.2 to v9.4, the KV store will no longer start. Splunk has yet to do the KV store upgrade to v7 because the KV store cannot start; we were already on WiredTiger 4.2. There is no [kvstore] stanza in server.conf, so everything should be default. The relevant lines from splunkd.log are:

INFO KVStoreConfigurationProvider [9192 MainThread] - Since x509 is not enabled - using a default config from [sslConfig] for Mongod mTLS authentication
WARN KVStoreConfigurationProvider [9192 MainThread] - Action scheduled, but event loop is not ready yet
INFO MongodRunner [7668 KVStoreConfigurationThread] - Starting mongod with executable name=mongod-4.2.exe version=kvstore version 4.2
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --dbpath C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --storageEngine wiredTiger
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using cacheSize=1.65GB
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --port 8191
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --timeStampFormat iso8601-utc
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --oplogSize 200
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --keyFile C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --setParameter enableLocalhostAuthBypass=0
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --setParameter oplogFetcherSteadyStateMaxFetcherRestarts=0
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --replSet 4EA2F2AF-2584-4BB0-A2C4-414E7CB68BC2
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --bind_ip=0.0.0.0 (all ipv4 addresses)
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCAFile C:\Program Files\Splunk\etc\auth\cacert.pem
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --tlsAllowConnectionsWithoutCertificates for version 4.2
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslMode requireSSL
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslAllowInvalidHostnames
WARN KVStoreConfigurationProvider [9192 MainThread] - Action scheduled, but event loop is not ready yet
INFO KVStoreConfigurationProvider [9192 MainThread] - "SAML cert db" registration with KVStore successful
INFO KVStoreConfigurationProvider [9192 MainThread] - "Auth cert db" registration with KVStore successful
INFO KVStoreConfigurationProvider [9192 MainThread] - "JsonWebToken Manager" registration with KVStore successful
INFO KVStoreBackupRestore [1436 KVStoreBackupThread] - thread started.
INFO KVStoreConfigurationProvider [9192 MainThread] - "Certificate Manager" registration with KVStore successful
INFO MongodRunner [7668 KVStoreConfigurationThread] - Found an existing PFX certificate
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCertificateSelector subject=SplunkServerDefaultCert
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslAllowInvalidCertificates
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --tlsDisabledProtocols noTLS1_0,noTLS1_1
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCipherConfig ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --noscripting
WARN MongoClient [7668 KVStoreConfigurationThread] - Disabling TLS hostname validation for localhost
ERROR MongodRunner [5692 MongodLogThread] - mongod exited abnormally (exit code 14, status: exited with code 14) - look at mongod.log to investigate.
ERROR KVStoreBulletinBoardManager [5692 MongodLogThread] - KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.
WARN KVStoreConfigurationProvider [5692 MongodLogThread] - Action scheduled, but event loop is not ready yet
ERROR KVStoreBulletinBoardManager [5692 MongodLogThread] - KV Store changed status to failed. KVStore process terminated..
ERROR KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Failed to start mongod on first attempt reason=KVStore service will not start because kvstore process terminated
ERROR KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Could not start mongo instance. Initialization failed.
ERROR KVStoreBulletinBoardManager [7668 KVStoreConfigurationThread] - Failed to start KV Store process. See mongod.log and splunkd.log for details.
INFO KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Mongod service shutting down

mongod.log contains the following:

W CONTROL [main] Option: sslMode is deprecated. Please use tlsMode instead.
W CONTROL [main] Option: sslCAFile is deprecated. Please use tlsCAFile instead.
W CONTROL [main] Option: sslCipherConfig is deprecated. Please use tlsCipherConfig instead.
W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
W CONTROL [main] Option: sslAllowInvalidCertificates is deprecated. Please use tlsAllowInvalidCertificates instead.
W CONTROL [main] Option: sslCertificateSelector is deprecated. Please use tlsCertificateSelector instead.
W CONTROL [main] net.tls.tlsCipherConfig is deprecated. It will be removed in a future release.
W NETWORK [main] Mixing certs from the system certificate store and PEM files. This may produced unexpected results.
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
W ASIO [main] No TransportLayer configured during NetworkInterface startup
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W ASIO [main] No TransportLayer configured during NetworkInterface startup
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
I CONTROL [initandlisten] MongoDB starting : pid=4640 port=8191 dbpath=C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo 64-bit host=[redacted]
I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
I CONTROL [initandlisten] db version v4.2.24
I CONTROL [initandlisten] git version: 5e4ec1d24431fcdd28b579a024c5c801b8cde4e2
I CONTROL [initandlisten] allocator: tcmalloc
I CONTROL [initandlisten] modules: enterprise
I CONTROL [initandlisten] build environment:
I CONTROL [initandlisten] distmod: windows-64
I CONTROL [initandlisten] distarch: x86_64
I CONTROL [initandlisten] target_arch: x86_64
I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, tls: { CAFile: "C:\Program Files\Splunk\etc\auth\cacert.pem", allowConnectionsWithoutCertificates: true, allowInvalidCertificates: true, allowInvalidHostnames: true, certificateSelector: "subject=SplunkServerDefaultCert", disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireTLS", tlsCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." } }, replication: { oplogSizeMB: 200, replSet: "4EA2F2AF-2584-4BB0-A2C4-414E7CB68BC2" }, security: { javascriptEnabled: false, keyFile: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo", engine: "wiredTiger", wiredTiger: { engineConfig: { cacheSizeGB: 1.65 } } }, systemLog: { timeStampFormat: "iso8601-utc" } }
W NETWORK [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W NETWORK [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1689M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
W STORAGE [initandlisten] Failed to start up WiredTiger under any compatibility version.
F STORAGE [initandlisten] Reason: 129: Operation not supported
F - [initandlisten] Fatal Assertion 28595 at src\mongo\db\storage\wiredtiger\wiredtiger_kv_engine.cpp 928
F - [initandlisten] \n\n***aborting after fassert() failure\n\n

Does anyone have any idea how to resolve this? Thanks,