All Topics

Hi. We are using Splunk version 7.1.3 and DB Connect version 3.6.0. We are planning to upgrade by the end of the year. Before that, we are facing an issue: when we try to run a DB query, it shows only "Parsing search" for a long period of time. Could someone suggest what the issue may be and how it can be resolved?
Hi, I have an SPL query that needs to adjust at search time as we move in and out of BST. During BST, the search has to cover the hours between 19:00 and 7:00. Outside of BST, the search needs to adjust and cover the hours between 20:00 and 8:00.

I have created a lookup that captures the dates when BST starts and stops. I have also created max-date and min-date logic to identify the Sundays that start and end BST. This part is working. I need help completing the search so that, when the date is outside of BST, it switches from the 19:00-7:00 search window to the 20:00-8:00 search window.

index=my_index
| eval year=strftime(_time,"%Y")
| lookup bst_lookup.csv year OUTPUTNEW date_sunday
| stats values(*) as * max(date_sunday) as maxdate min(date_sunday) as mindate latest(_time) as time by field
| eval isbst=if(time>mindate AND time<maxdate, 1, 0)

Thanks!
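A minimal sketch of one way to finish this, assuming the isbst flag above is correct: derive the event hour, pick the window boundaries from the flag, and keep rows whose hour falls inside the overnight window. The hour/start_hour/end_hour field names and the final where test are assumptions, not tested against the poster's data.

index=my_index
| eval year=strftime(_time,"%Y")
| lookup bst_lookup.csv year OUTPUTNEW date_sunday
| stats values(*) as * max(date_sunday) as maxdate min(date_sunday) as mindate latest(_time) as time by field
| eval isbst=if(time>mindate AND time<maxdate, 1, 0)
| eval hour=tonumber(strftime(time,"%H"))
| eval start_hour=if(isbst=1, 19, 20)
| eval end_hour=if(isbst=1, 7, 8)
| where hour>=start_hour OR hour<end_hour

Because the window crosses midnight, the test is an OR (at or after the start, or before the end) rather than an AND.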
Hello, is there any way we can extract fields from this sample data? Any help will be highly appreciated. Thank you!

2022-07-22 17:21:50 - { "type" : "core", "r/o" : false, "booting" : true, "version" : "7.2.9.GA", "user" : "anonymous", "domainUUID" : null, "access" : null, "remote-address" : null, "success" : true, "ops" : [ { "operation" : "add", "address" : [{ "system-property" : "dstest.tx.node.id" }], "value" : "vp2mbg_c001_r3050" }, { "operation" : "add", "address" : [{ "system-property" : "jdk.tls.client.protocols" }], "value" : "TLSv1.2" }, { "operation" : "add", "address" : [{ "system-property" : "org.apache.coyote.ajp.DEFAULT_CONNECTION_TIMEOUT" }], "value" : "600000" }, { "operation" : "add", "address" : [{ "system-property" : "org.apache.coyote.ajp.MAX_PACKET_SIZE" }], "value" : "65536" }, { "operation" : "add", "address" : [{ "system-property" : "javax.net.ssl.trustStore" }], "value" : "/opt/app/dstest/ssl/cacerts.jks" }, { "operation" : "add", "address" : [{ "system-property" : "javax.net.ssl.trustStorePassword" }], "value" : { "EXPRESSION_VALUE" : "${VAULT::vb::truststorepass::1}" } }, { "operation" : "add", "address" : [{ "system-property" : "javax.net.ssl.keyStore" }], "value" : "/opt/app/DSTest/ssl/tccs-proddr.keystore" }, { "operation" : "add", "address" : [{ "system-property" : "javax.net.ssl.keyStorePassword" }], "value" : { "EXPRESSION_VALUE" : "${VAULT::vb::certpass::1}" } }, { "operation" : "add", "address" : [{ "system-property" : "tcp.allow.dev.esa.token" }], "value" : "true" }, { "operation" : "add", "address" : [{ "system-property" : "tccs.allow.dev.esa.token" }], "value" : "true" }, { "operation" : "add", "address" : [{ "system-property" : "CLAS.ENVIRONMENT" }], "value" : "prod" }, { "operation" : "add", "address" : [{ "system-property" : "TCCS.ENVIRONMENT" }], "value" : "prod" }, { "operation" : "add", "address" : [{ "system-property" : "agent.user" }], "value" : { "EXPRESSION_VALUE" : "${VAULT::vb::agentuser::1}" } }, { "operation" : "add", "address" : [{ "system-property" : "agent.password" }], "value" : { "EXPRESSION_VALUE" : "${VAULT::vb::agentpass::1}" } }, { "address" : [{ "path" : "DSTest.server.ADCredStore.dir" }], "operation" : "add", "path" : "/opt/app/DSTest/profiles/instances/tccs/ADCredStore" }, { "address" : [{ "path" : "DSTest.ssl" }], "operation" : "add", "path" : "/opt/app/DSTest/ssl" }, { "address" : [{ "core-service" : "vault" }], "operation" : "add", "vault-options" : [ { "KEYSTORE_URL" : "/opt/app/DSTest/profiles/instances/tccs/configuration/eap7vault.keystore" }, { "KEYSTORE_PASSWORD" : "MASK-0dF/GimhesRBlxgjOeSNqf" }, { "KEYSTORE_ALIAS" : "vault" }, { "SALT" : "147asa2900" }, { "ITERATION_COUNT" : "8" }, { "ENC_FILE_DIR" : "/opt/app/DSTest/profiles/instances/tccs/configuration/" } ] }] }
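One hedged approach, assuming the JSON payload always follows the "timestamp - " prefix: strip the prefix with rex, then let spath auto-extract the fields. The json_payload field name is invented for this example.

| rex field=_raw "(?s)^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} - (?<json_payload>\{.+\})"
| spath input=json_payload

Top-level keys such as type, version, and success become fields directly; the ops array comes out as multivalue paths like ops{}.operation, which can be split into individual operations with spath input=json_payload path=ops{} output=op followed by mvexpand op.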
platfrom   bkc_name   domain   testcase_id
tnl        abzke      hef      gh_102
asc        kit1       touch    ig_103
sou        kit2       hub      jk_104
img        kit3       hub1     lk_105

-------------------------------

sub_gruop   platfrom   bkc_name   domain   testcase_id
wow         20         19         15       12
audio       10         16         11       13
sound       25         30         18       19
I have a kvstore like below, populated with about 1 million rows.

_key          name       count1   count2   calculated_number1   calculated_number2
sha256 hash   Joe Cool   1        2        3                    4

How can I update the kvstore so that I update the two counts and recalculate the two calculated numbers based on the newly updated counts? I am trying not to read in all 1 million rows and overwrite if I don't have to. Any potential pathways are welcome and I am here to learn. Thank you all.
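A minimal sketch of one pathway, assuming the collection is exposed through a lookup definition called my_kvstore and that the increments and recalculation formulas below are placeholders: read back only the rows you need, update them, and write them back with outputlookup append=true, which updates records whose _key matches rather than rewriting the whole collection.

| inputlookup my_kvstore where name="Joe Cool"
| eval count1=count1+1, count2=count2+1
| eval calculated_number1=count1*10, calculated_number2=count2*10
| outputlookup my_kvstore append=true key_field=_key

The where clause on inputlookup is what keeps you from reading all 1 million rows; only the filtered records travel through the pipeline and get upserted.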
We run a number of machine learning models and routinely run into limitations of the "knowledge bundle" getting too big, with errors like these: [screenshot: bundle errors]

We increased the limits in limits.conf to alleviate it, but the error came back after a few more models were made. I've noticed that these likely need to be included in the knowledge bundle, since they are not explicitly blacklisted in distsearch.conf:

[replicationSettings:refineConf]
replicate.algos = true
replicate.mlspl = true
replicate.scorings = true

[replicationBlacklist]
non_model_lookups = apps[/\\]Splunk_ML_Toolkit[/\\]lookups[/\\](?!__mlspl_)*.csv
non_model_lookups_docs = apps[/\\]Splunk_ML_Toolkit[/\\]lookups[/\\]docs[/\\]...

Looking at the users directory (/opt/splunk/etc/users/theusername/Splunk_ML_Toolkit/lookups), there are a lot of double-ups: [screenshot: users ML lookup directory]

Is there a way to get rid of these _draft_ ones in the Machine Learning GUI?
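If keeping the drafts out of the bundle (rather than deleting them) is acceptable, an extra distsearch.conf blacklist entry along these lines might work; the filename pattern for draft models is an assumption and should be verified against what actually sits in the users' lookups directories first.

[replicationBlacklist]
# hypothetical pattern - check the real draft filenames on disk before using
user_draft_models = users[/\\][^/\\]+[/\\]Splunk_ML_Toolkit[/\\]lookups[/\\].*_draft_.*

This shrinks the bundle but leaves the duplicate files in place, so it is a workaround rather than the GUI cleanup being asked about.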
Hi, I'm trying to search for multiple strings within all fields of my index using fieldsummary, e.g.

index=centre_data
| fieldsummary
| search values="*DAN012A Dance*" OR values="*2148 FNT004F Nutrition Technology*"
| table fields

Is there another/better way to perform this search, or a way to modify this query so that I can add the field where the string appears in the event, as well as include other output fields of my choosing? e.g. User, Date, FieldWhereStringAppears, Object. I have tried a number of things and can't work it out. Many thanks
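A hedged alternative that searches the raw events instead of the field summary, so User, Date, and Object stay available, and records which fields matched. The foreach/match pattern is standard SPL; the regex and the matched_fields name are assumptions.

index=centre_data "DAN012A Dance" OR "2148 FNT004F Nutrition Technology"
| foreach * [ eval matched_fields=if(match('<<FIELD>>', "DAN012A Dance|2148 FNT004F Nutrition Technology"), mvappend(matched_fields, "<<FIELD>>"), matched_fields) ]
| table User, Date, matched_fields, Object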
When I see this screen I think: this is where all my forwarders are; any that I've added, no matter the means, will show up here, and I can see their status. How wrong am I?

Also, technically, could you have, let's say, 2 forwarders but 20 machines sending data to those forwarders, and then those forwarders sending data to your indexers, where you can then use apps or searches to make sense of that data?
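That second setup is a standard intermediate-forwarder topology and works fine. A minimal sketch of the outputs.conf on each tier (host names and ports invented):

# outputs.conf on the 20 source machines: send to the two intermediate forwarders
[tcpout:intermediates]
server = fwd1.example.com:9997, fwd2.example.com:9997

# outputs.conf on the two intermediate forwarders: send on to the indexers
[tcpout:indexers]
server = idx1.example.com:9997, idx2.example.com:9997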
Hi, I need help extracting the 3 words after [yyy] using regex:

True [xxx] [yyy] Issue with ios phone 11
False [yyy] Issue with android phone
True [yyy] Issue with windows phone
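One hedged attempt, assuming "words" means whitespace-delimited tokens and [yyy] is a literal tag; the three_words field name is invented:

| rex "\[yyy\]\s+(?<three_words>\S+\s+\S+\s+\S+)"

Against the first sample this captures "Issue with ios", skipping past the [xxx] tag because the match anchors on [yyy].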
Hi, my current search gives the output below, and I want the stats listed by month. Can someone help with this?

Current search:

my search
| eval True=(total1-total2)
| eval False=round(False/(True+False)*100,2)
| table False

Output:

False
42.12

Desired output:

Month    False
August   42.12
July     xx.xx
June     xx.xx
...
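A sketch of one way to get there, assuming total1, total2, and False come from event fields that can be aggregated per month; the sum() calls and the 1-month bin are assumptions about the upstream search:

my search
| bin _time span=1mon
| stats sum(total1) as total1, sum(total2) as total2, sum(False) as False by _time
| eval True=(total1-total2)
| eval False=round(False/(True+False)*100,2)
| eval Month=strftime(_time,"%B")
| table Month, False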
Hi all - I am trying to take one lookup and limit its results with another lookup. I can kinda get it to work with my current SPL, but it's taking a long time to run and the results don't come out as expected. Here's what I have so far:

| inputlookup my_kvstore
| lookup my_lookup lookupfield_1 AS kvstorefield_1 OUTPUT lookupfield_1
| lookup my_kvstore kvstorefield_1 AS lookupfield_1 OUTPUT kvstorefield_2, kvstorefield_3, kvstorefield_4, kvstorefield_5
| where kvstorefield_1=lookupfield_1

Results (each field comes back multivalued):

Row 1: kvstorefield_1=2016 2016; kvstorefield_2=centos centos; kvstorefield_3=linux linux; kvstorefield_4=web web; kvstorefield_5=workstation1 workstation2; lookupfield_1=2016 2016
Row 2: kvstorefield_1=2017 2017 2017; kvstorefield_2=apache apache apache; kvstorefield_3=tomcat tomcat tomcat; kvstorefield_4=http http http; kvstorefield_5=server1 server2 server3; lookupfield_1=2017 2017 2017

1. Is my search formed correctly?
2. How do I get each of the events to come out in their own row instead of being grouped into one line based on the matching kvstorefield/lookupfield?
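On question 1: if the intent is just "keep the kvstore rows that have a match in the other lookup, one event per row", a hedged simplification is to drop the second lookup and the where entirely; by default a lookup leaves its output field null when there is no match, so you can filter on that:

| inputlookup my_kvstore
| lookup my_lookup lookupfield_1 AS kvstorefield_1 OUTPUT lookupfield_1
| where isnotnull(lookupfield_1)

On question 2: the grouping happens because the second lookup returns multivalue results when several records share a key; avoiding that round trip also avoids the grouped rows.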
I may use a search similar to this:

index=mock_index source=mock_source
| eval event = _raw
| stats count as frequency by event
| table event, frequency

which results in a table similar to the one below:

Event                                                 Frequency
2022-08-22 13:11:12 [stuff] apple.bean.34 [stuff]     2000
2022-08-22 14:18:22 [stuff] apple.bean.86 6 [stuff]   200
2022-08-22 15:17:42 [stuff] apple.bean.1 546 [stuff]  2

Some of the tables I get from this search give an error stating that the search_process_memory_usage_threshold has been exceeded. If I know that I am not interested in rows where the frequency is less than 1,000, is there a way to limit the table so it only shows the rows at or above 1,000? Would this also improve memory usage?
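A where clause right after the stats does exactly that (sketch below). It definitely trims the rows; it may or may not cure the memory threshold, since stats still has to count every distinct event before the filter runs:

index=mock_index source=mock_source
| eval event = _raw
| stats count as frequency by event
| where frequency>=1000
| table event, frequency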
I want to capture the Path (\Απεσταλμένα) and Subject (TYPICAL MAIN SHELF). I am using the regexes below:

Subject\W\s(?<Subject>.*)

and

rex "Path\W\s(?<Path>\W.*)"

But these are not working. The Path is not captured at all, while for Subject it captures many more lines than required. Can someone please help?

PH0PR07MB8510A5DC1014429F3B411EB1E39B9@PH0PR07MB8510.namprd07.prod.outlook.com>
IsRecord: false
ParentFolder: {
  Id: LgAAAACYR3ou5YLkQLdwhKR5o0aGAQDzGy/hF08sRpmozaW+A2HqAAAAdHcNAAAB
  Path: \Απεσταλμένα
}
SizeInBytes: 180998
Subject: TYPICAL MAIN SHELF
}
LogonType: 0
LogonUserSid: S-1-5-21-2050334910-350505970-4048673702-5100548
MailboxGuid: 967cf2f1-6b52-4e79-bf98-1hnfj55667
MailboxOwnerSid: S-1-5-21-2050334910-350505970-499886553
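A hedged pair of extractions that anchor on the literal colon and stop at the end of the line, assuming each value sits on its own line in the raw event as shown above:

| rex "Subject:\s+(?<Subject>[^\r\n]+)"
| rex "Path:\s+(?<Path>[^\r\n]+)"

The [^\r\n]+ keeps each capture from running past the line break, a common cause of "many more lines than required".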
Hi there, I have a search which contains the field myMetric (extracted via field extraction). I want a dashboard panel presenting only myMetric on the y-axis and time on the x-axis. I failed using "| timechart" since I am forced to use a statistical function or count (I want to show myMetric itself, not the count). Using "| eventstats", my first problem was that the dashboard legend showed far too many fields, but I was able to remove them using "| fields - a,b,c". However, the x-axis is labeled "Time" instead of showing concrete datetimes. How can I achieve this?
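One hedged way through: timechart does require an aggregate, but an aggregate over a single numeric series per time slot simply reproduces the value, so something like the line below (the span is an assumption) draws myMetric against a real time x-axis:

index=... (your existing search)
| timechart span=5m avg(myMetric) as myMetric

If several events can land in the same slot, swap avg for max or latest, whichever representation of the metric you want.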
[tcp-ssl://9515]
disabled = 0
index = myindex
connection_host = ip
sourcetype = mysourcetype
_TCP_ROUTING = myindexcluster

The above allows raw events and default fields to be put into the indexer. The below allows indexed CSV fields (structured data) to be put into the indexer. The props.conf entry for the sourcetype is used by both the TCP and the disk-file input, and I am using identical CSV files as data for each. Why can't the TCP-ingested CSV file be indexed by the forwarder and sent to the indexer?

[batch:///data/myfolder]
move_policy = sinkhole
disabled = 0
index = myindex
sourcetype = mysourcetype
crcSalt = <SOURCE>
recursive = false
_TCP_ROUTING = myindexcluster
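For reference, the shared props.conf stanza presumably looks something like this sketch (the delimiter and header settings are assumptions):

# props.conf - assumed stanza for the shared sourcetype
[mysourcetype]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1

The catch is that INDEXED_EXTRACTIONS structured parsing is applied to file-based inputs (monitor, batch) on the forwarder; network inputs such as [tcp-ssl://9515] do not pass through the structured-data pipeline, which matches the behavior described.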
Given the below example events:

Initial event: [stuff] apple.bean.carrot2donut.57.egg.fish(10) max:311 min 15 avg 101 low:1[stuff]
Result event 1: [stuff] apple.bean.carrot&donut.&.egg.fish(&) max:& min & avg & low:&[stuff]
Result event 2: [stuff] apple.bean.carrot2donut.57.egg.fish(&) max:& min & avg & low:&[stuff]

I want to get Result 2 rather than Result 1. I want to replace any series of numbers with an ampersand only if one of three conditions is true:

1. The number series is preceded by a space.
2. The number series is preceded by a colon.
3. The number series is preceded by an open parenthesis and followed by a closing parenthesis.

If I use the replace line below, the new variable created will contain Result 1 rather than the Result 2 I desire:

| eval event = replace(_raw, "[0-9]+", "&")

How do I get Result 2 instead?
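Lookarounds can encode those three conditions directly. A sketch, verified only against the single sample event above:

| eval event = replace(_raw, "(?<=[ :])\d+|(?<=\()\d+(?=\))", "&")

The first alternative handles digits preceded by a space or colon; the second handles digits wrapped in parentheses. Digits embedded after letters (carrot2donut) or dots (.57.) match neither branch, which yields Result 2.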
Hi, we run Splunk Enterprise 9.0.0 and we forgot to add an indexer to a license pool (7 orphan_peer licensing alerts). Now we get the error message "have exceeded your license limit too many times" on this indexer. We disassociated and re-associated this indexer in the pool, but the error message is still present. Could you tell me how to correct pool_violated_peer_count? We get the licensing alert "Correct by midnight to avoid violation". Does that mean this license violation/limit will be removed today after midnight? Regards, Chris
Hi, is there a way to rename a specific value in a column of a table? For example:
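The example screenshot didn't come through, but if the goal is turning one literal value into another inside a table column, a hedged sketch (the status field and both values are invented):

| eval status=if(status="old_value", "new_value", status)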
Hi all! I am helping the networking team transition their logging to Splunk, and last week I discovered the Cisco Meraki Add-on. I also discovered that in order to install the add-on, as well as configure any part of it (connection, inputs, etc.), I need a pretty high permission level (it requires the capability admin_all_objects). Since I am not a Splunk "admin" here at work, I am wondering: is there an existing role that might allow me to configure add-ons but not allow me to manage "all objects"? Our actual Splunk admin is a super busy guy, so I am trying to help him out on this. I do have a higher level of access than most, but requiring all objects for an add-on seems incredibly silly. Thanks!!
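There is no built-in role between power and admin that carries this capability, but the Splunk admin could create a custom role granting just it. A sketch of the authorize.conf involved (role name invented; note that admin_all_objects is inherently broad, so this narrows who holds it, not what it permits):

# authorize.conf - hypothetical delegated role
[role_addon_config]
importRoles = power
admin_all_objects = enabled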
Hi, we have a situation in the PCI Compliance app. Alerts were triggered and acknowledged; a user from the ISOC acknowledged all of the alerts, so we are trying to roll that back. Is there any chance to do that?