All Topics

Hi All, I have configured Splunk_TA_vmware along with SA_Hydra on our HF to collect data from vCenter. I have also installed the VMWIndex add-on on the indexer clusters, as suggested in the documentation. However, the data is going to the lastchance index, when I was hoping the VMWIndex add-on would take care of the proper index configuration. Is there any additional configuration I need to do to get the logs into the indexes created by the VMWIndex add-on? Attaching the indexes.conf file from the add-on. I tried adding index=index_name in the inputs.conf of the Splunk_TA_vmware add-on, but no luck; it has no effect and the data still goes to the lastchance index only. Kindly suggest.
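A quick diagnostic is to break down what is actually landing in lastchance; this is only a sketch, assuming lastchance is the literal index name and the misrouted data is recent:

index=lastchance earliest=-4h
| stats count by sourcetype, source, host

If the sourcetypes shown are the VMware ones (e.g. vmware:perf:*), collection itself is working and only the index routing on the HF still needs fixing.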
Hello,

I have a report that uses federated search:

index="federated:xxx" filter="Value" | rest_of_the_search

I can insert it in my dashboards as follows, and it works:

<search id="base_search_name" ref="report_name"></search>

However, I now want to give an argument to a second report:

index="federated:xxx" filter=$token$ | rest_of_the_search

so that I can call it like this:

<search id="base_search_name2"> <query>| savedsearch "report_name2" token=$dashboard_token$</query> </search>

This does not work, presumably because "savedsearch" does not work with federated search? https://docs.splunk.com/Documentation/Splunk/9.0.1/Search/Aboutfederatedsearch

Long story short: how do I pass a parameter to a report that uses federated search?

Thanks in advance, Tom
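One workaround, if savedsearch really cannot wrap the federated report: inline the report's query directly in the dashboard and let normal token substitution supply the parameter. A minimal sketch, built from the report body and token names above:

<search id="base_search_name2">
  <query>index="federated:xxx" filter=$dashboard_token$ | rest_of_the_search</query>
</search>

This trades the single-definition convenience of a report for working token substitution, so the query ends up duplicated in the dashboard.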
I want to have a graph where you can easily see when a system is no longer taking Kerberos authentications. But when a host shows nothing for over 12h, that object is no longer in the graph. Is there a way to keep my servers showing even if there are 0 events for that time period?

index=perfmon source="Perfmon:Security System-Wide Statistics" counter="Kerberos Authentications" earliest=-12h latest=now [| inputlookup Prod_DC.csv]
| eval host=lower(host)
| bucket _time span=5m
| stats count by _time, host
| eval count=if(count>0,1,0)
| timechart span=5m limit=0 last(count) by host
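One way to keep silent hosts on the chart is to append a zero-valued placeholder row per host from the lookup, then let fillnull pad the empty buckets. A sketch, assuming Prod_DC.csv has a host column:

index=perfmon source="Perfmon:Security System-Wide Statistics" counter="Kerberos Authentications" earliest=-12h latest=now [| inputlookup Prod_DC.csv | fields host]
| eval host=lower(host), real=1
| append [| inputlookup Prod_DC.csv | eval host=lower(host), _time=now(), real=0]
| timechart span=5m limit=0 sum(real) by host
| fillnull value=0

The placeholder rows contribute 0 to sum(real), so every lookup host gets a series even with no events, and fillnull turns the empty buckets into zeros; the 0/1 normalization from the original search can be re-applied on top if needed.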
Hi,

I would like to get the servers that use only NTLMv1.

In a first search I'm using this command:

index="windows" EventCode=4624 AND (host="*-toto") Authentication_Package=NTLM Package_Name__NTLM_only_="NTLM V1"

I want to feed the result of this search into a second search that retrieves the servers using NTLMv2. At the end of those searches, I want to get the servers that only use NTLMv1. How can I proceed?

Regards
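Rather than chaining two searches, one option is to collect all NTLM package names per host in a single pass and keep hosts whose only value is "NTLM V1". A sketch, assuming the field names from the search above:

index="windows" EventCode=4624 host="*-toto" Authentication_Package=NTLM
| stats values(Package_Name__NTLM_only_) as ntlm_versions by host
| where mvcount(ntlm_versions)=1 AND ntlm_versions="NTLM V1"

Hosts that also log NTLMv2 end up with more than one value in ntlm_versions and drop out at the where clause.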
I found the following logs in the _audit index. The user who ran this search cannot access internal logs, so I assume the underlined part was added by the Splunk system. Could anybody explain the following 2 questions? What does the underlined part mean? What does the field _cd mean?

search='search (index=* OR index=_*) _time>=1661000447 _time<1661000460 host="XXX" source="XXX" | eval _DBID = replace(_cd, "(\d+):\d+", "\1") | eval _OFFSET = replace(_cd, "\d+:(\d+)", "\1")']
Hi all, I am pretty new to Splunk myself. I recently installed an add-on for ingesting CAS logs from our Exchange servers on a heavy forwarder. Ref: Splunk Add-on for Microsoft Exchange - https://splunkbase.splunk.com/app/3225/

The Splunk universal forwarder version on the Exchange servers is currently 8.x and the Splunk version on the HF is 9. The logs were not coming through, and we identified this was probably due to version 9 now having authentication features for communicating with the UF. So I temporarily modified the "authKeyStanza" in the restmap.conf file to "requireAuthentication = false", restarted Splunk, and recreated the server class via the web console in forwarder management. I immediately started seeing quite a few events. After getting proof of the events coming into the search heads as well, I went back, changed the "authKeyStanza" in the restmap.conf file to "requireAuthentication = true", and restarted Splunk again.

Coming to my question now: will reverting my authentication value to true STOP the ingestion of those logs? I have not been able to find any error in splunkd.log, but I don't even see the latest events.
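One way to check whether the universal forwarders are still connecting to the HF after the change is to look at the receiver's connection metrics. A diagnostic sketch, assuming the HF's _internal logs are searchable from where you run it:

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by sourceIp, sourceHost

If the Exchange servers stop appearing here after the restart, the connection (rather than the add-on) is the problem.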
Hi. Some alerts have been created in Splunk. Now I need to create a Nagios service check for each alert, so that if the Splunk alert returns a record, the Nagios check goes to a critical state. Basically, I need to monitor Splunk alerts from Nagios. Can someone suggest a solution for this?
I want to create an alert if any of the files are missing, with a description printed for each one. But this search only gives me one event, although it should give me two. In a nutshell, the second part after append is not working, while the individual searches work. Please guide me; it would be greatly appreciated.

index=axway abc@gmail.com *INCL*
| stats count by host
| where count = 0
| eval description="File1 INCL Missing"
| table description
| append [search index=axway abc@gmail.com *POD*
    | stats count
    | where count = 0
    | eval description="File2 POD Missing"
    | table description]
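Note that stats count by host can never produce a row with count=0 (hosts with no events simply produce no rows), so where count=0 filters everything out. One pattern is to start from the expected file list and left-join the actual counts. A sketch, assuming INCL/POD appear in the raw events:

| makeresults
| eval file_type=split("INCL,POD",",")
| mvexpand file_type
| join type=left file_type [search index=axway "abc@gmail.com" (*INCL* OR *POD*)
    | eval file_type=case(searchmatch("INCL"),"INCL", searchmatch("POD"),"POD")
    | stats count by file_type]
| fillnull value=0 count
| where count=0
| eval description="File ".file_type." Missing"
| table description

Starting from makeresults guarantees one row per expected file, so missing files survive to the where clause instead of vanishing.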
index=A host="bd*" OR host="p*" source="/apps/logs/*"
| bin _time span="30m"
| stats values(point) as point values(promotion) as promotionAction BY global _time
| stats count(eval(promotion="OFFERED")) AS Offers count(eval(promotion="ACCEPTED")) AS Redeemed by _time point
| eval Take_Rate_Percent=((Redeemed)/(Offers)*100)
| eval Take_Rate_Percent=round(Take_Rate_Percent,2)

This search runs fine over 15 minutes, but when I search more than 15 minutes it gives "search suspended" due to the huge amount of data. Please help me optimize the query. Thank you in advance, veeru
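A few things usually help here: parenthesize the OR so the index and source filters apply to everything, filter the promotion values at the index layer, trim fields early, and drop the intermediate stats (the second stats also references promotion, which the first stats has already renamed to promotionAction). A sketch under those assumptions; if global is still needed, add it back to the by clause:

index=A (host="bd*" OR host="p*") source="/apps/logs/*" (promotion="OFFERED" OR promotion="ACCEPTED")
| fields _time, global, point, promotion
| bin _time span=30m
| stats count(eval(promotion="OFFERED")) as Offers count(eval(promotion="ACCEPTED")) as Redeemed by _time, point
| eval Take_Rate_Percent=round(Redeemed/Offers*100,2)

Pushing the promotion filter into the base search and using fields early keeps the volume per 30-minute bucket much smaller.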
Hi. We are using Splunk version 7.1.3 and DB Connect version 3.6.0. We are planning to upgrade by the end of the year. Before that, we are facing an issue: it shows only "Parsing search" for a long period of time when we try to run a DB query. Could someone suggest what the issue may be and how it can be resolved?
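The DB Connect logs often say more than the UI here. A diagnostic sketch, assuming DB Connect's log files on the search head are indexed into _internal as usual:

index=_internal source=*splunk_app_db_connect* (ERROR OR WARN)
| sort -_time
| table _time, source, _raw

Errors about the JDBC driver, the Java task server, or connection timeouts in that output usually narrow down why a query never gets past parsing.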
Hi,

I have an SPL query that needs to adjust at search time as we fall in and out of BST. During BST, the search has to cover the hours between 19:00 and 7:00. Outside of BST, the search needs to adjust and cover the hours between 20:00 and 8:00.

I have created a lookup that captures the dates when BST starts and stops, and I have created max-date and min-date logic to identify the Sundays that start and end BST. This part is working. I need help completing the search so that, when the date is outside of BST, the 19:00-7:00 search window is adjusted to 20:00-8:00.

index=my_index
| eval year=strftime(_time,"%Y")
| lookup bst_lookup.csv year OUTPUTNEW date_sunday
| stats values(*) as * max(date_sunday) as maxdate min(date_sunday) as mindate latest(_time) as time by field
| eval isbst=if(time>mindate AND time<maxdate , 1,0)

Thanks!
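Building on the isbst flag, one way to finish is to compute the hour of the event and keep rows inside whichever overnight window applies. A sketch that continues the pipeline above, assuming the windows include the start hour and exclude the end hour:

| eval hour=tonumber(strftime(time, "%H"))
| where (isbst=1 AND (hour>=19 OR hour<7)) OR (isbst=0 AND (hour>=20 OR hour<8))

Because both windows cross midnight, the two hour conditions are joined with OR rather than AND.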
Hello, is there any way we can extract fields from this sample data? Any help will be highly appreciated. Thank you!

2022-07-22 17:21:50 - { "type" : "core", "r/o" : false, "booting" : true, "version" : "7.2.9.GA", "user" : "anonymous", "domainUUID" : null, "access" : null, "remote-address" : null, "success" : true, "ops" : [ { "operation" : "add", "address" : [{ "system-property" : "dstest.tx.node.id" }], "value" : "vp2mbg_c001_r3050" }, { "operation" : "add", "address" : [{ "system-property" : "jdk.tls.client.protocols" }], "value" : "TLSv1.2" }, { "operation" : "add", "address" : [{ "system-property" : "org.apache.coyote.ajp.DEFAULT_CONNECTION_TIMEOUT" }], "value" : "600000" }, { "operation" : "add", "address" : [{ "system-property" : "org.apache.coyote.ajp.MAX_PACKET_SIZE" }], "value" : "65536" }, { "operation" : "add", "address" : [{ "system-property" : "javax.net.ssl.trustStore" }], "value" : "/opt/app/dstest/ssl/cacerts.jks" }, { "operation" : "add", "address" : [{ "system-property" : "javax.net.ssl.trustStorePassword" }], "value" : { "EXPRESSION_VALUE" : "${VAULT::vb::truststorepass::1}" } }, { "operation" : "add", "address" : [{ "system-property" : "javax.net.ssl.keyStore" }], "value" : "/opt/app/DSTest/ssl/tccs-proddr.keystore" }, { "operation" : "add", "address" : [{ "system-property" : "javax.net.ssl.keyStorePassword" }], "value" : { "EXPRESSION_VALUE" : "${VAULT::vb::certpass::1}" } }, { "operation" : "add", "address" : [{ "system-property" : "tcp.allow.dev.esa.token" }], "value" : "true" }, { "operation" : "add", "address" : [{ "system-property" : "tccs.allow.dev.esa.token" }], "value" : "true" }, { "operation" : "add", "address" : [{ "system-property" : "CLAS.ENVIRONMENT" }], "value" : "prod" }, { "operation" : "add", "address" : [{ "system-property" : "TCCS.ENVIRONMENT" }], "value" : "prod" }, { "operation" : "add", "address" : [{ "system-property" : "agent.user" }], "value" : { "EXPRESSION_VALUE" : "${VAULT::vb::agentuser::1}" } }, { "operation" : "add", "address" : [{ "system-property" : "agent.password" }], "value" : { "EXPRESSION_VALUE" : "${VAULT::vb::agentpass::1}" } }, { "address" : [{ "path" : "DSTest.server.ADCredStore.dir" }], "operation" : "add", "path" : "/opt/app/DSTest/profiles/instances/tccs/ADCredStore" }, { "address" : [{ "path" : "DSTest.ssl" }], "operation" : "add", "path" : "/opt/app/DSTest/ssl" }, { "address" : [{ "core-service" : "vault" }], "operation" : "add", "vault-options" : [ { "KEYSTORE_URL" : "/opt/app/DSTest/profiles/instances/tccs/configuration/eap7vault.keystore" }, { "KEYSTORE_PASSWORD" : "MASK-0dF/GimhesRBlxgjOeSNqf" }, { "KEYSTORE_ALIAS" : "vault" }, { "SALT" : "147asa2900" }, { "ITERATION_COUNT" : "8" }, { "ENC_FILE_DIR" : "/opt/app/DSTest/profiles/instances/tccs/configuration/" } ] }] }
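Since each event is a timestamp prefix followed by a JSON object, one approach is to strip the prefix with rex and hand the rest to spath. A minimal sketch, assuming all events share the "YYYY-MM-DD HH:MM:SS - {...}" shape shown above (the index and sourcetype names are placeholders):

index=your_index sourcetype=your_sourcetype
| rex field=_raw "^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} - (?<json>\{.+\})$"
| spath input=json

spath will then expose fields such as type, version, and user, plus the arrays as ops{}.operation and ops{}.value.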
platfrom    bkc_name    domain    testcase_id
tnl         abzke       hef       gh_102
asc         kit1        touch     ig_103
sou         kit2        hub       jk_104
img         kit3        hub1      lk_105

sub_gruop   platfrom    bkc_name    domain    testcase_id
wow         20          19          15        12
audio       10          16          11        13
sound       25          30          18        19
I have a kvstore like the one below, populated with about 1 million rows.

_key          name       count1   count2   calculated_number1   calculated_number2
sha256 hash   Joe Cool   1        2        3                    4

How can I update the kvstore so that I update the two counts and recalculate the two calculated numbers based on the newly updated counts? I am trying not to read in all 1 million rows and overwrite if I don't have to. Any potential pathways are welcome and I am here to learn. Thank you all.
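One pathway is to read only the rows you need with an inputlookup where clause, update them, and write them back keyed on _key so only those rows are replaced. A sketch, assuming a KV-store-backed lookup named my_kvstore and illustrative recalculation formulas (the real formulas are whatever yours are):

| inputlookup my_kvstore where name="Joe Cool"
| eval count1=count1+1, count2=count2+1
| eval calculated_number1=count1+count2, calculated_number2=count1*count2
| outputlookup my_kvstore append=true key_field=_key

With append=true and key_field=_key, outputlookup updates the matching records in place instead of truncating and rewriting the whole collection.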
We run a number of machine learning models and routinely run into limitations of the "knowledge bundle" getting too big, with errors like this:

[screenshot: bundle replication errors]

We increased the values in limits.conf to alleviate it, but the error came back after a few more models were made. I've noticed that the models likely need to be included in the knowledge bundle, since they are not explicitly blacklisted in distsearch.conf:

[replicationSettings:refineConf]
replicate.algos = true
replicate.mlspl = true
replicate.scorings = true

[replicationBlacklist]
non_model_lookups = apps[/\\]Splunk_ML_Toolkit[/\\]lookups[/\\](?!__mlspl_)*.csv
non_model_lookups_docs = apps[/\\]Splunk_ML_Toolkit[/\\]lookups[/\\]docs[/\\]...

Now, looking at the users directory, there are a lot of duplicates:

/opt/splunk/etc/users/theusername/Splunk_ML_Toolkit/lookups

[screenshot: user's ML lookup directory]

Is there a way to get rid of these _draft_ ones in the Machine Learning GUI?
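Before deleting anything, it can help to inventory which model files exist and who owns them. A sketch using the REST lookup-table-files endpoint; the app scope and title pattern are assumptions based on the MLTK naming above:

| rest /servicesNS/-/Splunk_ML_Toolkit/data/lookup-table-files
| search title="__mlspl_*"
| table title, eai:acl.owner, eai:acl.app, eai:acl.sharing

Model files that carry the draft marker and live under individual users (rather than the app) are the ones inflating per-user bundle content.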
Hi,

I'm trying to search for multiple strings within all fields of my index using fieldsummary, e.g.

index=centre_data
| fieldsummary
| search values="*DAN012A Dance*" OR values="*2148 FNT004F Nutrition Technology*"
| table field

Is there another/better way to perform this search, or a way to modify this query so that I can add the field where the string appears in the event, as well as include other output fields of my choosing? e.g. User, Date, FieldWhereStringAppears, Object. I have tried a number of things and can't work it out. Many thanks
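An alternative that keeps the original events (and therefore fields like User and Date) is to search the raw events and use foreach to record which field matched. A sketch, assuming User, Date, and Object exist as fields in the events:

index=centre_data "DAN012A Dance" OR "2148 FNT004F Nutrition Technology"
| foreach * [eval matched_fields=if(match('<<FIELD>>', "DAN012A Dance|FNT004F Nutrition Technology"), mvappend(matched_fields, "<<FIELD>>"), matched_fields)]
| table User, Date, matched_fields, Object

Unlike fieldsummary, which summarizes the whole index, this works per event, so each row shows exactly where the string appeared.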
When I see this screen I think: this is where all my forwarders are; any that I've added, by whatever means, will show up here and I can see their status. How wrong am I?

Also, technically, could you have, say, 2 forwarders with 20 machines sending data to those forwarders, and then those forwarders sending the data on to your indexers, where you can then use apps or searches to make sense of that data?
Hi, I need help extracting the 3 words after [yyy] using regex:

True [xxx] [yyy] Issue with ios phone 11
False [yyy] Issue with android phone
True [yyy] Issue with windows phone
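A sketch that captures the three whitespace-separated words after the literal [yyy] tag, assuming the samples above are representative:

| rex field=_raw "\[yyy\]\s+(?<first_three_words>\S+\s+\S+\s+\S+)"

On the first sample this captures "Issue with ios"; adjust the \S+ groups if the word boundaries differ in the real data.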
Hi, my current search gives the output below, and I want the stats listed by month. Can someone help with this one?

Current search:

my search
| eval True=(total1-total2)
| eval False=round(False/(True+False)*100,2)
| table False

Output:

False
42.12

Desired output:

Month     False
August    42.12
July      xx.xx
June      xx.xx
...
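One way to get a monthly breakdown is to bucket by month before the evals. A sketch, assuming "my search" yields total1, total2, and the raw False count per event and that they are summable per month:

my search
| bin _time span=1mon
| stats sum(total1) as total1, sum(total2) as total2, sum(False) as False by _time
| eval True=(total1-total2)
| eval False=round(False/(True+False)*100,2)
| eval Month=strftime(_time, "%B")
| table Month, False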
Hi all - I am trying to take one lookup and limit its results with another lookup. I can kinda get it to work with my current SPL, but it takes a long time to run and the results don't come out as expected. Here's what I have so far:

| inputlookup my_kvstore
| lookup my_lookup lookupfield_1 AS kvstorefield_1 OUTPUT lookupfield_1
| lookup my_kvstore kvstorefield_1 AS lookupfield_1 OUTPUT kvstorefield_2, kvstorefield_3, kvstorefield_4, kvstorefield_5
| WHERE kvstorefield_1=lookupfield_1

Results:

kvstorefield_1    kvstorefield_2          kvstorefield_3          kvstorefield_4    kvstorefield_5               lookupfield_1
2016, 2016        centos, centos          linux, linux            web, web          workstation1, workstation2   2016, 2016
2017, 2017, 2017  apache, apache, apache  tomcat, tomcat, tomcat  http, http, http  server1, server2, server3    2017, 2017, 2017

1. Is my search formed correctly?
2. How do I get each of the events to come out in their own row instead of being grouped into one line based on the matching kvstorefield/lookupfield?
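A simpler shape that avoids the second lookup (the kvstore rows already carry fields 2-5) is to use the first lookup purely as a filter; rows keep their one-per-record shape because nothing aggregates them. A sketch, assuming the lookup and field names above:

| inputlookup my_kvstore
| lookup my_lookup lookupfield_1 AS kvstorefield_1 OUTPUT lookupfield_1 AS matched
| where isnotnull(matched)
| fields - matched

Renaming the OUTPUT field to matched avoids colliding with existing fields, and the where clause keeps only the kvstore rows that also exist in my_lookup.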