All Topics


Hi there, I have two messages that log when a job is run and share a job_id field: event_name=process.start and event_name=process.end. I'm trying to create an alert that fires if there is an event_name=process.start but no event_name=process.end after 3 hours. I've seen lots of examples of using transactions between two events to get the duration, but none for when an event is missing. Many thanks, and apologies if this is a noob question.
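A minimal sketch of one way to do this without transaction, assuming a placeholder index name and that event_name and job_id are extracted fields (adjust the window to taste):

index=your_index (event_name="process.start" OR event_name="process.end") earliest=-24h
| stats count(eval(event_name="process.start")) as starts count(eval(event_name="process.end")) as ends min(_time) as start_time by job_id
| where starts > 0 AND ends = 0 AND start_time < relative_time(now(), "-3h")

Scheduled as an alert, this returns one row per job_id that started more than three hours ago and never logged an end event.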
Hi all, I need to write a query that checks whether the condition (Daily AH <= Daily Po <= Daily Risk <= Daily File <= Daily Instrum) is met for each row. If the condition is not met, get rid of the row value that did not meet the condition and all the values after it.
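A sketch of one way to do this with chained eval statements, assuming the field names are exactly as written (single quotes are needed inside eval because the names contain spaces). Each check also requires the previous value to have survived, so a single failure wipes out everything after it:

| eval "Daily Po" = if('Daily AH' <= 'Daily Po', 'Daily Po', null())
| eval "Daily Risk" = if(isnotnull('Daily Po') AND 'Daily Po' <= 'Daily Risk', 'Daily Risk', null())
| eval "Daily File" = if(isnotnull('Daily Risk') AND 'Daily Risk' <= 'Daily File', 'Daily File', null())
| eval "Daily Instrum" = if(isnotnull('Daily File') AND 'Daily File' <= 'Daily Instrum', 'Daily Instrum', null())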
Hi All, We are facing an issue where the alert sent to the customer for any health rule violation shows the time in UTC. Please let us know if there is any way to change the timezone from UTC to the local timezone, so that the customer receives the alert/health rule violation time in their local timezone.
Hi all, I have a few queries to be modified using tstats. I am new to Splunk; please let me know whether these queries can be converted to tstats.
Query 1: index=abc "NEW" "/resource/page" appname=ui OR appname=uz | stats avg(response_time)
Query 2: index=abc sourcetype=abc host=ghjy "transaction" NOT "user" | stats avg(ResponseTime)
Query 3: index=abc iru=/resiurce/page appname=ui NOT 1234 NOT 1991 NOT 2022 "Bank status" | stats count
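For context, tstats only works against indexed fields or accelerated data models, so raw keyword terms like "NEW" or "transaction" cannot go into a tstats where clause directly; searches like these are normally rewritten against a data model. A sketch for Query 1, assuming a hypothetical accelerated data model named web_requests that contains appname and response_time:

| tstats avg(web_requests.response_time) as avg_response_time from datamodel=web_requests where index=abc (web_requests.appname="ui" OR web_requests.appname="uz")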
Hello, I am trying to monitor an application log and have Splunk generate an alert only when the service_status="disconnected" and service_status="connected" entries are logged and the time between the two is greater than 10 seconds, OR when service_status="disconnected" is the only entry being logged. I've been experimenting with the transaction command but I am not getting the desired results. Thanks in advance for any help with this.
Example log entries:
--- service is okay, do not generate an alert ---
9/2/2022 00:10:36.683  service_status="disconnected"
9/2/2022 00:10:38.236  service_status="connected"
--- service is down, generate an alert ---
9/2/2022 00:10:40.683  service_status="disconnected"
9/2/2022 00:10:51.736  service_status="connected"
--- service is down, the service_status="connected" event is missing, generate an alert ---
9/2/2022 01:15:15.603  service_status="disconnected"
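A sketch of one transaction-based approach, with placeholder index/sourcetype names; keepevicted keeps the pairs that never got a connected event, and those show up with closed_txn=0:

index=your_index sourcetype=your_sourcetype (service_status="disconnected" OR service_status="connected")
| transaction startswith=eval(service_status="disconnected") endswith=eval(service_status="connected") keepevicted=true
| where duration > 10 OR closed_txn=0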
I inherited a Splunk mesh of search heads, a deployment server, an indexer cluster, etc. I am trying to figure out all this Splunk stuff, but I ran into a setup that I am not sure is best practice, poor judgement, or working as intended. We have 8 main indexers that do what indexers do, all clustered as peer nodes. The deployment server is also the master node and the search head for the cluster (which I don't understand, since we also have 5 separate main search heads). We also have a disaster recovery (DR) site with an indexer that is a peer node in the aforementioned cluster. The cluster has a replication factor of 3 (number of copies of raw data) and a search factor of 2 (number of searchable copies). I'm newer to cyber, so forgive me if I don't understand right away or am missing something glaringly obvious. Does it make sense to have the DR indexer be part of the cluster? If it does, how do I ensure that the other 8 indexers send a copy of all their data to the DR indexer? I thought the master node just juggles the incoming streams from the forwarders and balances the data across all the indexers. Also:
- should the deployment server double as a master node and search head for the indexer cluster?
- what is the difference between the 5 separate main search heads and the search head in the indexer cluster?
- (last one, I swear) would it make sense to have a search head cluster, or keep the search heads separate, as the 5 are accessed and used by different groups (networking, development, QA/testing, cybersecurity, and UBA, and we don't even have the UBA servers active right now because I cannot get them to work or get the web UI to launch)?
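For reference, the replication and search factors described above are cluster-wide settings on the cluster master/manager node; a minimal sketch of what they look like in server.conf on that node (the manager decides where the copies land, individual peers are not targeted):

[clustering]
mode = master
replication_factor = 3
search_factor = 2

If the goal is specifically to guarantee a full copy of the data at the DR site, that is typically what multisite clustering (site_replication_factor) is designed for, rather than the single-site settings above.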
Hi, I am trying to get SQL performance monitoring logs into our environment for one of our ITSI use cases. The events successfully come into our event index, however I would like to convert these SQL performance monitoring logs into metrics, as that will work much better with ITSI. I am struggling to convert the logs into metrics and am using the following documentation to help me do so: https://docs.splunk.com/Documentation/Splunk/9.0.1/Data/Extractfieldsfromfileswithstructureddata
Here are my props and transforms conf files for one of the SQL perfmon inputs.
props.conf:
[Perfmon:sqlserverhost:physicaldisk]
TRANSFORMS-field_value = field_extraction
TRANSFORMS-sqlphysicaldiskmetrics = eval_sqlphysicaldiskcounter
METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_sqlphysicaldisk
transforms.conf:
[eval_sqlphysicaldiskcounter]
INGEST_EVAL = metric_name=counter
[metric-schema:extract_sqlphysicaldisk]
METRIC-SCHEMA-MEASURES = _ALLNUMS_
The SQL index where I would like these logs to go does not have the datatype = metric setting, as I thought the events should be converted into metrics regardless. I also tried changing the setting so that datatype = metric, but that removed all the data entirely and nothing was populated into the SQL index. I can still see the event data populating in the SQL index, but it cannot be searched using the metrics commands (mstats, mcatalog, etc.).
Note: there are 8 counter field values which I would like to convert individually into metrics, hence why I set metric_name=counter. I did not break it down into separate settings in transforms.conf because there are spaces in the field values.
Any idea why this is failing and how I can fix it? Any help would be greatly appreciated! Any questions, please ask. Thanks
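For what it's worth, a couple of things that commonly trip this up, assuming the transforms above otherwise do what you expect: log-to-metrics output has to land in an index with datatype = metric, and an index holds either events or metrics but not both, so the converted data is usually sent to a separate metrics index rather than the existing event index. A sketch of such an index (the name is a placeholder):

indexes.conf:
[sql_perfmon_metrics]
homePath   = $SPLUNK_DB/sql_perfmon_metrics/db
coldPath   = $SPLUNK_DB/sql_perfmon_metrics/colddb
thawedPath = $SPLUNK_DB/sql_perfmon_metrics/thaweddb
datatype   = metric

Once something arrives there, a quick check such as | mcatalog values(metric_name) WHERE index=sql_perfmon_metrics shows whether any metric names were actually created.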
Recently learned of the new Splunk app for Red Hat Insights. Just curious if anyone is currently using it with Splunk Cloud. We looked into installing the app, but it says the app does not support search head cluster deployments. I realize Splunk defers all support for this application to Red Hat; I'm just exploring whether anyone else has come across this issue.
For example, I am getting Splunk logs with 4 fields:
time1: service = "service1" | operation = "sampleOperation1" | responseTime = "10" | requestId = "sampleRequestId1"
time2: service = "service2" | operation = "sampleOperation2" | responseTime = "60" | requestId = "sampleRequestId2"
time3: service = "service2" | operation = "sampleOperation2" | responseTime = "60" | requestId = "uniqueRequestId3"
time4: service = "service4" | operation = "sampleOperation4" | responseTime = "30" | requestId = "sampleRequestId4"
My objective is to find, across all the logs, the (service, operation) combinations that have more than 20 events with responseTime > 40.
Expected output: service2  sampleOperation2  [sampleRequestId2, uniqueRequestId3]
The query I have for now is:
search here...
| stats count(eval(responseTime>60)) as responseCount by service, operation
| eval title = case(match(service,"service2") AND responseCount>20, "alert1")
| search title=*
| table title, service
But here I cannot refer to requestId, which has been dropped by stats. Please suggest a solution if you have one.
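A sketch of one way to keep the request IDs through the aggregation, reusing the base search from the draft above and the responseTime > 40 threshold from the description:

search here...
| stats count(eval(responseTime>40)) as responseCount values(eval(if(responseTime>40, requestId, null()))) as requestIds by service, operation
| where responseCount > 20
| table service, operation, responseCount, requestIds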
Hi All, I was trying to create a global field for newly indexed data, so I am trying out the automatic lookup settings. For example, the datacenter name is not mentioned in the indexed data, so I wanted to populate it using an automatic lookup. I am able to do that, but only for 1 sourcetype, and I have 100+ sourcetypes. Is there any way to define the "Apply to" sourcetype/host so that it covers multiple ones? Please let me know.
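For what it's worth, an automatic lookup is keyed on a props.conf stanza, and that stanza can match a host:: or source:: pattern instead of a single sourcetype, which is one way to cover many sourcetypes at once. A sketch, with the lookup definition name, key field, and output field as placeholders:

props.conf:
[host::*]
LOOKUP-datacenter = datacenter_lookup host OUTPUT datacenter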
Hello everyone! I have a time in the format 2022-09-02T18:44:15, which is in GMT+3, and I need to convert this time to UTC. Can you help me?
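A minimal sketch with eval, assuming the value is in a field called event_time (a placeholder) and that a fixed 3-hour offset is acceptable: parse it, subtract three hours, and format it again.

| eval ts_epoch = strptime(event_time, "%Y-%m-%dT%H:%M:%S")
| eval time_utc = strftime(ts_epoch - 3*3600, "%Y-%m-%dT%H:%M:%S")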
Splunk Connect for Zoom stopped working after Zoom enforced the use of SSL certificates on 2022/07/20. After support tickets with Zoom and Splunk, here is some experience I would like to share. Using SSL certificates signed by a private or internal CA did not work; it seems I had to use a certificate signed by a commercial CA like Entrust. If you want to chain your SSL certificate with the Entrust root and intermediate certificates, please ensure that the certificates are in the following order after running the command:
openssl crl2pkcs7 -nocrl -certfile yoursslcertificate.entrust.pem | openssl pkcs7 -print_certs -noout
Or you could just include the commercially issued SSL certificate without the root and intermediate certificates.
subject=/C=US/ST=STATE/L=CITY/O=ORG, Inc./CN=mycompany.com
issuer=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2012 Entrust, Inc. - for authorized use only/CN=Entrust Certification Authority - L1K
subject=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2012 Entrust, Inc. - for authorized use only/CN=Entrust Certification Authority - L1K
issuer=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2009 Entrust, Inc. - for authorized use only/CN=Entrust Root Certification Authority - G2
subject=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2009 Entrust, Inc. - for authorized use only/CN=Entrust Root Certification Authority - G2
issuer=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2009 Entrust, Inc. - for authorized use only/CN=Entrust Root Certification Authority - G2
If all works after restarting Splunk, running netstat -nap | grep 4443 will show connections like the following from Zoom IP addresses, and you will see logs under sourcetype=zoom:webhook:
tcp 0 0 0.0.0.0:4443 0.0.0.0:* LISTEN 25849/python3.7
tcp 0 0 10.#.#.#:4443 3.235.82.171:41101 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:58497 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:54514 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:48513 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:53006 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:55259 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:46028 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:52837 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:7527 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:12934 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.83.101:32088 TIME_WAIT -
Hello all, how is it possible to change the default dump folder on Windows?
I had so much trouble with this but figured I would share what I did to make it work for me. You may have other ways of doing it but I found very little guidance online to help someone going through the process. If you have done other things that worked for you, feel free to reply and share.
Hello all, a Splunk newbie here. For the company that I work for, we want to monitor some licenses that are being used. The logs show the user and the type of license event. The type can for the most part be IN (not using) or OUT (using the license), and sometimes DENIED, but that is not of interest currently. Because users sometimes forget to log off, we want to take this into account by looking at the data over the past 2 weeks. I take the most recent type for each user and count the ones where the type is OUT, because that means the user is holding a license. This gives a count of OUT over the past 2 weeks, which is pretty close to what the license manager shows. This count of OUT over the past 2 weeks needs to be shown every 5 minutes on a (time)chart. So, is it possible to have a (time)chart that runs a count over the past 2 weeks every 5 minutes?
For the query I have:
base search
| dedup 1 user sortby -_time
| table user type _time
| search type=out
This gives me only the users whose latest type is OUT, which means these are the ones that are using a license. Again, I would like to count the number of OUTs over the past 2 weeks and have that number recalculated every 5 minutes and shown on a (time)chart. I have tried loads of stuff (from other posts) but did not manage to get it to work. There already is a workaround where we use an ETL tool with the Splunk API as middleware, but I thought there should be a more efficient way to do it. If any more info is needed I can (hopefully) provide that. Thanks in advance, M.
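One common pattern for "recompute a trailing window every few minutes" is a scheduled search that writes one summary result per run, plus a panel that charts the summary. A sketch, keeping the dedup logic from the post and using placeholder index names; the search would be scheduled every 5 minutes over a 2-week window:

index=license_index earliest=-14d
| dedup 1 user sortby -_time
| search type=out
| stats dc(user) as licenses_in_use
| collect index=license_summary

The dashboard panel can then run something like index=license_summary | timechart span=5m max(licenses_in_use) to plot the value every 5 minutes.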
Can we configure inputs.conf to define a port with multiple sourcetypes? For example:
[tcp://6134]
index = top
sourcetype = mac_log
sourcetype = tac_log
disabled = 0
Or is there any way to segregate logs coming in on one port into different sourcetypes?
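An input stanza can only set a single sourcetype, so the usual approach is to assign one base sourcetype at the input and rewrite it at parse time with props and transforms. A sketch, where the base sourcetype name and the regexes that distinguish the two feeds are placeholders:

inputs.conf:
[tcp://6134]
index = top
sourcetype = tcp_6134_raw
disabled = 0
props.conf:
[tcp_6134_raw]
TRANSFORMS-set_sourcetype = set_mac_log, set_tac_log
transforms.conf:
[set_mac_log]
REGEX = pattern_that_identifies_mac_log
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::mac_log
[set_tac_log]
REGEX = pattern_that_identifies_tac_log
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::tac_log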
Hi All, we are using the NetApp Cloud Secure add-on to collect data from Cloud Secure, and we have configured the input but are not getting all the data. Below is the configuration; please suggest if anything needs to be added.
[cloud_secure_alerts://******]
builtin_system_checkpoint_storage_type = auto
entityaccessedtime = 1635795607850
index = main
interval = 60
netapp_secure_insight_fqdn = ********.cloudinsights.netapp.com
sourcetype = netapp:cloud_secure:alerts
I am trying to configure NEAP policy action rules to integrate ServiceNow incident comments by passing a token, but it looks like Splunk doesn't support tokens in NEAP action rules. I heard there is a custom script that can pass the tokens; does anybody have an idea about this customization and how we can achieve it?
We have enabled the bidirectional correlation search for ServiceNow in our ITSI. Unfortunately, the itsi_notable_event_external_ticket lookup is not updating with the proper values, and I couldn't find the saved search that updates the lookup, so I can't troubleshoot further. Can someone tell me how the itsi_notable_event_external_ticket lookup is being updated?
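One generic way to hunt for whatever maintains it is to list every saved search whose SPL mentions the lookup (a sketch of a general technique, not a statement about how ITSI actually updates this particular lookup, which may not be a saved search at all):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*itsi_notable_event_external_ticket*"
| table title eai:acl.app is_scheduled cron_schedule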
I have borrowed a search from an earlier question to help give kWh information for a given month. How can I modify the search to show only the host_name and the sum total of the avg_kWh column?
index=network sourcetype=zabbix metric_name="st4InputCordActivePower" host_name="pdu02.LON5.Contoso.com"
| bin _time span=1h
| stats count as samples sum(value) as watt_sum by _time
| eval kW_Sum=watt_sum/1000
| eval avg_kWh=kW_Sum/samples
| addcoltotals
Sample output (first row, last row, and the addcoltotals row):
_time             samples  watt_sum    avg_kWh     kW_Sum
2022-05-30 18:00  12       44335.0     3.69458     44.3350
...
2022-05-31 23:00  12       43489.0     3.62408     43.4890
                  7686     27425688.0  2595.96346  27425.6880
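A sketch of one way to reduce this to a single row per host: group by host_name as well, then sum avg_kWh at the end (same base search as above):

index=network sourcetype=zabbix metric_name="st4InputCordActivePower" host_name="pdu02.LON5.Contoso.com"
| bin _time span=1h
| stats count as samples sum(value) as watt_sum by _time, host_name
| eval avg_kWh=(watt_sum/1000)/samples
| stats sum(avg_kWh) as total_kWh by host_name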