All Topics



I was asked to ask: our alerts rely on various lookups, lookup generators, and other searches. If anything in these underlying layers fails, we end up with an alert whose SPL fails, and these failures are silent; the alert simply doesn't fire, and we have no idea whether that's because of an error in the SPL or because there are no events to trigger it. Is there any option to create an alert action that sends us an email whenever a scheduled alert's SPL fails due to errors in that SPL? We really need that.
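There is no built-in "my SPL broke" alert action, but the scheduler writes an outcome for every scheduled run to _internal, and that can itself be alerted on. A minimal sketch; the list of non-failure statuses below is an assumption and should be verified against your own scheduler events:

```spl
index=_internal sourcetype=scheduler savedsearch_name=* status=*
| search NOT status IN ("success", "continued", "delegated_remote", "delegated_remote_completion")
| stats count latest(_time) as last_failure by savedsearch_name status
| convert ctime(last_failure)
```

Saved as its own scheduled alert with an email action (trigger when number of results > 0), this acts as a meta-alert over all scheduled searches.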
I have a large environment where the time zones between hosts and Splunk are off, by minutes and at times by hours. How do I get started? If you have done such a project, please share the procedures and any helpful SPL. Thanks a million.
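One way to get started is to measure the gap between event time and index time per host: offsets that cluster near whole hours usually indicate a time-zone misconfiguration, while small constant offsets suggest clock drift. A sketch (the 300-second threshold is an arbitrary assumption):

```spl
index=* earliest=-4h
| eval lag = _indextime - _time
| stats avg(lag) as avg_lag perc90(lag) as p90_lag by host sourcetype
| where abs(avg_lag) > 300
| sort - avg_lag
```

Hosts whose avg_lag sits near multiples of 3600 are prime candidates for a TZ override in props.conf for that host or sourcetype.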
I have an input text and an input dropdown that both need to allow a blank value. They cannot be null, since the token must be set or the queries that use it won't run. I need a prefix and a suffix, both wildcards, only when there is a value, and a single wildcard (*) in its absence. I tried this, but the prefix and suffix keep multiplying, and soon I have 10 suffixes and 10 prefixes. Here's the input text:

<prefix/>
<suffix/>
<change>
  <eval token="assetFilter">if(len($assetFilter$)&gt;0, "*" . $assetFilter$ . "*", "*")</eval>
</change>

After entering, removing, entering, and removing values, the suffix and prefix kept multiplying and eventually looked like this: ****tag=****

Here's the other one, a dropdown of macros, so it needs the ` character with the wildcard. After selecting, unselecting, selecting, and unselecting values in the dropdown:

<change>
  <eval token="asset">if(len($asset$)&gt;0, "*`" . $asset$ . "`*", "*")</eval>
</change>

*`*`*`*`*`*`*`*`*`*`*`*`*`*`*`ED_ENDI_Asdf`*`*`*`*`*`*`*`*`*`*`*`*`*`*`*

How can I use prefix and suffix conditionally in a way that actually works? -c
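The multiplication happens because the <change> handler rewrites the very token the input owns, so each change re-wraps the already-wrapped value. A sketch of one way around it: let the input own a raw token and derive the decorated token from it (token names here are illustrative; the queries would then reference $assetFilter$ instead of the input's token):

```xml
<input type="text" token="assetFilterRaw">
  <change>
    <!-- write into a *different* token, so the input's own value is never re-wrapped -->
    <eval token="assetFilter">if(len($assetFilterRaw$)&gt;0, "*" . $assetFilterRaw$ . "*", "*")</eval>
  </change>
</input>
```

The <prefix/>/<suffix/> elements are then no longer needed, since the decoration lives entirely in the derived token. One caveat to verify: on first load the derived token is unset until the input changes, so a default on the input (or a form-level <init> block) may be required.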
I have a list of file names under one field called "attachment" and I would like to split this string into multiple rows by file name. I tried the split/makemv method but it was unsuccessful. I'd really appreciate any tips on separating this string by file. Thank you.

For example, the raw output is:

"image001.png Project_Preference_crosstab (3).csv GC_Preferences - 8 Oct.xlsx GC updated skills with ratings - 30 Sep 2021.xlsx Skill_Details_-_Base_crosstab (3).csv AP Talent list - 30 Sep 2021.xlsx UCD_Skills_Compliance_Details_crosstab (2).csv"

I would like to see:

image001.png
Project_Preference_crosstab (3).csv
GC_Preferences - 8 Oct.xlsx
GC updated skills with ratings - 30 Sep 2021.xlsx
Skill_Details_-_Base_crosstab (3).csv
AP Talent list - 30 Sep 2021.xlsx
UCD_Skills_Compliance_Details_crosstab (2).csv
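Because the file names themselves contain spaces, a plain split on whitespace can't work; splitting on the extension boundary can. A sketch that assumes the set of extensions is known and finite:

```spl
| makemv tokenizer="((?:\S+ )*?\S+\.(?:png|csv|xlsx))" attachment
| mvexpand attachment
```

The tokenizer lazily consumes space-separated words until it hits one ending in a known extension, treating that whole run as one file name; extend the extension alternation as needed for your data.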
I need help using the values from a lookup table in multiple fields, where the output of the lookup is a list of values. The values from the table should be populated into a_imsi, b_imsi, c_imsi... instead of "*". I tried the query below and some other variations, but none of them worked.

index=* sourcetype=v_main (a_imsi=* OR b_imsi=* OR c_imsi=* OR d_imsi=* OR Imsi=*)
| lookup ADHOC.csv Comment OUTPUT Imsi
| eval IMSI=mvappend(a_imsi,b_imsi,c_imsi,d_imsi,Imsi)
| mvexpand IMSI
| bin span=1d _time
| stats sum(TCDuration) as TCDuration by _time IMSI
| eval TCDuration=TCDuration/1000
| eval Utilization=round(((TCDuration/86400)*100),1)
| eval Utilization=if(Utilization>100, 100, Utilization)
| fields - TCDuration
| timechart eval(round(avg(Utilization),1)) by IMSI limit=0

Any ideas would be really helpful. Thanks so much.
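An alternative sketch: instead of filtering after retrieval, build the OR filter directly from the lookup with a subsearch, so only matching events are searched at all. The format arguments below switch the within-row separator to OR; the Imsi column name is taken from the question, and the generated string should be checked first by running the subsearch on its own:

```spl
index=* sourcetype=v_main
    [| inputlookup ADHOC.csv
     | fields Imsi
     | eval a_imsi=Imsi, b_imsi=Imsi, c_imsi=Imsi, d_imsi=Imsi
     | fields a_imsi b_imsi c_imsi d_imsi Imsi
     | format "(" "(" "OR" ")" "OR" ")"]
```

Each lookup row expands to (a_imsi=X OR b_imsi=X OR ... OR Imsi=X), and the rows themselves are OR'd together.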
What search can I run to find peers with status=down? I'm looking to create an alert when this happens but can't find it within a search.
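If the search runs on the search head, the distributed-peer status is exposed over REST. A sketch; the exact status values ("Up", "Down", etc.) should be verified in your environment before alerting on them:

```spl
| rest /services/search/distributed/peers splunk_server=local
| search status!="Up"
| table peerName status
```

Scheduled as an alert with the trigger condition "number of results > 0", this fires whenever any peer is not reporting Up.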
I have a dashboard with a timechart displaying a count of values occurring every hour. My query is:

index=app host=... sourcetype="..." siteType=...
| timechart span=1h count(eval(status!=200)) as Fails
| eval Time = strftime(_time, "%Y-%d-%m %H:%M:%S")
| fields - _time
| table Time, Fails

This works perfectly, but I want to add a drilldown on my table so the user can click on a row and see all the values for that hour. The closest thing I have been able to come up with is this query:

index=app host=... sourcetype="..." siteType=... status!=200 ((earliest=$earliest$ latest<$latest$))

But if I click on a row, it gives me a search that looks like this:

index=app host=... sourcetype="..." siteType=... status!=200 ((earliest=1633096800 latest<1633702712))

And I get an error in the search, "Invalid earliest_time." What is going on here? Is there a conversion I need to do on the earliest and latest tokens to get the correct time?
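Two things are likely at play: latest< is not valid time-modifier syntax (it must be latest=), and since the table's Time column is a formatted string, a row click should convert that string back to epoch rather than reuse the panel-wide earliest/latest. A Simple XML sketch (token names are illustrative):

```xml
<drilldown>
  <!-- parse the clicked row's formatted Time back to epoch seconds -->
  <eval token="d_earliest">strptime($row.Time$, "%Y-%d-%m %H:%M:%S")</eval>
  <!-- one hour later, matching the span=1h buckets -->
  <eval token="d_latest">relative_time(strptime($row.Time$, "%Y-%d-%m %H:%M:%S"), "+1h")</eval>
</drilldown>
```

The drilldown target can then use earliest=$d_earliest$ latest=$d_latest$ to scope the detail search to the clicked hour.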
Hello there, I hope someone can help! A report we generated doesn't send emails, or only sporadically. Also, the action time for it is very high, as you can see here (field "action_time_ms"):

10-08-2021 14:20:35.156 +0200 INFO SavedSplunker - savedsearch_id="nobody;search;SIGNL4 - High or Critical Notable Events Clone", search_type="scheduled", user="maximilian.wehner", app="search", savedsearch_name="SIGNL4 - High or Critical Notable Events Clone", priority=default, status=success, digest_mode=0, scheduled_time=1633695120, window_time=0, dispatch_time=1633695122, run_time=2.838, result_count=1, alert_actions="email", sid="scheduler_bWF4aW1pbGlhbi53ZWhuZXI__search__RMD55d86aa6233cebf27_at_1633695120_428", suppressed=0, fired=1, skipped=0, action_time_ms=509817, thread_id="AlertNotifierWorker-1", message="", workload_pool=""

action_time_ms is a LOT, so something is preventing the email from being sent. Usually Splunk can send an email without configuring a mail server, but currently we want to use our O365 mail server for it. This has been tested in another environment, and there it definitely works like a charm. Here is the config of the alert and the mail server. We've artificially set maxtime very high to check whether Splunk finally sends the mail after a while; the record was over 8 minutes until a mail was sent.

My questions: how can this happen, and is there a way to further investigate and resolve this issue? This alert is mandatory for a security view, and if it only arrives every now and then, that's a major issue.

[email]
auth_password = ****
auth_username = user@xyz.de
from = splunk@sxyz.de
mailserver = smtp.office365.com
pdf.header_left = none
pdf.header_right = none
use_tls = 1
reportPaperSize = a4
hostname = somehostname
maxtime = 20m

Is there something wrong with the config? What can I do to further troubleshoot this issue and hopefully resolve it?
I guess this issue has come up in the past. Thanks a lot for your help!
Hello, I have a field with these values:

/v1/accounts/96ea01b5-7ea7-4dc6-b534-39ae8b114bba/transactions
/v1/accounts/ff572b85-c3c6-4e54-8343-75c5aa954285
/v1/accounts/469754d0-9169-45ca-af86-a885142d6ad4/transactions
/v1/accounts/c68b8246-bd76-4d34-9d33-7fb4be4ebe9f/limits
/v1/accounts/d9f1e948-e9aa-4a46-9e78-deeaf1d21143/limits
/v1/accounts/f6fa235c-858d-42d2-80ae-85b12a750351
/v1/accounts/f4a0877f-5807-41ed-b7ee-c6be2e4e25be
/v1/accounts/042c6b58-ea01-48cd-838e-06929b427f75

I need a query that shows me only the lines that have nothing after the ID. Example:

/v1/accounts/ff572b85-c3c6-4e54-8343-75c5aa954285
/v1/accounts/f6fa235c-858d-42d2-80ae-85b12a750351
/v1/accounts/f4a0877f-5807-41ed-b7ee-c6be2e4e25be
/v1/accounts/042c6b58-ea01-48cd-838e-06929b427f75

Thanks
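A sketch, assuming the field is called uri (a placeholder name) and the IDs are standard 36-character UUIDs; anchoring the regex at the end of the value drops anything with a trailing segment:

```spl
| regex uri="^/v1/accounts/[0-9a-fA-F-]{36}$"
```

The same pattern with a leading NOT would return the complementary set (the /transactions and /limits lines).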
I am trying to install the Splunk App for Jenkins (v 2.0.4) in our Splunk Cloud environment (v 8.1.2103.3) and the app is failing vetting. On the Splunkbase page for the app, the compatibility indicates that I should be able to install it. Is someone out there actively maintaining this app who can take note of some changes in a future version? Thanks, REID

These are the messages related to the failures (there are some warnings too):

[ Failure Summary ] Failures will block the Cloud Vetting. They must be fixed.

check_rest_handler_python_executable_exists
The handler of stanza [script:customIndex] should be `python3` executable. File: default/restmap.conf Line Number: 12
The handler of stanza [script:customPanel] should be `python3` executable. File: default/restmap.conf Line Number: 1
The handler of stanza [script:userValidation] should be `python3` executable. File: default/restmap.conf Line Number: 23

check_for_telemetry_metrics_in_javascript
The telemetry operations are not permitted. Match: window._splunk_metrics_events.push File: appserver/static/pages/job.js Line Number: 6
The telemetry operations are not permitted. Match: window._splunk_metrics_events.push File: appserver/static/pages/health.js Line Number: 6
The telemetry operations are not permitted. Match: window._splunk_metrics_events.push File: appserver/static/pages/audit.js Line Number: 6
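For the python3 failures, the usual remedy in a patched copy of the app is to declare the interpreter on each flagged REST script stanza. A sketch; whether the underlying scripts are actually Python 3 compatible has to be checked separately, and the telemetry calls in the JS files need their own removal:

```ini
# restmap.conf, in the flagged stanzas
[script:customIndex]
python.version = python3

[script:customPanel]
python.version = python3

[script:userValidation]
python.version = python3
```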
I am using the following in a configuration distributed to several remote syslog servers. It works as expected on all UFs except one. From that single UF, the 'host' field in the indexed events is being reported as "PaloAlto" instead of the 4th path segment as expected. I searched through all of the .conf files on the UF manually and used btool, looking for a missed "host_segment" entry or something hidden in another config that would cause this; none found. Am I missing something obvious to the rest of you?

[monitor:///app01/logs/ASA]
whitelist = .+\.log$
host_segment = 4
index = syn_netfw
sourcetype = cisco:asa
ignoreOlderThan = 2d
disabled = 0

[monitor:///app01/logs/PaloAlto]
whitelist = .+\.log$
host_segment = 4
index = syn_netfw
sourcetype = pan:log
ignoreOlderThan = 2d
disabled = 0
Hi, we recently deployed IT Essentials Work with the latest Exchange Content Pack. We also deployed the three add-ons for Exchange on the Exchange nodes (including IIS and OWA logs). Now we are in the process of validating the ITSI dashboards; External Logins Map is one of them, and we realized that the extracted source IP (c_ip field) corresponds to our load balancer (XXX.XXX.XXX.XXX) instead of the remote host (the IP shown at the end of the event). Below is an example of an Exchange event that reaches our Splunk infra.

2021-10-08 12:22:31 XXX.XXX.XXX.XXX POST /Microsoft-Server-ActiveSync/default.eas Cmd=Ping&User=---%5n---&DeviceId=-------&DeviceType=Outlook&CorrelationID=<empty>;&cafeReqId=c586f22d-14cd-4449-be95-fe666b30c92e; 443 -------\----- 192.168.X.X Outlook-iOS-Android/1.0 - 200 0 0 181382 52.98.193.109

We use the official TA-Exchange-2013-Mailbox, TA-Exchange-ClientAccess, and TA-Windows-Exchange-IIS add-ons. I found the definition of the c_ip field in transforms.conf and props.conf in TA-Windows-Exchange-IIS, but I don't see any specific regex for its correct extraction. Could someone tell me how to proceed to fix this parsing issue so the dashboards can show correct information? Many thanks.
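If, as in the sample event, the real client address is appended at the end of the IIS line (typically an X-Forwarded-For value added by the load balancer), a search-time override can take the last IP on the line instead. A sketch; the sourcetype stanza name is a placeholder, and the REGEX should be tested against your events before overriding the add-on's c_ip:

```ini
# props.conf (local), on the sourcetype the content pack searches
[your:iis:sourcetype]
REPORT-real_client_ip = real_client_ip

# transforms.conf (local)
[real_client_ip]
# capture the last IPv4 address on the line as the client IP
REGEX = (\d{1,3}(?:\.\d{1,3}){3})\s*$
FORMAT = c_ip::$1
```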
Hi, my regex is as below:

search | rex field=_raw "Status=(?<Status>\"\w+\s+\w+\".*?)," | stats count by Status

My output is like this:

Status          count
"No Service"    250
Service         500

but I need the output to be:

Status          count
No Service      250
Service         500

I need the status ("No Service") to appear as No Service, without the double quotes, in the output. Please let me know what I am missing here.
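A sketch of one way to strip the quotes after extraction, before the stats groups the values:

```spl
search
| rex field=_raw "Status=(?<Status>\"\w+\s+\w+\".*?),"
| eval Status=trim(Status, "\"")
| stats count by Status
```

trim(X, "\"") removes leading and trailing quote characters; alternatively the rex capture group itself could be rewritten to exclude the quotes.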
Hello, we are planning an upgrade of Splunk Enterprise. We currently have 2 nodes on version 8.0.6 and would like to move to version 8.2.2. Among others, we use the following apps:

Splunk Add-on for Blue Coat ProxySG Splunk_TA_bluecoat-proxysg 3.6.0
Splunk Add-on for F5 BIG-IP Splunk_TA_f5-bigip 3.1.0
Pulse Connect Secure Splunk_TA_pulse_connect_secure 1.1.3
WALLIX Bastion TA-WALLIX_Bastion 1.0.4

Should we plan to upgrade these apps along with Splunk Enterprise 8.2? We also use Universal Forwarders in version 7.0. Will they still be compatible with Splunk 8.2? Thank you, Jean-Christophe Hermitte
Hi folks, we have log file monitoring on a text file that gets updated once a week, and Splunk reads the data from that file. Today we faced a situation where the log file was updated with today's data but no logs were sent to Splunk. We checked splunkd.log and didn't find any info related to that specific log file; the Splunk UF was connected to the HF, everything was working fine, and other data was flowing to Splunk as usual. However, after a Splunk restart the data was sent. I was wondering: if a log file is not updated for some time, will Splunk ignore the file and stop monitoring it until a restart? We have the stanza setting ignoreOlderThan set to 5d; does this have something to do with it? We are aware that ignoreOlderThan is used to ignore log data older than the specified time; we just want to make sure this is not the case here.
I have a task where I am successful in getting result sets from nodes that are present in my Splunk instance. However, I can't find a way in SPL to also report the names of nodes that don't exist, with a Yes or No value in another field, i.e.:

Node        present
Appserver   No
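A sketch of the usual pattern: append the full expected node list (here assumed to live in a lookup expected_nodes.csv with a Node column) with a zero count, then flag whichever nodes only came from the lookup:

```spl
... | stats count by Node
| append
    [| inputlookup expected_nodes.csv
     | fields Node
     | eval count=0]
| stats sum(count) as count by Node
| eval present=if(count>0, "Yes", "No")
| table Node present
```

Nodes seen in the events keep a positive count and get "Yes"; nodes present only in the lookup sum to 0 and get "No".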
Hello all, while connecting to the Splunk Cloud application through an ODBC DSN configuration I am getting an HTTP Protocol Error 404. Can someone suggest whether the error is caused by a firewall issue, or whether the link should use an IP address? NOTE: Through the browser I am able to log in to the application with my credentials.
Hi all, is there any app, method, or guidance for ingesting emails directly from an O365 mailbox? A use case for us would be:

We have a mailbox which receives phishing reports
SOAR logs onto the mailbox, downloads the unread mails, and turns them into "Events"
A playbook begins working on these events: checking URLs, checking to/from addresses, maybe further triage based on O365 logs or whatever
Detonate mail/attachments in a sandbox and capture network/process/file related results, e.g. Cuckoo
The playbook decides if the mail is okay, suspicious, or phishing (or integrates with another tool to get that information, e.g. Proofpoint)
All information is made available to the analyst who reviews

In order to kick these off we'd need to be able to INGEST the email to begin with, but I don't see any way to do that at present. If it doesn't exist I will write my own app for it, but I don't want to reinvent the wheel if I don't have to. Thanks!
Hi! I have the following data and would like to check, for records with the same ID, whether one record's CREATED_DATE falls between the CREATED_DATE and RESOLVED_DATE of another. So in the example, the first record with ID 123 was created on 10-04 and resolved on 10-07, and the second record with the same ID was created on 10-05, while the first one was still open. Can we do this kind of check in Splunk?

ID   CREATED_DATE         RESOLVED_DATE
123  2021-10-04 19:30:35  2021-10-07 15:13:16
123  2021-10-05 16:11:25  2021-10-15 12:05:32
456  2021-03-05 10:10:13  2021-05-05 11:05:21

We'd need another column, say CHECK, that says "overlap" when the second record was created within the range of the first one with the same ID. Thank you very much in advance!
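This can be done with streamstats carrying the previous record's window forward per ID. A sketch; note it only compares each record against the immediately preceding one for that ID, which matches the example data:

```spl
| eval created=strptime(CREATED_DATE, "%Y-%m-%d %H:%M:%S"),
      resolved=strptime(RESOLVED_DATE, "%Y-%m-%d %H:%M:%S")
| sort 0 ID created
| streamstats current=f window=1 last(created) as prev_created last(resolved) as prev_resolved by ID
| eval CHECK=if(created>=prev_created AND created<=prev_resolved, "overlap", "")
| table ID CREATED_DATE RESOLVED_DATE CHECK
```

With current=f and window=1, each row sees only the prior row's dates within the same ID, so the first record of every ID gets no prev_* values and an empty CHECK.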
Hi all, strange thing: when using mean() and avg() in the same stats command, whichever is written first is empty, while the second is shown with the correct result.

... | stats mean(Capacity) avg(Capacity)

mean(Capacity)   avg(Capacity)
                 20.71428

... | stats avg(Capacity) mean(Capacity)

avg(Capacity)   mean(Capacity)
                20.71428

I know they are basically the same values, but why can't I show them side by side? Each function on its own works fine. Adding any of the other statistical functions is also no problem; only avg() and mean() don't go together. Why? I'm on 8.2.0 at the moment. Thank you very much and kind regards, Gunnar
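Since mean() is an alias for avg(), the two calls compete for the same internal result and one column ends up blank. A sketch of a workaround: give each an explicit output name with as, so the columns no longer collide:

```spl
... | stats avg(Capacity) as avg_capacity mean(Capacity) as mean_capacity
```

Both columns then carry the same value, which is expected, since the functions are identical.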