All Topics

Hello, I have a Universal Forwarder and a Heavy Forwarder on a Linux machine; how would I stop and restart them? Any help will be highly appreciated. Thank you so much, I appreciate your support in these efforts.
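A possible starting point, assuming default install paths (/opt/splunkforwarder for the UF, /opt/splunk for the HF — adjust to your environment):

```
# Universal Forwarder (default install path assumed)
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk start
/opt/splunkforwarder/bin/splunk restart

# Heavy Forwarder (a full Splunk Enterprise install, default path assumed)
/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk restart
```

If the instances were enabled for boot-start under systemd, `systemctl restart SplunkForwarder` (or the unit name chosen at enable time) may be the preferred way instead.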
Is there an SPL search to list all my hosts (Windows & Linux), the version of their UF, and the date, time & timezone? Thanks a million.
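One common approach is to query the forwarder connection records in _internal; a sketch (the exact field names such as os, arch, and version can vary between Splunk versions, so verify against your data):

```
index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(os) as os latest(arch) as arch latest(version) as uf_version max(_time) as last_seen by hostname
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S %Z")
```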
Has anyone run into an issue where the Microsoft 365 App for Splunk is causing a search head to crash? I'm wondering if part of the issue is the use of the Sankey visualizations on some of the dashboards, given the volume of data they try to display. The search heads that are crashing are moderately beefy. https://splunkbase.splunk.com/app/3786/ Does the new Answers backend support tagging the app? I forget how to do that.
We have Splunk Enterprise + ES. I have a dashboard that I'd like to install in Security Essentials. What level of permission does a user need to install it in Security Essentials: ES admin or Enterprise admin? Thanks a million for your reply.
Coming from an older version of Splunk: we basically have HTML links that, when selected, open a new tab with a pre-saved search. We have about 30 searches on a single page, all unique, and each opens in a new tab to display. How is this done in the newest version of Splunk? The only thing I can find is a panel that launches a saved search in a new tab, but it also shows on the dashboard, which is what I don't want, because I need 29 other items that can be selected.
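One way to reproduce a page of plain links in Simple XML is an <html> panel containing anchors that open the search or report view in a new tab. A sketch — the app name, search strings, and report name here are placeholders:

```
<dashboard>
  <label>Saved Search Links</label>
  <row>
    <panel>
      <html>
        <ul>
          <li><a href="/app/search/search?q=search index%3Dmain error" target="_blank">Errors</a></li>
          <li><a href="/app/search/report?s=My Saved Report" target="_blank">My Saved Report</a></li>
        </ul>
      </html>
    </panel>
  </row>
</dashboard>
```

With target="_blank" each link opens in its own tab, and nothing runs on the dashboard itself until a link is clicked.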
I was asked to ask: our alerts rely on various lookups, lookup generators, and other searches. If anything in these underlying layers fails, the alert's SPL fails, and these failures are silent — so the alert fails and we have no idea whether it's because of an error in the SPL or because there are simply no events. Is there any option to create an alert action that sends us an email whenever a scheduled alert's SPL fails due to errors in that SPL? We really need that.
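A sketch of a watchdog alert built on the scheduler's own logs in _internal (the exact status values can differ between Splunk versions, so inspect your scheduler logs first):

```
index=_internal sourcetype=scheduler status!=success status!=delegated_remote_completion
| stats count as failures latest(_time) as last_failure by app savedsearch_name status
| eval last_failure=strftime(last_failure, "%F %T")
```

Saving this as its own scheduled alert with an email action gives one central notification covering every failing scheduled search.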
I have a large environment in which the timezones between hosts & Splunk are off by minutes & hours at times. How do I get started? If you have done such a project, please share the procedures and any helpful SPL. Thanks a million.
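A common first step (a sketch) is to measure the gap between event time and index time per host: small positive values are normal forwarding latency, while large consistent offsets — especially near whole hours — usually point at TZ misconfiguration:

```
index=* earliest=-1h
| eval lag_sec=_indextime-_time
| stats avg(lag_sec) as avg_lag median(lag_sec) as median_lag max(lag_sec) as max_lag by host sourcetype
| sort - avg_lag
```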
I have a text input and a dropdown input that both need to allow a blank value. They cannot be null, since the token must be set or the queries that use it won't run. I need a wildcard prefix and suffix only when there's a value, and a single wildcard (*) in its absence. I tried this, but the prefix and suffix keep multiplying and soon I have 10 suffixes and 10 prefixes. Here's the text input:

<prefix/>
<suffix/>
<change>
  <eval token="assetFilter">if(len($assetFilter$)&gt;0, "*" . $assetFilter$ . "*", "*")</eval>
</change>

After entering, removing, entering, and removing values, the suffix and prefix kept multiplying and eventually looked like this: ****tag=****

Here's the other one, a dropdown of macros, so it needs the ` character with the wildcard; after selecting, unselecting, selecting, and unselecting values in the dropdown:

<change>
  <eval token="asset">if(len($asset$)&gt;0, "*`" . $asset$ . "`*", "*")</eval>
</change>

*`*`*`*`*`*`*`*`*`*`*`*`*`*`*`ED_ENDI_Asdf`*`*`*`*`*`*`*`*`*`*`*`*`*`*`*

How can I use the prefix and suffix conditionally in a way that actually works? -c
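The multiplication happens because the <eval> writes back into the same token it reads, so each change event wraps the already-wrapped value again. A sketch of one fix (token names here are placeholders): bind the input to a raw token and derive the search token from it, so the eval always starts from the unwrapped user value:

```
<input type="text" token="assetRaw">
  <label>Asset filter</label>
  <change>
    <!-- always derive from the raw token, never from assetFilter itself -->
    <eval token="assetFilter">if(len($assetRaw$)&gt;0, "*" . $assetRaw$ . "*", "*")</eval>
  </change>
</input>
```

Since the <change> handler only fires on changes, you may also want a form-level <init> block that seeds assetFilter to * so the queries can run before the first edit.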
I have a list of file names under one field called "attachment" and I would like to split this string into multiple rows by file name. I tried the split/makemv method but it was unsuccessful. I'd really appreciate any tips on separating this string by file. Thank you.

For example, the raw output is:
"image001.png Project_Preference_crosstab (3).csv GC_Preferences - 8 Oct.xlsx GC updated skills with ratings - 30 Sep 2021.xlsx Skill_Details_-_Base_crosstab (3).csv AP Talent list - 30 Sep 2021.xlsx UCD_Skills_Compliance_Details_crosstab (2).csv"

I would like to see:
image001.png
Project_Preference_crosstab (3).csv
GC_Preferences - 8 Oct.xlsx
GC updated skills with ratings - 30 Sep 2021.xlsx
Skill_Details_-_Base_crosstab (3).csv
AP Talent list - 30 Sep 2021.xlsx
UCD_Skills_Compliance_Details_crosstab (2).csv
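Because the file names themselves contain spaces, splitting on whitespace can't work; one sketch is to split on the file extensions instead. This assumes only .png/.csv/.xlsx appear — extend the alternation for other extensions in your data:

```
| rex field=attachment max_match=0 "(?<file>\S.*?\.(?:png|csv|xlsx))"
| mvexpand file
| table file
```

max_match=0 makes rex capture every match into a multivalue field, and mvexpand then turns each value into its own row.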
I need help using the values from a lookup table in multiple fields, where the output from the lookup table is a list of values. The values from the table should populate a_imsi, b_imsi, c_imsi... instead of "*". I tried the query below and some other variations, but none of them worked.

index=* sourcetype=v_main (a_imsi=* OR b_imsi=* OR c_imsi=* OR d_imsi=* OR Imsi=*)
| lookup ADHOC.csv Comment OUTPUT Imsi
| eval IMSI=mvappend(a_imsi,b_imsi,c_imsi,d_imsi,Imsi)
| mvexpand IMSI
| bin span=1d _time
| stats sum(TCDuration) as TCDuration by _time IMSI
| eval TCDuration=TCDuration/1000
| eval Utilization=round(((TCDuration/86400)*100),1)
| eval Utilization=if(Utilization>100, 100, Utilization)
| fields - TCDuration
| timechart eval(round(avg(Utilization),1)) by IMSI limit=0

Any ideas would be really helpful. Thanks so much.
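One sketch for filtering the base search by the lookup's values: a subsearch per field, each renaming the lookup column to the event field and letting `format` expand the rows into an OR clause (field and lookup names taken from the question; repeat the pattern for c_imsi and d_imsi):

```
index=* sourcetype=v_main
    ([| inputlookup ADHOC.csv | fields Imsi | rename Imsi as a_imsi | format ]
     OR [| inputlookup ADHOC.csv | fields Imsi | rename Imsi as b_imsi | format ])
```

Each subsearch expands to something like (a_imsi="123" OR a_imsi="456"), which replaces the open-ended a_imsi=* filter.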
What search can I use to find peers with status=down? I'm looking to create an alert when this happens, but I can't find it within a search.
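A sketch using the REST endpoints — which one applies depends on whether you mean distributed search peers (run from the search head) or indexer cluster peers (run from the cluster manager):

```
| rest /services/search/distributed/peers splunk_server=local
| search status!="Up"
| table peerName status

| rest /services/cluster/master/peers splunk_server=local
| search status!="Up"
| table label status
```

Either search can be saved as an alert that fires when the result count is greater than zero.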
I have a dashboard with a timechart displaying a count of values occurring every hour. My query is:

index=app host=... sourcetype="..." siteType=...
| timechart span=1h count(eval(status!=200)) as Fails
| eval Time = strftime(_time, "%Y-%d-%m %H:%M:%S")
| fields - _time
| table Time, Fails

This works perfectly, but I want to add a drilldown on my table so the user can click on a row and see all the values for that hour. The closest thing I have come up with is this query:

index=app host=... sourcetype="..." siteType=... status!=200 ((earliest=$earliest$ latest<$latest$))

But if I click on a row, it gives me a search that looks like this:

index=app host=... sourcetype="..." siteType=... status!=200 ((earliest=1633096800 latest<1633702712))

And I get an error in the search: "Invalid earliest_time." What is going on here? Is there a conversion I need to do on the earliest and latest tokens to get the correct time?
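Two things stand out in that attempt: the global $earliest$/$latest$ tokens cover the whole time-picker range rather than the clicked hour, and `latest<...` is not valid time-modifier syntax (it should be latest=). A sketch of a row-level drilldown, assuming the table keeps an epoch `_time` column plus an `end` column computed with `| eval end=_time+3600`:

```
<drilldown>
  <link target="_blank">search?q=search%20index%3Dapp%20status!%3D200&amp;earliest=$row._time$&amp;latest=$row.end$</link>
</drilldown>
```

$row.<fieldname>$ resolves to the clicked row's value, so each row links to exactly its own one-hour window.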
Hello there, I hope someone can help! A report we generated doesn't send emails, or only sporadically. Also, the action time for it is very high, as you can see here (field "action_time_ms"):

10-08-2021 14:20:35.156 +0200 INFO SavedSplunker - savedsearch_id="nobody;search;SIGNL4 - High or Critical Notable Events Clone", search_type="scheduled", user="maximilian.wehner", app="search", savedsearch_name="SIGNL4 - High or Critical Notable Events Clone", priority=default, status=success, digest_mode=0, scheduled_time=1633695120, window_time=0, dispatch_time=1633695122, run_time=2.838, result_count=1, alert_actions="email", sid="scheduler_bWF4aW1pbGlhbi53ZWhuZXI__search__RMD55d86aa6233cebf27_at_1633695120_428", suppressed=0, fired=1, skipped=0, action_time_ms=509817, thread_id="AlertNotifierWorker-1", message="", workload_pool=""

action_time_ms is a LOT, so something is preventing the mail from being sent, or something else is going on. Usually Splunk can send an email without configuring a mail server, but currently we want to use our O365 mail server for it. This has been tested in another environment, where it definitely works like a charm. Here is the config of the alert and the mail server. We've artificially set maxtime very high to check whether Splunk finally sends the mail after a while; the record was over 8 minutes until a mail was sent.

My questions are: how can this happen, and is there a way to investigate further and resolve it? This alert is mandatory for a security view, and if it only arrives every now and then, that's a major issue.

[email]
auth_password = ****
auth_username = user@xyz.de
from = splunk@sxyz.de
mailserver = smtp.office365.com
pdf.header_left = none
pdf.header_right = none
use_tls = 1
reportPaperSize = a4
hostname = somehostname
maxtime = 20m

Is there something wrong with the config? What can I do to troubleshoot this further and hopefully resolve it? I guess this issue has come up in the past. Thanks a lot for your help!
Hello, I have a field with these values:

/v1/accounts/96ea01b5-7ea7-4dc6-b534-39ae8b114bba/transactions
/v1/accounts/ff572b85-c3c6-4e54-8343-75c5aa954285
/v1/accounts/469754d0-9169-45ca-af86-a885142d6ad4/transactions
/v1/accounts/c68b8246-bd76-4d34-9d33-7fb4be4ebe9f/limits
/v1/accounts/d9f1e948-e9aa-4a46-9e78-deeaf1d21143/limits
/v1/accounts/f6fa235c-858d-42d2-80ae-85b12a750351
/v1/accounts/f4a0877f-5807-41ed-b7ee-c6be2e4e25be
/v1/accounts/042c6b58-ea01-48cd-838e-06929b427f75

I need a query that shows me only the lines that have nothing after the ID. Example:

/v1/accounts/ff572b85-c3c6-4e54-8343-75c5aa954285
/v1/accounts/f6fa235c-858d-42d2-80ae-85b12a750351
/v1/accounts/f4a0877f-5807-41ed-b7ee-c6be2e4e25be
/v1/accounts/042c6b58-ea01-48cd-838e-06929b427f75

Thanks
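A sketch, assuming the field is called uri (substitute your real field name) and the IDs are standard hex UUIDs, is to anchor the pattern at the end of the value so anything after the ID fails the match:

```
| regex uri="^/v1/accounts/[0-9a-fA-F-]{36}$"
```

The same condition can also be written as `| where match(uri, "^/v1/accounts/[0-9a-fA-F-]{36}$")`.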
I am trying to install the Splunk App for Jenkins (v2.0.4) in our Splunk Cloud environment (v8.1.2103.3) and the app is failing vetting. On the Splunkbase page for the app, the compatibility indicates that I should be able to install it. Is someone out there actively maintaining this app who can take note of some changes for a future version? Thanks, REID

These are the messages related to the failures (there are some warnings too):

[ Failure Summary ]
Failures will block the Cloud Vetting. They must be fixed.

check_rest_handler_python_executable_exists
- The handler of stanza [script:customIndex] should be `python3` executable. File: default/restmap.conf Line Number: 12
- The handler of stanza [script:customPanel] should be `python3` executable. File: default/restmap.conf Line Number: 1
- The handler of stanza [script:userValidation] should be `python3` executable. File: default/restmap.conf Line Number: 23

check_for_telemetry_metrics_in_javascript
- The telemetry operations are not permitted. Match: window._splunk_metrics_events.push File: appserver/static/pages/job.js Line Number: 6
- The telemetry operations are not permitted. Match: window._splunk_metrics_events.push File: appserver/static/pages/health.js Line Number: 6
- The telemetry operations are not permitted. Match: window._splunk_metrics_events.push File: appserver/static/pages/audit.js Line Number: 6
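For the python3 failures, the usual remedy (a sketch of what the app maintainer would change in default/restmap.conf; the stanza name is taken from the vetting output, and the same line would be added to each flagged stanza) is to declare the handler's Python version explicitly:

```
[script:customIndex]
python.version = python3
```

The telemetry failures would require removing or gating the window._splunk_metrics_events.push calls in the listed JavaScript files, which only the app maintainer can ship.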
I am using the following in a configuration distributed to several remote syslog servers. It works as expected on all UFs except one. From that single UF, the 'host' field in the indexed events is being reported as "PaloAlto" instead of the 4th path segment as expected. I searched through all of the .conf files on the UF manually and used btool, looking for a missed "host_segment" entry or something hidden in another config that would cause this; none found. Am I missing something obvious to the rest of you?

[monitor:///app01/logs/ASA]
whitelist = .+\.log$
host_segment = 4
index = syn_netfw
sourcetype = cisco:asa
ignoreOlderThan = 2d
disabled = 0

[monitor:///app01/logs/PaloAlto]
whitelist = .+\.log$
host_segment = 4
index = syn_netfw
sourcetype = pan:log
ignoreOlderThan = 2d
disabled = 0
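Two checks that may help narrow this down (sketches; default install paths assumed). Note that a props/transforms host override for the pan sourcetype on the parsing tier — some Palo Alto add-ons ship one — would take precedence over host_segment set on the UF, which could explain why only this input misbehaves:

```
# effective monitor config as the UF resolves it
/opt/splunkforwarder/bin/splunk btool inputs list monitor:///app01/logs/PaloAlto --debug

# on the HF/indexer: look for a transform rewriting host at parse time
/opt/splunk/bin/splunk btool props list pan:log --debug | grep -i TRANSFORMS
```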
Hi, we recently deployed IT Essentials Work with the latest Exchange Content Pack. We also deployed the three add-ons for Exchange on the Exchange nodes (including IIS and OWA logs). Now we are in the process of validating the ITSI dashboards. External Logins Map is one of them, and we realized that the extracted source IP (the c_ip field) corresponds to our load balancer (XXX.XXX.XXX.XXX) instead of the remote host (the IP shown at the end of the event). Below is an example of an Exchange event that reaches our Splunk infra:

2021-10-08 12:22:31 XXX.XXX.XXX.XXX POST /Microsoft-Server-ActiveSync/default.eas Cmd=Ping&User=---%5n---&DeviceId=-------&DeviceType=Outlook&CorrelationID=<empty>;&cafeReqId=c586f22d-14cd-4449-be95-fe666b30c92e; 443 -------\----- 192.168.X.X Outlook-iOS-Android/1.0 - 200 0 0 181382 52.98.193.109

We use the official TA-Exchange-2013-Mailbox, TA-Exchange-ClientAccess and TA-Windows-Exchange-IIS add-ons. I found the definition of the c_ip field in transforms.conf and props.conf in TA-Windows-Exchange-IIS, but I don't see any specific regex for its correct extraction. Could someone tell me how to proceed to fix this parsing issue so the dashboards show correct information? Many thanks.
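As a search-time workaround (a sketch — it assumes the true client IP is always the last IPv4 address in the event, as in the sample where the load balancer appends it), you could overwrite c_ip with the trailing address:

```
| rex "(?<c_ip_real>\d{1,3}(?:\.\d{1,3}){3})\s*$"
| eval c_ip=coalesce(c_ip_real, c_ip)
```

A permanent fix would be a small local override of the c_ip extraction in the TA, but verify the trailing-IP assumption against more events first.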
Hi, my regex was like below:

search | rex field=_raw "Status=(?<Status>\"\w+\s+\w+\".*?)," | stats count by Status

My output is like below:

Status            count
"No Service"      250
Service           500

but I need the output to be:

Status            count
No Service        250
Service           500

I need the status ("No Service") to appear as No Service, without the double quotes, in the output. Please let me know what I am missing here.
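The capture group in the original rex includes the quote characters. One sketch is to make the quotes optional and keep them outside the capture group, so Status never contains them:

```
| rex field=_raw "Status=\"?(?<Status>[^\",]+)\"?,"
| stats count by Status
```

Alternatively, keep the existing rex and strip the quotes afterwards with `| eval Status=trim(Status, "\"")`.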
Hello, we are planning an upgrade of Splunk Enterprise. We currently have 2 nodes on version 8.0.6 and would like to move to version 8.2.2. Among others, we use the following apps:

Splunk Add-on for Blue Coat ProxySG Splunk_TA_bluecoat-proxysg 3.6.0
Splunk Add-on for F5 BIG-IP Splunk_TA_f5-bigip 3.1.0
Pulse Connect Secure Splunk_TA_pulse_connect_secure 1.1.3
WALLIX Bastion TA-WALLIX_Bastion 1.0.4

Should we plan an upgrade of these apps along with Splunk Enterprise 8.2? We also use Universal Forwarders on version 7.0. Will they still be compatible with Splunk 8.2? Thanks, Jean-Christophe Hermitte
Hi folks, we have log file monitoring on a text file that gets updated once a week; Splunk then reads the data from that file. Today we faced a situation where the log file was updated with today's data but no logs were sent to Splunk. We checked splunkd.log and didn't find any info related to that specific log file; the Splunk UF was connected to the HF, everything was working fine, and other data was flowing to Splunk as usual. However, after a Splunk restart the data was sent. I was wondering: if a log file is not updated for some time, will Splunk ignore the file and stop monitoring it until a restart? We have ignoreOlderThan set to 5d in the stanza — does this have something to do with it? We are aware that ignoreOlderThan is used to ignore log data older than the specified time; we just want to make sure this is not the cause.
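ignoreOlderThan may indeed be the culprit: per the inputs.conf documentation, once a file's modification time falls outside the window the tailing processor drops it from its watch list and does not re-check it, so a weekly-updated file with ignoreOlderThan = 5d fits this symptom exactly — removing the setting (or raising it past 7d) is the usual fix. A sketch for checking the file's live monitoring status directly on the UF (default install path assumed):

```
/opt/splunkforwarder/bin/splunk list inputstatus
```

Look for the file under the TailingProcessor:FileStatus section; a "file outside of configured time window" style status would confirm the ignoreOlderThan theory.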