All Posts


OK, assuming your start and end fields match the timestamp format you are parsing with, this should work for both fields (though your example data doesn't show them as such). Have you tried it?
Yes, but is it possible to apply the result of the date reformatting you provided to one of the fields in this answer? :) But I can open a new topic if necessary.
Hello community! I want to extract data from 2 different logs like below:

Log 1: 2024-04-28 06:38:51 INFO Start auth for accountId=1, ip=192.168.1.1
Log 2: 2024-04-28 06:38:27 INFO Collect response for accountId=1, was: response=FINISH

For example, for accountId=1 I have 10 logs with "Start auth", meaning 10 attempts to start auth. In the second log, for the same accountId, I have 1 or more logs with FINISH. I want to make a table like:

accountId    Start auth    FINISH
1            10            1

Could you help me with this? Thank you.
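A minimal SPL sketch of one way to build that table, assuming both log types live in the same index and accountId is not already extracted (the index name and the rex are placeholders, not confirmed details from the post):

index=app_logs ("Start auth" OR "response=FINISH")
| rex "accountId=(?<accountId>\d+)"
| eval event_type=if(searchmatch("Start auth"), "Start auth", "FINISH")
| chart count over accountId by event_type

searchmatch() returns true when the event matches the given search string, so each event is tagged with its type before chart counts them per accountId.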
Visualisations display the results of a search, so can you not combine your searches into a single search and display that on a single map?
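For instance, a hedged sketch of merging two panel searches with append so that a single geostats feeds one map (index names, search terms, and coordinate field names are placeholders):

index=idx_a sourcetype=st_a
| eval series="search A"
| append [ search index=idx_b sourcetype=st_b | eval series="search B" ]
| geostats latfield=lat longfield=lon count by series

Splitting by the series marker keeps the two result sets distinguishable on the shared map.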
Environment: Distributed Splunk Enterprise (indexer cluster)
Version: 9.0.5
Issue: After setting journalCompression to zstd in indexes.conf, we noticed that the setting is applied for warm but not for frozen buckets. The setting was applied months ago. In the following example, we can see that files timestamped from today are zst in warm and gzip in frozen. I did not find any related information in the indexes.conf documentation. Is this expected behavior, or am I missing some setting in my configuration?

Evidence:

## WARM BUCKETS
[splunk@indexer (PROD) ~]$ ls -latr /var/lib/splunk/warm/<index_name>
[...]
drwx--x---. 3 splunk splunk 4096 Apr 30 11:19 db_1714450734_1714041906_2521_1B4FA1BE-AA81-459F-B38A-1FB23A018EDB
[splunk@indexer (PROD) ~]$ ls -latr /var/lib/splunk/warm/<index_name>/db_1714450734_1714041906_2521_1B4FA1BE-AA81-459F-B38A-1FB23A018EDB/rawdata/
[...]
-rw-------. 1 splunk splunk 113295494 Apr 30 11:19 journal.zst

## FROZEN BUCKETS
[splunk@indexer (PROD) ~]$ ls -latr /var/lib/splunk/frozen/<index_name>
[...]
drwx------. 3 splunk splunk 29 Apr 30 11:20 rb_1709121660_1709115460_2204_3BF8DDF1-9874-4848-9DB4-880DA5EBA00F
[splunk@indexer (PROD) ~]$ ls -latr /var/lib/splunk/frozen/<index_name>/rb_1709121660_1709115460_2204_3BF8DDF1-9874-4848-9DB4-880DA5EBA00F/rawdata/
[...]
-rw-------. 1 splunk splunk 2342045 Feb 28 19:08 journal.gz
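For reference, a minimal indexes.conf sketch of the setting under discussion (index name and paths are placeholders). One plausible explanation for the evidence above, offered as an assumption rather than a confirmed cause: journalCompression governs how new journal slices are written at index time, so buckets created before the change keep their gzip journals and carry them along unchanged when they roll to frozen.

# indexes.conf (deployed cluster-wide, e.g. via the cluster manager)
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Compress new rawdata journal slices with zstd instead of the gzip default
journalCompression = zstd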
Firstly, this seems to be a different question. Secondly, haven't you already received and accepted a solution here?
Thanks, I will configure the web UI with an SSL certificate. Also, HEC is running on port 8088.
In a dashboard I am using 2 searches, and in each search I am using the geostats command to build a map and show results on it. Can I point these 2 searches at 1 map? Meaning, rather than using geostats in each panel's search, I want it to be common to every panel so that I don't have to write it in every panel search.
127.0.0.1 is an internal IP address that you can't reach from any external source: localhost - Wikipedia. Furthermore, port 8000 is not the default HEC port but the default web port. So if you haven't changed the default port for the web UI, you must use another (high) port for HEC: Set up and use HTTP Event Collector in Splunk Web - Splunk Documentation
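As a quick connectivity check, a hedged example of posting a test event to HEC on its default port 8088 (hostname and token are placeholders; -k skips TLS verification for self-signed certificates):

curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": "hello from HEC", "sourcetype": "manual"}'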
I have declared http://127.0.0.1:8000 as the HEC variable and am calling it in the function. Can you suggest how I can mitigate this issue?
Which endpoint have you defined in your lambda function? localhost (127.0.0.1) is not an IP address that is reachable from an external source.
Hi @Skins, this solution uses a JS that you can modify, but it was done to solve a requirement that is now present in the default features: if you go into the panel Edit (the pencil at the top of the column in the panel), you can define the colours of the cells of the column. Ciao. Giuseppe
Hi @anel, you could go to [Settings > Health Report Manager] and change the threshold of the latency controls, even if, in my opinion, the issue will disappear after a short time. Ciao. Giuseppe
A restart of the SHC resolved the issue 
Hello,

| fields guid start end duration status

Is there a way to reformat a field, for example here the start? I want to apply the format done by this:

| eval start=strftime(strptime(start, "%FT%T.%Q%Z"), "%F %T")

Thanks, Laurent
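A minimal sketch applying that conversion to both timestamp fields in one pipeline, assuming start and end share the %FT%T.%Q%Z format:

| fields guid start end duration status
| eval start=strftime(strptime(start, "%FT%T.%Q%Z"), "%F %T")
| eval end=strftime(strptime(end, "%FT%T.%Q%Z"), "%F %T")

strptime parses the string into epoch time and strftime renders it back in the "%F %T" layout, so each field is rewritten in place.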
Hello @jwhughes58 , Using REPORT, you generally call the transforms stanza that has the transformations written for extractions, routing, etc. You can simply write the regex that matches against _raw and extract the field required. If you use REPORT and call the transforms to extract the field, the same thing will happen.  If you necessarily want to use EXTRACT, you need to extract the Match_Details.match.properties.user field at index time and replace the (.) with (_). Then you'll be able to use EXTRACT parameter in props.conf for search time field extractions. Additionally, you can also consider using rex command in the SPL query to extract the fields at search time.  Relevant documents: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Propsconf https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/Rex Thanks, Tejas.   --- If the above solution helps, an upvote is appreciated.
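For illustration, a hedged props.conf/transforms.conf sketch of the REPORT approach described above (the sourcetype, stanza name, and regex are placeholders for the Match_Details.match.properties.user case; unlike EXTRACT's named capture groups, a search-time FORMAT can emit a field name containing dots):

# props.conf -- bind the transform to the sourcetype (name is a placeholder)
[my_sourcetype]
REPORT-match_user = extract_match_user

# transforms.conf -- search-time extraction; REGEX runs against _raw by default
[extract_match_user]
REGEX = "user"\s*:\s*"([^"]+)"
FORMAT = Match_Details.match.properties.user::$1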
Hey Giuseppe, thanks for replying. Yeah, we did not have delayed searches in the last 24h. One day later we still have exactly the same numbers.
Hello @SewingMachine77, I do not think there is a possibility to create templates and use them in the alerts. However, you can definitely customize the message for the saved search. Additionally, you can also use the search job tokens and use them in the message. Refer to the following document for assistance with search job tokens: https://docs.splunk.com/Documentation/Splunk/9.2.1/Alert/EmailNotificationTokens   Thanks, Tejas.   --- If the above solution helps, an upvote is appreciated. 
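For example, a hedged sketch of a customized alert email message built from tokens in that document ($result.host$ assumes the search returns a host field; adjust to your own result fields):

Alert "$name$" triggered with $job.resultCount$ matching events.
First result's host: $result.host$
Results: $results.url$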
Hi, thanks for answering. It works perfectly with this:

| eval latest_time=strftime(strptime(latest_time, "%FT%T.%Q%Z"), "%F %T")

Thanks again for your answer. Laurent