All Posts

I have a custom sourcetype with the following advanced setting:

EXTRACT-app = ^(?P<date>\w+\s+\d+\s+\d+:\d+:\d+)\s+(?P<host>[^ ]+) (?P<service>[a-zA-Z\-]+)_app  (?P<level>\w+)⏆(?P<controller>[^⏆]*)⏆(?P<thread>[^⏆]*)⏆((?P<flowId>[a-z0-9]*)⏆)?(?P<message>[^⏆]*)⏆(?P<exception>[^⏆]*)

I updated the regex to be slightly less restrictive about the whitespace following the "_app" portion:

EXTRACT-app = ^(?P<date>\w+\s+\d+\s+\d+:\d+:\d+)\s+(?P<host>[^ ]+) (?P<service>[a-zA-Z\-]+)_app\s+(?P<level>\w+)⏆(?P<controller>[^⏆]*)⏆(?P<thread>[^⏆]*)⏆((?P<flowId>[a-z0-9]*)⏆)?(?P<message>[^⏆]*)⏆(?P<exception>[^⏆]*)

(So instead of matching exactly two spaces after `_app`, we match one or more whitespace characters.) After saving this change, Splunk Cloud appears to still use the previous regex: events that include only a single space after "_app" don't get their fields extracted. I thought perhaps I needed to wait a little while for the change to propagate, but I made the change yesterday and the fields still aren't extracted today. Is there anything else I need to do for the regex change to take effect?
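A quick way to test the updated regex by itself, independent of the saved extraction, is to run it with rex against a made-up single-space sample event (the event below is invented, and host is renamed to src_host in the test so the default host field isn't clobbered):

| makeresults
| eval _raw="Dec 1 16:00:59 myhost my-service_app INFO⏆MyController⏆main⏆abc123⏆hello world⏆"
| rex "^(?<date>\w+\s+\d+\s+\d+:\d+:\d+)\s+(?<src_host>[^ ]+) (?<service>[a-zA-Z\-]+)_app\s+(?<level>\w+)⏆(?<controller>[^⏆]*)⏆(?<thread>[^⏆]*)⏆((?<flowId>[a-z0-9]*)⏆)?(?<message>[^⏆]*)⏆(?<exception>[^⏆]*)"
| table date src_host service level controller thread flowId message

If all the fields populate here but not at search time, the saved EXTRACT itself is the likely culprit rather than the regex.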
@gcusello @richgalloway Please correct me if I'm mistaken, but the source is where the data originates, while the endpoint is the destination or host where the data is stored or received.
Thank you @richgalloway. Is there a way to avoid losing alerts generated while the SMTP server is offline?

Thank you,
Andrea
Hi @marco_carolo ,
you should extract all the fields and then correlate them:

<your_search>
| rex "[^\[]*\[(?<extracted_pid>[^\]]*)\]\s*\[(?<extracted_job_name>[^\]]*)\]\s*\[(?<extracted_index>[^\]]+)\]\s*(?<msg>.*)"
| stats earliest(_time) AS earliest latest(_time) AS latest BY extracted_pid extracted_job_name
| eval duration=latest-earliest, earliest=strftime(earliest,"%Y-%m-%d %H:%M:%S"), latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| table extracted_pid extracted_job_name earliest latest duration

Ciao.
Giuseppe
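A run-anywhere sketch to sanity-check the extraction against two of the sample events from the question (makeresults fakes the events, so _time here is "now" and the duration logic is not exercised):

| makeresults
| eval _raw="Fri Dec 1 06:50:40 2023 [111111][talend_job_name] [70] Start job"
| append [| makeresults | eval _raw="Fri Dec 1 07:57:40 2023 [111111][talend_job_name] [100] End job"]
| rex "[^\[]*\[(?<extracted_pid>[^\]]*)\]\s*\[(?<extracted_job_name>[^\]]*)\]\s*\[(?<extracted_index>[^\]]+)\]\s*(?<msg>.*)"
| table extracted_pid extracted_job_name extracted_index msg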
If the SMTP server is not available, then email messages will not be sent. There is no queueing of emails.
Hello,

I have the following situation: my logs contain ETL logs, and I have already extracted some data via search-time fields. The log structure is the following:

Fri Dec 1 16:00:59 2023 [extracted_pid] [extracted_job_name] [extracted_index_operation_incremental] extracted_message

Example:

Fri Dec 1 07:57:40 2023 [111111][talend_job_name] [100] End job
Fri Dec 1 06:50:40 2023 [111111][talend_job_name] [70] Start job
Fri Dec 1 06:50:39 2023 [111111][talend_job_name1] [69] End job
Fri Dec 1 05:40:40 2023 [111111][talend_job_name1] [30] Start job
Fri Dec 1 05:40:39 2023 [111111][talend_job_name2] [29] End job
Fri Dec 1 02:50:40 2023 [111111][talend_job_name2] [1] Start job

Expected:

PID     NAME              EXEC_TIME
111111  talend_job_name   1h 7min
111111  talend_job_name1  1h 10min
111111  talend_job_name2  2h 50min

What I was asked to do is produce a table containing the job name and the execution time, one row for each PID (a job can be executed multiple times, but each run has a different PID), so the data is available. The job does not necessarily start with index 1, since all subjobs inside a job are logged under separate names (for example, an "import all" job could contain 10 subjobs, each with a different name).

My idea would be a query that uses the PID and the job name combined as a primary key, taking the start time as the lowest extracted_index_operation_incremental for that specific key and the end time as the highest extracted_index_operation_incremental for that key; a sketch of this idea is below. Any help?

Thanks for any reply.
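A minimal sketch of that idea in SPL, assuming extracted_pid and extracted_job_name are already extracted as described above (note that tostring with "duration" renders HH:MM:SS rather than the "1h 7min" format shown):

<your_search>
| stats earliest(_time) AS start_time latest(_time) AS end_time BY extracted_pid extracted_job_name
| eval EXEC_TIME=tostring(end_time - start_time, "duration")
| table extracted_pid extracted_job_name EXEC_TIME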
I am working on upgrading a heavy forwarder instance that is running an out-of-support version, 7.3.3. In order to upgrade it to 9.0.1, is there another version it must be upgraded to first, before bringing it to 9.0.1? I searched for the upgrade path with no luck.

Thanks.
Hello,
we need to patch the OS of our Splunk Enterprise cluster, which is distributed across two sites, A and B. We will start the activity on site A, which contains one Deployer, two SHs, one MN, three Indexers, and three HFs. Site B contains one SH, three Indexers, and one HF, and will be updated later.

Considering that the OS patching will require a restart of the nodes, can you please tell me the Splunk best practice for restarting them? I'd start with the SH nodes, then the Indexers, Deployer, MN, and HFs, all one by one.

Do I have to enable maintenance mode, restart the node, and then disable maintenance mode for each node, or is it sufficient to stop Splunk on each node and restart the machine?

Thank you,
Andrea
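For what it's worth, my understanding is that maintenance mode is not toggled per node but enabled once on the MN (cluster manager) before restarting the indexer peers, roughly:

splunk enable maintenance-mode
(patch and restart each indexer peer, one at a time)
splunk disable maintenance-mode

Is that the recommended sequence?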
Worked, thanks
Hello Team,
I have a weird issue that I'm struggling to troubleshoot. A month ago, I realized that my WinEventLog logs were consuming too much of my license, so I decided to index them in the XmlWinEventLog format. To do this, I simply modified the inputs.conf file of my Universal Forwarder. I changed from this configuration:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\sgroupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\sgroupPolicyContainer)"
renderXml = false
sourcetype = WinEventLog
index = wineventlog

To this configuration:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\sgroupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\sgroupPolicyContainer)"
renderXml = true
sourcetype = XmlWinEventLog
index = wineventlog

Then I started receiving events and my license usage went down, which made me happy. However, upon closer observation, I realized that I wasn't receiving all the events as before. The event frequency of the XmlWinEventLog logs now looks random; this is visible both in the event timelines and in the ingestion metrics (screenshots attached). With the WinEventLog format, on the other hand, I have no issues.

I tried reinstalling the UF, there are no interesting errors in splunkd.log, and I am out of ideas for troubleshooting. Thank you for your help.
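In case it helps with troubleshooting, this is roughly the comparison search I run to spot the gaps (index name taken from the stanzas above):

| tstats count WHERE index=wineventlog BY _time span=1h sourcetype
| timechart span=1h sum(count) AS events BY sourcetype

The XmlWinEventLog series shows irregular holes, while the WinEventLog series is continuous.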
@bowesmana Great, that works. This is what I have done.

Parameters for dashboard A:

earliest = $form.t_time.earliest$
latest = $form.t_time.latest$

Then on dashboard B my time picker refers to the dashboard A tokens, see below (leaving out the token name). You can adjust the default accordingly: if you want the default to be your token, use $earliest$ $latest$. But by setting the default to 15 minutes, you will not get a "missing earliest" error when you go directly to dashboard B.

<input type="time">
  <label></label>
  <default>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </default>
</input>
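For completeness, the link on dashboard A that passes the tokens could look something like this (the app and dashboard names are placeholders, and it assumes the time input on dashboard B is also given the token name t_time):

<drilldown>
  <link target="_blank">/app/my_app/dashboard_b?form.t_time.earliest=$form.t_time.earliest$&amp;form.t_time.latest=$form.t_time.latest$</link>
</drilldown>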
Hello,
can you please tell me what happens to email alerts if the SMTP server used for email delivery is temporarily offline? Is there a buffer where alerts are saved and then sent once the SMTP server becomes available again? Is there a link to Splunk documentation about this?

Thank you,
Andrea
There's also this method to get a list of data sources:

| tstats count where index=* by source
Hi @gcusello, this works fine; you can see Stephen_Sorkin's answer at https://community.splunk.com/t5/Getting-Data-In/Summary-indexing-on-a-search-head/m-p/34175

A Splunk expert told me I could test this: https://community.splunk.com/t5/Getting-Data-In/Search-time-Mask/td-p/14363

[mysourcetype]
EXTRACT-out = (?s)^(?:\<\d+\>)?(?<altraw>.*)
FIELDALIAS-raw = altraw as _raw

What do you think? Thanks for your help!
Hi @AL3Z ,
you could use one of these searches:

list of endpoints:

| tstats count WHERE index=* BY host

list of data sources:

| tstats count WHERE index=* BY sourcetype

You can also get both pieces of information in one search:

| tstats values(sourcetype) AS sourcetype count WHERE index=* BY host

Ciao.
Giuseppe
Hi @roopeshetty ,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated
Hi @splunkreal ,
I suppose that you have your summary indexes on the Indexers, not on the Search Heads! If not, reconsider this placement; you can get the correct result by centralizing summary indexes on the Indexers.

So, you already have all that you need for anonymizing data.

Let me know if I can help you more; otherwise, please accept one answer for the other people of the Community.

Ciao.
Giuseppe

P.S.: Karma Points are appreciated
Hi @gcusello we already use summary indexing with a local index on that particular search head. Thanks.
Hi, I am trying to find out how many data sources and endpoints we have integrated into Splunk. How can we get this information? Can anyone please provide a query to find this?
I said "config files" followed by an actual config file path in my first post. But for clarification. I check it with `btool` and `show config`. I am also aware that the config files are not automati... See more...
I said "config files" followed by an actual config file path in my first post. But for clarification. I check it with `btool` and `show config`. I am also aware that the config files are not automatically active if I change them on disc. I do a restart (Not debug refresh) if I change anything on disk. I also keep track of the restarts. The SH is also not part of a SH cluster which could also be a source of confusion. I don't use any other remote managing agents which could change the files. About the two commands you kindly provided. Neither `splunk btool` or `splunk show config` has the indexes definition for index_a or index_b on the SH. Only on the IX. Authorization is set only for index_a in etc/systems/local/authorization.conf for a specific group. Please take note that I cant just post the outputs of the commands because there is some confidential information within.