All Posts



Hi @kc_prane, with only a screenshot and a masked search we cannot help you! @bowesmana was saying that you probably don't need to use append: you can put both searches in the main search, and in this way you don't hit any limit. Ciao. Giuseppe
Hi @Erbrown, which kind of ingestion are you speaking about: forwarders, syslog, HEC? If forwarders, you can encrypt data between forwarders and indexers, and there are integrity checks inside Splunk. If you're speaking of syslog: I suggest using an rsyslog server and reading the files with a Universal Forwarder; I'm not sure it's possible to encrypt syslog. In addition, you could use two UFs and a load balancer to avoid a single point of failure. If you're speaking of HEC, you can use HTTPS, and the token secures your ingestion; as with syslog, you should use two forwarders and a load balancer. Ciao. Giuseppe
Hi @scout29, please try this: | rex "Registrar:\s+(?<Registrar>[^\\]*)" which you can test at https://regex101.com/r/7PdpcJ/1 If it doesn't run in Splunk, use three backslashes inside the square brackets (sometimes Splunk is strange in regex extractions!). Ciao. Giuseppe
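Outside Splunk, the negated-character-class trick can be sanity-checked in plain Python (a sketch; Python's re is close enough to PCRE for this pattern, and the sample line is modeled on the whois-style events in the thread, with \r\n as literal two-character sequences):

```python
import re

# Hypothetical sample line, modeled on the thread's whois-style events;
# the \r\n here are literal backslash-r / backslash-n characters.
event = r"Registrar: ABC Holdings, Inc.\r\n Registrar IANA ID: 972"

# Negated character class: capture everything up to the first backslash.
# In Python the pattern is written directly; in Splunk's rex the quoted
# string is unescaped once before reaching PCRE, which is why the thread
# suggests doubling (or even tripling) the backslashes there.
m = re.search(r"Registrar:\s+(?P<Registrar>[^\\]*)", event)
print(m.group("Registrar"))  # -> ABC Holdings, Inc.
```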
@bowesmana I have added what you suggested to my search query as below: index="abc" "ebnc event did not balanced for filename" sourcetype=600000304_gg_abs_dev source!="/var/log/messages" | rex "-\s+(?<Exception>.*)" | eval index_time=strftime(_indextime, "%F %T.%Q") | table Exception source host sourcetype _time index_time I am getting the result as below. @bowesmana, could you please guide me on what I should change in my alert settings? Should I change it from "last 15 minutes" to something else, and should I also change this cron schedule: */5 * * * *? I want an incident to be created as soon as the events occur in Splunk.
Add the \r\n to the regex, i.e. | rex "Registrar:\s(?<registrar>.*?)\\\r\\\n Registrar IANA" Note the three backslashes. I assume those \r\n are literal characters rather than real CR/LF?
I am trying to write a rex command that extracts the field "registrar" from the below four event examples. The values in bold are what I am looking for as the value of "registrar". I am using the following regex to extract the field and values, but I seem to be capturing the \r\n after the bold values as well. How can I modify my regex to capture just the company name in bold, up to \r\n Registrar IANA? Current regex being used: Registrar:\s(?<registrar>.*?) Registrar IANA   Expiry Date: 2026-12-09T15:18:58Z\r\n Registrar: ABC Holdings, Inc.\r\n Registrar IANA ID: 972 Expiry Date: 2026-12-09T15:18:58Z\r\n Registrar: Gamer.com, LLC\r\n Registrar IANA ID: 837 Expiry Date: 2026-12-09T15:18:59Z\r\n Registrar: NoCo MFR Ltd.\r\n Registrar IANA ID: 756 Expiry Date: 2026-12-09T15:18:59Z\r\n Registrar: Onetrust Group, INC\r\n Registrar IANA ID: 478
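A quick way to check a candidate pattern outside Splunk is plain Python (a sketch, assuming the \r\n in the events are literal two-character sequences rather than real CR/LF; the sample strings below are copied from the question):

```python
import re

# Sample events from the question; in these raw strings the \r\n are
# literal backslash-r / backslash-n characters, not real line breaks.
events = [
    r"Expiry Date: 2026-12-09T15:18:58Z\r\n Registrar: ABC Holdings, Inc.\r\n Registrar IANA ID: 972",
    r"Expiry Date: 2026-12-09T15:18:58Z\r\n Registrar: Gamer.com, LLC\r\n Registrar IANA ID: 837",
    r"Expiry Date: 2026-12-09T15:18:59Z\r\n Registrar: NoCo MFR Ltd.\r\n Registrar IANA ID: 756",
    r"Expiry Date: 2026-12-09T15:18:59Z\r\n Registrar: Onetrust Group, INC\r\n Registrar IANA ID: 478",
]

# Lazy capture up to the literal \r\n that precedes "Registrar IANA",
# so the two escape sequences are excluded from the captured value.
pattern = re.compile(r"Registrar:\s(?P<registrar>.*?)\\r\\n Registrar IANA")

registrars = [pattern.search(e).group("registrar") for e in events]
print(registrars)
# ['ABC Holdings, Inc.', 'Gamer.com, LLC', 'NoCo MFR Ltd.', 'Onetrust Group, INC']
```

In Splunk's rex the backslashes would need extra escaping, as discussed elsewhere in the thread; the capture logic itself is the same.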
Hi @bowesmana, thanks for the reply. Please find the below snapshots for the query. I have masked my base search; FYI, my base search is the same for the subsearch as well.
Assuming the fields are always in the same order, this should do it. | rex "Registrar: (?<registrar>.*?) Registrar IANA"
Hi @m_pham. I am using a standard source and sourcetype. sourcetype="xmlwineventlog" source="WinEventLog:Security" Thank you!
I need the count of events.
The problem is that you are using a change block, which runs whenever the dropdown changes, not when the submit button is clicked, so you are setting the token dependencies as soon as you change the dropdown. You need to do it a bit differently, so that a search runs when the submit button is clicked, and that search has a <done> clause that sets/unsets the panel tokens to show/hide the panels. Note that it uses eval statements to set/unset the tokens. See this:

<form>
  <label>Submit</label>
  <init>
    <set token="loadsummary"></set>
  </init>
  <fieldset submitButton="true" autoRun="false">
    <input token="field1" type="time" searchWhenChanged="false">
      <label>Time Picker</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="subsummary" depends="$loadsummary$" searchWhenChanged="false">
      <label>Summary Selection</label>
      <choice value="FUNC">Function Summary</choice>
      <choice value="MQ">MQ Summary</choice>
    </input>
  </fieldset>
  <row depends="$funcsummaryMQ$">
    <panel depends="$funcsummaryMQ$">
      <title>ABC</title>
      <table>
        <search>
          <query>index="SAMPLE" </query>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
  <row depends="$hidden$">
    <panel>
      <table>
        <search>
          <query> | makeresults | eval ss=$subsummary|s$ </query>
          <done>
            <eval token="funcsummary">if($result.ss$="FUNC","true",null())</eval>
            <eval token="funcsummaryMQ">if($result.ss$="MQ","true",null())</eval>
          </done>
        </search>
      </table>
    </panel>
  </row>
</form>
Hi Folks, I'm looking for a document that will help me understand my options for ensuring the integrity of data inbound to Splunk from monitored devices, and any security options I may have there. I know TLS is an option for inter-Splunk traffic. Unfortunately, I'm not having any luck finding options to ensure the integrity and security of data when it's first received into Splunk. Surely there's a way for me to secure that; what am I missing here?
Your alert is searching a 15-minute window but running every 5 minutes, so at 8:20 it will search 8:05 to 8:20, at 8:25 it will search 8:10 to 8:25, and so on, so you will get duplicate alerts. If your event time is 8:20 but it is not indexed until 9:16, then you will not see that alert: when the search runs at 8:25 the data is not present, and when the search runs at 9:20, the event time is 8:20, so it is not in the search window. If your events are arriving late, that needs to be checked with how those events are being forwarded. You can look at event lag by adding | eval index_time=strftime(_indextime, "%F %T.%Q") before your table statement and then adding the index_time field to your table statement, so you can see when each event was indexed. If you KNOW you have lag and there is nothing you can do about it, then you may need to adjust the time window of the search to something like earliest=-60m@m latest=-55m@m so that you are searching a 5-minute window 1 hour ago. The search window should generally match the frequency of the cron schedule.
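The overlap described above is easy to visualize with a few lines of Python (a sketch; the dates are arbitrary and chosen only to reproduce the 8:20/8:25 example):

```python
from datetime import datetime, timedelta

cron_interval = timedelta(minutes=5)   # the */5 * * * * schedule
window = timedelta(minutes=15)         # the "last 15 minutes" search window

run = datetime(2024, 1, 1, 8, 20)      # arbitrary illustrative run time
windows = []
for _ in range(3):
    # Each run searches [run - window, run]; successive runs overlap
    # by 10 minutes, so the same event can fire the alert three times.
    windows.append(f"{run - window:%H:%M} .. {run:%H:%M}")
    run += cron_interval

print(windows)
# ['08:05 .. 08:20', '08:10 .. 08:25', '08:15 .. 08:30']
```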
This is a really cavalier response to such a major change. It is not a simple task to 'update automation' in large organizations, where you also need to consider multiple legacy systems. As was mentioned above, Splunk has never officially supported the installation of both Enterprise and Forwarder on the same server, so who does this change benefit?
@Anud Your search is NOT doing what @FelixLeh suggested. The idea is that you load the SECOND lookup (fileB) first and then lookup the common field A to get the required field E from the FIRST lookup. Your example shows that fileB contains the data and fileA contains the missing field E (CAR/BUS) that is needed to enrich fileB data. Note that your actual search uses append with a subsearch - you should not do it that way, as inputlookup already has an append option and this method does not have the limitations of a subsearch, i.e. | inputlookup append=t file  
I am so glad I found this thread. You are completely spot on on everything you said. It is infuriating for us as admins and embarrassing for Splunk as a brand that such major changes are implemented in minor version releases with little to no notice or documentation.    Absolutely ridiculous to change default behavior of installer in a minor release. Period.  
append and subsearches have limits defined in limits.conf, so you cannot override these, but that number seems an odd one. What is your search? There are often alternatives to append and a subsearch, so can you share it?
What do you need to retain from those events? eventstats is a slow operation, as it runs on the search head, so minimise the amount of information before using it: use the fields command to keep only the fields you need beforehand. If that is still too slow, the subsearch approach may work for you.
Hi @Ryan.Paredez, I have sent a private message to @Cansel.OZCAN and will check my messages from time to time. Thank you, Deepak
On the Splunkbase details tab there is a link to the GitHub documentation: https://github.com/sghaskell/maps-plus Also, there are lots of examples in the app itself, so you can look at those searches to see how they produce things.