Hi @bowesman, thanks for the reply. Please find the below snapshots for the query. I have masked my base search. FYI, my base search is the same for the subsearch as well.
Assuming the fields are always in the same order, this should do it. | rex "Registrar: (?<registrar>.*?) Registrar ID"
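If you want to sanity-check the lazy quantifier outside Splunk, here is a quick Python sketch of the same pattern. The sample event is made up; the assumption is that the registrar name is immediately followed by a "Registrar ID" label, as in the posts above.

```python
import re

# Hypothetical single-line whois-style event; field order assumed fixed.
event = "Domain: example.com Registrar: Example Registrar Inc. Registrar ID: 123456"

# Non-greedy .*? stops at the first " Registrar ID" that follows,
# so the capture ends before the next field label.
m = re.search(r"Registrar: (?P<registrar>.*?) Registrar ID", event)
print(m.group("registrar"))  # → Example Registrar Inc.
```

Note the lazy `.*?` is what keeps the capture from running past "Registrar ID" even though the registrar name itself contains the word "Registrar".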
Hi @m_pham. I am using a standard source and sourcetype. sourcetype="xmlwineventlog" source="WinEventLog:Security" Thank you!
I need the count of events.
The problem is that you are using a change block, which runs every time the dropdown changes rather than on the submit button, so you are setting the token dependencies as soon as you change the dropdown. You need to do it a bit differently, so that a search runs when the submit button is clicked and that search has a <done> clause that sets/unsets the panel tokens to show/hide the panels. Note that it uses eval statements to set/unset the tokens. See this:

<form>
  <label>Submit</label>
  <init>
    <set token="loadsummary"></set>
  </init>
  <fieldset submitButton="true" autoRun="false">
    <input token="field1" type="time" searchWhenChanged="false">
      <label>Time Picker</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="subsummary" depends="$loadsummary$" searchWhenChanged="false">
      <label>Summary Selection</label>
      <choice value="FUNC">Function Summary</choice>
      <choice value="MQ">MQ Summary</choice>
    </input>
  </fieldset>
  <row depends="$funcsummaryMQ$">
    <panel depends="$funcsummaryMQ$">
      <title>ABC</title>
      <table>
        <search>
          <query>index="SAMPLE"</query>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
  <row depends="$hidden$">
    <panel>
      <table>
        <search>
          <query>| makeresults | eval ss=$subsummary|s$</query>
          <done>
            <eval token="funcsummary">if($result.ss$="FUNC","true",null())</eval>
            <eval token="funcsummaryMQ">if($result.ss$="MQ","true",null())</eval>
          </done>
        </search>
      </table>
    </panel>
  </row>
</form>
Hi Folks, I'm looking for a document that will help me understand my options for ensuring the integrity of data inbound to Splunk from monitored devices, and any security options I may have there. I know TLS is an option for inter-Splunk traffic. Unfortunately, I'm not having any luck finding options to ensure the integrity and security of data when it's first received into Splunk. Surely there's a way for me to secure that; what am I missing here?
Your alert is searching a 15 minute window but running every 5 minutes, so at 8:20 it will search 8:05 to 8:20 and at 8:25 it will search 8:10 to 8:25 and so on, so you will get duplicate alerts. If your event time is 8:20 but it is not indexed until 9:16, then you will not see that alert: when the search runs at 8:25 the data is not present, and when the search runs at 9:20, the event time is 8:20, so it is not in the search window. If your events are arriving late, that needs to be checked with how those events are being forwarded. You can look at event lag by adding | eval index_time=strftime(_indextime, "%F %T.%Q") before your table statement and then adding the index_time field to your table statement, so you can see when each event was indexed. If you KNOW you have lag and there is nothing you can do about it, then you may need to adjust the time window of the search to something like earliest=-60m@m latest=-55m@m, so that you are searching a 5 minute window 1 hour ago. The search window should generally match the frequency of the cron schedule.
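To illustrate the lag calculation the eval above surfaces, here is a small Python sketch. The two epoch timestamps are invented stand-ins for Splunk's _time and _indextime fields.

```python
from datetime import datetime, timezone

# Hypothetical epoch seconds mimicking Splunk's fields:
event_time = 1700000400.0   # _time: when the event occurred
index_time = 1700003760.0   # _indextime: when Splunk actually indexed it

# A scheduled search only sees the event if index_time falls inside
# the search window, so this lag is what makes alerts "miss" events.
lag_minutes = (index_time - event_time) / 60
stamp = datetime.fromtimestamp(index_time, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
print(f"indexed at {stamp} UTC, lag {lag_minutes:.0f} min")
```

With 56 minutes of lag, a 5-minute search window offset by an hour (earliest=-60m@m latest=-55m@m) would catch this event, while a search over the last 15 minutes would not.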
This is a really cavalier response to such a major change. It is not a simple task to 'update automation' in large organizations, where you also need to consider multiple legacy systems. As was mentioned above, Splunk has never officially supported the installation of both Enterprise and Forwarder on the same server, so who does this change benefit?
@Anud Your search is NOT doing what @FelixLeh suggested. The idea is that you load the SECOND lookup (fileB) first and then look up the common field A to get the required field E from the FIRST lookup. Your example shows that fileB contains the data and fileA contains the missing field E (CAR/BUS) that is needed to enrich fileB's data. Note that your actual search uses append with a subsearch; you should not do it that way, as inputlookup already has an append option, and that method does not have the limitations of a subsearch, i.e. | inputlookup append=t file
I am so glad I found this thread. You are completely spot on on everything you said. It is infuriating for us as admins and embarrassing for Splunk as a brand that such major changes are implemented in minor version releases with little to no notice or documentation.    Absolutely ridiculous to change default behavior of installer in a minor release. Period.  
append and subsearches have limits defined in limits.conf, so you cannot override these at search time, but that seems an odd number. What is your search? There are often alternatives to append and a subsearch, so can you share it?
What do you need to retain from those events? eventstats is a slow operation as it runs on the search head, so minimise the amount of data before using it: use the fields command beforehand to keep only the fields you need. If that is still too slow, the subsearch approach may work for you.
Hi @Ryan.Paredez, I have sent a private message to @Cansel.OZCAN, and will check my messages from time to time. Thank you, Deepak
In the Splunkbase site details tab there is a link to the GitHub documentation: https://github.com/sghaskell/maps-plus Also, there are lots of examples in the app itself, so you can look at those searches to see how they are producing things.
Thanks Robert. I would like to clarify the search, as I need the events less than the p95 duration. Shouldn't the eval section be:  | eval search = "dur<"+p95Dur
Try this - I'm not the best at regex and someone else may come along and provide a more efficient one.   Registrar: (?<registrar>.+[^\s]).+Registrar ID     
Thanks for your response Rich. Using eventstats took too long to complete, to the point it wasn't usable.
I am looking for the field registrar to be extracted. There are three spaces after the registrar string, but I can't seem to write my regex to capture the full registrar name up to the three spaces. I am using this but not getting the full string extracted:  \sRegistrar:\s(?<registrar>\w+\s\w+)
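One way to capture everything up to the three-space run is a lazy capture terminated by \s{3}, rather than counting words with \w+\s\w+. A quick Python sketch (the raw event below is invented for illustration):

```python
import re

# Hypothetical raw event; the registrar name ends at a run of three spaces.
raw = "Registrar: MarkMonitor Inc.   Registrar ID: 292"

# \w+\s\w+ only ever grabs two words; instead, capture lazily
# until the three-space delimiter is reached.
m = re.search(r"Registrar:\s(?P<registrar>.*?)\s{3}", raw)
print(m.group("registrar"))  # → MarkMonitor Inc.
```

The same pattern should work in rex, since Splunk uses PCRE-style syntax, but test it against your real events in case the spacing varies.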
Try adding this to your search: | rex field=_raw "Registrar ID: (?<registrar_id>\S+)"  Update: I misread your post; stand by for an updated search to include all three field extractions, unless someone else beats me to it. You can also use the "Extract New Fields" or "Event Actions" option when you run your search.
I masked the IP address in this reply.