All Posts


Why can I not find any documentation on Firewall Investigator module of Splunk Enterprise?
Hello all, I wanted to share my recently published Splunk extension for Chrome and Edge. The extension is free and enables several features:
- Code commenting with Ctrl + /
- Comment collapsing/folding
- Saving and retrieving queries
- Inline command help
You can check it out at Splunk Search Assistant (google.com). Julio
Hi @yuanliu, thank you for the inputs. As we have a large number of alerts to set up, we want to go with the CSV option. I will create a CSV file and add time, date, and month, but I am not sure how to link it into the query. Can you please help me with that?
Is there a way to change the _time field of imported data to be a custom extracted datetime field? Or at least some way to specify a different field to be used by the time picker? I have seen some solutions use props.conf, but I am on Splunk Cloud.
Not sure if I understand.  What is "it" and what is "them" for which "it" works?  Again, this is not a search question; it is perhaps better suited for the Splunk Dev forum.  It looks like it is related to some data, perhaps encrypted data. You can read /opt/splunk/etc/apps/search/bin/sendemail.py and its imported modules to diagnose this message.  (_csv is one of those modules.)  But volunteers will not have sufficient information to really pinpoint it.
A mere count of field_AB is not particularly insightful. (Also, what are the counts?  Are they unique value counts or counts of events that have field_AB?  There are many other metrics for either of these to be useful.)

Let me clarify something.  In your latest illustration of Initial Query Results, you are having trouble populating field_D, field_C, and field_E corresponding to select values of field_AB.  Is this correct?  Your illustration cannot be a precise representation because it is impossible for groupby field_AB to have the same value "UniqueID" in every row.  This means that your observed missing values of field_D, field_C, and field_E have something to do with your actual data.

In other words, coalesce, stats, and groupby are all functioning as designed.  All those "gaps" in field_D, field_C, and field_E only mean that source_1 and source_2 contain different sets of unique values of field_AB (field_A in source_1, field_B in source_2).  Here is a simple test:

index=index_1 (sourcetype=source_1 field_D="Device" field_E=*Down* OR field_E=*Up*) OR (source="source_2" earliest=-30d@d latest=@m)
| eval field_AB=coalesce(field_A, field_B)
| eval source = case(source == "source_2", "source_2", sourcetype == "source_1", "source_1", true(), "mingled")
| stats dc(field_AB) by source

Are the counts the same for both sources? Are there any "mingled" counts? (Note that one of your searches uses sourcetype as a filter while the other uses source.  This is generally a bad strategy because they will confuse you three months from now.)

If you need concrete help, you must post anonymized raw data and show us which data set causes the problem.  Alternatively, you need to inspect the data closely; specifically, how are field_D and field_E populated with field_A in source_1, and how is field_C populated with field_B in source_2.
Something like

|tstats count max(_time) as _time ``` you will need the latest _time ``` where index=app-idx host="*abfd*" sourcetype=app-source-logs by host
| eval dom = strftime(_time, "%d"), hod = strftime(_time, "%H") ``` %d gives the day of the month ```
| where NOT dom IN (10, 18, 25) AND (8 > hod OR hod > 11) ``` whether AND or OR depends on the exact semantics ```
| fields - _time dom hod

Several points of discussion:

- The semantics of "on 10, 18, 25 and during 8am to 11am" are very loose in English.  Two opposite interpretations can be conveyed by this same phrase, so you need to tune the logic according to your intention.
- _time is taken as the latest in the dataset by host.  Depending on your data density, you may want to take some other approach, such as info_max_time.
- It is probably better to keep your exclusions in a CSV lookup than to hard-code them in the search.  But that is out of the scope of this question.
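The suppression logic is easier to reason about outside of SPL. Here is a minimal Python sketch of one interpretation (suppress only when the day AND the hour both match; the function name is hypothetical, and the day/hour values come from the question):

```python
from datetime import datetime

# Values taken from the question: the 10th, 18th, and 25th, 8am through 11am.
SUPPRESSED_DAYS = {10, 18, 25}
SUPPRESSED_HOURS = range(8, 12)  # hours 8, 9, 10, 11, matching 8 <= hod <= 11

def should_alert(ts: datetime) -> bool:
    """Return False when the timestamp falls inside a suppression window.

    This is the AND interpretation: suppress only when both the day of
    month and the hour match.  The OR interpretation (suppress when either
    matches) would use `or` instead of `and` below.
    """
    if ts.day in SUPPRESSED_DAYS and ts.hour in SUPPRESSED_HOURS:
        return False
    return True
```

For example, `should_alert(datetime(2024, 3, 10, 9))` is False (day 10, 9am), while `should_alert(datetime(2024, 3, 11, 9))` is True (day 11 is not suppressed).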
NetApp products which are running Data ONTAP are being transitioned from ZAPI to REST.  Support for ZAPI will be dropped in future ONTAP releases. Since the Splunk TA uses ZAPI, does it also support REST?  If it does not currently use REST, are there plans to deliver a future version which does? Thanks.
There are many possible ways of choosing time, so how do you want it to work if
a) the user selects from 00:00 yesterday to now (a bit more than one day)
b) last 24 hours, with the end time as "now" rather than the beginning of the minute, so a tiny bit more than one day
c) yesterday + the day before to now, 2.xx days
and other permutations of part days.

Using timechart would be the general way to go, which will handle the majority of cases. If you can define the conditions for the display of a single row, then all you need to do is set a token which contains this as the token value

<set token="use_single_row">| addcoltotals | tail 1 | fields - _time</set>

and then in your search, do

... | timechart span=1d sum(b) as b by env $use_single_row$

To determine your conditions from the time range selected, make a search at the start of your XML like this

<search>
  <query>
    | makeresults
    | addinfo
    | eval gap=if(info_max_time - info_min_time < (86400+60), 1, 0)
  </query>
  <earliest>$time_picker.earliest$</earliest>
  <latest>$time_picker.latest$</latest>
  <done>
    <eval token="use_single_row">if($result.gap$=1, "| addcoltotals | tail 1 | fields - _time", "")</eval>
  </done>
</search>

which assumes your time picker token is called time_picker. This will run when the user selects the time and calculates the gap between search start and end; if it is less than 24 hours, it will set the new token to sum the fields.

You could also do this as two different panels and show one or the other depending on the gap using the <panel depends="$token$"> construct, but that depends on how you end up displaying these results.
Thanks for the solution. However, there is an additional case to be handled. Your solution handles the 1-day timeframe, but we also want to see totals for timeframes > 1 day broken down by day, i.e., license usage by environment by day. Is there a way to make the display of the stats dashboard conditional on the selected time frame?
When you bin by time, the first part of the time specifier indicates the size of the bin, e.g. span=1w.

You are looking to make buckets of 7 days. Those 7 days will start going back from today, which here is March 7, so counting back will give you
29 Feb
22 Feb
15 Feb
8 Feb

However, when you use the @ specifier, this tells Splunk to "snap to" a specific time start point within the bucket. So when you do span=1w@w, it tells Splunk to use a 7-day time bucket, but to snap to the beginning of the week, which is a Sunday. You can change this 'start of the week' behaviour by adding a relative indicator after that snap-to value, so span=1w@w1 will tell Splunk to snap to Monday as the start of the week.

When you do any kind of bucket creation with bin or span=1w in a timechart, it effectively changes the _time of all events to be the start of the bin. It has to do this because if Student 1 and 2 have different dates within the same week for their first grade, what date could it use for the single event it produces? So, when you see Student 1 showing as the week of the 4th Feb, it simply means that Student 1 was seen at some time in that 7-day bucket.

See this doc, which has a section on the snap-to characteristics:

https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/SearchTimeModifiers
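The snap-to behaviour can be reproduced with plain Python dates; a rough sketch (the function name is made up, and this only models the date part, not full timestamps):

```python
from datetime import date, timedelta

def snap_to_week(d: date, start_weekday: int) -> date:
    """Snap d back to the most recent day whose weekday matches start_weekday.

    Python's date.weekday() uses Monday=0 ... Sunday=6, so Splunk's @w
    (snap to Sunday) corresponds to start_weekday=6, and @w1 (snap to
    Monday) corresponds to start_weekday=0.
    """
    return d - timedelta(days=(d.weekday() - start_weekday) % 7)

# 7 Feb 2024 is a Wednesday:
print(snap_to_week(date(2024, 2, 7), 6))  # 2024-02-04 (Sunday, like @w)
print(snap_to_week(date(2024, 2, 7), 0))  # 2024-02-05 (Monday, like @w1)
```

This matches the example above: an event on 7 Feb lands in the bucket labelled 4 Feb under span=1w@w.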
|tstats count where index=app-idx host="*abfd*" sourcetype=app-source-logs by host

This is my alert query. I want to modify the query so that I won't receive alerts at certain times. For example, every month on the 10th, 18th, and 25th, and during 8am to 11am, I don't want to get the alerts. For all other days it should work as normal. How can I do it?
If you run timechart across a 24-hour window and you specify @d as the time bucket, it will count by the day. So, say you run the search at 10:00 am, it will give you the 24-hour window of yesterday from 10:00 am to midnight and today from midnight to 10:00 am. Currently your query will give you

yesterday_time env1_count env2_count ... total
today_time env1_count env2_count ... total

What is your intention in showing this information - is time relevant? If you are just looking for a sum of (b) for each env, then just use stats, e.g.

| stats sum(b) as sum by env
| transpose 0 header_field=env
Worked like a charm, thank you !
If that is the exact regex and you are talking about using the rex command, then    | rex "(?<new_field>(?<=\:\[)(.*)(?=\]))"   will extract the data between the [] into new_field
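The same pattern behaves identically in any PCRE-style engine, so it can be sanity-checked outside Splunk. A quick Python check (the sample string is made up; note that Python's re module spells named groups (?P<name>...) where Splunk's rex accepts (?<name>...)):

```python
import re

# Same lookbehind/lookahead pattern as above, with Python's (?P<...>) syntax.
pattern = r"(?P<new_field>(?<=\:\[)(.*)(?=\]))"

sample = "status:[Down] on host abfd01"  # made-up sample line
m = re.search(pattern, sample)
print(m.group("new_field"))  # Down
```

The lookbehind (?<=\:\[) anchors just after ":[", the lookahead (?=\]) stops just before "]", and everything in between lands in new_field.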
When writing regex, where in the regex string am I supposed to add the (?<new_field>) string? I have included a sample regex string below; where in this string would I add (?<new_field>)?

(?<=\:\[)(.*)(?=\])

Thanks!
Hi

In many cases, if you haven't done data onboarding correctly and set TIME_FORMAT correctly, Splunk can decide that 05/03/2024 is actually the 3rd of May 2024, not the 5th of March 2024. To check this, you need to look at whether those events are in the future. That requires that you add a correct end date, or a span reaching far enough into the future, e.g. latest=+10mon, in your SPL query. You can also check if there are issues with that date parsing in the MC and/or from internal logs.

r. Ismo
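The day/month ambiguity is easy to demonstrate outside Splunk. A small Python illustration (Splunk's TIME_FORMAT uses the same strptime-style conversion specifiers shown here):

```python
from datetime import datetime

raw = "05/03/2024"

# Day-first format: 5 March 2024
day_first = datetime.strptime(raw, "%d/%m/%Y")
# Month-first format: the very same string becomes 3 May 2024
month_first = datetime.strptime(raw, "%m/%d/%Y")

print(day_first.date())    # 2024-03-05
print(month_first.date())  # 2024-05-03
```

Both parses succeed silently, which is exactly why an ambiguous date can end up two months in the future without any parsing error.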
Hour (7-21,0-9,12-21):  The range 7-21 includes hours from 7 AM to 9 PM. The additional ranges 0-9 and 12-21 add hours from midnight to 9 AM and from noon to 9 PM, so the union of all three ranges covers hours 0 through 21. Therefore, the cron job runs every minute from midnight through 9:59 PM, excluding only the 10 PM and 11 PM hours.
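A quick way to sanity-check an hour field like this is to expand the ranges and take their union; a small Python sketch (the function name is made up, and it only handles the plain ranges and single values used here, not step syntax like */2):

```python
def expand_hours(field):
    """Expand a cron hour field like '7-21,0-9,12-21' into a set of hours."""
    hours = set()
    for part in field.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            hours.update(range(lo, hi + 1))  # cron ranges are inclusive
        else:
            hours.add(int(part))
    return hours

covered = expand_hours("7-21,0-9,12-21")
excluded = set(range(24)) - covered
print(sorted(excluded))  # [22, 23]
```

This confirms the reading above: the overlapping ranges collapse to hours 0-21, leaving only 22 and 23 (10 PM and 11 PM) uncovered.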
@Santosh2 Can you try this  : * 7-21,0-9,12-21 * * *
Thanks, You are right!  Need to use back quotes