All Posts

You need to clarify what you mean by "to get the index". Do you mean to restrict the main search's index to the values of index returned by the subsearch? If so, all you need to do is [|datamodel Tutorial Client_errors index | stats values(index) as index] <rest of your filters>
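A minimal sketch of how that might look in full - the datamodel portion is copied from this thread, while the trailing sourcetype/status filters are hypothetical placeholders and the whole thing is untested:
[| datamodel Tutorial Client_errors index | stats values(index) as index] sourcetype=access_* status>=400
| stats count by index
Splunk should expand the subsearch result into index= terms that restrict which indexes the outer search scans, so the rest of your filters run only against those indexes.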
Hello everyone, I would like to ask a question: is there any way for the main search to get the index returned from a subsearch, given that the subsearch is executed first? The results in the data model may come from different indexes. An example of mine: index=[|datamodel Tutorial Client_errors index | return index]
Hi Team, I need to create 3 calculated fields:
| eval action= case(error="invalid credentials", "failure", ((like('request.path',"auth/ldap/login/%") OR like('request.path',"auth/ldapco/login/%")) AND (valid="Success")) OR (like('request.path',"auth/token/lookup-self") AND ('auth.display_name'="root")) ,"success")
| eval app= case(action="success" OR action="failure", "appname_Authentication")
| eval valid= if(error="invalid credentials","Error","Success")
The action field is dependent on valid, and the app field is dependent on action. I am unable to see the app field in Splunk; may I know how to create it?
Hi, Is using status.hostIP not working for some reason?  I haven't tried it, but you might be able to just use spec.nodeName instead?
Hey, I discovered you can emulate the mvexpand command to avoid the limit configured in limits.conf. You just have to stats by the multivalue field you were trying to mvexpand, like so:     | stats values(*) AS * by <multivalue_field>     That's it, (edit:) assuming each value is unique, such as a unique identifier. You can make values unique using methods like foreach to prepend a row-based number to each value, then use split and mvindex to strip the row numbers off afterwards (see the sketch below). (/Edit.) Stats splits <multivalue_field> into its individual rows, and values(*) copies the other data across all of those rows. As an added measure, you can avoid carrying unnecessary _raw data, to reduce memory use, with an explicit fields command. In my experience, the | fields _time, * trick does not actually remove every single Splunk internal field; removing _raw had to be explicit.     | fields _time, xxx, yyy, zzz, <multivalue_field> | fields - _raw | stats values(*) AS * by <multivalue_field>     This strategy minimizes your search's disk usage as much as possible before expanding the multivalue field.
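A minimal sketch of the "make rows unique" idea - this is a simpler variant of what the edit describes: instead of embedding a row number in each value and stripping it back out, carry the row number as a separate field in the by clause. Field names are placeholders and the search is untested:
| fields _time, xxx, yyy, zzz, my_mv_field
| fields - _raw
| streamstats count as row_id
| stats values(*) AS * by row_id, my_mv_field
| fields - row_id
Because row_id is unique per original event, stats still splits my_mv_field into one row per value but never merges rows from different events that happen to share a value.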
collect can collect events in the future; the issue is how the collect command handles _time. It will NOT use the _time field as _time, and it behaves differently depending on whether it's run as a scheduled saved search or an ad hoc search. The docs on collect are really bad and buggy. Using addtime is also problematic. We use this process via a macro when using collect and you need specific control over _time:
| eval _raw=printf("_time=%d", _time)
| foreach "*"
    [| eval _raw=_raw.case(isnull('<<FIELD>>'),"", ``` Ignore null fields ```
     mvcount('<<FIELD>>')>1,", <<FIELD>>=\"".mvjoin('<<FIELD>>',"###")."\"", ``` Handle MV fields just in case ```
     ``` Concatenate the field with a quoted value and remove the original field ```
     !isnum('<<FIELD>>') AND match('<<FIELD>>', "[\[\]<>\(\){\}\|\!\;\,\'\"\*\n\r\s\t\&\?\+]"),", <<FIELD>>=\"".replace('<<FIELD>>',"\"","\\\"")."\"",
     ``` if no breakers, then don't quote the field ```
     true(), ", <<FIELD>>=".'<<FIELD>>')
    | fields - "<<FIELD>>" ]
| fields _raw
| collect index=bla addtime=f testmode=f
It will ignore null fields, write unquoted fields when they do not contain major breakers (which allows for more performant searching using TERM() search techniques), and join multivalue fields together with ###. You can also use this similar but slightly different approach:
| foreach * [ eval _raw = if(isnull('<<FIELD>>'), _raw, json_set(coalesce(_raw, json_object()), "<<FIELD>>",'<<FIELD>>'))]
| table _time _raw
Or you can use output_mode=hec, which I believe will get the time correct.
What is the output - do you want just 3 numbers (the 30 day, 90 day and 1 year average MTTR values), or are you looking for a timechart which shows 3 lines with the 30 and 90 day rolling averages? Averages are easy to calculate over multiple time windows because you can just collect counts and totals, so here's an example of calculating the 30 day average and the 90 day rolling average:
| makeresults count=730
| streamstats c
| eval _time=now() - (86400 * (floor(c/2)))
| eval mttr=random() % 100
| bin _time span=30d
| stats count sum(mttr) as sum_mttr avg(mttr) as mttr_avg_30 by _time
| streamstats window=3 sum(count) as count_90 sum(sum_mttr) as sum_90
| eval rolling_avg_90 = sum_90 / count_90
| eventstats sum(sum_mttr) as total_mttr sum(count) as total_count
| eval annual_avg = total_mttr / total_count
| fields - count_90 sum_90 count sum_mttr total_*
This example generates 2 events per day over a year and takes the 30 day average along with the count and sum of mttr; it then uses streamstats to calculate the 90 day rolling average, and finally eventstats to calculate the annual average.
There is no _time field after a tstats, so you either have to split by _time or add something like   | tstats max(_time) as _time...   but what you need to do depends on what you're trying to achieve. You can also use | tstats latest_time(var1) as _time... which will give you the latest _time at which the var1 field was seen.
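A minimal sketch of that applied to the tstats/stats pattern from this thread - the where clause is a hypothetical placeholder and the search is untested:
| tstats latest(var1) as var1 latest_time(var1) as _time where index=main by var2 var3
| eval var4 = ………..
| stats latest(var4) by var3
Carrying latest_time(var1) out as _time gives the downstream stats latest() a time to order the results by.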
This evening I decided to set up a test Splunk box in my lab to goof around with. It's been a while since I have done this part of the process (the work cluster has been up and running for years now). As I was looking at my local test box, I noticed the hard drive was not likely the best size to use. Since I have a syslog server running on this box as well, and I am pulling those files into Splunk (Splunk will not always be running, hence the data is not sent directly to Splunk), I wanted to try doing a line-level destructive read. I did see where others were using a monitor and deleting a file on ingestion, but did not see whether line-level reads were being done. So, the question is: has anyone done that, and if so, do you have some hints or pointers? Thanks
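For context, the file-level "delete on ingestion" approach mentioned above is typically done with a batch input using a sinkhole move policy rather than a monitor. A minimal inputs.conf sketch, with a hypothetical path and index:
[batch:///var/log/syslog-archive/*.log]
move_policy = sinkhole
index = syslog
disabled = 0
With move_policy = sinkhole, Splunk deletes each file after it has been indexed; this operates on whole files, not individual lines.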
Hi @OriP  Pls try something similar from this post -  https://community.splunk.com/t5/Splunk-Search/How-do-you-use-the-streamstats-command-after-tstats-and-stats/m-p/388189  
If the app is installed on a heavy forwarder then all parsing is done there using the configurations in the app.  There is little need for the app also to be on the indexers unless you like to wear suspenders (braces) with your belt. P.S.  I challenge the notion that almost every app uses script inputs.  Of the thousands of apps in Splunkbase, comparatively few use input scripts.
Is there a way to set Show Current Time to a different time zone?  I tried:  |eval _time=relative_time(now(),"+9h+30m") but that did not work.  Any ideas or thoughts?  This app works great.
I'm trying to understand what the time field is after tstats. We have the _time field for every event, and that's how tstats finds the latest event, but what is "latest" for a stats that comes after tstats? For example: | tstats latest(var1) as var1 by var2 var3 | eval var4 = ……….. | stats latest(var4) by var3
Hi @Sandivsu ... more details please: how did you come to know the queues are full - any warning/error messages? Is it a production system or a dev/test system? Are there any license issues, warnings, etc.?
Hi @thevikramyadav ... all the best for your Splunk learning. Remember these 3 components:
1) The Splunk Universal Forwarder collects the logs and sends them to the Splunk indexer.
2) The Splunk indexer indexes (ingests) the logs: it reads the logs, word by word, and writes them down to flat files for searching.
3) The Splunk search head is the web server that provides the Splunk GUI login page; it reads search requests from the users, sends them to the indexers, then collects the results from the indexers, consolidates them, and reports them.
From the Splunk documentation: In a distributed search environment, a Splunk Enterprise instance that handles search management functions, directing search requests to a set of search peers and then merging the results back to the user. A Splunk Enterprise instance can function as both a search head and a search peer. A search head that performs only searching, and not any indexing, is referred to as a dedicated search head. Search head clusters are groups of search heads that coordinate their activities. Search heads are also required components of indexer clusters.
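As a concrete illustration of component 1, a universal forwarder is pointed at an indexer through outputs.conf; a minimal sketch (the group name, host and port are hypothetical):
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.0.1.10:9997
The forwarder sends raw data to that indexer over port 9997, and the search head later queries the indexer when users run searches.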
Can someone help me get a better idea of the Splunk search head and how it works?
Thanks for your reply. I notice that almost every app uses script inputs (e.g., Splunk Add-on for Amazon Web Services, Splunk Add-on for Google Workspace, etc.). In what cases do I need to distribute the app to my indexers?
My org is pulling in vuln data using the Qualys TA and I am trying to put together a handful of searches and dashboards to see metrics quickly.  I'm currently using the following over the last 30 days:
index=qualys sourcetype=qualys:hostDetection SEVERITY=5 STATUS="FIXED"
| dedup HOST_ID, QID
| eval MTTR = ceiling(((strptime(LAST_FIXED_DATETIME, "%FT%H:%M:%SZ") - strptime(FIRST_FOUND_DATETIME, "%FT%H:%M:%SZ")) / 86400))
```| bucket span=1d _time```
| timechart span=1d avg(MTTR) as AVG_MTTR_PER_DAY
| streamstats window=7 avg(AVG_MTTR_PER_DAY) as 7_DAY_AVG
This gets me close, but I believe this is giving the average of averages, not the overall average. Using the month of May, I wouldn't have a calculated value until May 8th, which would use the data from May 1-7.  May 9th would use data from May 2-8, etc.  Any help on how to calculate the overall average?
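One way to get an overall rolling average rather than an average of daily averages is to carry daily sums and counts through and divide at the end, the same sum/count idea used in the rolling-average reply earlier on this page. A rough, untested sketch based on the search above (AVG_7_DAY is a made-up field name):
index=qualys sourcetype=qualys:hostDetection SEVERITY=5 STATUS="FIXED"
| dedup HOST_ID, QID
| eval MTTR = ceiling(((strptime(LAST_FIXED_DATETIME, "%FT%H:%M:%SZ") - strptime(FIRST_FOUND_DATETIME, "%FT%H:%M:%SZ")) / 86400))
| timechart span=1d sum(MTTR) as SUM_MTTR count as CNT
| streamstats window=7 sum(SUM_MTTR) as SUM_7 sum(CNT) as CNT_7
| eval AVG_7_DAY = SUM_7 / CNT_7
| fields _time AVG_7_DAY
This weights every fixed vulnerability equally within the 7-day window instead of giving each day's average equal weight.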
The Monitoring Console has that at the bottom of the overview page.  Click on the magnifying glass icon to see the SPL.
1. Yes. Such apps should be installed on a heavy forwarder.
2. Some preparation may be necessary, depending on the app. Inputs.conf should be removed or all inputs disabled, for example.
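For example, if the app ships an inputs.conf, its inputs can be disabled with a local override inside the app on the indexers; a minimal sketch (the stanza name is hypothetical and depends on the app):
# local/inputs.conf inside the app
[script://./bin/example_input.py]
disabled = 1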