All Posts

I found a shorter example to display the result set. Thank you for your efforts, dtburrows3.

| bin span=10m _time
| eval minute=strftime(_time,"%M")
| where minute>54 OR minute<6
| stats count by _time
I was also looking for something that did this for a really long time and could never find anything. I know about CMD+SHIFT+E to expand macros in the UI, but I needed the same functionality inline in a search for meta-analysis (breaking SPL down into its components and analyzing them). I feel like some way of doing this exists somewhere, but I have not had much luck finding it, so I went ahead and tried making a custom command to do it, and it actually seems to work out pretty well.

I do want to note that this custom command is recursive in the sense that it expands macros all the way down: if there are nested macros, it expands those as well until there are no more macros to expand, so the end result should be the fully detailed SPL that is actually executed. It also replaces the input arguments with the values it finds in the input field, so it returns the SPL that would run for that specific search with the given arguments. You can see an example of the output here (this particular example is derived from a dashboard, so input arguments are still tokenized and are represented as such in the "expanded_spl" field):

If you are still interested in this, you can give it a try. I think it requires entries in commands.conf, searchbnf.conf, and metadata/local.meta, plus a custom Python script in bin/. There is also a dependency on the Splunk Python SDK. Send me a message and I can get it packaged up in a custom app to share if you still need this functionality.
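Purely as an illustration of how a command like this might be invoked inline for meta-analysis (the command name expandmacros and its field option are hypothetical placeholders, not the author's actual implementation), a sketch might run it against every saved search on the system:

| rest /services/saved/searches
| table title, search
``` hypothetical custom command: expands macros found in the "search" field into "expanded_spl" ```
| expandmacros field=search
| table title, search, expanded_spl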
Hi, Does anyone out there use any archiving software to monitor, report and manage frozen bucket storage in an on-prem archive storage location? I have found https://www.conducivesi.com/splunk-archiver-vsl to fit our requirements but we are interested in looking at other options.  Thanks 
Hi,

I would like to use this as an alert, but I am a bit confused about the trigger:

index=_internal group=per_index_thruput source=*metrics.log NOT series=_*
| eval last_seen=now()-_time
| stats max(last_seen) as seconds_since_seen by series
| rename series as index
| where seconds_since_seen < 120

Specifically, I am unsure what value to use for 'seconds_since_seen'. If most indices are in the 800-second range, I am not sure whether a low value like 120 seconds is going to cause false positives. Any suggestions for a proper value to monitor indices would be greatly appreciated.

Cheers, Paul
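If the goal is to flag indices that have gone quiet rather than ones seen recently (an assumption about the intent here), a minimal sketch is to keep the newest event time per index and alert when the gap exceeds a threshold comfortably above the typical reporting interval; the 900-second value below is only a placeholder to tune against your quietest index, not a recommendation:

index=_internal group=per_index_thruput source=*metrics.log NOT series=_*
| stats max(_time) as last_seen by series
| rename series as index
| eval seconds_since_seen=now()-last_seen
| where seconds_since_seen > 900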
You can try utilizing a foreach mode=multivalue loop to gather deltas between the timestamps and then do descriptive statistics around the new delta MV field. Something like this:

<base_search>
``` sort the event_time mvfield values ```
| eval event_time=mvsort(event_time)
``` initialize the field "current" for the next foreach loop ```
| eval current=mvindex(event_time, 0)
``` loop through each value in event_time and subtract the preceding value to get a delta ```
| foreach mode=multivalue event_time
    [ | eval tmp_delta='<<ITEM>>'-'current', delta=mvappend(delta, tmp_delta), current='<<ITEM>>' ]
``` remove these fields as they are no longer needed ```
| fields - current, tmp_delta
| eval
    ``` strip off the first entry of the delta mvfield since it will always be zero and skew stats ```
    delta=mvindex(delta, 1, -1),
    ``` calculate the average delta between timestamps ```
    avg_delta=avg(delta),
    ``` diff is a temp field to assist with evaluating the standard deviation s=√(Σ(delta-avg_delta)^2/(n-1)) ```
    diff=mvmap(delta, 'delta'-'avg_delta'),
    diff_2=mvmap(diff, pow(diff, 2)),
    stdev_delta=sqrt(sum(diff_2)/(mvcount(diff_2)-1)),
    ``` evaluate variance ```
    variance_delta=pow('stdev_delta', 2)
``` remove the diff fields as they were only temporary helpers for the standard deviation ```
| fields - diff, diff_2
``` zscore for more detail ```
| eval zscore_delta=mvmap(delta, (('delta'-'avg_delta')/'stdev_delta'))
``` human-readable (duration) format of the deltas for context ```
| eval stdev_duration=tostring(stdev_delta, "duration"), range_delta=max(delta)-min(delta), range_duration=tostring(range_delta, "duration")
``` just sorting the field list for final display ```
| fields + event_time, delta, zscore_delta, avg_delta, stdev_delta, variance_delta, range_delta, stdev_duration, range_duration

It should return a table that looks like this.
@mark_groenveld Is it that you simply want a single value representing the 10-minute period from x:55 to x+1:05, so you have one row per hour? e.g.

index=main (earliest=01/08/2024:08:55:00 latest=01/08/2024:09:05:00) OR (earliest=01/08/2024:09:55:00 latest=01/08/2024:10:05:00) OR (earliest=01/08/2024:10:55:00 latest=01/08/2024:11:05:00)
| bin _time span=10m aligntime=earliest
| stats count by _time
| sort _time

If so, it's just the aligntime=earliest in the bin command.
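As a sketch of an alternative that avoids enumerating each hour's window (assuming your Splunk version accepts a relative-time specifier for aligntime; this does scan the in-between events too, so the explicit windows above are more efficient), you can anchor the 10-minute bins at :55 past the hour and keep only those bins:

index=main earliest=01/08/2024:08:55:00 latest=01/08/2024:11:05:00
| bin _time span=10m aligntime=@d+55m
| where strftime(_time, "%M")=="55"
| stats count by _time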
An alternative which will just return a single 'closest' value using foreach is

your_search_and_lookup
| eval diff=max(Amount_Hist)
| foreach Amount_Hist mode=multivalue
    [ eval new_diff=min(diff, abs(Amount-<<ITEM>>)), closest=if(new_diff<diff, <<ITEM>>, closest), diff=new_diff ]
| fields - new_diff
Not sure if you still need this answered, but I figure I'd give it a shot in case anybody else has a similar problem they need to solve. I think something like this would do what you are looking for.

<base_search>
``` look up historical values for each user (potentially an mv field) ```
| lookup <lookup_name> user OUTPUT Amount_Hist
``` loop through the Amount_Hist values, find the diff of each compared to the user's current value, then take the minimum and assign it to its own field "Amount_Hist_min_diff" ```
| eval Amount_Hist_min_diff=min(
        case(
            mvcount(Amount_Hist)==1, abs('Amount_Hist'-'Amount'),
            mvcount(Amount_Hist)>1, mvmap(Amount_Hist, abs('Amount_Hist'-'Amount'))
            )
        ),
    ``` loop back through the historical values and only return the values whose diff is equal to the minimum diff assigned previously ```
    Closest_Amount_Hist_value=mvdedup(
        case(
            mvcount(Amount_Hist)==1, if(abs('Amount_Hist'-'Amount')=='Amount_Hist_min_diff', 'Amount_Hist', null()),
            mvcount(Amount_Hist)>1, mvmap(Amount_Hist, if(abs('Amount_Hist'-'Amount')=='Amount_Hist_min_diff', 'Amount_Hist', null()))
            )
        )

You can see that the field "Closest_Amount_Hist_value" holds the value closest to the user's current value; it can potentially hold multiple values if they are equidistant from the current value, as shown below.
The Microsoft Teams Add-On for Splunk shows a Microsoft Teams tab for the Microsoft 365 App for Splunk; however, I do not see that tab in the app. Has it been removed, or am I missing something? Is it possible that something is available on-prem but not in Cloud?
What endpoint are you using in the custom action configuration? I found that if I used something like /api/now/table/incident instead of /api/now/table/x_splu2_splunk_ser_u_splunk_incident (the one recommended by the alert action GUI), I would get new incidents every time. The downside of using the recommended endpoint is that an intermediate table gets populated in ServiceNow. The upside is that it does correlation based on the correlation ID sent over from the alert action. That correlation ID is a hash of the alert name, so any future actions that go across to ServiceNow will still correlate under that correlation ID (and the originally created incident) rather than create new incidents.
You can "cheat" a bit Just shift your timestamps 5  minutes forward/backward, do bin over 10 minutes, then shift back <your search> | eval _time=_time-300 | bin _time span=10m | eval _time... See more...
You can "cheat" a bit Just shift your timestamps 5  minutes forward/backward, do bin over 10 minutes, then shift back <your search> | eval _time=_time-300 | bin _time span=10m | eval _time=_time+300 You can also try to use the "bin" command with "align=earliest" to make it start the bin at the beginning of your search, instead of at the top of the hour.
Wait, wait, wait a second... Your single server has all these roles? Something is very wrong here. OK, I assume maybe it only has those roles designated in the Monitoring Console, but still... If a server appears to be both Cluster Master and Indexer, something is wrong with your setup. While you can sometimes mix some functionalities in a very small deployment, a server that is supposed to be a CM, deployer, and deployment server all at once... that's a big no-no! Are you absolutely sure this server does all that?
You answered the questions about data format, i.e., Question 2, and also Question 1 to some extent. (It is always more useful to construct a mock results table than to use words.) You did not indicate any intention to use UpdateTime in any numeric comparison downstream, negating part of what you implied earlier. I will assume that the answer to Question 4 is "no". As to Question 3, your update implies a "yes": the only change you want is precision. And by specifying HH:MM without any other condition, I deduce that you trust that the raw agent.status.policy_refresh_at values all bear the same timezone.

If the above is correct, you can use pure string manipulation to achieve what you wanted:

| eval agent.status.policy_refresh_at = split('agent.status.policy_refresh_at', "T") ``` separate date from time of day ```
| eval UpdateDate = mvindex('agent.status.policy_refresh_at', 0)
| eval UpdateTime = split(mvindex('agent.status.policy_refresh_at', 1), ":") ``` break time of day by colon ```
| eval UpdateTime = mvjoin(mvindex(UpdateTime, 0, 1), ":") ``` reconstruct with the first two elements only ```

Using the same emulation I constructed from your mock data, the output should be

UpdateDate    UpdateTime    agent.status.policy_refresh_at    host
2024-01-04    10:31         2024-01-04 10:31:35.529752Z       CN****
2024-01-04    10:31         2024-01-04 10:31:51.654448Z       CN****
2023-11-26    05:57         2023-11-26 05:57:47.775675Z       gb****
2024-01-04    10:32         2024-01-04 10:32:14.416359Z       cn****
2024-01-04    10:30         2024-01-04 10:30:32.998086Z       cn****

Hope this helps.
Splunk says that it is an "as yet published" "known issue" since v9.1.0.  They will not classify it as a "bug" until they can verify.  How can it be classified as a known issue, and simultaneously not verified? 
Ahh okay, give this a try.

<base_search>
| where ('_time'>=relative_time(_time, "@h-5m@m") AND '_time'<=relative_time(_time, "@h+5m@m"))
    OR ('_time'>=relative_time(_time, "+1h@h-5m@m") AND '_time'<=relative_time(_time, "+1h@h+5m@m"))
| eval upper_hour_epoch=relative_time(_time, "+1h@h"),
    lower_hour_epoch=relative_time(_time, "@h"),
    upper_hour=strftime(relative_time(_time, "+1h@h"), "%Y-%m-%d %H:%M:%S"),
    lower_hour=strftime(relative_time(_time, "@h"), "%Y-%m-%d %H:%M:%S"),
    upper_hour_diff=abs('_time'-'upper_hour_epoch'),
    lower_hour_diff=abs('_time'-'lower_hour_epoch'),
    diff_minimum=min(upper_hour_diff, lower_hour_diff)
| foreach *_diff
    [ | eval snap_hour=if('diff_minimum'=='<<FIELD>>', '<<MATCHSTR>>', 'snap_hour') ]
| stats count as count, min(_time) as min_time, max(_time) as max_time by snap_hour
| convert ctime(min_time), ctime(max_time)

The output should only include results within a +/- 5 minute window around each hour.

And if you need to differentiate between the event counts that fall in the lower 5 minutes and the upper 5 minutes, you could do something like this.

<base_search>
| where ('_time'>=relative_time(_time, "@h-5m@m") AND '_time'<=relative_time(_time, "@h+5m@m"))
    OR ('_time'>=relative_time(_time, "+1h@h-5m@m") AND '_time'<=relative_time(_time, "+1h@h+5m@m"))
| eval upper_hour_epoch=relative_time(_time, "+1h@h"),
    lower_hour_epoch=relative_time(_time, "@h"),
    upper_hour=strftime(relative_time(_time, "+1h@h"), "%Y-%m-%d %H:%M:%S"),
    lower_hour=strftime(relative_time(_time, "@h"), "%Y-%m-%d %H:%M:%S"),
    upper_hour_diff=abs('_time'-'upper_hour_epoch'),
    lower_hour_diff=abs('_time'-'lower_hour_epoch'),
    diff_minimum=min(upper_hour_diff, lower_hour_diff)
| foreach *_diff
    [ | eval snap_hour=if('diff_minimum'=='<<FIELD>>', '<<MATCHSTR>>', 'snap_hour') ]
| fields - diff_minimum, lower_hour, lower_hour_diff, lower_hour_epoch, upper_hour, upper_hour_diff, upper_hour_epoch
| eval snap_hour_epoch=strptime(snap_hour, "%Y-%m-%d %H:%M:%S"),
    group=if('_time'-'snap_hour_epoch'>=0, "+5m", "-5m")
| stats count as count by snap_hour_epoch, group
| sort 0 +snap_hour_epoch
| rename snap_hour_epoch as _time

Example output:
Thanks dtburrows3 for replying.  The chiefs requesting the data want 10 minute increments.
Hi @noka.wif, Please check out this AppD Docs page: https://docs.appdynamics.com/fso/cloud-native-app-obs/en/cloud-and-infrastructure-monitoring/google-cloud-platform-observability
Are you wanting the three 5-minute buckets added together, or is it okay if they are separated? You can try something like this, but it still separates them out into their respective 5-minute buckets.

<base_search>
| bucket span=5m _time
| stats count as count by _time
| eval count=if(
    '_time'==relative_time(_time, "+1h@h-5m@m")
        OR '_time'==relative_time(_time, "@h+5m@m")
        OR '_time'==relative_time(_time, "@h"),
    'count',
    null()
    )
| where isnotnull(count)
| sort 0 +_time

Example output:
Hi @saurav.kosankar, Since the Community has not jumped in to help out, you may want to contact AppD Support or reach out to your AppD Rep for further On-Prem help. How do I submit a Support ticket? An FAQ 
I am looking to represent stats for the 5 minutes before and after the hour for an entire day/time period. The search below works, but it still breaks the times up into 5-minute chunks as it crosses the top of the hour. Is there a better way to search?

index=main (earliest=01/08/2024:08:55:00 latest=01/08/2024:09:05:00) OR (earliest=01/08/2024:09:55:00 latest=01/08/2024:10:05:00) OR (earliest=01/08/2024:10:55:00 latest=01/08/2024:11:05:00)
| bin _time span=10m
| stats count by _time

Received results: