@mark_groenveld Is it that you simply want a single value representing the 10-minute period from x:55 to x+1:05, so you have one row per hour? E.g.

index=main (earliest=01/08/2024:08:55:00 latest=01/08/2024:09:05:00) OR (earliest=01/08/2024:09:55:00 latest=01/08/2024:10:05:00) OR (earliest=01/08/2024:10:55:00 latest=01/08/2024:11:05:00)
| bin _time span=10m aligntime=earliest
| stats count by _time
| sort _time

If so, it's just the aligntime=earliest option in the bin command.
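For intuition, here is roughly what `bin _time span=10m aligntime=earliest` computes, sketched in Python outside of Splunk (the function name and epoch value are mine, purely for illustration):

```python
def bin_time(ts, span=600, align=0):
    """Floor an epoch timestamp into a span-second bucket whose grid
    starts at `align` (as aligntime=earliest does with the earliest
    event time), instead of at the top of the hour."""
    return align + ((ts - align) // span) * span

earliest = 1704704100  # hypothetical epoch for 08:55:00
# an event 30 s after the earliest event falls in the first bucket;
# an event 610 s after it falls in the next 10-minute bucket
first_bucket = bin_time(earliest + 30, 600, earliest)    # earliest
second_bucket = bin_time(earliest + 610, 600, earliest)  # earliest + 600
```

Because the grid is anchored at the earliest event rather than the wall-clock hour, a search starting at x:55 produces one whole bucket per x:55-to-x+1:05 window.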
An alternative, which will return just a single 'closest' value using foreach, is:

your_search_and_lookup
| eval diff=max(Amount_Hist)
| foreach Amount_Hist mode=multivalue
    [ eval new_diff=min(diff, abs(Amount-<<ITEM>>)),
           closest=if(new_diff<diff, <<ITEM>>, closest),
           diff=new_diff ]
| fields - new_diff
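The running-minimum scan that the foreach performs can be sketched in plain Python (a sketch, not SPL: this version seeds the difference with infinity rather than max(Amount_Hist), which sidesteps the edge case where the current value is further from every historical value than that seed):

```python
def closest(amount, history):
    """Return the history value with the smallest absolute difference
    from `amount`, mirroring the foreach running-minimum above."""
    diff, best = float("inf"), None
    for item in history:
        new_diff = min(diff, abs(amount - item))
        if new_diff < diff:
            best = item
        diff = new_diff
    return best
```

For example, `closest(42, [10, 40, 55, 90])` returns 40, the historical value nearest the current amount.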
Not sure if you still need this answered, but I figured I'd give it a shot in case anybody else has a similar problem to solve. I think something like this would do what you are looking for.

<base_search>
``` lookup historical values for each user (potentially a multivalue field) ```
| lookup <lookup_name> user OUTPUT Amount_Hist
``` loop through the Amount_Hist values, find the diff of each compared to the user's current value, then take the minimum and assign it to its own field "Amount_Hist_min_diff" ```
| eval Amount_Hist_min_diff=min(
        case(
            mvcount(Amount_Hist)==1, abs('Amount_Hist'-'Amount'),
            mvcount(Amount_Hist)>1, mvmap(Amount_Hist, abs('Amount_Hist'-'Amount'))
            )
        ),
``` loop back through the historical values and only return the values whose diff equals the minimum diff assigned previously ```
    Closest_Amount_Hist_value=mvdedup(
        case(
            mvcount(Amount_Hist)==1, if(abs('Amount_Hist'-'Amount')=='Amount_Hist_min_diff', 'Amount_Hist', null()),
            mvcount(Amount_Hist)>1, mvmap(Amount_Hist, if(abs('Amount_Hist'-'Amount')=='Amount_Hist_min_diff', 'Amount_Hist', null()))
            )
        )

You can see that the field "Closest_Amount_Hist_value" holds the value closest to the user's current value; it can hold multiple values if they are equidistant from the current value, as shown below.
The Microsoft Teams Add-On for Splunk shows a Microsoft Teams tab for the Microsoft 365 App for Splunk; however, I do not see that tab in the app. Has it been removed, or am I missing something? Possibly it is available on-prem but not in Cloud?
What endpoint are you using in the custom action configuration? I found that if I used something like /api/now/table/incident instead of /api/now/table/x_splu2_splunk_ser_u_splunk_incident (recommended by the alert action GUI), I would get new incidents every time. The downside of using the recommended endpoint is that an intermediate table gets populated in ServiceNow. The upside is that it does correlations based on the correlation ID sent over from the alert action. That correlation ID is a hash of the alert name, so any future actions that go across to ServiceNow will still correlate under that correlation ID (and the originally created incident) and not create new incidents.
You can "cheat" a bit. Just shift your timestamps 5 minutes forward/backward, do the bin over 10 minutes, then shift back:

<your search>
| eval _time=_time-300
| bin _time span=10m
| eval _time=_time+300

You can also try the bin command with aligntime=earliest to make the bins start at the beginning of your search, instead of at the top of the hour.
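In Python terms, the shift/bin/shift trick looks like this (a sketch with arbitrary epoch arithmetic, just to show the grid it produces):

```python
def shifted_bin(ts, span=600, shift=300):
    """Shift back 5 minutes, floor onto a 10-minute grid, then shift
    forward again, so each bucket runs from hh:55 to (hh+1):05."""
    return ((ts - shift) // span) * span + shift
```

An event at epoch 3600 (01:00:00) lands in the bucket starting at 3300 (00:55:00), which is exactly the straddle-the-hour bucket the question asks for.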
Wait, wait, wait a second... Your single server has all these roles? Something is very wrong here. OK, I assume it may be that those roles are only designated that way in the Monitoring Console, but still... If a server appears to be both Cluster Master and Indexer, something is wrong with your setup. While you can sometimes mix some functionalities in a very small deployment, a server that is supposed to be a CM, deployer, and deployment server all at once... that's a big no-no! Are you absolutely sure this server does all that?
You answered questions about data format, i.e., Question 2, and also Question 1 to some extent. (It would always be more useful for you to construct a mock results table than to use words.) You did not indicate any intention to use UpdateTime in any numeric comparison downstream, negating part of what you implied earlier. I will assume that the answer to Question 4 is "no". As to Question 3, your update implies a "yes". The only change you want is precision. And by specifying HH:MM without any other condition, I deduce that you trust that the raw agent.status.policy_refresh_at values all bear the same timezone.

If the above is correct, you can use pure string manipulation to achieve what you wanted:

| eval agent.status.policy_refresh_at = split('agent.status.policy_refresh_at', "T") ``` separate date from time of day ```
| eval UpdateDate = mvindex('agent.status.policy_refresh_at', 0)
| eval UpdateTime = split(mvindex('agent.status.policy_refresh_at', 1), ":") ``` break time of day by colon ```
| eval UpdateTime = mvjoin(mvindex(UpdateTime, 0, 1), ":") ``` reconstruct with first two elements only ```

Using the same emulation I constructed from your mock data, the output should be

UpdateDate  UpdateTime  agent.status.policy_refresh_at  host
2024-01-04  10:31       2024-01-04 10:31:35.529752Z     CN****
2024-01-04  10:31       2024-01-04 10:31:51.654448Z     CN****
2023-11-26  05:57       2023-11-26 05:57:47.775675Z     gb****
2024-01-04  10:32       2024-01-04 10:32:14.416359Z     cn****
2024-01-04  10:30       2024-01-04 10:30:32.998086Z     cn****

Hope this helps.
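The same string-only trim can be sketched in Python for clarity (the function name is mine, not part of the SPL above):

```python
def trim_refresh(ts):
    """Split an ISO-style timestamp on 'T', then keep only HH:MM of
    the time of day -- pure string manipulation, no timezone math."""
    date, time_of_day = ts.split("T")
    hh, mm = time_of_day.split(":")[:2]
    return date, hh + ":" + mm

trim_refresh("2024-01-04T10:31:35.529752Z")  # -> ('2024-01-04', '10:31')
```

Like the SPL, this silently trusts that every raw value carries the same timezone suffix; nothing is converted to epoch time.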
Splunk says that it is an "as yet published" "known issue" since v9.1.0.  They will not classify it as a "bug" until they can verify.  How can it be classified as a known issue, and simultaneously not verified? 
Ahh okay, give this a try.

<base_search>
| where ('_time'>=relative_time(_time, "@h-5m@m") AND '_time'<=relative_time(_time, "@h+5m@m"))
    OR ('_time'>=relative_time(_time, "+1h@h-5m@m") AND '_time'<=relative_time(_time, "+1h@h+5m@m"))
| eval upper_hour_epoch=relative_time(_time, "+1h@h"),
       lower_hour_epoch=relative_time(_time, "@h"),
       upper_hour=strftime(relative_time(_time, "+1h@h"), "%Y-%m-%d %H:%M:%S"),
       lower_hour=strftime(relative_time(_time, "@h"), "%Y-%m-%d %H:%M:%S"),
       upper_hour_diff=abs('_time'-'upper_hour_epoch'),
       lower_hour_diff=abs('_time'-'lower_hour_epoch'),
       diff_minimum=min(upper_hour_diff, lower_hour_diff)
| foreach *_diff
    [ | eval snap_hour=if('diff_minimum'=='<<FIELD>>', '<<MATCHSTR>>', 'snap_hour') ]
| stats count as count, min(_time) as min_time, max(_time) as max_time by snap_hour
| convert ctime(min_time), ctime(max_time)

The output should only include results within a +/- 5-minute window around each hour. And if you need to differentiate between the event counts that fall in the lower 5 minutes and the upper 5 minutes, you could do something like this.
<base_search>
| where ('_time'>=relative_time(_time, "@h-5m@m") AND '_time'<=relative_time(_time, "@h+5m@m"))
    OR ('_time'>=relative_time(_time, "+1h@h-5m@m") AND '_time'<=relative_time(_time, "+1h@h+5m@m"))
| eval upper_hour_epoch=relative_time(_time, "+1h@h"),
       lower_hour_epoch=relative_time(_time, "@h"),
       upper_hour=strftime(relative_time(_time, "+1h@h"), "%Y-%m-%d %H:%M:%S"),
       lower_hour=strftime(relative_time(_time, "@h"), "%Y-%m-%d %H:%M:%S"),
       upper_hour_diff=abs('_time'-'upper_hour_epoch'),
       lower_hour_diff=abs('_time'-'lower_hour_epoch'),
       diff_minimum=min(upper_hour_diff, lower_hour_diff)
| foreach *_diff
    [ | eval snap_hour=if('diff_minimum'=='<<FIELD>>', '<<MATCHSTR>>', 'snap_hour') ]
| fields - diff_minimum, lower_hour, lower_hour_diff, lower_hour_epoch, upper_hour, upper_hour_diff, upper_hour_epoch
| eval snap_hour_epoch=strptime(snap_hour, "%Y-%m-%d %H:%M:%S"),
       group=if('_time'-'snap_hour_epoch'>=0, "+5m", "-5m")
| stats count as count by snap_hour_epoch, group
| sort 0 +snap_hour_epoch
| rename snap_hour_epoch as _time

Example output:
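The snap-to-nearest-hour step that the *_diff comparison implements reduces, in plain Python epoch terms, to something like this (a sketch, names mine):

```python
def snap_hour(ts):
    """Snap an epoch timestamp to whichever hour boundary (previous or
    next) it is closer to; ties go to the previous hour."""
    lower = (ts // 3600) * 3600   # top of the current hour
    upper = lower + 3600          # top of the next hour
    return lower if ts - lower <= upper - ts else upper
```

Events within 5 minutes either side of an hour therefore all snap to that same hour, which is what lets the final stats count them as one group.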
Thanks dtburrows3 for replying. The chiefs requesting the data want 10-minute increments.
Hi @noka.wif, Please check out this AppD Docs page: https://docs.appdynamics.com/fso/cloud-native-app-obs/en/cloud-and-infrastructure-monitoring/google-cloud-platform-observability
Are you wanting the three 5-minute buckets added together, or is it okay if they are separated? You can try something like this, though it still separates them into their respective 5-minute buckets.

<base_search>
| bucket span=5m _time
| stats count as count by _time
| eval count=if(
        '_time'==relative_time(_time, "+1h@h-5m@m")
        OR '_time'==relative_time(_time, "@h+5m@m")
        OR '_time'==relative_time(_time, "@h"),
        'count', null())
| where isnotnull(count)
| sort 0 +_time

Example output:
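The eval/where pair keeps only the 5-minute buckets starting at :55, :00, and :05 past each hour. In epoch terms that is just an offset-into-the-hour test, sketched here in Python (name mine):

```python
def keep_bucket(bucket_start):
    """True for 5-minute buckets starting at :55, :00, or :05,
    i.e. offsets 3300, 0, and 300 seconds into the hour."""
    return bucket_start % 3600 in (0, 300, 3300)
```

Every other bucket in the hour (offsets 600 through 3000) is dropped, leaving three buckets per hour straddling each hour boundary.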
Hi @saurav.kosankar, Since the Community has not jumped in to help out, you may want to contact AppD Support or reach out to your AppD Rep for further On-Prem help. How do I submit a Support ticket? An FAQ
I am looking to represent stats for the 5 minutes before and after the hour for an entire day/time period. The search below works but still breaks the times into 5-minute chunks as it crosses the top of the hour. Is there a better way to search?

index=main (earliest=01/08/2024:08:55:00 latest=01/08/2024:09:05:00) OR (earliest=01/08/2024:09:55:00 latest=01/08/2024:10:05:00) OR (earliest=01/08/2024:10:55:00 latest=01/08/2024:11:05:00)
| bin _time span=10m
| stats count by _time

Received results:
Hello @Eduardo.Rosa, You will need to have your Account Admin give you the proper permissions to make those changes. Please check out this TKB - https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-manage-Accounts-Management-Portal-users-as-an-Admin/ta-p/23286
Rather than 2 events, I see 11.  Eight of them contain a single value and the remainder contain different numbers of values.  A well-formed CSV file will have the same number of values in each event.  Any values with embedded CRLFs must be enclosed in quotation marks.
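Python's csv module illustrates that quoting rule: a field containing CR/LF is emitted inside quotation marks, which is what keeps a multi-line value inside one CSV record (this sketch relies only on the stdlib's default QUOTE_MINIMAL behaviour):

```python
import csv
import io

buf = io.StringIO()
# the second field contains an embedded CRLF, so the writer quotes it
csv.writer(buf).writerow(["id1", "line one\r\nline two"])
row = buf.getvalue()
```

Without those quotes, a CSV parser would treat the embedded newline as a record boundary, producing exactly the kind of short, ragged events described above.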
Hi @Joseph.McNellage, I would suggest searching the community for existing content. If you don't find anything, you can always reach out to AppD Support.  How do I submit a Support ticket? An FAQ 
I tried the way you suggested but got the same error as above. These are the last two values of my CSV file in Notepad++. Should every line have a CRLF?
Hi @scottbrion - I'm a Community Moderator in the Splunk Community. This question was posted 3 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!