Hi,
I have status codes from 200 to 210, and I need to set up an alert. The alert should look at the last 10 minutes: if a status code was not generated in the last 10 minutes, Splunk should send an alert. How could I build a search for this?
Like this (for real this time):
index="YouShouldAlwaysSpecifyAnIndex" sourcetype="AndSourcetypeToo" status_code>=200 AND status_code<=210
| appendpipe
[| gentimes start=200 end=210
| streamstats count AS status_code
| eval status_code = status_code + 199
| fields status_code]
| stats count BY status_code
| eval count=count-1
| where count=0
@vemurisurya can you explain the query you used to solve this problem? And what if I have to monitor only one status code, say 200, and want the same thing you stated?
The trick is that you cannot count events that didn't happen so you have to pretend that everything happened once to instantiate a counter for each one and then back down by 1.
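To see the trick in action, here is a run-anywhere sketch: it uses makeresults to fake a handful of events (one 200 and two 201s — these values are made up for illustration), seeds every code from 200 to 210 once, and then backs each count down by 1 so that only the codes that never actually occurred survive the where clause.

```
| makeresults count=3
| streamstats count
| eval status_code=case(count=1, 200, count=2, 201, count=3, 201)
| fields status_code
| append
    [| makeresults count=11
     | streamstats count AS status_code
     | eval status_code = status_code + 199
     | fields status_code]
| stats count BY status_code
| eval count = count - 1
| where count=0
```

With the fake data above, the result should list 202 through 210, each with count 0, because those codes never "happened".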
@woodcock can you explain this query please?
@woodcock can you please explain this query? And what if I have to monitor only the 200 status code, not a range of codes? What modified query would work in that case? If you could help, please.
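A sketch for the single-code case (the index and sourcetype here are the same placeholders used in the answers, so substitute your own): with only one code to watch, there is nothing to break out by status_code, so a plain event count is enough.

```
index="YouShouldAlwaysSpecifyAnIndex" sourcetype="AndSourcetypeToo" status_code=200
| stats count
| where count=0
```

Schedule it over the last 10 minutes and set the alert to trigger when the number of results is greater than 0.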
Like this:
index="YouShouldAlwaysSpecifyAnIndex" sourcetype="AndSourcetypeToo" status_code>=200 AND status_code<=210
| stats count BY status_code
| where count=0
Schedule this to run every 5 minutes over the last 10 minutes, and set it to trigger when the number of events is greater than 0.
Huh? Oh, he wanted an alert when NONE of those events had been generated. Nope, still doesn't work for me.
Ah, never mind, you posted the correct code on a new answer.
Here's the code for checking whether ANY of those eleven status codes had failed to be produced.
index="YouShouldAlwaysSpecifyAnIndex" sourcetype="AndSourcetypeToo" status_code>=200 AND status_code<=210
| fields status_code
| append [| stats count | eval status_code=mvrange(200,211) | mvexpand status_code | table status_code]
| stats count BY status_code
| eval count=count-1
| where count=0
Like this:
index="YouShouldAlwaysSpecifyAnIndex" sourcetype="AndSourcetypeToo" status_code>=200 AND status_code<=210
| stats count
| where count=0
Schedule this to run every 5 minutes over the last 10 minutes, and set it to trigger when the number of events is greater than 0.
I need the alert report like this: if any status code was not generated within the last hour, it should send an alert, like:
status_code  count
200          0
201          1
202          5
203          0
If any status code's count is 0, then the alert should be triggered.
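Assuming the same placeholder index and sourcetype as the answers, a sketch that produces that table for the last hour uses the same seed-and-subtract trick, but keeps every row instead of filtering down to the zeros:

```
index="YouShouldAlwaysSpecifyAnIndex" sourcetype="AndSourcetypeToo" status_code>=200 status_code<=210 earliest=-1h@h
| fields status_code
| append
    [| stats count
     | eval status_code=mvrange(200,211)
     | mvexpand status_code
     | table status_code]
| stats count BY status_code
| eval count = count - 1
```

For the alert itself, add | where count=0 back onto the end and trigger when the number of results is greater than 0.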
That is different and not the plain reading of what you asked. See my new answer (soon).
This shows only the status codes that exist in the events; if a status code does not appear in the events, it doesn't show any value, so there is no visibility for the missing status codes.
HA HA HA! I am an idiot! This is a "how many dogs didn't bark last night" problem!
If possible, Unaccept this answer and Accept my other one, which has the full/correct answer (which this one alludes to).
I think he wants the case where there are no events. He stated in the question:
if a stats code was not generated in last 10 min
Yeah, correct. If no status code was generated in the last 10 minutes, Splunk needs to trigger an alert.
That's what it does (the logic is in the search; I ALWAYS put my logic in the search and use "number of events is greater than 0" as the trigger condition).
Oops, yeah. I've got egg on my face. 🙂 I didn't realize that was the result until you pointed that out to me. I think the explanation helps the understanding of your method. Thanks!
Figuring you are using the GUI to enter the search, use -10m@m to now in the time picker.
Then use something like this for the search:
... stats_code>199 stats_code<211
Then when you create the alert, set the alert condition to alert if the count of events = 0.
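Put together, the whole thing might look like this (the index and sourcetype are the placeholder names from the other answers, and the field name stats_code is taken from the comment above — substitute your own):

```
index="YouShouldAlwaysSpecifyAnIndex" sourcetype="AndSourcetypeToo" stats_code>199 stats_code<211
```

Run it with the time range -10m@m to now, and set the alert trigger condition to "number of events is equal to 0".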