We have a very simple search that looks for a value and alerts when that value has not been greater than 0 for ten minutes. The search is literally just {value}>0, the alert has a cron expression of */10 5-23 * * *, the trigger condition is "number of results is equal to 0", and the time range is a custom window of -11m@m to -1m@m. We occasionally get false positives on this alert and have no idea why: when we re-run the search over the same time frame, we see plenty of results. Does anyone have any insight into why this is happening?
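For anyone reading along, here is a small stand-alone sketch (plain Python, with a hypothetical run time, not Splunk itself) of when that cron schedule fires and what window each run should cover:

```python
from datetime import datetime, timedelta

def cron_matches(dt):
    """True when `*/10 5-23 * * *` fires: every 10th minute, hours 05-23."""
    return dt.minute % 10 == 0 and 5 <= dt.hour <= 23

def search_window(run_time):
    """The -11m@m to -1m@m window: snap to the minute, then offset."""
    snapped = run_time.replace(second=0, microsecond=0)
    return snapped - timedelta(minutes=11), snapped - timedelta(minutes=1)

run = datetime(2023, 6, 1, 9, 30)  # hypothetical scheduled run at 09:30
assert cron_matches(run)
earliest, latest = search_window(run)
print(earliest, latest)  # 09:19:00 .. 09:29:00 on that day
```

Each run therefore looks at a ten-minute window ending one minute before the run time; the one-minute lag is presumably there to allow for indexing delay.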
Hi JoRodriguez,
could you share your full search? The problem is probably inside the search itself.
Did you use something like this?
my_search earliest=-11m@m latest=-1m@m
| stats max(my_field) AS check
| where check=0
If this returns results, the alert fires.
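A rough Python illustration (with hypothetical sample values, not real Splunk data) of the logic in the SPL sketch above: take the max of the field over the window and alert when that max is 0. Note that if no events come back at all, stats produces no row, so no alert fires.

```python
def should_alert(values):
    """Alert when the max over the window is 0 (mirrors `where check=0`).

    An empty window yields no stats row in SPL, hence no alert here either.
    """
    return bool(values) and max(values) == 0

print(should_alert([0, 0, 0]))  # True  -> alert
print(should_alert([0, 3, 0]))  # False -> no alert
print(should_alert([]))         # False -> no row, no alert
```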
Your cron expression should be */10 5-23 * * *
Bye.
Giuseppe
Sure. My search is just "order.concessions_quantity">0
and then I created an alert from that search through the Splunk Enterprise web GUI. My cron expression is actually */10 5-23 * * *
. The forum just stripped out my asterisks.
Did you use something like
my_search "order.concessions_quantity">0
| ...
or something like my earlier example, using stats?
If the former, try my hint.
Bye.
Giuseppe
Nope. I literally just have that search, set the custom time range, and save it as an alert. I'm not doing any of the alerting logic within the query itself.