Very new to Splunk, but I have what I think should be a pretty straightforward task. I have a search that results in a simple timechart, and I want to create an alert if a result in the timechart equals something specific. The example in the 'eval' man page is close enough to serve as a framework. Example search:
source=eqs7day-M1.csv | eval Description=case(Depth<=70, "Shallow", Depth>70 AND Depth<=300, "Mid", Depth>300 AND Depth<=700, "Deep") | table Datetime, Region, Depth, Description
What would I append, or what conditional search would I use, to create an alert if the table had a value equal to, say, "Shallow"?
From the search, click Save As --> Alert, set the trigger condition to Custom, and set the condition to search Depth<=70. Schedule your search to run on a regular interval (e.g., every hour) and have it search over the past hour.
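An equivalent approach for the original example is to append the filter to the search itself and trigger on result count; here's a sketch using the same CSV source and fields:

source=eqs7day-M1.csv | eval Description=case(Depth<=70, "Shallow", Depth>70 AND Depth<=300, "Mid", Depth>300 AND Depth<=700, "Deep") | search Description="Shallow"

With that, a trigger condition of "number of results is greater than 0" fires whenever a "Shallow" event appears in the search window.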
If you want "real-time" you can do a real-time alert, but that's costly. Instead, set schedule to 5 minutes (run on cron schedule as */5 * * * *), set your earliest to -6m@m and latest to -1m@m. This will run a search every 5 minutes for the last 5 full minutes of events (with 1 minute buffer to ensure events are all processed) and the alert will trigger if the field "Depth" is greater than or equal to 70.
From there, just decide what you want the action to be.
Thanks for the feedback, I appreciate the insight. I just tried setting the custom condition to search Depth<=70, but now I'm getting alerts constantly. I'm wondering if the example I used wasn't close enough to my real case. Here is (essentially) what my actual query does:
My resulting timechart shows me what I want: I see the number of Accepted messages per host, and the percentage of the total that each of the two servers grabbed. If one falls below 40%, I see "Error" in the timechart, but the alert wasn't generating. When I set the custom condition to search Error, I got alerts continually. Any thoughts?
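For illustration, a search shaped like the one described might look something like this. It is purely a sketch: the sourcetype, span, and server column names are made up, and it assumes timechart count by host yields one column per server whose share of the total is then tested against 40%:

sourcetype=mail "Accepted" | timechart span=5m count by host | eval total='server1'+'server2' | eval Result1=if('server1'/total<0.4, "Error", "OK"), Result2=if('server2'/total<0.4, "Error", "OK")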
Ok, update your Alert search by adding | search Result1="Error" OR Result2="Error" and then change the Trigger Condition to number of results greater than 0.
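Concretely, with the hypothetical field names from the sketch above, the amended alert search is just the existing search with the filter appended:

<your existing timechart/eval search> | search Result1="Error" OR Result2="Error"

Since only rows where one of the fields equals "Error" survive the filter, a trigger condition of "number of results greater than 0" fires exactly when at least one server dipped below 40% in the window.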
Also, make sure the alert is set to Trigger Once, not For Each Result. Let's say you've scheduled the alert to run every 5 minutes, looking back at the last 5 minutes. With "For Each Result" set, if your search returns 20 events below 40%, you'll get 20 alert messages. With "Trigger Once", you'll get a single alert if the search returns any events below 40% within that window. This way you'll get at most 1 alert every 5 minutes, and only if Result1 or Result2 dipped below 40% at least once in the last 5 minutes.
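In savedsearches.conf terms this is digest mode, if memory serves (verify against the spec file for your version):

# true = one alert per trigger ("Once"); false = one alert per result ("For each result")
alert.digest_mode = true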