Hi-
I am setting up search head pooling on Splunk 5.0.7 and testing alerts.
I have two search heads in the pool behind a load balancer.
When I set up an alert on one of the heads, it runs on both (which is the expected behavior), but I receive two copies of the same alert in my mailbox, one from each search head.
Since I have configured alert_actions to use my load balancer hostname instead of the individual search heads, if I click the link in the email and get redirected to the search head that did not trigger the alert, I get a "The search you requested could not be found" message.
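For reference, the override I am talking about is the hostname setting in alert_actions.conf, roughly like this, e.g. in $SPLUNK_HOME/etc/system/local/alert_actions.conf (exact location may differ; the hostname value here is illustrative, mine points at the load balancer):

hostname = https://splunk-lb.example.com:8000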
Is there something I should do to avoid the duplicate alerts being sent?
Thanks!
N~
PS: I have seen this same question posted to Splunk Answers a couple of times in the past, with no answer.
This almost sounds to me like pooling isn't configured correctly. We have 10 pooled search heads, with 2 dedicated as job servers. The other 8 have scheduled searches disabled. Pooling should "lock" a scheduled search so that it only runs on one server at a time. Check your pooling configuration.
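If it helps, this is roughly what the pooling setup should look like on each pooled head, in $SPLUNK_HOME/etc/system/local/server.conf (the shared storage path is illustrative):

[pooling]
state = enabled
storage = /mnt/splunk-shp

Running $SPLUNK_HOME/bin/splunk btool server list pooling --debug will show the effective values and which file each one comes from.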
I did follow the advice above: I created a new search head within the same pool and then disabled the scheduler on every search head except this new one, so the alerts created by users on any search head in the pool will run only on this one instance. I kept this instance out of the load balancer pool, dedicating it to scheduled jobs.
This solved my problem.
Here is how to disable the scheduler:
In $SPLUNK_HOME/etc/system/local/default-mode.conf, add or change the following stanza:
[pipeline:scheduler]
disabled = true
Then restart Splunk.
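On the heads where you make this change, the restart and a quick sanity check look something like this (btool output layout varies a bit by version):

$SPLUNK_HOME/bin/splunk restart
$SPLUNK_HOME/bin/splunk btool default-mode list pipeline:scheduler --debug

The second command should show disabled = true coming from system/local on the heads where the scheduler is turned off.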
N~
What does this actually do? If the scheduler is disabled, doesn't that mean that no scheduled jobs (not just alerts) will run on that particular search head?
Generally speaking, alerting is not an activity that is best hosted on a search-head pool, because:
a) It doesn't typically require direct user interaction (unlike, say, dashboards).
b) Running many real-time or historical searches to produce alerts can have a non-negligible impact on the network storage backing the NFS mount on which search-head pooling relies.
It is typically considered a best practice to define a standalone search head, outside of your pool, as a job server that handles the subset of activities that don't require the kind of horizontal, user-count-driven scalability that search-head pooling provides. The two main ones are alerting and summary indexing.
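As a side benefit, this also takes care of the broken links described in the question: if a single job server triggers all the alerts, the hostname override in alert_actions.conf can point straight at that server rather than at the load balancer, for example (hostname is illustrative):

hostname = https://jobserver.example.com:8000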
Good answer, but I tend to differ slightly on having a separate job server, because:
- Redundancy: search heads are usually deployed in multiples, so running and managing the alerting apps from the pooled heads gives you "free" redundancy.
- Much of the alerting logic comes from the same app that drives the dashboards, so it is logical to implement the alerting functionality within that same app. You can also save on search load by using post-process searches, etc.
- Maintenance: what happens if there is a single job server and you have to do maintenance on it? In our case the alerts are so critical that we cannot afford even 5 minutes of downtime.