We have a scenario with multiple search heads (say two of them), mainly for load balancing (both active) and redundancy. From the end-user side, logging in and searching works fine.
The challenge is the alerting configuration. Splunk searches a bulk of data every 2 minutes and sends alerts to Tivoli. To keep things consistent, the same app package is deployed to all search heads, so the alerting functionality is active on every search head. As a result of this configuration, the alerts are duplicated (i.e. the same alerts are pushed from every search head).
Is there any way to prevent this? We can't just disable the scheduled searches on one of the search heads, because if the remaining one then fails, alerting stops and nobody notices.
Search Head Pooling should do what you need, plus no need to manually deploy the apps to both servers.
For a scheduled search (and that includes alerts), exactly one search head in the pool will run each invocation, along with any actions executed on the results.
http://docs.splunk.com/Documentation/Splunk/6.1.2/DistSearch/Configuresearchheadpooling
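As a rough sketch, pooling is enabled per search head against a shared storage location (the mount point below is just a placeholder, and the doc above has the full steps, including copying existing apps/users to the shared location):

    splunk stop
    # point this search head at the shared storage used by the pool (placeholder path)
    splunk pooling enable /mnt/splunk-shared-pool
    splunk start

Run the same on every search head you want in the pool.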
@somesoni2, but doesn't that make the single job server a single point of failure?
@martin_mueller thanks again
Yeah, at least that's what's supposed to happen. Not sure how Lucas' reported bug may interfere, though.
@martin_mueller, thanks for that. "For a scheduled search (and that includes alerts), exactly one search head in the pool will run each invocation, along with any actions executed on the results." Will Splunk automatically switch to the next available search head if one of the heads is down for maintenance?
I'm finding that certain real-time alerts will run independently on each search head in the pool. When an email is sent, it is sent from each host in the pool.
I have a feeling this is a bug in the search head pooling scheduler (I'm logging a ticket). Disabling scheduling on all search heads except for a specific one feels, well, very resource-inefficient.
We also have 8 search heads for user load balancing. For Splunk objects that should only run on one instance, like summary-indexing searches or alerts, we have configured a separate single job server, so that we don't run into this very situation. You might go for something like that instead of putting the alerts on the search heads.
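If you go that route, one way (as far as I know) to keep the user-facing search heads from ever running scheduled searches is to switch off their scheduler pipeline via default-mode.conf and deploy the alerting app only to the job server. A sketch, assuming the stanza below still applies to your version (check the default-mode.conf spec before using it):

    # $SPLUNK_HOME/etc/system/local/default-mode.conf on the ad-hoc (user-facing) search heads
    [pipeline:scheduler]
    disabled = true

That way exactly one instance, the job server, fires the alerts to Tivoli.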