Splunk Alerting rate



I am looking at setting up alerting in Splunk. At the moment I don't know the expected frequency or volume of alerts; are there any performance issues I should consider? We have a 3-node search head cluster and 4 indexers.

Are the searches spread across the search heads? Is it possible to fix them to a single search head?

Appreciate any advice.




Normally the captain helps distribute the searches around to the search head members.

To answer your question, you could restrict some of your search heads to only run ad-hoc searches.
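If it helps, that restriction is controlled per member in server.conf. A minimal sketch, assuming a standard search head cluster (check the server.conf spec for your version, and note the member needs a restart):

```
# server.conf on the member you want to reserve for ad-hoc use.
# A member with adhoc_searchhead = true will not run scheduled
# searches, so the scheduler places them on the other members.
[shclustering]
adhoc_searchhead = true
```

There is also a related setting, captain_is_adhoc_searchhead, if you want to keep scheduled load off whichever member is currently the captain.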

I think it's a good idea to have at least four search heads in your cluster. That way you can take one down without disturbing the cluster.

To keep an eye on your cluster, you can use the search head clustering dashboard.

You can also get a lot of information from the monitoring console.

I use the monitoring console to alert me when scheduled searches are being skipped, for example. We also use it to alert us when the captain changes (frequent changes might indicate some kind of problem).
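As a sketch of what such an alert could be built on, this SPL query over the scheduler's internal logs counts skipped scheduled searches (the source path and field names here are assumptions based on typical scheduler.log events — verify them against your own _internal index before relying on this):

```
index=_internal source=*scheduler.log* status=skipped
| stats count AS skipped_count BY savedsearch_name, reason
| sort - skipped_count
```

Saved as a scheduled alert that triggers when results are returned, it gives early warning that the scheduler is falling behind.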

You can also view the run time of your long-running scheduled tasks, etc.
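If you prefer a raw search over the dashboards, something along these lines could surface the longest-running scheduled searches (again a sketch — run_time and status are fields commonly found in scheduler.log events, but check them on your deployment):

```
index=_internal source=*scheduler.log* status=success
| stats avg(run_time) AS avg_runtime_s, max(run_time) AS max_runtime_s BY savedsearch_name
| sort - avg_runtime_s
```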


Thanks, I had considered adding a 4th but was looking to pin this particular set of searches to it. If I am reading the docs correctly, I could consider setting a couple of members to adhoc_searchhead = true, then monitor for skipped searches occurring on the remaining 1 (or 2, if I add another) search heads.
