I have a scheduled search/alert. It validates that for every Splunk event of type A there is a corresponding event of type B, and alerts if it doesn't see the corresponding B. Occasionally I get false alerts because Splunk is unable to reach one or more indexers. I'll see the message "The following error(s) occurred while the search ran. Therefore, search results might be incomplete." along with additional details. That means the search didn't get back all the events; if one of the missing events is a type B, a false alert fires.
Since Splunk knows it wasn't able to communicate with all the indexers, I'd like to abort the search. Is there anything sort of like the "addinfo" command where I can add information about whether the search retrieved all the data successfully, so that I can filter on it with a where clause and remove all my rows if there were errors?
How can I prevent an alert from firing if I didn't get all the results back from the indexers?
There might be a more fluid way to do this, but one idea would be to make your alert a two-step process:
1) Add " | addinfo " to your search to get the search SID, and have the alert log an event with that SID instead of sending email.
2) Make the actual alert a second search that looks for that newly logged event and bases its alert decision on metadata about the original job, either via " | rest /services/search/jobs/<SID> " or by searching the _internal or _audit indexes (see the second sketch below).
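A rough sketch of step 1, assuming a hypothetical summary index named "alert_staging" exists to hold the logged events (addinfo exposes the SID in the info_sid field):

    <your existing "A without a matching B" search>
    | addinfo
    | eval alert_sid=info_sid
    | collect index=alert_staging

Schedule this in place of the current email alert, so each run writes its candidate alert rows, tagged with the SID of the job that produced them, into the staging index.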
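And a sketch of step 2, a second scheduled search that only alerts when the producing job did not report a failure. The join against " | rest /services/search/jobs " assumes the job is still listed and that the indexer problem is reflected in its isFailed flag; if the unreachable-indexer warning only shows up in the job's messages fields (or in _internal/_audit), adjust the check to look there instead:

    index=alert_staging earliest=-15m@m
    | join type=left alert_sid
        [ | rest /services/search/jobs
          | rename sid AS alert_sid
          | fields alert_sid isFailed ]
    | where isFailed=0

Set this search to alert when the result count is greater than zero; rows whose originating job failed (or can no longer be found in the jobs list) are dropped by the where clause. The 15-minute lookback is arbitrary, so match it to the schedule of the first search.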