We recently moved from a standalone Splunk ES search head to a clustered Splunk ES search head, and we've started to see doubling, and in some cases tripling, of some of our correlation search results where we've configured throttling — something we never saw on the standalone machine.
The correlation search is scheduled to run at 23 minutes past the hour, every 6 hours. The search looks back from 24 hours ago to now(). Throttling is set to 1 day.
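For reference, a schedule and throttle like the one described above would map to keys roughly like these in savedsearches.conf (the stanza name and suppression fields below are placeholders, not our actual search):

```
# Stanza name is the correlation search name (placeholder here)
[Example Correlation Search]
cron_schedule = 23 */6 * * *
dispatch.earliest_time = -24h
dispatch.latest_time = now
# Throttling (alert suppression) for 1 day
alert.suppress = 1
alert.suppress.period = 86400s
# Optional per-field suppression, so only matching results are throttled
alert.suppress.fields = dest,signature
```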
The search runs and generates notable events. Twelve hours later, the search generates notables for the same events it found in the first run, which suggests the first run happened on one search head and the second run on a different cluster member.
Is there a way to confirm that all search head cluster members share the same criteria for what should be throttled, and for how long?
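One way to sanity-check that the throttling configuration itself is identical on every member is to compare the saved-search settings across the cluster. Something like the following SPL (using the rest command with splunk_server=* to hit all members; the search title is a placeholder) should show whether any member reports different suppression settings:

```
| rest /servicesNS/-/-/saved/searches splunk_server=*
| search title="Example Correlation Search"
| table splunk_server title alert.suppress alert.suppress.period alert.suppress.fields
```

Alternatively, running `$SPLUNK_HOME/bin/splunk btool savedsearches list "Example Correlation Search" --debug` on each member shows the effective on-disk configuration and which file it comes from. Note this only confirms the configuration matches; it does not show whether the runtime suppression state is shared between members.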