I have two search heads, but they perform different tasks. One head is for running scheduled searches and the other is for interactive searches. I'd like to utilize search head pooling, but I don't want to share any of the savedsearches.conf files. Is this possible?
Search Head Pooling (SHP) is an all-or-nothing option at the moment. Once you enable it (splunk pooling enable), the pooled configuration is shared between all members of the pool, including savedsearches.conf.
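For context, enabling pooling is a per-member CLI step against shared storage. A minimal sketch, assuming an example NFS mount at /mnt/search-head-pool (the path and mount are illustrative, not from the original post):

# run on each search head; splunkd should be stopped first
splunk stop
splunk pooling enable /mnt/search-head-pool
splunk start

Every member pointed at the same shared directory then reads and writes the same pooled configuration.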
However, since your users will not be logging into the other search head (the job server), there should be no saved searches on that server to push to the pool.
If you are using it for two distinctly different purposes, what reasons do you have for enabling SHP?
I'd like to use SHP as a means of keeping my eventtypes.conf and tags.conf in sync. Sometimes it's a bit tiresome to continually ask developers to create their tags/eventtypes in both locations. If someone has any other ideas I'm all ears.
This is probably not a supported way of handling this, but we hacked this behavior by shutting off the scheduler search processor on the interactive search head and pooling it with another search head that was left as the "job server". The job server would pick up and run scheduled searches, while the interactive server could still be used to schedule them. It's not a perfect solution, and there are other issues, like trying to change scheduled search run times from the interactive search head.
You can't "partially" pool. However, you can disable the scheduler on the member that isn't supposed to run jobs. @hdre did this, but in a dangerous way. The right way to do it is to put this in default-mode.conf:
[pipeline:scheduler]
disabled_processors = LiveSplunks
This does the same thing as hdre suggested, but more safely (e.g., it won't get overwritten on a patch or upgrade).
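To make that concrete: for the stanza to apply to only one member, it has to live somewhere that isn't pooled. A sketch, assuming $SPLUNK_HOME/etc/system/local/ is local to each member and survives upgrades (which is the usual behavior, since patches rewrite default directories, not local ones):

# $SPLUNK_HOME/etc/system/local/default-mode.conf
# on the interactive search head only
[pipeline:scheduler]
disabled_processors = LiveSplunks

Restart splunkd on that member after adding the file so the scheduler pipeline change takes effect.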