I would like to have a dedicated job scheduling host for Splunk.
I.e. a host that routinely processes all scheduled searches in the background (something an end user doesn't see), leaving the user-visible search heads free to respond to ad hoc requests.
So I've created a new Splunk instance, added it to the existing search head pool, and then disabled scheduled searching on the end-user-visible search heads via:
disabled = true
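For context, the place I put that setting is the scheduler pipeline stanza in default-mode.conf; a sketch of the full change, assuming it goes in $SPLUNK_HOME/etc/system/local/ on each user-facing search head (restart required):

```
# $SPLUNK_HOME/etc/system/local/default-mode.conf
# Disables the search scheduler pipeline on this instance only;
# ad hoc searches are unaffected.
[pipeline:scheduler]
disabled = true
```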
This works great - only my dedicated scheduler runs jobs now. But since the other search heads have the scheduler role disabled, they display 'NONE' as the next scheduled run time for a job, even though another host will pick up the scheduled search...
So I guess this isn't the right way to do this; I haven't found any documentation on it either.
How am I best to set up a dedicated job scheduler (to take load off the existing search heads)?
So everything is working as it should be: scheduled searches are all run by the dedicated search head, and the other search heads don't run any. If you log in to the search head that IS running the scheduler, you should also see the correct scheduled times for your searches. I think this is a feature, not a bug, in your config, and everything seems to be the way you wanted. After all, those normal search heads are not going to run any of the scheduled searches, so in that sense NONE is the correct answer for them.
Thanks for your response. The partitioning of roles is how I want it to be.
The problem is that each app (let's take the deployment monitor as an example) has lots and lots of searches. It's impossible to tell which of these searches are scheduled, as they all display 'NONE'. Only by editing a search can one identify that it is scheduled. This doesn't make much sense to me.
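One workaround for spotting the scheduled searches without opening each one is to query the saved-searches REST endpoint, which exposes the scheduling fields regardless of whether the local scheduler is running. A sketch (field names as exposed by Splunk's saved/searches endpoint):

```
| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1 disabled=0
| table title eai:acl.app cron_schedule next_scheduled_time
```

Run from any search head in the pool, this lists each scheduled search with its app and cron schedule, though next_scheduled_time may still be empty on hosts with the scheduler disabled.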
Of course, the scheduled times are displayed correctly on the hosts with the scheduler role enabled. But since it is a dedicated job scheduler, I don't intend for anyone to log into it or even know it's there.
We do this the same way, and it behaves the same for us. We have people create their own searches and save them; they then submit a request to have them scheduled. It would be nice to have a kind of calendar view so that we could visually "see" when searches are scheduled. You are taking load off the search heads, but since the jobs server still hits the indexers, that load will still reach the indexers.
More recently I took the jobs server out of the pool (disabled pooling) and copied all of the apps and user directories local (etc/users and etc/apps). I then created a cron job to rsync etc/users and etc/apps from the shared pooled location to the local location. I also rsync from the local etc/users back to the pooled etc/users, so that when a search is updated it gets reflected in the pooled area. I still have some troubleshooting to do with this, but it basically took I/O load off of the pooled location. We run 20,000-30,000 scheduled searches per day.
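A sketch of those cron entries, with hypothetical paths (/opt/splunk_pool for the shared pooled location, /opt/splunk for the local instance) and a placeholder interval - adjust both to your environment:

```
# /etc/cron.d/splunk-jobserver-sync (illustrative; paths and schedule are assumptions)
# Pull app and user configuration from the pooled location to the local job server
*/5 * * * * splunk rsync -a /opt/splunk_pool/etc/apps/  /opt/splunk/etc/apps/
*/5 * * * * splunk rsync -a /opt/splunk_pool/etc/users/ /opt/splunk/etc/users/
# Push local user changes (e.g. edited searches) back to the pooled location
*/5 * * * * splunk rsync -a /opt/splunk/etc/users/ /opt/splunk_pool/etc/users/
```

Note that syncing etc/users in both directions on the same interval can race if the same search is edited in both places between runs; rsync's -u (skip newer files on the receiver) is one way to soften that.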