We have a situation where we have to run an app (a script within an app) individually on each search head server in a Search Head Cluster. The script mainly inspects various OS parameters, so it has to run individually on every physical server that hosts a search head.
But AFAIK, in a SHC the captain decides which member runs a scheduled script, so it runs on one server at a time rather than on all of them.
Is there any option to override the shcluster behavior just for this app so that the script runs on all members on a schedule?
(On indexers this is fine; we can put the app directly into $SPLUNK_HOME/etc/apps.)
Edit: The requirement is to run this as a scripted input.
My question is similar to https://answers.splunk.com/answers/294093/where-do-i-deploy-scripted-inputs-in-a-splunk-624.html , but I want to collect data from the individual search heads.
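For reference, the scripted input is defined in the app's inputs.conf roughly like this (script name, interval, and sourcetype are placeholders):

    # inputs.conf bundled in the app under the deployer's shcluster directory
    [script://./bin/check_os_params.sh]
    interval = 300
    sourcetype = os_params
    disabled = 0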
Does it have to be a scripted input, or could you run it via cron / Task Scheduler?
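If cron were an option, the script could write to a local log that a plain monitor input picks up on each search head. A sketch with hypothetical paths:

    # crontab entry on each search head: run the check every 5 minutes
    */5 * * * * /opt/scripts/check_os_params.sh >> /var/log/os_params.log 2>&1

    # inputs.conf stanza to index the output
    [monitor:///var/log/os_params.log]
    sourcetype = os_params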
Something to test, too: can search heads listen to both a search head deployer and a regular deployment server? I think they can. If so, you can create a server class for your search heads and deploy the app the "traditional" way.
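A minimal serverclass.conf sketch for that approach on the regular deployment server, assuming the search heads match a hostname pattern like sh* and the app is named os_param_app (both placeholders):

    [serverClass:search_heads]
    whitelist.0 = sh*

    [serverClass:search_heads:app:os_param_app]
    restartSplunkd = true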
Many options here... I'm just spewing them out.
You could also make your scripted input use SSH / Windows Remoting so that it runs from one server but queries all three-plus search heads, using SSH keys to connect, etc.
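A rough shell sketch of that pattern, assuming key-based SSH and placeholder hostnames and script paths:

    #!/bin/sh
    # Scripted input that runs on one member but polls every search head.
    for host in sh1.example.com sh2.example.com sh3.example.com; do
        # tag each output line with the host it came from
        ssh "$host" /opt/scripts/check_os_params.sh | sed "s/^/host=$host /"
    done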
@jkat54, I've updated the question: we have to run this as a scripted input.
On your second point, this is the real problem: even if we use a regular deployment server, the app still lands in $SPLUNK_HOME/etc/apps, i.e. the same location as when it is pushed via shcluster.
The SSH option is limited too, as we have a security requirement that forbids SSH keys, etc.
Ah yes, I see the issue with using a regular deployment server. What happens if you put the inputs under $SPLUNK_HOME/etc/system/local instead?
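That could work because the deployer only manages $SPLUNK_HOME/etc/apps, so a hand-placed file under system/local isn't overwritten by the shcluster push. A sketch, assuming the app itself (script, lookups) still arrives via the deployer, with placeholder names:

    # $SPLUNK_HOME/etc/system/local/inputs.conf on each member
    [script://$SPLUNK_HOME/etc/apps/os_param_app/bin/check_os_params.sh]
    interval = 300
    sourcetype = os_params
    disabled = 0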
Well, this is a whole app in itself, with multiple lookups etc., so I wanted to keep it all together in a separate app.