All,
I have about 200 machines with the UF installed. I want to monitor bash_history and a few other items under /home on Linux. The challenge is that on about half the machines the home directory is an NFS mount, and on the other half it is a local file system. Monitoring the NFS mount from every endpoint is I/O-prohibitive and indexes the same data multiple times.
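For context, the input in question would be something like the following monitor stanza (the paths, index, and sourcetype here are examples, not settings from my environment):

```
# Hypothetical inputs.conf stanza in an app on the UF;
# index and sourcetype names are examples only.
[monitor:///home/*/.bash_history]
index = linux_audit
sourcetype = bash_history
disabled = false
```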
Is there a way in Splunk to handle this programmatically? That is, I only need to gather the files/logs from one host when the home directory is NFS-mounted, but when it's a local file system I need to run the input on each machine.
Any recommendations? I was thinking of writing a scripted input in a Splunk app that creates and manages an app in the UF's apps folder, but that seems very clunky.
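The detection half of that scripted input could be sketched like this (a hedged example: the mount point, the helper name, and the "nfs"/"local" convention are all assumptions, not an established Splunk mechanism):

```shell
#!/bin/sh
# Sketch: decide whether this host should run the /home monitor input.
# check_home_fs is a hypothetical helper; it prints "nfs" or "local".
check_home_fs() {
    # GNU coreutils: stat -f reports filesystem status, %T its type name
    fstype=$(stat -f -c %T "${1:-/home}" 2>/dev/null)
    case "$fstype" in
        nfs*) echo "nfs" ;;   # NFS mount: gather from one designated host only
        *)    echo "local" ;; # local filesystem: run the input on this machine
    esac
}

check_home_fs /home
```

A wrapper would then enable or disable the monitor app based on that answer, which is exactly the part that feels clunky to manage from inside Splunk itself.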
There are no programmatic settings in inputs.conf or any other Splunk config file.
If you use a third-party tool like Ansible to manage your UFs, then your script idea might work. It would be even better if Ansible could make the NFS-is-used decision and selectively install your input app.
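For example, an Ansible play could make that decision with a filesystem-type check before copying the app (the task names, app directory, UF install path, and handler below are all hypothetical):

```yaml
# Hypothetical Ansible tasks: install the input app only where /home is local.
- name: Check /home filesystem type
  command: stat -f -c %T /home
  register: home_fs
  changed_when: false

- name: Deploy the bash_history input app on hosts with a local /home
  copy:
    src: bash_history_inputs/          # hypothetical app directory
    dest: /opt/splunkforwarder/etc/apps/bash_history_inputs/
  when: "'nfs' not in home_fs.stdout"
  notify: restart splunk_uf            # hypothetical restart handler
```

The one host that does monitor the NFS-exported /home would simply be excluded from this play and given the app directly.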
If you use the Splunk deployment server (DS) to manage UFs, then don't manipulate the app locally. That will cause the UF to re-install the app from the DS.