Is there a high-availability or multi-node configuration for Splunk forwarders?
I have a small RHEL cluster writing data to shared storage. Any cluster node could be writing to any of the files on the shared storage. I need to monitor the data written to the shared storage at all times, even if one or more cluster members go down; however, I don't want to index multiple copies of the data. Can forwarders be used in this way, with two or more forwarders monitoring the same data but not sending duplicates? Or is there possibly some kind of primary/backup configuration for the forwarders? Any help/ideas would be greatly appreciated!
This is a good question!
I've got some ideas but need to build a test case for this.
I'm thinking that if I make the GUID and "serverName" on each UF exactly the same, it MAY de-duplicate the data, because from the indexers' perspective it is coming from the same place (but of course this is just a theory at the moment).
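For reference, those two identifiers live in files under `$SPLUNK_HOME/etc`; a sketch of what setting them identically on every forwarder might look like (the GUID and serverName values below are placeholders, not real ones, and whether the indexers actually de-duplicate on this basis is exactly what needs testing):

```ini
# $SPLUNK_HOME/etc/instance.cfg -- identical on every forwarder (placeholder GUID)
[general]
guid = 00000000-0000-0000-0000-000000000000
```

```ini
# $SPLUNK_HOME/etc/system/local/server.conf -- identical on every forwarder (placeholder name)
[general]
serverName = shared-uf
```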
This is interesting... I wonder if it will work? I'll have to try setting this up and testing it out as well. Thanks for the suggestion!
Splunk does not provide any cluster-aware solution for file monitoring and forwarding.
If your system is an active/passive cluster, the easiest solution is to create a cluster resource that starts/stops the Universal Forwarder instance on the same node as your applications.
Ideally you would put the instance on a replicated block device such as DRBD (with a cluster-managed filesystem on top), so that the filesystem containing the instance and your fishbucket is available when the resource is migrated from one node to another.
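As a sketch of the relevant setting: the fishbucket lives under `$SPLUNK_DB`, which can be pointed at the replicated volume in `splunk-launch.conf` (the mount point below is a made-up example; alternatively, install the entire `$SPLUNK_HOME` on the replicated volume):

```ini
# $SPLUNK_HOME/etc/splunk-launch.conf
# Hypothetical DRBD-backed mount point -- adjust to your environment.
# The fishbucket (and other forwarder state) lives under this directory.
SPLUNK_DB=/mnt/drbd0/splunk-var
```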
But that's just an idea 😉 That said, I have already done this with a DRBD / Pacemaker cluster, and it is perfectly possible and works fine if correctly configured.
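A minimal sketch of the Pacemaker side, assuming the forwarder was installed with boot-start enabled so a `SplunkForwarder` systemd unit exists, and that `app-group` is a hypothetical resource group holding your application and the DRBD-backed filesystem:

```shell
# Manage the UF as a cluster resource (pcs syntax)
pcs resource create splunk-uf systemd:SplunkForwarder op monitor interval=30s
# Keep the forwarder on the same node as the application and its filesystem
pcs constraint colocation add splunk-uf with app-group INFINITY
# Start the forwarder only after the filesystem/application group is up
pcs constraint order app-group then splunk-uf
```

With the colocation and ordering constraints in place, a failover moves the filesystem, the application, and the forwarder together, so only one forwarder is ever reading the files at a time.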
I like this idea! We are using a GlusterFS / Pacemaker cluster, and this sounds exactly like what we need. I had been thinking about symlinking the fishbucket, but this is a more robust solution. I'll have to get in touch with the server admins and see if we can get this going. Thanks for the idea!
This isn't a feature built into Splunk, but you could use three workarounds: