Getting Data In

Splunk forwarders on clustered nodes monitoring the same files

Motivator

Is there a high-availability or multi-node configuration for Splunk forwarders?

I have a small RHEL cluster writing data to shared storage. Any cluster node could be writing to any one of the files on the shared storage. I need to monitor the data written to the shared storage at all times, even if one or more cluster members go down; however, I don't want to index multiple copies of the data. Can forwarders be used in this way? Two or more forwarders monitoring the same data but not sending duplicate data? Or possibly some kind of primary/backup configuration for the forwarders? Any help/ideas would be greatly appreciated!


Re: Splunk forwarders on clustered nodes monitoring the same files

Communicator

This is a good question!

I've got some ideas but need to build a test case for this.

I'm thinking that if I make the GUID and "serverName" on each UF the exact same, it MAY de-duplicate the data because from the indexer(s) perspective, it is coming from the same place (but of course this is just a theory at the moment)
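A minimal sketch of what that theory would look like in configuration. The `serverName` and `guid` values below are placeholders, not anything from this thread; note that newer Splunk versions keep the GUID in `$SPLUNK_HOME/etc/instance.cfg` rather than server.conf, so check where your version stores it:

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on BOTH forwarders
# (hypothetical values -- generate one GUID and copy it to each node)
[general]
serverName = shared-storage-uf
guid = 6B27C43B-0000-0000-0000-000000000000
```

Keep in mind this only makes the forwarders *look* identical to the indexers; whether the indexers actually drop the duplicate events is exactly what the test case would need to prove.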


Re: Splunk forwarders on clustered nodes monitoring the same files

Motivator

This is interesting... I wonder if it will work? I'll have to try setting this up and testing it out as well. Thanks for the suggestion!


Re: Splunk forwarders on clustered nodes monitoring the same files

SplunkTrust

Hi,

Splunk does not provide any cluster-aware solution for file monitoring and forwarding.

If your system cluster is an active / passive cluster, the easiest solution is creating a cluster resource that starts / stops the Universal Forwarder instance on the same node as your applications.

Ideally you would back the instance with a replicated block device such as DRBD, so that the file system containing the instance and your fishbucket is available when the resource is migrated from one node to another.

But that's just an idea 😉 I have already done this with a DRBD / Pacemaker cluster, and it is perfectly possible and works fine when correctly configured.
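As a rough sketch of that setup (the resource names, mount point, and DRBD device below are assumptions, and the `systemd:SplunkForwarder` unit name depends on how boot-start was enabled on your hosts):

```shell
# Hypothetical Pacemaker (pcs) layout: the replicated filesystem and the
# Universal Forwarder grouped together, so the UF always runs on the node
# that currently owns the storage -- and its fishbucket moves with it.
pcs resource create shared_fs ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/shared/storage fstype=ext4
pcs resource create splunk_uf systemd:SplunkForwarder
pcs resource group add splunk_grp shared_fs splunk_uf
```

Grouping the resources is the key design point: it guarantees only one UF instance is ever running, so no duplicate events are generated in the first place.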


Re: Splunk forwarders on clustered nodes monitoring the same files

Motivator

I like this idea! We are using a glusterfs / Pacemaker cluster and this sounds exactly like what we need. I had been thinking about sym-linking the fishbucket but this is a more robust solution. I'll have to get in touch with the server admins and see if we can get this going. Thanks for the idea!


Re: Splunk forwarders on clustered nodes monitoring the same files

Legend

Hi wpreston,
This isn't a feature present in Splunk, but you could use one of three workarounds:

  • if you don't have license consumption problems, you could index data from both forwarders and filter the duplicated data in your searches;
  • if you don't want to index your data twice, you could create a script on your servers that starts/stops the forwarder when the node becomes active or passive (this is the easiest way);
  • finally, you could use an intermediate Heavy Forwarder to filter the duplicated data before sending it to the indexers (though the data is still forwarded twice).
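For the first workaround, a sketch of what the search-time filtering could look like (the index name and source path are placeholders, not from this thread). Since the duplicate events arrive from different hosts, deduplicate on the raw event and source rather than relying on `host`:

```
index=shared_storage source="/shared/storage/*"
| dedup _raw, source
```

This keeps one copy of each identical event per file, but note the duplicates still count against your license, which is why this option is tied to license consumption.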

Bye.
Giuseppe
