I have an application that logs to a shared clustered file system.
What happens when I install the forwarder (via deployment server, with identical configuration) on each of the nodes to monitor the logs on this file system?
Do I get duplicates for each of the hosts, or can Splunk identify that they are dupes even though they come from different hosts?
Would crcSalt help here?
The tracking of already-read input files is done by each individual forwarder. Since a forwarder has no knowledge of what other forwarders have processed, you will get duplicates.
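To make that concrete: crcSalt only changes how a single forwarder computes the checksum it stores in its own tracking database, so it cannot coordinate deduplication across hosts. A sketch of the monitor stanza involved (the path is an example, not from your setup):

```ini
# inputs.conf on each forwarder (example path)
[monitor:///shared/app/logs]

# crcSalt = <SOURCE> mixes the full source path into the file CRC.
# This only affects THIS forwarder's own file-tracking state, so it
# helps with distinguishing files locally but does nothing to prevent
# the same file being indexed by several forwarders.
crcSalt = <SOURCE>
```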
1: Would forcing an identical hostname on all nodes help the indexer identify incoming dupes?
2: Using a heavy forwarder in between to filter out dupes.
I really want to avoid #2, since that would mean either adding burden to an existing box or provisioning a new one.
What you really should do is avoid having more than one forwarder read a given file.
Yup, avoiding that would be best. I am currently trying to figure out whether the forwarder can be started/stopped together with the application, so there might be some minimal overlap, but overall only one of them would be active.
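Tying the forwarder's lifetime to the application could be sketched as a small wrapper script. This is only an illustration: the Splunk install path and the application command are assumptions, and `splunk start`/`splunk stop` are the standard Universal Forwarder CLI commands.

```shell
#!/bin/sh
# Hypothetical wrapper: run the forwarder only while the application runs.
# SPLUNK_BIN is an assumed install path; override it for your environment.
SPLUNK_BIN="${SPLUNK_BIN:-/opt/splunkforwarder/bin/splunk}"

run_with_forwarder() {
    # Start the forwarder before the application.
    "$SPLUNK_BIN" start --accept-license --no-prompt
    # Stop the forwarder again when this script exits, however it exits.
    trap '"$SPLUNK_BIN" stop' EXIT
    # Run the application in the foreground; the forwarder stays up
    # only for the duration of this command.
    "$@"
}

# Example (path is hypothetical):
# run_with_forwarder /opt/myapp/bin/myapp
```

Note that a brief overlap is still possible if the application fails over to another node before this script's EXIT trap has stopped the local forwarder.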