How do you prevent Splunk from indexing duplicate events forwarded from different forwarders? The monitored log files on different servers record the same events. The duplication is intentional, to keep the monitored events available even when one of the servers is powered off.
Thank you.
Effectively no, universal forwarders are not aware of other universal forwarders.
In fact, Splunk Enterprise instances are not aware of each other; each heavy forwarder is also standalone.
Therefore you would have to build a script, or find some other way, to only monitor the file when that instance should be the one reading it (or use another trick).
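As a rough illustration of that kind of trick (not a Splunk feature; all names and paths here are hypothetical), a cron-driven script could expose the log through a symlink in a directory the forwarder monitors only when the host is the designated active node:

```shell
#!/bin/sh
# Hypothetical sketch: make the log visible to a [monitor://] input only on
# the designated active host, so only one forwarder reads it at a time.
# Usage: sync_monitor <this_host> <active_host> <log_file> <monitor_dir>
sync_monitor() {
    this_host="$1"; active_host="$2"; log_file="$3"; monitor_dir="$4"
    mkdir -p "$monitor_dir"
    if [ "$this_host" = "$active_host" ]; then
        # Active node: expose the log via a symlink the UF can follow
        ln -sf "$log_file" "$monitor_dir/app.log"
    else
        # Standby node: remove the symlink so the UF stops reading the file
        rm -f "$monitor_dir/app.log"
    fi
}
```

Run from cron on every server, this keeps only the active node's copy of the file in the monitored path; the failover logic deciding which host is "active" is up to you.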
At the Splunk indexing tier it is also impossible to de-duplicate data on the way in, at least up to 7.2.x so far.
Ok, thank you for the help
Please click on accept answer so this question is marked as answered when you are ready (feel free to wait for more answers)...thanks!
@gjanders - Can we do some config change on forwarder end to stop sending duplicate data?
@rashi83 it would depend on what is causing it! The UF does not de-duplicate data, so if multiple files have some level of duplicate content you may get duplicates in Splunk...
If you monitor unique files on the UF, you should not see duplicates in Splunk, barring performance problems or issues related to the useACK setting...
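For context, useACK enables indexer acknowledgement on the forwarder's output; without it, a forwarder that loses its connection mid-send may resend data it already delivered. A minimal outputs.conf sketch (the group name and server addresses are placeholders):

```ini
# outputs.conf on the universal forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# Wait for the indexer to acknowledge receipt before discarding sent data
useACK = true
```

Note that acknowledgement guards against loss, not duplication; a resend after a dropped acknowledgement can itself produce duplicates, which is one of the "issues" referred to above.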
To prevent data loss, you probably want to index the duplicate events and remove the duplicates at search time.
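The search-time approach can be as simple as de-duplicating on the raw event text. A sketch in SPL (the index and sourcetype names are placeholders):

```
index=app sourcetype=app_logs
| dedup _raw
```

This keeps the first event for each distinct _raw value, so identical events indexed from both servers appear once in the results while both copies remain safely stored.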
Thank you, but the goal is to not index the duplicated events. Any other ideas?