We have tried using the Universal Forwarder for sending logs from one of our servers to our Splunk indexer cluster using auto load balancing, but it appears to be putting too much of a load on the server causing application availability issues. This same server was running perfectly when logs were being collected via a directory monitor by a standalone Splunk indexer.
Is using a directory monitor a supported configuration for inputs in a cluster, or do I need to do something like set up a standalone UF that monitors the folders on that particular server remotely and sends them to our indexers?
Generally speaking, in a clustered environment I would suggest separating data collection from cluster management. The point of clustering is redundancy and replaceability; collecting data on a single cluster node conflicts with that goal.
A Universal Forwarder causing outages is pretty surprising, though it's certainly possible. Usually local log collection is more reliable than collection over network filesystems or similar, but remote log collection is a valid strategy.
I don't know what "directory monitor" means in this context. Do you mean an inputs.conf [monitor:///...] stanza? If so, what were you doing instead with your Universal Forwarder?
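For reference, this is roughly what such a monitor stanza looks like; the path, index, and sourcetype below are placeholders, not taken from the original setup:

```ini
# inputs.conf — watches a directory and indexes new events from files in it
# (path, index, and sourcetype are hypothetical examples)
[monitor:///var/log/myapp]
index = main
sourcetype = my_app_logs
disabled = false
```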
Yes, that's what I mean by directory monitor. I have since implemented the monitor from a dedicated machine, and that seems to have resolved the performance issues we encountered when running the UF locally on the file server.
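For anyone hitting the same issue, the remote-collection setup looks roughly like the following sketch. All host names, the share path, and the index are hypothetical; the dedicated collection host monitors the file server's share over the network and forwards to the cluster with auto load balancing:

```ini
# inputs.conf on the dedicated collection host
# (UNC share path and index are hypothetical; the Splunk service account
# needs read access to the share)
[monitor://\\fileserver01\logs$\iis]
index = iis_logs
sourcetype = iis

# outputs.conf on the same host — auto load balancing across the indexer peers
[tcpout]
defaultGroup = indexer_cluster

[tcpout:indexer_cluster]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
autoLBFrequency = 30
```

This keeps the Splunk process (and its I/O and network load) off the file server entirely, at the cost of reading the logs over the network.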
When the Splunk Forwarder service was running locally, we saw a spike in network latency that impacted the IIS sites that use that server for shared configuration files. Now that the logs are collected remotely, the latency has disappeared.
I am stuck with the architecture: the server in question already receives over 80 GB of logs per day from a proprietary application spanning 10 separate IIS and application servers. The software vendor insists that the application log to a shared server (a single point of failure, in my opinion). I believe adding local log collection with the forwarder was just the straw that broke the camel's back. I am pushing to change the architecture, but that takes time.