In our environment, we have one application that uses 32 blades. On each of those 32 blades we want to install Splunk, configure them all to monitor the same directories, and have them all forward data to one central repository.
Instead of manually installing and configuring Splunk on each server's own filesystem, I was wondering if there is some way to use a generic install of Splunk on an NFS mount that is shared between all the servers. The problem I know we'd face is that Splunk does its logging and configuration in the same directory as the installation, so this wouldn't work with 32 servers sharing the same configuration directory.
What I was wondering is if there is a way to specify a separate local directory that is not tied to the binaries needed to run Splunk at all. This directory would house any local configuration and logs, which have a very small disk footprint and can just live on each server's local filesystem.
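Roughly what I have in mind is something like the sketch below (placeholder paths, untested): the symlinks live on the shared mount, but every host resolves them to its own local disk, so config and logs stay per-host. I realize this isn't an official Splunk mechanism, just an idea for a workaround.

```shell
# Sketch only: placeholder paths, not a tested Splunk layout.
NFS_SPLUNK=${NFS_SPLUNK:-/tmp/nfs-splunk}        # shared install root (would be the NFS mount)
LOCAL_SPLUNK=${LOCAL_SPLUNK:-/tmp/splunk-local}  # per-host writable area on local disk

# Each host creates its own local etc/ and var/ directories.
mkdir -p "$NFS_SPLUNK" "$LOCAL_SPLUNK/etc" "$LOCAL_SPLUNK/var"

# The symlinks are created once on the shared mount; since every host
# uses the same local path, each blade ends up with its own etc/ and var/.
ln -sfn "$LOCAL_SPLUNK/etc" "$NFS_SPLUNK/etc"
ln -sfn "$LOCAL_SPLUNK/var" "$NFS_SPLUNK/var"
```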
Has this been done before? How are other users installing Splunk in very distributed environments?
Set up a custom Splunk install that points at a Splunk deployment server, tar it up, then untar and start it on each machine. From then on you push out changes from the deployment server. Easy as.
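Concretely, the baseline install you tar up just needs a `deploymentclient.conf` pointing at the deployment server; something like this (hostname and port are placeholders for your environment):

```
[deployment-client]

[target-broker:deploymentServer]
targetUri = deployserver.example.com:8089
```

After that, every box that untars and starts the package phones home to the same deployment server for its config.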
It doesn't sound like you need full Splunk on the machines, just the Splunk light forwarders. There are also other ways to do this; for instance, syslog-ng is very easy to set up to tail logs, receive syslogs, etc.
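As a sketch of the syslog-ng route (the log path and destination host below are placeholders), tailing a file and forwarding it to a central box over TCP looks roughly like:

```
# Tail an application log file, polling for new lines every second.
source s_app {
    file("/var/log/myapp/app.log" follow-freq(1));
};

# Forward everything to the central log host over plain TCP.
destination d_central {
    tcp("loghost.example.com" port(514));
};

log {
    source(s_app);
    destination(d_central);
};
```

The same config works on every blade, so it distributes trivially.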
It's just an alternative. syslog-ng is easier to set up in many environments because it's more widely available and has been deployed at a much larger scale.
You're correct that we would only need light forwarders on each box. What would be the advantage of using syslog-ng over a light forwarder?
On *nix, most people use Puppet to deploy forwarders to a number of servers.
Check the following thread for a start: http://answers.splunk.com/questions/345/does-splunk-play-nice-with-puppet