I have 4 high-performance PC servers and I want them all to be INDEXERs.
Logs are uploaded to one of them directly, not sent by any FORWARDER.
I want to set up that one server as a forwarder as well, so it distributes the logs to the other 3, 1/4 of the logs to each, while keeping 1/4 for itself to index. What should I do with the .conf files?
I also want to back up the Splunk-indexed data, maybe 'splunk/var', through a daily script. What targets should I choose to back up? (I will not assign a dedicated MASTER/PEER role to a 64 GB memory instance.)
You can certainly install a universal forwarder on the machine where the files get uploaded, as if the forwarder were a separate machine. Then define the four indexers in outputs.conf on the forwarder instance. Just make sure that only the forwarder monitors the files, not the indexer instance on the same host. I think this is the easiest (perhaps the only) way to ensure that events are distributed evenly across the indexers.
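A minimal sketch of the forwarder-side .confs for that setup. The hostnames `idx1`–`idx4`, the receiving port 9997, the monitored path, and the index/sourcetype names are all assumptions; substitute your own:

```ini
# outputs.conf on the universal forwarder instance
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
# Auto load balancing spreads events across all four peers,
# including the indexer running on this same host.
server = idx1:9997, idx2:9997, idx3:9997, idx4:9997

# inputs.conf on the universal forwarder instance
# (only the forwarder monitors this directory, not the local indexer)
[monitor:///data/uploads]
index = main
sourcetype = uploaded_logs
```

Each indexer would also need receiving enabled (e.g. a `[splunktcp://9997]` stanza in its inputs.conf) so it can accept the forwarded data.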
Not too sure what you mean with your backup question, but you can always read up on what you may want to back up in the docs;
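If you do script it, a rough sketch of a daily backup could look like the following. Every path here is an assumption; adjust to your install. Note that hot buckets are open for writing, so a file-level copy of them is not reliable; this sketch simply excludes them, and you would roll them to warm first if you need them captured:

```shell
#!/bin/sh
# Hypothetical daily backup sketch for Splunk index data and config.
# Index buckets live under $SPLUNK_HOME/var/lib/splunk; configuration
# lives under $SPLUNK_HOME/etc.

backup_splunk() {
    splunk_home="$1"   # e.g. /opt/splunk
    backup_dir="$2"    # e.g. /backup/splunk
    stamp=$(date +%Y%m%d)
    mkdir -p "$backup_dir"
    # Archive index data plus configuration; hot buckets excluded
    # because they change while open.
    tar --exclude='*/db/hot_v*' -czf "$backup_dir/splunk-$stamp.tar.gz" \
        -C "$splunk_home" var/lib/splunk etc
}
```

You could then call `backup_splunk /opt/splunk /backup/splunk` from a daily cron entry.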
I have to let Splunk do some "route and filter" work, and it seems the universal forwarder is not able to do that.
Do I have to let an indexer do it?
Routing and filtering (as described in http://docs.splunk.com/Documentation/Splunk/6.0/Forwarding/Routeandfilterdatad ) needs to be done on a Heavy Forwarder (or Indexer) if you want to do anything but the most basic routing.
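For reference, filtering on a heavy forwarder (or indexer) is driven by props.conf and transforms.conf. A sketch of the common "drop unwanted events" pattern, assuming a hypothetical sourcetype `uploaded_logs` and that you want to discard DEBUG lines:

```ini
# props.conf on the heavy forwarder
[uploaded_logs]
TRANSFORMS-routing = drop_debug

# transforms.conf on the heavy forwarder
[drop_debug]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are sent to the nullQueue (discarded); everything else continues on to the configured output group. The same mechanism with `DEST_KEY = _TCP_ROUTING` routes events to different target groups instead of dropping them.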