I have a forwarder with almost a TB of data sitting in its monitored directory, which seems to be slowing down the forwarder's ability to send the data on to the indexer. I'm aware of the batch stanza's ability to delete the data after it's sent, but we have a 12-month data retention policy and need to keep it. Is there a way to configure batch to move the sent data to another directory instead of deleting it? Sinkhole appears to be the only option for the move_policy attribute.
Is there a reason not to use Splunk itself as your 12-month datastore? Splunk can retain any or all data for as long as you want (provided you have adequate storage capacity), and it is simple to set a time-based retention policy instructing Splunk to keep the data for at least 12 months.
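As a sketch of that retention policy, the setting is frozenTimePeriodInSecs in indexes.conf; buckets are frozen (deleted by default) only once their newest event is older than this value. The index name and paths below are placeholders for your own:

```ini
# indexes.conf -- "mydata" is an example index name
[mydata]
homePath   = $SPLUNK_DB/mydata/db
coldPath   = $SPLUNK_DB/mydata/colddb
thawedPath = $SPLUNK_DB/mydata/thaweddb
# Keep events at least ~12 months: 365 days expressed in seconds
frozenTimePeriodInSecs = 31536000
```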
If you want to retain the data outside of Splunk, then there is no way to configure the batch processor to index and not delete. Your original use of the monitor input is the better option in this case.
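For reference, a monitor input for that directory would look something like this in inputs.conf (the path, index, and sourcetype are placeholders):

```ini
# inputs.conf -- monitor leaves the source files in place after indexing
[monitor:///data/archive/logs]
index = mydata
sourcetype = my_logs
```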
Are you by chance using the Light Forwarder? If so, it ships with a setting that caps the output bandwidth. In $SPLUNK_HOME/etc/apps/SplunkLightForwarder/default/limits.conf:

[thruput]
maxKBps = 256
This could be why you are seeing very slow uptake of the data in your monitored directory. You can set this higher to increase the output rate.
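Rather than editing the default file (which upgrades overwrite), override the setting in the app's local directory. For example, assuming you want to remove the cap entirely (0 means unlimited):

```ini
# $SPLUNK_HOME/etc/apps/SplunkLightForwarder/local/limits.conf
[thruput]
# 0 removes the throughput cap; or set a higher KB/s value
maxKBps = 0
```

Restart the forwarder after changing this for it to take effect.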
Also, check the number of files in the monitored directory and whether they are compressed; both affect how quickly the forwarder can work through the backlog.
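A quick way to gather those two numbers is a small shell helper like the sketch below; the directory argument stands in for whatever path your monitor stanza watches, and only .gz files are counted as compressed here:

```shell
# count_monitor_files: report total file count and gzip-compressed count
# for a directory, e.g. the one a [monitor://...] stanza is watching.
count_monitor_files() {
    dir="$1"
    total=$(find "$dir" -type f | wc -l | tr -d ' ')
    compressed=$(find "$dir" -type f -name '*.gz' | wc -l | tr -d ' ')
    echo "total=$total compressed=$compressed"
}

# Example (path is hypothetical):
# count_monitor_files /data/archive/logs
```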