It appears that on each iteration the batch processor deletes the log files queued for that iteration only after it has finished opening, seeking, and forwarding them. This forces us to throttle our log generation considerably. Can you recommend best practices so we can keep up with our log generation rate? TIA.
Well, yes. Until the data has been indexed, it can't be deleted and must be stored somewhere. While Splunk has internal queues that can hold some amount of data, there's no advantage to using those rather than simply leaving the files on the file system in the batch directory. I don't see why you feel you need to throttle your log generation, or what you think the forwarder would do differently if you're generating data faster than it can be sent.
Of course you can increase throughput by raising the forwarder max thruput soft limit, and if that isn't sufficient, then you must install additional indexing capacity.
Already set to 0 (unlimited). Does organizing the files into sub-directories factor in?
The setting is maxKBps in the [thruput] stanza of limits.conf on the forwarder.
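For reference, a minimal limits.conf stanza on the forwarder might look like this (the value is in KB/s, and 0 disables the limit entirely; pick a concrete number instead if you want a higher but still bounded throughput):

```
# $SPLUNK_HOME/etc/system/local/limits.conf (or a deployed app's local/)
[thruput]
# 0 = no throughput limit; e.g. 2048 would cap forwarding at ~2 MB/s
maxKBps = 0
```

A forwarder restart is required for the change to take effect.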
We increased indexing capacity, and that appears to work.
Any hints on how to raise the forwarder max thruput soft limit?
