Getting Data In

Moving data with the batch stanza?

carmackd
Communicator

I have a forwarder with almost a TB of data sitting in its monitored directory, which seems to be slowing down the forwarder's ability to send the data on to the indexer. I'm aware of the batch stanza's ability to delete the data after it's sent, but we have a 12-month data retention policy and need to keep it. Is there a way to configure batch to move the sent data to another directory instead of deleting it? Sinkhole appears to be the only option for the move_policy attribute.

hulahoop
Splunk Employee

Is your preference not to use Splunk as your 12-month datastore? Splunk can retain any or all of your data for as long as you want, provided you have adequate storage capacity. It is simple to set a time-based retention policy instructing Splunk to keep the data for at least 12 months.
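
For reference, time-based retention is configured per index in indexes.conf. The stanza below is only a minimal sketch, assuming a hypothetical index named my_index; adjust the name and values for your environment:

[my_index]
# Keep events for at least 12 months (365 days x 86400 seconds).
# Older buckets roll to frozen and are deleted unless coldToFrozenDir is set.
frozenTimePeriodInSecs = 31536000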

If you want to retain the data outside of Splunk, then there is no way to configure the batch processor to index and not delete. Your original use of the monitor input is the better option in this case.
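
For comparison, a monitor input in inputs.conf indexes the files but leaves them on disk, unlike batch with move_policy = sinkhole. This is only a sketch, assuming your files live under /data/incoming and go to the hypothetical index my_index:

[monitor:///data/incoming]
# Files are tracked and indexed, then left in place on disk.
disabled = false
index = my_index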

Are you by chance using the Light Forwarder? If so, it ships with a setting that limits its output throughput, in $SPLUNK_HOME/etc/apps/SplunkLightForwarder/default/limits.conf:

[thruput]
maxKBps = 256

This could be why you are seeing very slow uptake of the data in your monitored directory. You can set this higher to increase the output rate.
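
To raise it without editing the default file, you could add an override in a local limits.conf, for example $SPLUNK_HOME/etc/apps/SplunkLightForwarder/local/limits.conf (or $SPLUNK_HOME/etc/system/local/limits.conf). Setting the value to 0 removes the cap entirely:

[thruput]
# 0 = no throughput limit; any other value is the cap in KB per second.
maxKBps = 0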

Also, you might want to check how many files are in the monitored directory and whether they are compressed; both factors affect how quickly the data is processed.

