Getting Data In

Moving data with the batch stanza?

carmackd
Communicator

I have a forwarder with almost a TB of data sitting in its monitored directory, which seems to be slowing down the forwarder's ability to send the data on to the indexer. I'm aware of the batch stanza's ability to delete the data after it's sent, but we have a 12-month data retention policy and need to keep it. Is there a way to configure batch to move the sent data to another directory instead of deleting it? Sinkhole appears to be the only option for the move_policy attribute.
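For reference, my batch stanza in inputs.conf looks something like this (the path is just an example):

[batch:///data/staging]
move_policy = sinkhole
disabled = false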

1 Solution

hulahoop
Splunk Employee

Is your preference not to use Splunk as your 12-month datastore? Splunk can retain all or any data for as long as you want (provided you have adequate storage capacity). It is simple to set a time-based retention policy instructing Splunk to retain the data for no less than 12 months.
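For example, a 12-month retention policy can be expressed in indexes.conf with frozenTimePeriodInSecs. This is a minimal sketch assuming an index named my_logs (the index name is illustrative) and that expired buckets may simply be deleted; if you must keep the aged-out data, coldToFrozenDir can archive buckets to a directory instead:

[my_logs]
# 12 months, expressed in seconds (365 * 86400)
frozenTimePeriodInSecs = 31536000
# Optional: archive expired buckets rather than deleting them (path is illustrative)
# coldToFrozenDir = /archive/splunk/my_logs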

If you want to retain the data outside of Splunk, then there is no way to configure the batch processor to index and not delete. Your original use of the monitor input is the better option in this case.
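If you stay with a monitor input, the inputs.conf stanza is straightforward; Splunk tracks what it has already read, so the files can remain in place. The path and index name below are illustrative:

[monitor:///data/incoming]
disabled = false
index = my_logs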

Are you by chance using the Light Forwarder? If so, it has a setting that limits output throughput. In $SPLUNK_HOME/etc/apps/SplunkLightForwarder/default/limits.conf:

[thruput]
maxKBps = 256

This could be why you are seeing very slow uptake of the data in your monitored directory. You can set this higher to increase the output rate.
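Rather than editing the default file, the usual practice is to override the setting in a local configuration, which survives upgrades. A minimal sketch, assuming a limit of 2048 KB/s is acceptable on your network (the value is illustrative; setting maxKBps = 0 removes the cap entirely):

In $SPLUNK_HOME/etc/system/local/limits.conf:

[thruput]
maxKBps = 2048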

Also, you might want to check the number of files in the monitored directory and whether they are compressed; both affect processing time.


