Getting Data In

Moving data with the batch stanza?

carmackd
Communicator

I have a forwarder with almost a TB of data sitting in its monitored directory, which seems to be slowing down the forwarder's ability to send the data on to the indexer. I'm aware of the batch stanza's ability to delete the data after it's sent, but we have a 12-month data retention policy and need to keep it. Is there a way to configure batch to move the sent data to another directory instead of deleting it? Sinkhole appears to be the only option for the move_policy attribute.
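
For reference, the batch input I'm considering would look something like this in inputs.conf (the path is just an example):

[batch:///data/incoming]
move_policy = sinkhole
disabled = false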

1 Solution

hulahoop
Splunk Employee

Is your preference not to use Splunk as your 12-month datastore? Splunk can retain any or all of your data for as long as you want, provided you have adequate storage capacity. It is simple to set a time-based retention policy instructing Splunk to keep the data for at least 12 months.
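
For example, time-based retention is configured per index in indexes.conf via frozenTimePeriodInSecs. A minimal sketch follows; the index name and paths are placeholders, and 31536000 seconds is 365 days:

[my_12_month_index]
homePath   = $SPLUNK_DB/my_12_month_index/db
coldPath   = $SPLUNK_DB/my_12_month_index/colddb
thawedPath = $SPLUNK_DB/my_12_month_index/thaweddb
# buckets are frozen (deleted by default) only once their newest event is older than this
frozenTimePeriodInSecs = 31536000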

If you want to retain the data outside of Splunk, then there is no way to configure the batch processor to index and not delete. Your original use of the monitor input is the better option in this case.

Are you by chance using the Light Forwarder? If so, it has a setting that limits its output throughput, in $SPLUNK_HOME/etc/apps/SplunkLightForwarder/default/limits.conf:

[thruput]
maxKBps = 256

This could be why you are seeing very slow uptake of the data in your monitored directory. You can set this higher to increase the output rate.
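
For example, rather than editing the default file, you could raise the limit with a local override; the value below is just an illustration, and setting maxKBps = 0 removes the cap entirely:

# $SPLUNK_HOME/etc/apps/SplunkLightForwarder/local/limits.conf
[thruput]
maxKBps = 1024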

Also, you might want to check the number of files in the monitored directory and whether they are compressed; both the file count and compression affect how quickly the data is processed.

