In this case, have a look at batch inputs. Be aware that a batch input will delete the file after it has been read.
What do you mean by log size? Are we talking about a Splunk log like splunkd.log? Or are you referring to a log you want to monitor?
[batch://<path>]
move_policy = sinkhole
<attribute1> = <val1>
<attribute2> = <val2>
Hi, Splunk can handle large logs too. How fast the data is ingested depends more on your queue size and your network.
Are you monitoring with a universal forwarder?
You can set up the batch input yourself in any inputs.conf under $SPLUNK_HOME/etc/apps.
Just create a new app with a local folder, and within the local folder create an inputs.conf.
Then paste the stanza I gave you above and replace
<path> with your file path. Then restart Splunk.
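The steps above can be sketched as a few shell commands. This is a minimal sketch, assuming a *nix host; the app name "my_batch_app", the monitored path, and the index/sourcetype values are placeholders, not anything from your environment:

```shell
# Fall back to a temp dir so this sketch is safe to dry-run without Splunk installed.
SPLUNK_HOME="${SPLUNK_HOME:-$(mktemp -d)}"
APP_DIR="$SPLUNK_HOME/etc/apps/my_batch_app/local"
mkdir -p "$APP_DIR"

# Write the batch stanza; remember: move_policy = sinkhole DELETES the file after indexing.
cat > "$APP_DIR/inputs.conf" <<'EOF'
[batch:///var/log/myapp/big_export.log]
move_policy = sinkhole
index = main
sourcetype = myapp
EOF

# Restart Splunk so the new input is picked up:
# "$SPLUNK_HOME/bin/splunk" restart
```

Putting the input in its own app keeps it separate from system defaults and easy to remove later.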
Well, it is possible; you can configure an index size limit, since the logs are stored in indexes.
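To make that concrete: size limits are set per index in indexes.conf. A sketch, assuming a custom index named "my_index" (the name and the values below are examples, not recommendations):

```ini
# indexes.conf on the indexer, e.g. in an app's local folder
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Cap the total index size at ~50 GB; when the cap is reached,
# the oldest buckets roll to frozen (deleted by default)
maxTotalDataSizeMB = 51200
# Optionally also age out events, here after 90 days
frozenTimePeriodInSecs = 7776000
```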
have a look at this doc:
Let me know if it helps!