We just had an application bug that spewed millions of duplicate messages into a Splunk-monitored logfile. This triggered a license violation before anyone could react.
To prevent this type of violation from happening again, we are considering putting a hard limit on the maximum amount of a logfile that Splunk will index. Most of our logfiles are normally under approximately 300 MB daily. So if a logfile grows beyond 500 MB, we want Splunk to ignore anything logged to the file after that point.
Is it possible to put such a hard byte limit on a monitored logfile or input source?
I've seen answers suggesting a maxKBps data-transfer limit in limits.conf. That's one option, but perhaps some other Splunk setting could be used as well?
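For reference, the maxKBps approach I mentioned would look something like this in limits.conf. Note that (as I understand it) this throttles indexing throughput per second rather than capping total bytes from a single file, so a runaway logger would still be indexed eventually, just more slowly:

```
# limits.conf (on the forwarder or indexer)
[thruput]
# Throttle indexing throughput to 512 KB per second.
# This is a rate limit, not a hard per-file size cap.
maxKBps = 512
```

So this would slow the bleeding during a log storm, but it wouldn't implement the 500 MB cutoff I'm actually after.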