If the gaps/spikes are related to the moments the log file is rotated, then increasing the maximum file size in the log4j config would reduce the rotation frequency, and with it the frequency of those gaps.
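As a rough sketch, assuming log4j 2.x with a RollingFileAppender (the appender name, paths and size below are placeholders, not taken from your config), raising the size trigger would look something like this:

<RollingFile name="AppLog" fileName="logs/application.log"
             filePattern="logs/application.log.%i">
    <PatternLayout pattern="%d{ISO8601} %-5p %c - %m%n"/>
    <Policies>
        <!-- larger files = less frequent rollover -->
        <SizeBasedTriggeringPolicy size="250 MB"/>
    </Policies>
    <DefaultRolloverStrategy max="10"/>
</RollingFile>

On log4j 1.x the equivalent knobs on the RollingFileAppender would be MaxFileSize and MaxBackupIndex.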
But from the list of files you shared and the screenshot, I wouldn't immediately conclude that the delays are related to the rotation. The list shows roughly one file per hour, while the graph only shows gaps every few hours. Have you confirmed that the delays happen exactly when the file is rotated? And have you checked that there are no queueing issues elsewhere in your landscape around the times those gaps occur?
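One quick check for that (a sketch; adjust the time range to cover one of the gaps) is to look for blocked queues in the internal metrics of the forwarder and indexers:

index=_internal source=*metrics.log* group=queue blocked=true
| timechart count by name

If the parsing, aggregation or indexing queues show up as blocked around the gaps, the delay is more likely downstream than in the file monitoring itself.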
If it is related to the rotation, then the question is: what keeps Splunk from noticing the new application.log after the previous file is rotated to log.1? Is this forwarder very busy processing other inputs? Perhaps you can enable an additional ingestion pipeline if the forwarder server has sufficient resources left.
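If you go that route, the extra pipeline is enabled in server.conf on the forwarder (a sketch; only worth doing once you've confirmed CPU and memory headroom):

[general]
parallelIngestionPipelines = 2

Each pipeline set gets its own queues and is subject to the thruput limit, so it's worth reviewing maxKBps in limits.conf at the same time.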
I've seen forwarders process far busier files (generated by a syslog daemon receiving data from many sources), into the GBs per hour, without this behavior.