I've upgraded to Splunk 4.1.2, and since the upgrade the following messages have been appearing in splunkd.log:
05-24-2010 12:58:34.695 INFO TailingProcessor - File descriptor cache is full, trimming...
05-24-2010 12:58:37.695 INFO TailingProcessor - File descriptor cache is full, trimming...
05-24-2010 12:58:40.956 INFO TailingProcessor - File descriptor cache is full, trimming...
05-24-2010 12:58:43.753 INFO TailingProcessor - File descriptor cache is full, trimming...
05-24-2010 12:58:46.465 INFO TailingProcessor - File descriptor cache is full, trimming...
05-24-2010 12:58:48.832 INFO TailingProcessor - File descriptor cache is full, trimming...
05-24-2010 12:58:51.807 INFO TailingProcessor - File descriptor cache is full, trimming...
Now that we are using the "time_before_close" parameter in most of our input stanzas, we are leaving files open for longer. I imagine this puts more pressure on Splunk when it needs to open and close files.
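For reference, here is a sketch of the kind of monitor stanza we are using (the path, index, and values are illustrative, not our production settings):

```
# inputs.conf -- illustrative monitor stanza
[monitor:///var/log/app]
index = main
sourcetype = app_log
# Keep the file descriptor open for 30 seconds after reaching EOF
# (rather than the default 3), so slowly written files are not
# constantly closed and reopened.
time_before_close = 30
```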
As you can see from the log above, this is not an error, but it does indicate that we need to tune our Splunk instance to cope with the large number of files it has to monitor (around 800 per day, I would estimate).
Can you please look into this and advise me on how to better tune our Splunk instance?
I have also tested various OS-level values for the number of allowed open file descriptors (ulimit -n) and for Splunk's max_fd setting (in limits.conf), but neither has resolved the issue.
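For completeness, these are the two knobs I experimented with (the values shown are examples of what I tried, not recommendations):

```
# OS level: raise the per-process open-file limit for the splunk user,
# e.g. in /etc/security/limits.conf, then verify with `ulimit -n`:
#   splunk  soft  nofile  8192
#   splunk  hard  nofile  8192

# Splunk level: $SPLUNK_HOME/etc/system/local/limits.conf
[inputproc]
# Maximum number of file descriptors the tailing processor keeps cached.
max_fd = 256
```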