Monitoring Splunk

Delay when monitoring thousands of files

katalinali
Path Finder

I am monitoring several thousand files in Splunk, but I find it takes more than 30 minutes for new events to be indexed. I have set the lines:

[inputproc]
max_fd = 256
time_before_close = 2

but this doesn't improve the situation. Are there any other ways to solve this?

Mick
Splunk Employee

Yes, remove the files that are no longer being updated, or blacklist files that you are not actually interested in.

The monitor input was designed to pick up data as it is added to a file, so simply enabling it for thousands of static files is actually using it in the wrong way, as it will always go back and check files to see if they have been updated.

Using this method for a first time load is fine, as long as you update your inputs once that initial data-load is complete. Leaving it in place for a few hundred files is also fine, as Splunk can check this many files relatively quickly. As you increase the number of files being monitored however, you are slowing down how quickly new data is picked up.
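For example, you can tell the monitor input to skip static files and exclude archives in inputs.conf using the `ignoreOlderThan` and `blacklist` settings. This is only a sketch; the monitored path here is hypothetical, and you should adjust the age threshold and regex to match your own rotation scheme:

```ini
[monitor:///var/log/myapp]
# Hypothetical path - replace with your actual monitored directory.
# Stop checking files whose modification time is older than 7 days,
# so the tailing processor no longer polls static files.
ignoreOlderThan = 7d
# Exclude rotated and compressed archives (regex matched against the full path).
blacklist = \.(gz|zip|bz2|\d+)$
```

Note that `ignoreOlderThan` is evaluated when the file is first seen, so it is best suited to pruning genuinely static files rather than slow-moving live ones.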

I suspect that you are actually monitoring more files than you think, or perhaps you are using an NFS mount; network latency is also an important factor.
