Getting Data In

Can you delay a Universal Forwarder from ingesting files?

Engager

I have a minor issue whereby my Linux UF (an NFS server) is generating TailReader warnings in splunkd.log due to insufficient file permissions. It seems that permissions on files across the NFS mount are not being set quickly enough after creation for the Splunk user to read them on the first pass (the files are subsequently ingested). Example log messages:
[screenshot: splunkd.log TailReader warnings about insufficient file permissions]

The files are created on the NFS client, the UIDs are not matched between server and client, and permissions are set to 644. Whenever I look at the files on the forwarder, some time after file creation, they are all readable.

What I think is needed is a short delay (possibly a second or two) between Splunk detecting the presence of a file and trying to read it. Is such functionality available? I've searched the documentation and Answers here, but haven't found anything appropriate.


Re: Can you delay a Universal Forwarder from ingesting files?

Influencer

Hi,

I had the same issue with antivirus software blocking the read of a file. I didn't find a solution for this either, so I don't think there is one.

But I would be happy to hear there is a way 🙂


Re: Can you delay a Universal Forwarder from ingesting files?

Super Champion

There's no way to do this. You should either fix the real issue, which is the default group/other ownership of the parent folders, or look at tuning the NFS settings at the kernel level. (Typically you shouldn't see this kind of issue if your NFS mount is performant.)
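If the writing application creates its files restrictively and only relaxes the mode afterwards, fixing the creation umask on the writer's side closes the window completely. A minimal sketch (temporary paths, not from the original post; this assumes the writer's umask, rather than a later chmod, controls the initial mode):

```shell
# With a umask of 022, new files are world-readable from the instant
# they are created, so the forwarder's first read attempt succeeds.
tmp=$(mktemp -d)
(
  umask 022            # ensure group/other read bits survive creation
  touch "$tmp/app.log"
)
stat -c '%a' "$tmp/app.log"   # prints 644
rm -rf "$tmp"
```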



Re: Can you delay a Universal Forwarder from ingesting files?

Engager

Thanks for your response, esix. I can't argue with the reasoning! What I am hoping for is indeed a workaround rather than a fix. However, performance is not needed for this NFS implementation, and the effort and time likely to be spent on performance troubleshooting (or on environment-wide UID alignment) outweigh the benefit of seeing some warning messages disappear from our Splunk logs. At the end of the day, the implementation is working. If I were responsible for the network and/or server infrastructure, I might consider exploring further...


Re: Can you delay a Universal Forwarder from ingesting files?

Esteemed Legend

Not directly, but see my answer below: there is a way.


Re: Can you delay a Universal Forwarder from ingesting files?

Esteemed Legend

This answer is cloned from another Q/A here:
https://answers.splunk.com/answers/309910/how-to-monitor-a-folder-for-newest-files-only-file.html

You might think to try ignoreOlderThan, but if you do, beware that it does not work the way most people think it does: once Splunk ignores a file the first time, the file is put on a blacklist and will never be examined again, even if new data goes into it! It is the opposite of what you need anyway. Here is an interesting read on that feature:

http://answers.splunk.com/answers/242194/missing-events-from-monitored-logs.html

Also read here:

http://answers.splunk.com/answers/57819/when-is-it-appropriate-to-set-followtail-to-true.html

I have used the following hack to solve this problem:

Create a new directory somewhere else (/destination/path/) and point the Splunk forwarder there. Then set up a cron job that creates selective soft links to files in the real directory (/source/path/) for any file that has been touched in the last 5 minutes (or whatever your threshold is), like this:

*/5 * * * * cd /source/path/ && /bin/find . -maxdepth 1 -type f -mmin -5 | /bin/sed "s/^..//" | /usr/bin/xargs -I {} /bin/ln -fs /source/path/{} /destination/path/{}
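The forwarder would then be pointed at the link directory rather than the real one. A minimal inputs.conf sketch (the sourcetype and index here are placeholders, not from the original post; monitor inputs follow symbolic links by default, which this hack relies on):

```ini
[monitor:///destination/path/]
sourcetype = my_app_logs
index = main
disabled = false
```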

The nice thing about this hack is that you can create a similar cron job to remove the links for files that have not changed in a while (because if your forwarder has too many files to sort through, even ones with no new data, it will slow WAY down), and if one of those files ever does get touched again, the first cron will add its link back!
Don't forget to set up that second cron to delete the soft links, with whatever logic lets you be sure a file will never be used again, or you will end up with tens of thousands of files here, too.
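That second cron can exploit a detail of the first one: because `ln -fs` re-creates each link on every pass while its file is active, a link's own mtime records the last time its target was touched. A sketch of the cleanup logic, with a hypothetical 24-hour threshold and temporary paths (the real job would run against /destination/path/; `touch -h` and `find -delete` here are GNU options):

```shell
# Simulate the link directory: one stale link, one fresh link.
src=$(mktemp -d) && dst=$(mktemp -d)
touch "$src/old.log" "$src/new.log"
ln -s "$src/old.log" "$dst/old.log"
ln -s "$src/new.log" "$dst/new.log"
touch -h -d '2 days ago' "$dst/old.log"   # age the stale link itself

# The cron entry would be something like:
#   0 * * * * /bin/find /destination/path/ -maxdepth 1 -type l -mmin +1440 -delete
# find does not follow symlinks by default, so -mmin tests the link's
# own mtime, i.e. the last time the first cron refreshed it.
find "$dst" -maxdepth 1 -type l -mmin +1440 -delete

ls "$dst"    # only new.log remains
rm -rf "$src" "$dst"
```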
