Getting Data In

Monitor a File That's Being Purged Regularly

jepoyyyy
Explorer

Hi All,

I have a multi-tiered Splunk deployment and I am having some serious indexing lag from a remote host.

We have configured a forwarder to monitor a file that is being purged every 30 minutes. After that interval, the contents of the file are written to an archive directory. The problem is, there is a significant amount of lag before the data becomes searchable in Splunk. We sometimes see as much as a 5-hour indexing lag from that particular source. Checking it right now, the lag is down to 45 minutes, so it varies over time.
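For context, the monitor stanza looks roughly like this (the sourcetype and index below are placeholders, not our real values):

[monitor:///some/file/name/file.log]
sourcetype = my_app_log
index = main
disabled = false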

We're pretty sure it is not being caused by an undersized Splunk infrastructure, because we are also collecting *nix stats (CPU, RAM, disk, etc.) from the same host and those events arrive in near real time.

Checking the logs on the forwarder, we see this line from time to time:

WatchedFile - Checksum for seekptr didn't match, will re-read entire file="/some/file/name/file.log".

Is there an inputs.conf parameter I should use to monitor a file that is being flushed regularly?

Any help would be greatly appreciated.

Kindest regards,
Jeff


jepoyyyy
Explorer

I found the root cause of this. The file being monitored was simply too big for the forwarder's default bandwidth limit.

I modified maxKBps in limits.conf on the forwarder to accommodate the volume.
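For anyone hitting the same thing, the change goes in limits.conf on the forwarder (e.g. $SPLUNK_HOME/etc/system/local/limits.conf); the value below is just an example, size it to your own volume:

[thruput]
# The universal forwarder's default throughput cap is quite low (256 KBps);
# raise it, or set 0 to remove the limit entirely
maxKBps = 0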

I hope this helps someone someday.

Kindest regards,
Jeff
