
Monitor a File That's Being Purged Regularly

jepoyyyy
Explorer

Hi All,

I have a multi-tiered Splunk deployment, and I am seeing serious indexing lag from one of our remote hosts.

We have configured a forwarder to monitor a file that is purged every 30 minutes; after each interval, the file's contents are written to an archive directory. The problem is that there is significant lag before the data becomes searchable in Splunk. We have seen as much as a 5-hour indexing lag from that particular source; checking just now, it is down to 45 minutes, so the lag varies over time.

We're pretty sure it is not caused by an undersized Splunk infrastructure, because we are also collecting *nix stats (CPU, RAM, disk, etc.) and those events arrive in near real time.

Checking the logs on the forwarder, we see this line from time to time:

WatchedFile - Checksum for seekptr didn't match, will re-read entire file="/some/file/name/file.log".

Is there an inputs.conf parameter I should be using to monitor a file that is flushed regularly?
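
For reference, our monitor stanza is essentially the default. Here is a rough sketch of what we have (the index and sourcetype are placeholders for our actual values), along with the checksum-related settings I have seen suggested for files whose contents get replaced; they are commented out since we have not tried them:

    [monitor:///some/file/name/file.log]
    index = main
    sourcetype = app_log
    # crcSalt = <SOURCE>      # tie the CRC to the file path rather than its first bytes
    # initCrcLength = 1024    # widen the initial checksum window (default is 256 bytes)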

Any help would be greatly appreciated.

Kindest regards,
Jeff


jepoyyyy
Explorer

I found the root cause. The file being monitored was simply too large for the forwarder's default bandwidth limit.

I modified maxKBps in limits.conf to accommodate the volume.
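
For anyone hitting the same thing, the change goes in limits.conf on the forwarder and looks roughly like this (pick a value that fits your volume; 0 removes the cap entirely):

    [thruput]
    # Default on a Universal Forwarder is 256 KBps; raise it, or set 0 for unlimited
    maxKBps = 0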

I hope this helps someone someday.

Kindest regards,
Jeff
