Getting Data In

delay forwarder ingesting file

tmarlette
Motivator

I have a script that creates a file. The command it runs has very long output, and it takes about 20 seconds to build the file. This script is run every 60 seconds, and I use crcSalt = <SOURCE> to capture the input, since it is just the output of a command with no timestamp.
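For reference, the relevant inputs.conf stanza looks roughly like this (the monitor path and sourcetype are placeholders for my actual ones):

[monitor:///opt/scripts/output/command_output.log]
crcSalt = <SOURCE>
sourcetype = command_output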

I am getting discrepancies between a simple 'grep | wc -l' and Splunk, and I'm pretty sure it's due to timing. I think the forwarder is ingesting the file before it is fully populated, so it only gets a portion of the data and misses the rest.

My question is: how do I get the forwarder to either wait the 10 seconds it takes to populate that file every minute, or to recognize that the whole file hasn't been written yet and wait until it has fully populated before ingesting it?

so far in inputs.conf I have attempted:
followTail = 0
time_before_close = 0

neither of them had an effect.
Thank You!

0 Karma

woodcock
Esteemed Legend

You need to increase time_before_close in inputs.conf:

time_before_close = <integer>
* Modtime delta required before Splunk can close a file on EOF.
* Tells the system not to close files that have been updated in the past <integer> seconds.
* Defaults to 3.

http://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf

0 Karma

tmarlette
Motivator

I did try this, but it didn't work; I am still having the same issue. I set time_before_close = 10.

0 Karma

woodcock
Esteemed Legend

If you already know it takes at least 20 seconds, why do you think that waiting 10 seconds will change anything? Try changing it to something way too big like 30 and walk it back from there.
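Something like this (the monitor path is just a placeholder for yours):

[monitor:///opt/scripts/output/command_output.log]
crcSalt = <SOURCE>
time_before_close = 30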

0 Karma

Richfez
SplunkTrust

Could you just increase the size of the buffer you are using to write the file? Here's a post about that if you are using Python; if you are using some other scripting language, you'll have to look up the equivalent. If you made your buffer larger than your typical file size, the file write would be a single operation, which should get around this problem.
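A rough sketch, assuming the script is Python; the command, path, and buffer size are placeholders for whatever your script actually does:

import subprocess

OUTPUT_PATH = "/opt/scripts/output/command_output.log"  # placeholder path
BUFFER_SIZE = 8 * 1024 * 1024                           # larger than the typical file size

# Run the long command and capture its full output in memory first.
result = subprocess.run(["your_command", "--long-output"], capture_output=True, text=True)

# Write everything through one large buffer so the file never sits half-written on disk.
with open(OUTPUT_PATH, "w", buffering=BUFFER_SIZE) as f:
    f.write(result.stdout)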

0 Karma