Getting Data In

Files not indexing due to fast rotation

Path Finder

Hi All,

Hope you are doing well.

I have come across a difficult situation indexing a file. We have a few Universal Forwarders on which files are rotated very quickly (within seconds) around midnight. Once they reach the specified size limit, they are gzipped and moved to an archive folder (we are not monitoring this folder). Due to this fast rotation, we are unable to see the logs from those files for that time window (possibly they are not being indexed). The inputs.conf stanza is configured as below:

blacklist = (\.\d+|\.gz)
index = index
sourcetype = sourcetype
recursive = true

We have the default throughput value on the Universal Forwarders. Could you please help me resolve this issue?
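For reference, this is the throughput setting in limits.conf that we have left at its default (256KBps is the documented Universal Forwarder default; the values here are illustrative):

```
# limits.conf on the Universal Forwarder
# 256 (KBps) is the shipped default; raising it, or setting 0 for
# unlimited, is one knob to try if the forwarder cannot keep up
[thruput]
maxKBps = 256
```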

Thanks in advance.

Super Champion

Basically, Splunk reads a file as fast as it can (usually in less than 5 seconds).

One more idea: why don't you index the gzipped files?
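As a sketch, monitoring the archive folder could look like this in inputs.conf (the path is hypothetical; index and sourcetype are taken from your stanza above):

```
# hypothetical archive path -- adjust to the real location
[monitor:///opt/app/logs/archive/*.gz]
index = index
sourcetype = sourcetype
```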

From another post, my understanding is that "interval" applies only to modular and scripted inputs. Monitor inputs are basically "as fast as we can".

There are two main elements to monitor inputs: the File Update Notifier (FUN) and the readers.
- FUN notifies the TailReader when it sees a change in a file at the OS level. FUN then stops monitoring the file until the readers (Tail or Batch) have finished reading it.
- The TailReader looks at the file size, calculates the bytes to be read, and passes the file to the BatchReader if need be (>20MB).
- The readers read the file in 64KB chunks per iteration; HOWEVER, the readers keep reading (keep iterating) until they find the end of the file.
- If the file has more data (more than when we first saw the change), they read that newly appended data as well.
- This means that if the file is growing really fast, we read more data than we actually planned to. This is why we sometimes see >100% reads on the input status page for files that are growing very fast.
- The other thing to note is that the readers block on the parsing queue to insert pipeline data. That is, if the queue is full, the readers wait until the queue frees up before reading more data.
- Once the readers have read to EOF, they notify FUN to start monitoring for changes again.

So the "interval" is really the iteration time, which depends on:
- How "active" the files are, how much data is written to them, and the speed of the CPU and disk.
- How fast we process the pipeline data further down the pipeline.
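The reader loop described above can be sketched in Python (a toy illustration of the chunked read-to-EOF behaviour, not Splunk's actual implementation):

```python
import io

CHUNK = 64 * 1024  # the 64KB per-iteration chunk size described above

def read_to_eof(f, sink):
    """Minimal sketch of the reader loop: keep pulling 64KB chunks
    until EOF, so data appended while we are still reading is picked
    up in later iterations. Returns the number of bytes read."""
    total = 0
    while True:
        chunk = f.read(CHUNK)
        if not chunk:      # hit EOF: hand the file back to FUN
            break
        sink.write(chunk)  # stands in for inserting into the parsing queue
        total += len(chunk)
    return total

# usage: an in-memory stand-in for a 150KB log file
src = io.BytesIO(b"x" * (150 * 1024))
out = io.BytesIO()
read_to_eof(src, out)  # reads 3 chunks: 64KB + 64KB + 22KB
```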

Path Finder

Hi @inventsekar,

Thank you for the brief explanation. I thought that monitoring the gz files formed after rotation would make Splunk index the data twice; that is why I blacklisted the gz files.

I have gone through the internal logs of those Universal Forwarders, but I didn't find any blocked queues at that time. My main concern is whether the logs are rotated before they are fully indexed, resulting in missing logs. Could you please suggest how to overcome this situation?

Thanks in advance.

Super Champion

> My main concern is whether the logs are rotated before they are fully indexed, resulting in missing logs.

Hi Siva, I don't think there is any straightforward way to find out whether the logs were fully indexed before rotation.
Maybe you can index the gz files, then find the duplicates and delete the duplicate copies.
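The duplicate-hunting idea can be sketched like this (a toy illustration outside Splunk; `find_duplicates` and exact-match hashing are assumptions for the sketch, not a Splunk feature -- in practice you would search for duplicate events within Splunk itself):

```python
import hashlib

def find_duplicates(events):
    """Sketch of the dedup idea above: hash each event and report
    index pairs that carry identical content (assumption: duplicates
    are exact byte-for-byte matches)."""
    seen = {}
    dupes = []
    for i, ev in enumerate(events):
        h = hashlib.md5(ev.encode()).hexdigest()
        if h in seen:
            dupes.append((seen[h], i))  # (first occurrence, duplicate)
        else:
            seen[h] = i
    return dupes

# usage: events 0 and 2 are identical
find_duplicates(["ERROR timeout", "INFO ok", "ERROR timeout"])  # → [(0, 2)]
```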
