Getting Data In

Splunk re-indexing rolled over log file causing duplicate (two) copies of data

Path Finder

We have a log-file rotation policy that rolls over based on size (64 MB). Every now and then (frequently, but not every time), the Splunk forwarder treats the rolled-over file as a new file and ships it again, causing duplicates on the indexer.

We find the same event in both blah and blah.0 (the rolled-over file name).

Any clues what might be causing this issue?
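One mechanism worth knowing here: the forwarder decides whether it has already seen a file by checksumming the file's first bytes (256 by default; tunable with initCrcLength in inputs.conf). A minimal sketch of such a head fingerprint, using zlib.crc32 as a stand-in for Splunk's internal CRC:

```python
import zlib

def head_crc(path, length=256):
    """Fingerprint a file by the CRC32 of its first `length` bytes.

    This mimics how a forwarder can recognize a renamed file: the head
    bytes (and therefore the CRC) are unchanged by a rename, so the file
    should NOT be re-read as new -- unless something (e.g. a crcSalt
    setting) mixes the file path into the checksum.
    """
    with open(path, "rb") as f:
        return zlib.crc32(f.read(length))
```

Two files with identical first 256 bytes get the same fingerprint even if their tails differ, which is why a plain rename normally does not trigger re-indexing.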


How are you performing your roll-over? Is it a rename or a copy?


Path Finder

It is a rename.
Note: ZFS is the underlying FS.
Each process has many threads that write to the log files, protected by a mutex, so only one thread can write at a time. When we see the file grow past the limit (~64 MB), we acquire the mutex (blocking any further writes to the file), close the file, delete the oldest generation (BLAHFILENAME.N), then for (n = N-1; n >= 0; n--) rename BLAHFILENAME.n to BLAHFILENAME.(n+1), then rename the current log file BLAHFILENAME to BLAHFILENAME.0, and finally create a new empty log file BLAHFILENAME and release the mutex, allowing the threads to write to the new file.
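The roll-over steps described above can be sketched as follows (a minimal sketch: the mutex and the 64 MB size check are omitted, and the generation shift must run from the highest index down so no file is overwritten):

```python
import os

def roll_over(base, max_gen):
    """Rotate base -> base.0 -> base.1 ... keeping at most max_gen generations.

    Mirrors the mutex-protected roll-over described above: delete the oldest
    generation, shift the remaining generations up by one (highest index
    first, so nothing is clobbered), move the live file to .0, and create a
    fresh empty live file.
    """
    oldest = f"{base}.{max_gen}"
    if os.path.exists(oldest):
        os.remove(oldest)                      # delete oldest generation N
    for n in range(max_gen - 1, -1, -1):       # N-1 down to 0
        src = f"{base}.{n}"
        if os.path.exists(src):
            os.rename(src, f"{base}.{n + 1}")  # shift generation up by one
    os.rename(base, f"{base}.0")               # current log file becomes .0
    open(base, "w").close()                    # new empty live log file
```

Because every step is a rename (never a copy), the rotated file keeps its inode and its head bytes, which is why the forwarder would normally be expected to recognize it.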


Do you have any particular crcSalt setting in inputs.conf for this particular source?

Path Finder

We don't have any crcSalt settings set. Also, this does not happen every time, i.e. not all rolled-over versions of the same log file are duplicated.
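As background on why crcSalt matters here: with crcSalt = &lt;SOURCE&gt;, the file's path is mixed into the checksum, so renaming BLAHFILENAME to BLAHFILENAME.0 makes the same content look like a brand-new file and it gets re-indexed. An illustrative stanza (the monitor path is hypothetical):

```
[monitor:///var/log/app/BLAHFILENAME*]
# Would cause duplicates on rename, because the source path becomes
# part of the file's fingerprint:
# crcSalt = <SOURCE>

# If distinct files start with very similar content, lengthening the
# head CRC is a safer way to tell them apart:
initCrcLength = 1024
```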