Getting Data In

How to avoid duplicate data when a file is recreated with the same name as one that was deleted earlier?

soniaraj13
New Member

Hi,

I see duplicate data being ingested when a file that was already indexed is recreated after a system failure, containing the existing data plus new data.

For example, let's say test.csv has the following data:

a b c

When the file is deleted and recreated with the same name, but with the following additional data:

a b c
1 2 3
4 5 6

Splunk ingests a b c again, in addition to 1 2 3 and 4 5 6.

Can someone help me with the correct stanza to add to inputs.conf, or any other solution, to avoid the duplicated data shown in the example above?

Thanks.

tom_frotscher
Builder

Hi,

by default, Splunk reads the first 256 bytes of a file and calculates a CRC checksum over them. When a new file appears whose CRC matches one that has already been indexed, Splunk treats it as the same file and does not index it from the beginning again. In your case the file is so small that the checksummed region covers the entire file, so recreating it with extra rows changes the CRC; Splunk then treats it as a brand-new file and indexes all of it, which is why a b c is ingested a second time.

You can adjust the number of bytes included in this checksum with the following setting in inputs.conf:

initCrcLength = <integer>
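
For example, a monitor stanza using this setting might look like the following. This is only a sketch: the monitored path is a placeholder, and 1024 is an arbitrary value (per the inputs.conf spec, valid values range from 256 to 1048576):

# Hypothetical stanza; replace the path with your actual input.
[monitor:///opt/data/test.csv]
# Compute the CRC over the first 1024 bytes instead of the default 256.
initCrcLength = 1024

Note that after editing inputs.conf by hand, the change typically requires a Splunk restart to take effect.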

You can find additional details on initCrcLength in the examples and spec sections of the inputs.conf documentation, along with many more configuration options for your inputs.

Greetings

Tom
