
Experiencing duplicate indexing - how do I solve this problem?

lostcauz3
Path Finder

I have a directory that is being monitored on a Splunk heavy forwarder:

/app_monitoring

This directory receives a file called Report.csv every day.

The file may contain data that has already been indexed. How do I prevent duplicate indexing in this case?

Do I have to change anything in the inputs.conf in the app folder? Please advise.
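
For context, the input is presumably defined with a plain monitor stanza along these lines (a sketch reconstructed from the description above; the index and sourcetype names are assumptions):

    [monitor:///app_monitoring]
    disabled = false
    # index and sourcetype are placeholders - adjust to your environment
    index = main
    sourcetype = csv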



1 Solution

PickleRick
SplunkTrust

There is no built-in deduplication of events on ingestion. If Splunk receives or reads the same event twice, it will be ingested and indexed twice. As simple as that.

Having said that - the file monitoring input does remember a checksum of each file along with how far into the file it has already read, so it will not re-read every monitored file after each restart. The checksum is calculated from the beginning of the file, so it stays the same even after data is appended, and even if the file is renamed (for example, by logrotate) it will not be read again. Adding crcSalt = <SOURCE> mixes the file name into the calculated checksum, so two files with different names but the same beginning-of-file checksum would both get indexed.
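
For reference, this is what the setting looks like on a monitor stanza in inputs.conf (the monitored path is taken from the question; the stanza is otherwise illustrative):

    [monitor:///app_monitoring]
    # mixes the source path into the file checksum, so files with
    # different names but identical beginnings are indexed separately
    crcSalt = <SOURCE>

Note that <SOURCE> is the literal string to put in the config, not a placeholder to substitute.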

gcusello
SplunkTrust

Hi @lostcauz3,

if you're speaking of duplicate files: Splunk doesn't index a file containing the same data twice, even if it has a different file name; but Splunk cannot detect that some individual events are already present.

So you cannot discard duplicates before indexing; you can only dedup results at search time.
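
A minimal search-time example (the index, source path, and dedup key are assumptions; deduplicating on _raw drops events whose raw text is identical):

    index=main source="/app_monitoring/Report.csv"
    | dedup _raw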

Ciao.

Giuseppe


lostcauz3
Path Finder

If I add crcSalt = <SOURCE> to the inputs.conf file, what will it do in my case?

I'm very confused about this.


gcusello
SplunkTrust

Hi @lostcauz3,

no, crcSalt = <SOURCE> makes Splunk index an already indexed file again; it doesn't let you filter out logs that are already indexed.

Splunk avoids indexing an entire already-indexed file twice, but it cannot skip only the duplicated part of a file.

Ciao.

Giuseppe
