Getting Data In

Experiencing duplicate indexing - how do I solve this problem?

lostcauz3
Path Finder

I have a directory that is being monitored on a Splunk heavy forwarder.

/app_monitoring      

The above directory receives a file called Report.csv every day.

There may be duplicate data in it that has already been indexed. How can I prevent duplicate indexing in this case?

Do I have to change anything in the inputs.conf in the app folder? Please advise.
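
For reference, the monitor stanza in inputs.conf presumably looks something like the sketch below; the index and sourcetype values are placeholder assumptions, not the actual ones from the app:

[monitor:///app_monitoring]
index = main
sourcetype = report_csv
disabled = false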



1 Solution

PickleRick
SplunkTrust

There is no built-in deduplication of events on ingestion. It's as simple as that: if you receive or read the same event twice, it will be ingested and indexed twice.

Having said that, the file monitoring input does remember a hash of each file along with how far into the file it has already read, so it will not re-read every monitored file after each restart. The hash is calculated from the beginning of the file, so it stays the same even after data is appended. And even if the file is renamed (for example, by logrotate), the checksum calculated from the beginning of the file stays the same, so the file will not be read again. Adding crcSalt = <SOURCE> mixes the filename into the calculated checksum, so two files with different names but the same checksum calculated from the beginning of the file would both get indexed.
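
To illustrate that last point, adding the salt to the monitor stanza would look roughly like this (the stanza path is taken from the question above; note that <SOURCE> is a literal special value, not a placeholder to fill in):

[monitor:///app_monitoring]
# Mixes the source filename into the CRC, so two files whose
# beginnings are byte-identical but whose names differ are both indexed
crcSalt = <SOURCE>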


gcusello
SplunkTrust

Hi @lostcauz3,

If you're speaking of duplicate files: Splunk doesn't index a file twice when it contains the same data, even under a different filename. However, Splunk cannot identify that some individual events are already present.

So you cannot discard duplicates before indexing; you can only dedup the results at search time.
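
For example, something like the following search would drop duplicate events from the results; dedup _raw keeps only the first occurrence of each identical raw event (the index name here is an assumption for illustration):

index=main source="/app_monitoring/Report.csv"
| dedup _raw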

Ciao.

Giuseppe


lostcauz3
Path Finder

If I add crcSalt = <SOURCE> to the inputs.conf file, what will this do in my case?

I'm very confused about this.


gcusello
SplunkTrust

Hi @lostcauz3,

No, crcSalt = <SOURCE> permits Splunk to index an already-indexed file again; it doesn't prevent duplicates.

It isn't possible to filter out logs that have already been indexed.

Splunk won't index an entire already-indexed file twice, but it cannot do the same for only a part of a file.

Ciao.

Giuseppe
