Getting Data In

How to handle a daily changing CSV file and avoid indexing duplicate events/rows?



I have a daily growing CSV file that I want to index. Just importing it every day would result in a lot of duplicate events. I've read about the followTail option, but also that this option is not recommended. How can I avoid duplicate events? My first thought was to create a daily scheduled search to delete all "old" events and keep only the ones from the most recently indexed file, but I hope there is a better option.




Maybe you should consider looking at the KV store. I believe it has an upsert capability through a RESTful interface.
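A minimal SPL sketch of that idea, assuming a KV store collection named csv_records has already been defined in collections.conf (and exposed as a lookup in transforms.conf), and assuming the CSV has a column that uniquely identifies each row (id below is just a placeholder). With outputlookup append=true, records whose _key already exists should be updated in place and new records inserted, which is effectively an upsert:

index=something sourcetype=csv source=path/filename.csv
| eval _key=id
| outputlookup append=true csv_records

The same collection can also be read and written over REST through the storage/collections/data endpoint, which is the RESTful interface mentioned above.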


If your case looks like this:

Sep 1, 2015: the file has 10 records.
Sep 2, 2015: the file has 17 records (the same 10 regenerated, plus 7 new ones).
Sep 3, 2015: the file has 25 records (the 10 from Sep 1, the 7 from Sep 2, and 8 new ones).

then you can monitor the file continuously and make sure new data is only ever appended to the end of the existing file (if you manage the file manually, copy-paste just the new rows onto the old file). This way you don't index duplicate records; see the sketch below.
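For the monitoring part, a sketch of the input stanza (path, index, and sourcetype are placeholders here): a monitor input tracks how far into the file it has read, so as long as new records are only appended and old lines are never rewritten, only the new data gets indexed:

# inputs.conf
[monitor:///path/to/filename.csv]
index = something
sourcetype = csv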

If your scenario is different, then just deduplicate at search time:

index=something sourcetype=csv source=path/filename.csv | dedup _raw | <your analysis here>

Hope this is helpful for you.


| dedup _raw is a good first workaround
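If running dedup _raw on every search gets expensive as the file grows, one variation worth considering (a sketch, assuming a summary index named csv_dedup exists and that this runs as a time-bounded scheduled search so each run only covers newly indexed data) is to deduplicate once and write the results to a summary index with collect, then point the analysis at that index:

index=something sourcetype=csv source=path/filename.csv
| dedup _raw
| collect index=csv_dedup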

