How to delete duplicates from a lookup CSV file?
Hello, we have a CSV lookup file that is populated by a saved search. We are noticing that a lot of duplicate rows are being created every other day. The file doesn't open in the Lookup Editor app because its size is > 10 MB. Can someone please advise how to delete the duplicates via a query?

Change the saved search, or post-process its results, to remove duplicates before writing the CSV.
There are a number of ways to remove duplicates depending on your criteria. For example, when there is a "duplicate", is the row completely duplicated across all fields, or only across a subset? If it is a subset, which version takes priority, e.g. first, last, max, min, etc.? If it is not a subset, is the order in any way significant (unlikely if the file is being used as a lookup, but worth considering anyway)? A sketch of one approach follows below.
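For illustration, here is a minimal sketch that assumes the lookup is named my_lookup.csv, the duplicates share hypothetical key fields user and host, and a hypothetical timestamp field last_updated decides which copy to keep; substitute your own file and field names. It reads the file, keeps the newest row per key, and rewrites the file in place:

```
| inputlookup my_lookup.csv
| sort 0 - last_updated
| dedup user host
| outputlookup my_lookup.csv
```

If rows are fully duplicated across every field, list all of the lookup's columns in the dedup and drop the sort. You can run this as its own scheduled cleanup search, or add a similar dedup to the existing saved search before it writes the CSV so the duplicates never reach the file.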
I have actually updated the problem scenario in another post and tagged you in it. I just realized it's not really duplicates, but results getting appended to the data in the previous row. Please see below. Can you help?
https://community.splunk.com/t5/Splunk-Search/How-to-make-a-Search-NOT-append-results-from-previous-...
