I suspect that I may have duplicate events indexed by Splunk. Either my originating files contain duplicates, or my Splunk configuration is indexing some events twice or more.
To be sure, what search can I run to find all the duplicate events currently in my Splunk index?
Try appending this search string to your current search to find duplicates:
| transaction fields="_time,_raw" connected=f keepevicted=t | search linecount > 1
This won't work if the original data is multiline. But you could fix that with:
| rename duration as original_duration | transaction _time,_raw | search duration=*
The transaction will also be rather more efficient if you set maxopentxn=1, provided your duplicates are consecutive.
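For example (just a sketch, assuming single-line events whose duplicates are adjacent in time order), the combined check might look like:
| transaction _time,_raw connected=f keepevicted=t maxopentxn=1 | search linecount > 1
Here maxopentxn caps how many open transactions are held in memory at once, which is safe when the duplicates really are consecutive.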
Actually, now that I think about it:
| stats count by _time,_raw | rename _raw as raw | where count > 1
might be better. But an enhancement request for a showdupes search command might be best.
I think it's safe to assume that if an event is duplicated (same value for _raw) then the duplicates and the original should have the same timestamp. Therefore, it should be possible to include maxspan=1s, like so:
... | eval dupfield=_raw | transaction dupfield maxspan=1s keepevicted=true
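To then surface only the actual duplicates, you could filter on the eventcount field that transaction produces (a sketch, reusing the same field names):
... | eval dupfield=_raw | transaction dupfield maxspan=1s keepevicted=true | search eventcount > 1 | table _time, dupfield, eventcount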
I'm not sure about Gerald's comment about multiline events, since my duplicate catching was limited to single-line events, but it seems to me that some kind of sed trick could be used, like so:
... | eval dupfield=_raw | rex mode=sed field=dupfield "s/[\r\n]/<EOL>/g" | transaction dupfield maxspan=1s keepevicted=true
BTW, I found the transaction-based approach to be much faster than the stats approach suggested in the comments above, and much less restrictive. (It seems like stats has a 10,000-entry limit on the "by" clause.)
Also, in my case I was trying to not only get a count of duplicate events but figure out the extra volume (in bytes) that could have been avoided if the data was de-duped externally before being loaded. I used a search like this:
sourcetype=my_source_type | rename _raw as raw | eval raw_bytes=len(raw) | transaction raw maxspan=1s keepevicted=true | search eventcount>1 | eval extra_events=eventcount-1 | eval extra_bytes=extra_events*raw_bytes | timechart span=1d sum(extra_events) as extra_events, sum(eval(extra_bytes/1024.0/1024.0)) as extra_mb
This shows you the impact in megabytes per day.
Lowell is absolutely right that this transaction will be MUCH, MUCH faster than anything involving stats because of its favorable eviction policy. Transaction, especially with maxspan set, will only keep data for the current second in memory, as search scans backwards through time.
Stephen, it would be nice if there were a search command that could remove duplicates, though I'm not sure what the impact would be.
* | tag_dupes | delete
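In the meantime, the dedup command can at least suppress duplicates at search time, though it removes nothing from the index. For example (sourcetype is just a placeholder):
sourcetype=my_source_type | dedup _raw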
The original search, with some typos fixed:
sourcetype=* | rename _raw as raw | eval rawbytes=len(raw) | transaction raw maxspan=1s keepevicted=true | search eventcount>1 | eval extraevents=eventcount-1 | eval extrabytes=extraevents*rawbytes | timechart span=1d sum(extraevents) as extraevents, sum(eval(extrabytes/1024.0/1024.0)) as extramb
To show number of events and size by sourcetype:
sourcetype=* | rename _raw as raw | eval rawbytes=len(raw) | transaction raw maxspan=1s keepevicted=true | search eventcount>1 | eval extraevents=eventcount-1 | eval extrabytes=extraevents*rawbytes | stats sum(extraevents) as extraevents, sum(eval(extrabytes/1024.0/1024.0)) as extramb by host,sourcetype