Splunk Search

Dedup duplicates

HeinzWaescher
Motivator

Hi,

What is the easiest way to filter out duplicate events without adding every field to the dedup command?
Is
| dedup _raw

the correct approach?

BR

1 Solution

Ayn
Legend

dedup _raw should work just fine, yes.
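
For example, appended to a base search (the index and sourcetype below are placeholders):

```
index=main sourcetype=access_combined
| dedup _raw
```

dedup _raw keeps the first event for each distinct raw event text and drops the rest.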


HeinzWaescher
Motivator

I've got two additional questions regarding this topic:

  1. How can I search for the count of events that have duplicates?
  2. How can I search for the total number of duplicates?

BR

Heinz

0 Karma
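
One way to answer both questions without a unique identifier is to group on the raw event text with stats (index=main is a placeholder base search; the output field names are illustrative):

```
index=main
| stats count BY _raw
| where count > 1
| stats count AS events_with_duplicates, sum(eval(count - 1)) AS total_duplicates
```

The first stats counts occurrences of each distinct raw event; the where clause keeps only events that occur more than once; the final stats reports how many distinct events have duplicates and how many surplus copies exist in total.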

HeinzWaescher
Motivator

Unfortunately I don't have a unique identifier for each event like the session_id you proposed.

0 Karma

Rocket66
Communicator

You can count duplicated events by using the "transaction" command and then filtering on the eventcount field it produces.

e.g.:

eventtype="*" | transaction session_id | where eventcount>1 | stats count by eventcount

to find out how many duplicates occurred,

or:

eventtype="*" | transaction session_id | where eventcount>1 | stats count(eventcount)

to count how many different duplicated events occurred,

or ...

0 Karma

HeinzWaescher
Motivator

great, thanks

0 Karma

ITUser1
Explorer

When I add "| dedup _raw" to the end of my search I get no matches, but when I remove it I get thousands. I can see that they are duplicates (same IP address, name, and port), but it still doesn't work. Any suggestions?

0 Karma
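
If the events only look identical on a few fields but differ somewhere in _raw (for example, different timestamps embedded in the raw text), dedup _raw will treat them as distinct. Deduping on the specific fields may work instead (the field names below are guesses based on the description; index=main is a placeholder):

```
index=main | dedup src_ip name port
```

dedup accepts a space-separated field list and keeps the first event for each combination of values.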