Getting Data In

How to avoid indexing duplicate events when files are being rotated and compressed?

srenou
New Member

Hello,

We have a WebLogic instance that writes its log file with log rotation and compresses the rotated file.
When the box is under heavy load, we see some data appearing twice in Splunk: it gets indexed when the file access.log is processed and indexed again when the file access.log.1.gz is processed.
We also see some data that was indexed only from the file access.log.1.gz.
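
For context, the input is just a directory monitor along these lines (the path and sourcetype are placeholders, not our exact values), so the same stanza picks up both access.log and the rotated access.log.1.gz:

[monitor:///opt/weblogic/logs]
sourcetype = access_combined
disabled = false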

Ignoring the .gz file would mean losing that extra data, but in the current situation we are getting duplicate data indexed.
Is there a workaround that lets us keep all the data while avoiding duplicates?
I saw a proposal to remove the duplicates after indexing (| dedup _raw), but that sounds like an after-the-fact fix.
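
To illustrate the two options I am weighing: ignoring the compressed files would presumably just be a blacklist on that monitor stanza, along the lines of

blacklist = \.gz$

and the after-the-fact cleanup would be a search like this (the index name is a placeholder):

index=weblogic sourcetype=access_combined | dedup _raw

but neither option gets me both completeness and no duplicates at index time.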

It also appears that this does not happen when the system is not under stress, so at least some of the time Splunk is able to recognize that the .1.gz file is the compressed version of the already indexed access.log.

Thanks for any help; I am trying to get all of my data and no duplicates.


nettrigger
Explorer

I have the same problem, and to this day Splunk has not been able to give me a professional, concrete answer. Disappointing.
