Getting Data In

How to avoid indexing duplicate events with files being rotated and compressed?

srenou
New Member

Hello,

We have a WebLogic instance that writes its log file with log rotation and compression enabled.
When the box is under heavy load, we see some events appear twice in Splunk: they are indexed when the file access.log is processed and indexed again when the file access.log.1.gz is processed.
We also see some events that were indexed only from access.log.1.gz.
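For context, the monitor input looks roughly like this (the path, index, and sourcetype below are illustrative, not our exact values); the wildcard matches both the live log and the rotated, compressed copies:

[monitor:///opt/weblogic/logs/access.log*]
# The wildcard picks up access.log as well as access.log.1.gz, access.log.2.gz, and so on.
index = weblogic
sourcetype = access_combined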

Ignoring the .gz files would mean losing that extra data, but in the current situation we are indexing duplicates.
Is there a workaround that avoids losing data while also avoiding duplicate events?
I have seen suggestions to remove duplicates after indexing (| dedup _raw), but that sounds like an after-the-fact fix.
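To make the first option concrete, ignoring the compressed copies would look something like this in inputs.conf (the path and index are illustrative):

[monitor:///opt/weblogic/logs/access.log*]
# Skip the rotated, compressed copies entirely.
blacklist = \.gz$
index = weblogic

But as described above, any events that only ever make it into access.log.1.gz would then be lost.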

This also does not seem to happen when the system is not under stress, so Splunk is at least sometimes able to recognize that the .1.gz file is the compressed version of the already-indexed access.log.

Thanks for any help; I am trying to keep all my data without indexing duplicates.


nettrigger
Explorer

I have the same problem, and to this day Splunk has not been able to give me a professional, real answer. Disappointing.
