Getting Data In

Why am I seeing duplicate events?

davidpaper
Contributor

I'm seeing the following two log messages on my UF. I'm also seeing big spikes in events every few minutes from this log file. What's going on?

06-06-2017 13:55:47.047 -0400 WARN TcpOutputProc - Possible duplication of events with channel=source::/logs/mylogs/log4j/my-java-logs.log|host::myhost|log4j_6|16384, streamId=12699096867673601155, offset=48369192 onhost=10.217.104.156:9997

06-06-2017 13:58:45.293 -0400 INFO WatchedFile - Logfile truncated while open, original pathname file='/logs/mylogs/log4j/my-java-logs.log', will begin reading from start.


davidpaper
Contributor

The cause of both messages is that /logs/mylogs/log4j/my-java-logs.log is not being rolled when it reaches 50 MB; instead it is being truncated (the equivalent of cat /dev/null > my-java-logs.log) and rewritten in place as it grows.

To find this, we used the watch utility:

/usr/bin/watch -n 1 ls -l /logs/mylogs/log4j/my-java-logs.log

We saw the file grow to just under 50 MB, then reset to 0 bytes and begin filling again in place.
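A related check (hypothetical, assuming GNU coreutils stat is available) is to watch the inode alongside the size; truncation reuses the same inode, while a proper roll creates a new file:

/usr/bin/watch -n 1 'stat -c "%i %s" /logs/mylogs/log4j/my-java-logs.log'

Because the name and inode never change, the forwarder keeps its open handle on the file, which is why WatchedFile reports the truncation and starts reading again from offset 0.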

The solution was to go back to the developer and convince them to change the logging logic to roll the full file to my-java-logs.log.1 and open a fresh my-java-logs.log for writing, instead of truncating in place.
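If the application logs through log4j 1.x (the sourcetype suggests log4j, but the framework version and the appender name FILE below are assumptions), a RollingFileAppender already does exactly this: when the file hits MaxFileSize it renames it to my-java-logs.log.1 and opens a fresh my-java-logs.log. A minimal properties sketch:

# hypothetical log4j 1.x properties; appender name FILE is illustrative
log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File=/logs/mylogs/log4j/my-java-logs.log
log4j.appender.FILE.MaxFileSize=50MB
log4j.appender.FILE.MaxBackupIndex=5
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n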

We also noticed that this large file was triggering Splunk's batch reader, so we raised min_batch_size_bytes in limits.conf from the 20 MB default to 100 MB.
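As a sketch, that change would look like the following on the forwarder (assuming the setting goes in $SPLUNK_HOME/etc/system/local/limits.conf; the value is in bytes):

[default]
# raise the batch-reader threshold from the 20 MB default (20971520 bytes) to 100 MB
min_batch_size_bytes = 104857600

Note that the forwarder typically needs a restart before limits.conf changes take effect.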
