Getting Data In

Duplicate Siebel log events from Universal Forwarders

wmuselle
Explorer

Hi, we are getting duplicate log events.

The events are:

- multiline

- large (some very large)

- the files they are written to can also grow very large

- the files stay open for a long time

 

Symptoms:

- when oneshotting the file, there are no duplicates

- duplicates may arrive on different indexers or on the same indexer (see the search sketch after this list)

- duplicates may arrive within a short time span, or as much as an hour apart

- we see them from both Solaris and Red Hat hosts
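To give an idea, this is roughly the kind of search that surfaces them; the index and sourcetype names below are placeholders, not our real ones:

index=siebel sourcetype=siebel:log
| eval it = _indextime
| stats count dc(splunk_server) AS indexers min(it) AS first_indexed max(it) AS last_indexed BY host source _raw
| where count > 1
| eval spread_secs = last_indexed - first_indexed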

We have experimented with useACK, increasing file descriptors, use of parallel mode, and increasing bandwidth, to try and rule out the flow towards the heavy forwarder (HF) as the cause (a sketch of those settings is below).
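For completeness, this is roughly what that looks like on the UF side; the output group name and server addresses are examples, and "parallel mode" is assumed to mean parallelIngestionPipelines:

# outputs.conf on the UF (example group name and servers)
[tcpout:hf_group]
server = hf1.example.com:9997, hf2.example.com:9997
useACK = true

# server.conf on the UF ("parallel mode" above is assumed to mean this setting)
[general]
parallelIngestionPipelines = 2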

From the logs we do see some files being read in their entirety up to three times, but this is not the case for all of them, so there may be several root causes.
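The re-reads show up in splunkd.log on the forwarders; a search along these lines is how they can be counted (the rex is an assumption about the exact WatchedFile message format and may need adjusting):

index=_internal sourcetype=splunkd component=WatchedFile
| rex "file='(?<logfile>[^']+)'"
| stats count BY host logfile
| sort - count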

I am now thinking in the direction of write buffers, but I am not sure, because I also see smaller events being duplicated.
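To check that, something like this shows the size distribution of the duplicated events (again with placeholder index and sourcetype names):

index=siebel sourcetype=siebel:log
| eval bytes = len(_raw)
| stats count BY host source _raw bytes
| where count > 1
| eval size_band = case(bytes < 1024, "small", bytes < 65536, "medium", true(), "large")
| stats sum(count) AS duplicate_events BY size_band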

We are using these props.conf settings:

UF:
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)(?:\w+\s+\w+\s+\d{1})

HF:

# raised because of large messages
TRUNCATE = 99999
# timestamp format is 2021-01-18 16:03:27.118885, i.e. 26 characters
TIME_PREFIX = ^(?:[^\t]+?\t){4}
MAX_TIMESTAMP_LOOKAHEAD = 26
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N
# newline characters can be part of the message
LINE_BREAKER = ([\r\n]+)(?:\w+\s+\w+\s+\d{1})
SHOULD_LINEMERGE = false

Any pointers from your experience on how to troubleshoot this further, or just an extra pair of eyes/brains, would be much appreciated.

 

thanks
