Getting Data In

Duplicate Siebel log events from Universal Forwarders

wmuselle
Path Finder

Hi, we are getting duplicate log events from our Universal Forwarders.

The events are:

- multiline

- large (to very large)

The source files themselves can also grow very large and remain open for a long time.

 

Symptoms:

- when oneshotting a file, there are no duplicates

- duplicates may arrive on different indexers or on the same indexer

- duplicates may arrive within a short time span, or even up to an hour apart

- we see them from both Solaris and Red Hat hosts
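To quantify the symptoms above, a search along these lines (index and sourcetype names are placeholders for our Siebel data) confirms the duplicates and shows their spread across indexers and time:

```
index=main sourcetype=siebel_log
| eval h=md5(_raw)
| stats count dc(splunk_server) as indexers min(_time) as first max(_time) as last by h
| where count > 1
| eval spread_secs = last - first
```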

We have experimented with useACK, increasing file descriptors, parallel ingestion pipelines, and increasing bandwidth, to try to rule out the flow towards the heavy forwarder (HF) as the cause.
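For reference, the acknowledgement setting we toggled lives in outputs.conf on the forwarders (the target group name here is a placeholder, substitute your own):

```
# outputs.conf on the forwarder
[tcpout:primary_indexers]
useACK = true
```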

From the logs we do see some files being read in their entirety up to 3 times, but this is not the case for all of them, so there may be several root causes.
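The re-reads show up in splunkd.log on the forwarders; a search like the following surfaces them (the exact component and message text may differ between Splunk versions, so treat this as a sketch):

```
index=_internal sourcetype=splunkd component=WatchedFile "Will begin reading at offset"
| rex "file='(?<file>[^']+)'"
| stats count by host file
| where count > 1
```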

I am now thinking in the direction of write buffers, but I am not sure, because I also see smaller events being duplicated.

We are using these props.conf settings:

UF:

EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)(?:\w+\s+\w+\s+\d{1})

HF:

# raised because of large messages
TRUNCATE = 99999
# timestamp is the 5th tab-delimited field; format is 2021-01-18 16:03:27.118885, i.e. 26 characters
TIME_PREFIX = ^(?:[^\t]+?\t){4}
MAX_TIMESTAMP_LOOKAHEAD = 26
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N
# newline characters can be part of a message, so break only on newline(s) followed by the event header
LINE_BREAKER = ([\r\n]+)(?:\w+\s+\w+\s+\d{1})
SHOULD_LINEMERGE = false
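Put together, the full stanza looks roughly like this (the sourcetype name is a placeholder for our Siebel sourcetype):

```
# props.conf -- EVENT_BREAKER_* applies on the UF, the rest on the HF
[siebel:log]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)(?:\w+\s+\w+\s+\d{1})
TRUNCATE = 99999
TIME_PREFIX = ^(?:[^\t]+?\t){4}
MAX_TIMESTAMP_LOOKAHEAD = 26
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N
LINE_BREAKER = ([\r\n]+)(?:\w+\s+\w+\s+\d{1})
SHOULD_LINEMERGE = false
```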

Any pointers from your experience on how to troubleshoot this further, or just an extra pair of eyes/brains, would be much appreciated.

 

thanks
