Getting Data In

How to resolve TailReader errors and data loss on a universal forwarder ("Bug during applyPendingMetadata, header processor does not own the indexed extractions confs")?

Behnam67
New Member

I've been dealing with this TailReader error for a while and have not been able to fix it despite reading all the answers to similar questions. I'm still experiencing data loss every day.
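
The error shows up repeatedly in splunkd.log on the forwarder; for reference, it can be pulled out like this (assuming the default /opt/splunkforwarder install path):

# Show the recent TailReader / applyPendingMetadata errors on the forwarder
grep -E "TailReader|applyPendingMetadata" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20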

As you can see in the .conf files below, I have already disabled INDEXED_EXTRACTIONS, since the universal forwarder doesn't extract fields at index time, but I'm still getting that error.
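
In case it helps, this is how I can double-check the effective settings for this sourcetype on the forwarder with btool, to confirm no other app still sets INDEXED_EXTRACTIONS (again assuming the default install path):

# Show which .conf file each effective setting comes from
/opt/splunkforwarder/bin/splunk btool props list oracle:audit:json --debug
/opt/splunkforwarder/bin/splunk btool inputs list monitor --debug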

I was told to migrate to a heavy forwarder, but I would prefer to solve it on the UF if possible.

I appreciate any help.

inputs.conf

[monitor:///home/audit/oracle/*/v1[12]*.log]
disabled = 0
index = ora
sourcetype = oracle:audit:json
blacklist = (ERROR|lost|ORA|#|DONE)
crcSalt = 
initCrcLength = 1000
ignoreOlderThan = 4h
alwaysOpenFile = 1
interval = 30

props.conf

[oracle:audit:json]
DATETIME_CONFIG = CURRENT
#INDEXED_EXTRACTIONS = JSON
KV_MODE = none
MAX_EVENTS = 5
TRUNCATE = 0
TRANSFORMS-TCP_ROUTING_GNCS = TCP_ROUTING_GNCS
TRANSFORMS-hostoverride = hostoverride
TRANSFORMS-HOST_JSON = HOST_JSON
TRANSFORMS-sourcetype_json11 = sourcetype_json11
TRANSFORMS-sourcetype_json12 = sourcetype_json12
TRANSFORMS-sourcetype_sql11 = sourcetype_sql11
TRANSFORMS-sourcetype_sql12 = sourcetype_sql12
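
Another option I'm considering is explicitly blanking out INDEXED_EXTRACTIONS in a higher-precedence props.conf on the forwarder, in case a copy of the setting still wins by precedence in another app (only a sketch; I'm not sure whether an empty value fully clears it, or whether the stanza has to be removed from every app):

props.conf (e.g. $SPLUNK_HOME/etc/system/local on the forwarder)

[oracle:audit:json]
# Explicitly clear the setting instead of commenting it out; a commented line
# only removes it from this file and does not override other apps
INDEXED_EXTRACTIONS =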