Getting Data In

How do I resolve TailReader errors and data loss with the universal forwarder (bug during applyPendingMetadata: "header processor does not own the indexed extractions confs")?

Behnam67
New Member

I've been dealing with this TailReader error for a while and have not been able to fix it, despite reading all the answers and similar questions. I'm still experiencing data loss every day.

As you can see in the .conf files below, I have already disabled INDEXED_EXTRACTIONS, since the universal forwarder doesn't extract fields at index time, but I am still getting that error.

I was told to migrate to a heavy forwarder, but I would prefer to solve this on the UF if possible.

I appreciate any help.

inputs.conf

[monitor:///home/audit/oracle/*/v1[12]*.log]
disabled = 0
index = ora
sourcetype = oracle:audit:json
blacklist = (ERROR|lost|ORA|#|DONE)
crcSalt = 
initCrcLength = 1000
ignoreOlderThan = 4h
alwaysOpenFile = 1
interval = 30
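A few settings in the stanza above can themselves cause missed data, independently of the TailReader error. The following is a hedged sketch of a revised stanza, not a tested fix; the path, index, and sourcetype are copied from the post, and the suggested values are assumptions based on the documented behavior of these settings.

```ini
[monitor:///home/audit/oracle/*/v1[12]*.log]
disabled = 0
index = ora
sourcetype = oracle:audit:json
blacklist = (ERROR|lost|ORA|#|DONE)
# An empty crcSalt is effectively a no-op. The documented special value
# <SOURCE> (the literal string) mixes the full file path into the CRC,
# which helps when many files share an identical leading header.
crcSalt = <SOURCE>
initCrcLength = 1000
# Caution: a file skipped by ignoreOlderThan is not picked up again even
# if it is modified later, which can itself look like daily data loss on
# logs that are written infrequently. Consider removing or widening it.
# ignoreOlderThan = 4h
# alwaysOpenFile forces a reopen of every matched file on each pass; it
# is intended mainly for Windows IIS logs and is expensive on Linux.
# alwaysOpenFile = 1
interval = 30
```

If files are being silently dropped rather than erroring, the `ignoreOlderThan` and empty `crcSalt` lines are the first two settings worth ruling out.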

props.conf

[oracle:audit:json]
DATETIME_CONFIG = CURRENT
#INDEXED_EXTRACTIONS = JSON
KV_MODE = none
MAX_EVENTS = 5
TRUNCATE = 0
TRANSFORMS-TCP_ROUTING_GNCS = TCP_ROUTING_GNCS
TRANSFORMS-hostoverride = hostoverride
TRANSFORMS-HOST_JSON = HOST_JSON
TRANSFORMS-sourcetype_json11 = sourcetype_json11
TRANSFORMS-sourcetype_json12 = sourcetype_json12
TRANSFORMS-sourcetype_sql11 = sourcetype_sql11
TRANSFORMS-sourcetype_sql12 = sourcetype_sql12
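One thing worth checking: commenting out INDEXED_EXTRACTIONS in this props.conf does not remove the setting if another app on the forwarder still sets it for the same sourcetype, and the "header processor does not own the indexed extractions confs" message is tied to indexed-extraction settings reaching the header processor. Also note that TRANSFORMS-* stanzas are index-time operations and are not applied by a universal forwarder; they only take effect at the indexer or a heavy forwarder. Below is a hypothetical sketch of how the merged value could be forced empty on the UF; treat the explicit blank assignment as an assumption to verify, not a guaranteed fix.

```ini
[oracle:audit:json]
DATETIME_CONFIG = CURRENT
KV_MODE = none
# Explicitly assigning an empty value (rather than commenting the line
# out) overrides any INDEXED_EXTRACTIONS value merged in from another
# app at a lower precedence layer.
INDEXED_EXTRACTIONS =
```

You can confirm what the forwarder actually runs with using `$SPLUNK_HOME/bin/splunk btool props list oracle:audit:json --debug`, which shows the merged value and the file it came from.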